sparsity_loss = entropy #38
Comments
I had the same problem. I'm not sure why, but I retrained without changing anything and it worked. It may be an issue with the random initialization and optimization of the neural network parameters.
I also encountered the same problem on a self-made dataset. Could there be something wrong with the data?
I think the problem is caused by `weights = (x - voxel_min_vertex)/(voxel_max_vertex - voxel_min_vertex)` in the hash encoding. Try changing it to `weights = (x - voxel_min_vertex)/(voxel_max_vertex - voxel_min_vertex + 1e-6)` and see if that fixes it.
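To illustrate why that epsilon helps: if a query point's voxel has zero extent along some axis (`voxel_max_vertex == voxel_min_vertex`), the weight computation divides 0 by 0 and produces NaN, which then propagates into the rendering outputs and the sparsity loss. A minimal sketch of the guarded version, using NumPy and a hypothetical `interp_weights` helper (the actual code lives in the repo's hash encoding module):

```python
import numpy as np

def interp_weights(x, voxel_min_vertex, voxel_max_vertex, eps=1e-6):
    # Trilinear interpolation weights for a point inside a voxel.
    # The eps term guards against a degenerate voxel
    # (voxel_max_vertex == voxel_min_vertex), which would otherwise
    # give 0/0 = NaN and poison everything downstream, including
    # sparsity_loss.
    return (x - voxel_min_vertex) / (voxel_max_vertex - voxel_min_vertex + eps)

# Degenerate voxel: without eps this would be 0/0 -> NaN
w = interp_weights(np.array([0.5]), np.array([0.5]), np.array([0.5]))
print(np.isnan(w).any())  # False
```

Note that the epsilon slightly biases the weights for normal voxels, but since voxel extents at any hash-grid level are far larger than 1e-6, the effect is negligible.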
When I run this code with the synthetic Lego dataset, it works fine. But when I run it with the LLFF dataset, training reaches 99% of an iteration and then drops into the debugger:

python run_nerf.py --config configs/fren.txt --finest_res 512 --log2_hashmap_size 19 --lrate 0.01 --lrate_decay 10
[99%]
> c:\users\nezo\desktop\3d\hashnerf-pytorch\run_nerf.py(379)raw2outputs()
-> sparsity_loss = entropy
Could you please tell me the reason for this issue?