
sparsity_loss = entropy #38

Open
wtj-zhong opened this issue Aug 27, 2023 · 4 comments

Comments

@wtj-zhong

When I run this code with the synthetic Lego dataset, it works fine. But when I run it with the LLFF dataset, training drops into the debugger with the following output:

python run_nerf.py --config configs/fren.txt --finest_res 512 --log2_hashmap_size 19 --lrate 0.01 --lrate_decay 10
0 0.0010004043579101562 [00:00<?, ?it/s] [1/1]
[99%]
> c:\users\nezo\desktop\3d\hashnerf-pytorch\run_nerf.py(379)raw2outputs()
-> sparsity_loss = entropy

Could you please tell me the reason for this issue?
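
A note on the trace: stopping at `sparsity_loss = entropy` suggests the rendering weights contain non-finite values by the time the sparsity term is built (see the suggested fix further down the thread). Below is a minimal sketch of a check one could temporarily drop into `raw2outputs()` to confirm this; `assert_finite` is a hypothetical helper, not part of the repo:

```python
import torch

def assert_finite(name, t):
    """Raise immediately if a tensor contains NaN/Inf, naming the culprit."""
    if not torch.isfinite(t).all():
        bad = (~torch.isfinite(t)).sum().item()
        raise RuntimeError(f"{name}: {bad} non-finite values out of {t.numel()}")

# e.g., just before the sparsity loss is computed in raw2outputs():
#   assert_finite("raw", raw)
#   assert_finite("weights", weights)
```

If `weights` fails the check, the NaNs originate upstream of the loss, consistent with the hash-encoding fix suggested below.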

@THUROI0787

I had the same problem. I'm not sure why, but I retrained without changing anything and it worked. It may be a randomness issue in the initialization and optimization of the network parameters.

@zshuai9508

I also encountered the same problem on a self-made dataset. Could there be something wrong with the data?

@Fjzd commented Nov 17, 2023

I think the problem is caused by `weights = (x - voxel_min_vertex)/(voxel_max_vertex-voxel_min_vertex)` in the hash encoding. You can change it to `weights = (x - voxel_min_vertex)/(voxel_max_vertex-voxel_min_vertex+1e-6)` and try again!
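
A minimal sketch of that change, with the division pulled into a standalone function for illustration (the real code lives in the trilinear interpolation of the hash-encoding module; the function name and `eps` parameter here are assumed, not the repo's exact code):

```python
import torch

def interp_fractions(x, voxel_min_vertex, voxel_max_vertex, eps=1e-6):
    # Fractional position of each sample inside its voxel, per axis, in [0, 1].
    # If the denominator is (numerically) zero on some axis, the bare division
    # yields NaN/Inf, which propagates through the interpolated features into
    # the rendering weights and finally into the entropy-based sparsity loss.
    # Adding a small eps keeps the division finite, as suggested above.
    return (x - voxel_min_vertex) / (voxel_max_vertex - voxel_min_vertex + eps)
```

Clamping the denominator with `torch.clamp(denom, min=eps)` would guard the division equally well; the `eps` version is the smallest diff against the existing line.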

@wuzuyin commented Nov 23, 2023

May I ask everyone: why is this loss function added, and what is its purpose?

[screenshot of the loss code in question]
