I want to create a Gaussian Splatting hierarchy on an 8 GB GPU, but training fails with out-of-memory (with only 1171 images?).
/opt/photogrammetry/hierarchical-3d-gaussians/scene/gaussian_model.py:339: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
self.anchors = torch.from_numpy(vals).long().cuda()
Training progress:   0%|          | 0/15000 [00:00<?, ?it/s]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.
If this is not desired, please explicitly specify '--resolution/-r' as 1 [16/08 09:15:08]
/opt/photogrammetry/hierarchical-3d-gaussians/gaussian_renderer/__init__.py:223: UserWarning: torch.range is deprecated and will be removed in a future release because its behavior is inconsistent with Python's range builtin. Instead, use torch.arange, which produces values in [start, end).
skybox_inds = torch.range(pc._xyz.size(0) - pc.skybox_points, pc._xyz.size(0)-1, device="cuda").long()
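(As an aside, the deprecation warning above is harmless but easy to silence: torch.arange excludes its end value, like Python's built-in range, so the `- 1` on the end index in the torch.range call is no longer needed. A minimal sketch with made-up sizes standing in for `pc._xyz.size(0)` and `pc.skybox_points`, on CPU for illustration:)

```python
import torch

# Hypothetical sizes standing in for pc._xyz.size(0) and pc.skybox_points.
n_points, skybox_points = 10, 3

# torch.arange produces values in [start, end), matching Python's range,
# so the end index is n_points rather than n_points - 1.
skybox_inds = torch.arange(n_points - skybox_points, n_points).long()

print(skybox_inds.tolist())  # the last `skybox_points` indices
```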
Training progress:   0%|▏ | 20/15000 [00:03<46:33, 5.36it/s, Loss=0.1069218, Size=3769644, Peak memory=6941308928]
Traceback (most recent call last):
File "/opt/photogrammetry/hierarchical-3d-gaussians/train_post.py", line 241, in <module>
training(lp.extract(args), op.extract(args), pp.extract(args), args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
File "/opt/photogrammetry/hierarchical-3d-gaussians/train_post.py", line 142, in training
loss.backward()
File "/home/user/miniconda3/envs/hierarchical_3d_gaussians/lib/python3.12/site-packages/torch/_tensor.py", line 525, in backward
torch.autograd.backward(
File "/home/user/miniconda3/envs/hierarchical_3d_gaussians/lib/python3.12/site-packages/torch/autograd/__init__.py", line 267, in backward
_engine_run_backward(
File "/home/user/miniconda3/envs/hierarchical_3d_gaussians/lib/python3.12/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 692.00 MiB. GPU
Can I somehow rescale the images with `-r 8`? How do I pass extra arguments to the training scripts?
Hi,
The hierarchy post optimization is quite memory intensive (we observed up to 16GB GPU memory usage).
If you are using scripts/full_train.py, add --extra_training_args '-r 8' to rescale your images in the dataloader for all training scripts.
Memory usage can be lowered further by increasing the densify_grad_threshold, which reduces the number of primitives, e.g. --extra_training_args '-r 8 --densify_grad_threshold 0.02'. This will harm quality, however.
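Putting that together, the full pipeline could be launched along these lines (a sketch: the dataset path is a placeholder for your own setup, and the flags assume you are driving training through scripts/full_train.py):

```shell
# Rescale inputs to 1/8 resolution and raise the densification threshold
# to keep post-optimization within an 8 GB GPU (at some quality cost).
python scripts/full_train.py \
    --project_dir /path/to/dataset \
    --extra_training_args '-r 8 --densify_grad_threshold 0.02'
```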