79758004

Date: 2025-09-07 08:18:14
Score: 1.5
Natty:
Report link

Use `with torch.no_grad():` during inference to avoid storing gradients, and use mixed precision (`torch.cuda.amp`) to cut memory usage.

`torch.cuda.empty_cache()` does not "kill" memory that is still referenced by live objects. To truly free GPU memory, first `del` unused variables, then call `gc.collect()`, and finally `torch.cuda.empty_cache()`.
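A minimal sketch of the pattern described above (the toy `Linear` model and tensor shapes are placeholders; the CUDA branch only runs when a GPU is present):

```python
import gc
import torch

model = torch.nn.Linear(8, 8)
x = torch.randn(4, 8)

# Inference without storing gradients: activations for backprop are not kept.
with torch.no_grad():
    y = model(x)
y_requires_grad = y.requires_grad  # False inside no_grad

# Mixed precision via autocast cuts activation memory on GPU.
if torch.cuda.is_available():
    model_gpu = model.cuda()
    x_gpu = x.cuda()
    with torch.no_grad(), torch.cuda.amp.autocast():
        y_gpu = model_gpu(x_gpu)
    # Drop references first, then collect, then release cached blocks.
    del y_gpu, x_gpu, model_gpu
    gc.collect()
    torch.cuda.empty_cache()
```

Note that `empty_cache()` only returns cached-but-unused blocks to the driver; tensors you still hold references to stay allocated regardless.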

Reasons:
  • No code block (0.5):
  • Low reputation (1):
Posted by: Afrin Jaman