79199642

Date: 2024-11-18 10:59:35
Score: 0.5
Natty:
Report link

Check whether the loss still requires grad before calling loss.backward() by printing loss.requires_grad. If it prints False, check whether anything inside the loss calculation function detaches tensors from the computation graph:
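A minimal sketch of that check; the model, criterion, and data below are placeholders for your own training-loop objects:

```python
import torch

# Stand-in model, loss, and data; replace with your own objects.
model = torch.nn.Linear(4, 1)
criterion = torch.nn.MSELoss()
inputs = torch.randn(8, 4)
targets = torch.randn(8, 1)

outputs = model(inputs)
loss = criterion(outputs, targets)

print(loss.requires_grad)  # should print True; if False, backward() cannot compute gradients
loss.backward()
```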

From what I can see, your function in detect.py converts tensors to NumPy arrays and plain Python values, which breaks the gradient chain. That is most likely why your loss does not require grad.
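As an illustration (not your actual detect.py code), this is how a NumPy round trip severs the graph, compared with keeping the same computation in torch ops:

```python
import torch

x = torch.randn(3, requires_grad=True)

# Broken: .numpy() requires detaching first, and detach() is exactly what
# removes the tensor from the autograd graph.
y_np = x.detach().numpy() * 2
loss_bad = torch.tensor(y_np).sum()
print(loss_bad.requires_grad)   # False -> loss_bad.backward() would raise an error

# Working: do the same math with torch operations so autograd keeps tracking it.
loss_good = (x * 2).sum()
print(loss_good.requires_grad)  # True
loss_good.backward()
```

If you only need NumPy for logging or visualization, convert a detached copy after the loss has been computed, so the loss itself stays differentiable.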

Reasons:
  • Has code block (-0.5)
  • Contains question mark (0.5)
  • Low reputation (0.5)
Posted by: MinhNH