79480723

Date: 2025-03-03 10:14:50
Score: 2
Natty:
Report link

This was solved by passing allow_unused=True and materialize_grads=True to torch.autograd.grad: allow_unused=True stops grad from raising an error for parameters that do not contribute to the loss, and materialize_grads=True returns zero tensors for those parameters instead of None. That is:

d_loss_params = torch.autograd.grad(loss, model.parameters(), retain_graph=True, allow_unused=True, materialize_grads=True)

See the discussion at https://discuss.pytorch.org/t/gradient-of-loss-that-depends-on-gradient-of-network-with-respect-to-parameters/217275 for more details.
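
For context, here is a minimal self-contained sketch of the pattern discussed in that thread: a loss built from the gradient of the network output with respect to its input, differentiated again with respect to the parameters. The model, data, and loss below are hypothetical placeholders, not the original code, and materialize_grads requires a reasonably recent PyTorch (2.1 or later):

import torch
import torch.nn as nn
from torch.autograd import grad

# Hypothetical stand-in model and input batch.
model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
x = torch.randn(8, 2, requires_grad=True)

# First-order gradient of the output w.r.t. the input;
# create_graph=True so this gradient can itself be differentiated.
y = model(x)
dy_dx, = grad(y.sum(), x, create_graph=True)

# A loss that depends on the gradient of the network w.r.t. its input.
loss = dy_dx.pow(2).sum()

# Second-order gradient w.r.t. the parameters. The last layer's bias
# never enters dy_dx, so without allow_unused=True this call raises an
# error; materialize_grads=True returns zero tensors instead of None
# for such unused parameters.
d_loss_params = grad(loss, tuple(model.parameters()),
                     retain_graph=True, allow_unused=True,
                     materialize_grads=True)

The zero tensors produced by materialize_grads=True mean downstream code iterating over d_loss_params does not have to special-case None entries.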

Reasons:
  • Probably link only (1)
  • Low length (0.5)
  • Has code block (-0.5)
  • Self-answer (0.5)
  • Low reputation (0.5)
Posted by: user2978125