79694713

Date: 2025-07-08 19:03:28
Score: 0.5
Natty:
Report link

I ran into the same issue and @rok's answer worked for me. However, I wanted to avoid dropping the last batch. According to this thread, the issue seems to be related to parallel and distributed/multi-GPU training. Removing this call to nn.DataParallel worked for me without needing to set drop_last=True in the DataLoader:

model = nn.DataParallel(model)
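
For context, here is a minimal sketch of the two workarounds side by side; the toy dataset, model, and batch size are hypothetical placeholders, not from the original question:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy data and model, just to illustrate the two options.
dataset = TensorDataset(torch.randn(100, 10), torch.randn(100, 1))
model = nn.Linear(10, 1)

# Option 1 (@rok's answer): drop the incomplete final batch so every
# batch has the same size when the model is wrapped in nn.DataParallel.
loader = DataLoader(dataset, batch_size=32, drop_last=True)
model = nn.DataParallel(model)

# Option 2 (this answer): keep every batch, including the last partial
# one, and simply skip the nn.DataParallel wrapper (single-GPU training).
loader = DataLoader(dataset, batch_size=32)  # drop_last defaults to False
# model = nn.DataParallel(model)  # removed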
Reasons:
  • Whitelisted phrase (-1): worked for me
  • Has code block (-0.5):
  • User mentioned (1): @rok
  • Low reputation (1):
Posted by: Kyle Seaman