79303911

Date: 2024-12-23 18:25:34
Score: 0.5
Natty:
Report link

There isn't enough information here to suggest a single solution. Still, there are a few common problems that would lead to this behaviour.

I'll describe what seem to me the most important ones, along with some possible solutions.

  1. It was suggested in a comment that you check normalisation. But why?

The CIFAR10 data has pixel values from 0 to 255. I take it that you are using PyTorch.

With this in mind, the standard transforms.ToTensor maps them to [0, 1]. If you then normalise with transforms.Normalize using mean 0.5 and std 0.5, the values end up between -1 and 1, from the formula Z = (X - mu)/std.

But the output activation that you use, sigmoid, has range (0, 1).

This would explain black pixels for the positions whose target values are negative, but not necessarily the positive ones.

If you want to keep a normalisation between -1 and 1, then just use tanh as the output activation, which has the correct range.

  2. Note that you seem to have the same values across all channels, which means they are tightly correlated. Using BatchNorm2d and Dropout2d would help with training stability and with the correlation between channels. I find these two quite important, in conjunction with the next point.
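A sketch of how these layers might slot into a decoder block (the layer sizes are illustrative assumptions, not taken from your model):

```python
import torch
import torch.nn as nn

# Hypothetical decoder block: BatchNorm2d stabilises activations across the
# batch, and Dropout2d drops whole feature maps, which fights the kind of
# channel-to-channel correlation described above.
block = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.2),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
    nn.Tanh(),  # output in (-1, 1), matching Normalize with mean/std 0.5
)

x = torch.randn(8, 64, 8, 8)
y = block(x)
print(y.shape)  # torch.Size([8, 3, 32, 32])
```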

Using gradient clipping or normalisation could also help the training process. It is also worth trying Adam with a reasonably small learning rate. Adam may not get as far as a well-configured SGD, but it can sometimes be more robust, so take it as a "to try" suggestion.
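A sketch of one training step combining both ideas (the model, loss, and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn

# Toy model standing in for your network.
model = nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # reasonably small lr
criterion = nn.MSELoss()

x, target = torch.randn(16, 10), torch.randn(16, 10)

optimizer.zero_grad()
loss = criterion(model(x), target)
loss.backward()
# Rescale the global gradient norm before the update to cap any exploding step.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```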

  3. It is also possible, though I find it less likely, that a very strong initial signal pushes pixels towards -inf and +inf, giving the colour pattern that you observe. However, I find this less likely because the shapes are correctly detected, and points 1 and 2 are more likely to help.
Reasons:
  • Blacklisted phrase (0.5): why?
  • Long answer (-1):
  • Has code block (-0.5):
  • Contains question mark (0.5):
  • Low reputation (1):
Posted by: mister nobody