There isn't enough information here to suggest a definitive solution, but there are common problems that lead to this behaviour. I'll describe the one that seems most important to me, along with some possible solutions.
CIFAR10 pixels range from 0 to 255, and I take it that you use PyTorch. With this in mind, the standard `transforms.ToTensor()` maps them to [0, 1]. If you then normalise with `transforms.Normalize` using mean 0.5 and std 0.5, the values end up between -1 and 1, from the formula Z = (X - mu) / std.
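To make the arithmetic concrete, here is a minimal pure-Python sketch of what the `ToTensor()` + `Normalize(0.5, 0.5)` pipeline does to a single pixel value (the function names are my own, for illustration only):

```python
def to_tensor_scale(pixel):
    """Mimics ToTensor(): scales a raw pixel from [0, 255] to [0, 1]."""
    return pixel / 255.0

def normalize(x, mean=0.5, std=0.5):
    """Mimics Normalize(0.5, 0.5): z = (x - mean) / std."""
    return (x - mean) / std

print(normalize(to_tensor_scale(0)))    # darkest pixel  -> -1.0
print(normalize(to_tensor_scale(255)))  # brightest pixel ->  1.0
```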
But the output activation you use, sigmoid, has range (0, 1). This would explain black pixels for the values that should be negative, but not necessarily for the positive ones.
If you want to keep the normalisation in [-1, 1], just use tanh as the output activation instead, since its range matches.
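The range mismatch is easy to see numerically: sigmoid can never produce the negative values the normalised data contains, while tanh covers the full target range. A small sketch (the `denormalize` helper is my own naming, shown only to illustrate mapping a tanh output back to a displayable image):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# sigmoid squashes everything into (0, 1); even for a very negative
# input it cannot go below 0, so targets near -1 are unreachable.
print(sigmoid(-10.0))     # tiny positive number, never negative
print(math.tanh(-10.0))   # very close to -1: matches the data range

def denormalize(z, mean=0.5, std=0.5):
    """Undo Normalize(0.5, 0.5): maps [-1, 1] back to [0, 1] for display."""
    return z * std + mean
```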
Gradient clipping or normalisation could also help the training process. It is also worth trying Adam with a reasonably small learning rate. Adam may not get as far as a well-configured SGD, but it is sometimes more robust, so take it as something to try.
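For reference, gradient clipping by global norm rescales all gradients whenever their combined L2 norm exceeds a threshold; in PyTorch you would call `torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)` between `backward()` and `step()`. A minimal pure-Python sketch of the idea (the threshold of 1.0 is an arbitrary example value):

```python
import math

def clip_by_global_norm(grads, max_norm=1.0):
    """Rescale a list of gradient values so their L2 norm is at most max_norm."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        return [g * scale for g in grads]
    return grads

# A large gradient vector (norm 50) is rescaled to norm 1;
# small gradients pass through untouched.
print(clip_by_global_norm([30.0, 40.0], max_norm=1.0))  # roughly [0.6, 0.8]
print(clip_by_global_norm([0.1, 0.2], max_norm=1.0))    # unchanged
```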