So after Craig updated his answer, I understood what was going on. It is kinda shameful... but I had been using `1 << 16` as my filler value in `png_set_add_alpha`. `1 << 16` equals 65536, which in binary is `00000001 00000000 00000000`. Note the last two null bytes. `png_set_add_alpha` takes the filler as a `png_uint_32`, but the maximum depth of the alpha channel is 16 bits (2 bytes), and exactly those 2 bytes were 0. Yes, I did `1 << 16` and for some reason thought I would get `0xFF`. So libpng converted the paletted image to RGB and used those zero low bytes as the alpha value, setting it to 0 and making the image fully transparent. What a shame.
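For reference, here is a minimal sketch of the corrected transform setup. The function name and the surrounding read-struct setup are my own illustration, not taken from the original code; the only point is the filler value passed to `png_set_add_alpha`.

```c
#include <png.h>

/* Sketch only: assumes png_ptr/info_ptr were created with
 * png_create_read_struct()/png_create_info_struct() and that
 * png_read_info() has already been called. */
static void add_opaque_alpha(png_structp png_ptr, png_infop info_ptr)
{
    /* Expand a paletted image to RGB first. */
    if (png_get_color_type(png_ptr, info_ptr) == PNG_COLOR_TYPE_PALETTE)
        png_set_palette_to_rgb(png_ptr);

    /* The filler must fit in the channel depth: 0xFF for 8-bit,
     * 0xFFFF for 16-bit. 1 << 16 overflows both, so the low bytes
     * (the actual alpha) end up as 0, which is what made the image
     * fully transparent. */
    png_set_add_alpha(png_ptr, 0xFF, PNG_FILLER_AFTER);

    /* Let libpng recompute the transformed row layout. */
    png_read_update_info(png_ptr, info_ptr);
}
```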
Craig used `0xFE` in his answer; it should be `0xFF` (one larger) for maximum opacity, but anyway. Thank you Craig for your dedication and help, you did a great job!