Floating-point numbers are stored in binary, as dyadic (binary) fractions. Decimal fractions, in general, have no exact binary representation. When the literals in your code are parsed from text to binary, each one is rounded to the nearest representable value. That rounding may go in different directions for the two .9 constants, so the two errors do not cancel in the difference but add up.
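Here is a quick Python sketch of the effect (the same thing happens in any language that uses IEEE-754 doubles). The literals 1.9 and 0.9 are just stand-ins for whatever .9 constants your code uses; `Decimal` is only used to display the exact value that actually gets stored:

    from decimal import Decimal

    # Exact doubles behind two example .9 literals:
    print(Decimal(1.9))  # 1.899999999999999911182158029987476766109466552734375   (rounded down)
    print(Decimal(0.9))  # 0.90000000000000002220446049250313080847263336181640625 (rounded up)

    # The two rounding errors point in opposite directions, so they add up
    # in the subtraction instead of cancelling:
    print(1.9 - 0.9)     # 0.9999999999999999, not 1.0

One constant was rounded down, the other up, so the difference ends up one unit in the last place away from the exact answer.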
Look at any introduction to floating-point numbers; the "Related" sidebar has some famous questions on the topic, like Is floating-point math broken?