In Java this kind of garbling happens easily. From what I know, there are two likely causes of your problem:
The original WINDOWS-1251 encoding of your XML data is lost during serialization or deserialization. The byte sequences for Cyrillic characters mean something different in WINDOWS-1251 than they do in UTF-8 (Java's default charset), so if Java silently falls back to the default charset, the text comes out mangled. If this is the root of your problem, you can either decode the incoming message with both charsets and keep whichever result contains valid Cyrillic text (see the sketch below), or normalize everything to Unicode up front, as this post I found suggests: https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/character-encoding
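To illustrate the mismatch, here is a minimal sketch (the byte values and class name are only for demonstration) showing that the same WINDOWS-1251 byte sequence only decodes correctly when the charset is named explicitly:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDecodeSketch {
    public static void main(String[] args) {
        // Hypothetical raw bytes as they might arrive from the XML source:
        // the word "Привет" encoded in WINDOWS-1251.
        byte[] raw = {(byte) 0xCF, (byte) 0xF0, (byte) 0xE8, (byte) 0xE2, (byte) 0xE5, (byte) 0xF2};

        // Decoding with the wrong charset (UTF-8, the usual JVM default) mangles the text ...
        String wrong = new String(raw, StandardCharsets.UTF_8);
        // ... while naming the source charset explicitly recovers it.
        String right = new String(raw, Charset.forName("windows-1251"));

        System.out.println("UTF-8 decode:        " + wrong);  // replacement characters
        System.out.println("windows-1251 decode: " + right);  // Привет
    }
}
```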
Another possibility you should consider is that WINDOWS-1251 is simply not the right encoding for what you are trying to accomplish here, since JSON is normally expected to be UTF-8. Take a look at this other Stack Overflow post, which discusses handling custom encoders: Making object JSON serializable with regular encoder.
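If you do go that route, a conversion could look roughly like the sketch below. It assumes Jackson as the JSON library and uses made-up file names; the only point is to read the legacy data with WINDOWS-1251 and always write the JSON out as UTF-8:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.*;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.Map;

public class ToUtf8JsonSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical file names, just to show the charset handling.
        Path legacyXml = Paths.get("payload-1251.xml");
        Path jsonOut = Paths.get("payload.json");

        // Read the legacy data with the charset it was actually written in ...
        String xmlText = new String(Files.readAllBytes(legacyXml), Charset.forName("windows-1251"));

        // ... and always emit the JSON as UTF-8, which JSON consumers expect.
        ObjectMapper mapper = new ObjectMapper();
        try (Writer out = new OutputStreamWriter(Files.newOutputStream(jsonOut), StandardCharsets.UTF_8)) {
            mapper.writeValue(out, Map.of("body", xmlText));
        }
    }
}
```

Either way, the key is to never let the platform default charset make the decision for you.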
Similar questions that might give you a clue:
How to detect encoding mismatch
How to handle jackson deserialization error for all kinds of data mismatch in spring boot