I had to solve this problem so I could maintain atomic apply of Kafka messages that represented replicated database transactions. My requirements were stricter: I needed total ordering of the original source message sequence for my customers, and I had to preserve the context of where transaction boundaries occurred.
I designed the following technology to do this, and it is in use at a large number of our financial-industry customers who need exactly-once characteristics and perfect ordering, along with transaction atomicity.
https://www.ibm.com/docs/en/idr/11.4.0?topic=kafka-transactionally-consistent-consumer
I gave a talk at the Kafka conference describing the internals of the approach… I think Confluent has it posted somewhere…
Ah here…
It is essentially a two-phase commit approach to the problem: it uses the information in callbacks to record exactly which message is the next to return, and it injects into a meta topic information about when commits occurred on the source.
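To make the idea concrete, here is a minimal sketch of the replay-side bookkeeping, not the actual product code. It assumes each data message carries a source sequence number and a transaction id, and that the meta topic supplies commit markers for transactions that committed on the source; the consumer then releases messages in source order, but only for transactions whose commit marker has arrived, so readers see atomic, totally ordered transactions:

```python
def replay(data_msgs, commit_markers):
    """Return messages in source-sequence order, restricted to
    transactions whose commit marker appeared in the meta topic.

    data_msgs: iterable of dicts like {"seq": int, "txn": str, "value": ...}
    commit_markers: iterable of txn ids committed on the source
    """
    committed = set(commit_markers)          # txns known committed (from meta topic)
    ordered = sorted(data_msgs, key=lambda m: m["seq"])  # restore total source order
    # Release a message only once its owning transaction is committed;
    # uncommitted txns stay buffered, so apply is all-or-nothing per txn.
    return [m for m in ordered if m["txn"] in committed]


if __name__ == "__main__":
    data = [
        {"seq": 2, "txn": "B", "value": "b1"},
        {"seq": 1, "txn": "A", "value": "a1"},
        {"seq": 3, "txn": "A", "value": "a2"},
    ]
    # Only transaction A has a commit marker so far; B is held back.
    print([m["seq"] for m in replay(data, ["A"])])
```

In the real system this filtering happens incrementally as messages and markers arrive, rather than over fully materialized lists, but the invariant is the same: nothing is surfaced out of source order or ahead of its commit marker.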