Looks like the night is darkest before dawn - I finally managed to crack a working version, which then allowed me to make sense of all the stuff that was confusing me.
The heart of my confusion was this: transactions in JDBC (and anything that builds on it) and transactions on the reactive clients are completely incompatible and cannot be interchanged. This is because, fundamentally, they go through entirely different database connections, managed by entirely different pools and clients:
- JDBC goes through the regular blocking client (`io.quarkus:quarkus-jdbc-postgresql`), which is managed by Agroal
- the reactive clients go through Vert.x, which has its own connections and its own pool
As a consequence:
- `@Transactional` annotations have no effect on reactive clients, and neither do the other mechanisms that do essentially the same thing, e.g. `QuarkusTransaction`
- transactions on reactive clients (`pool.withTransaction`) have no effect on JDBC queries (such as those done via `datasource.connection.use { ... }`)
Crucially, nothing can be done about that - fundamentally, a transaction is owned by its connection, and the reactive and JDBC clients each hold their own, incompatible kind of connection: `io.vertx.sqlclient.impl.Connection` vs `java.sql.Connection`. (Bridging them could perhaps be done in theory by somehow hacking the raw socket out of one and injecting it into the other, but that's definitely not what's done out of the box.)
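To make the split concrete, here is a minimal sketch (class, table, and column names are made up) of the two worlds living side by side in one bean, assuming the usual injected types on Quarkus 3: `AgroalDataSource` for JDBC and the Mutiny `PgPool` for the reactive client. Each method manages its own transaction, and neither has any effect on the other:

```kotlin
import io.agroal.api.AgroalDataSource
import io.smallrye.mutiny.Uni
import io.vertx.mutiny.pgclient.PgPool
import io.vertx.mutiny.sqlclient.Tuple
import jakarta.enterprise.context.ApplicationScoped
import jakarta.inject.Inject
import jakarta.transaction.Transactional

@ApplicationScoped
class TwoWorlds {

    @Inject
    lateinit var dataSource: AgroalDataSource // Agroal-managed pool of java.sql.Connection

    @Inject
    lateinit var pgPool: PgPool               // Vert.x-managed pool of reactive connections

    // The JTA transaction started by @Transactional wraps the JDBC connection only.
    @Transactional
    fun jdbcInsert(name: String) {
        dataSource.connection.use { conn ->
            conn.prepareStatement("INSERT INTO people(name) VALUES (?)").use { stmt ->
                stmt.setString(1, name)
                stmt.executeUpdate()
            }
        }
    }

    // The reactive transaction lives entirely inside withTransaction; an enclosing
    // @Transactional has no effect on it, and it has no effect on any JDBC work.
    fun reactiveInsert(name: String): Uni<Void> =
        pgPool.withTransaction { conn ->
            conn.preparedQuery("INSERT INTO people(name) VALUES ($1)")
                .execute(Tuple.of(name))
                .replaceWithVoid()
        }
}
```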
Now, a big reason for this confusion was what the docs say about transactions and reactive extensions, which made it seem like these two worlds are interoperable. However, that only applies to reactive pipelines using JDBC connections, NOT to reactive pipelines using the reactive clients. For pipelines using JDBC connections, and only those, the JDBC transaction is propagated via context propagation, so its lifecycle matches the lifecycle of the reactive pipeline rather than that of the method from which it is returned.
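For illustration, here is a rough, untested sketch of that JDBC-in-a-reactive-pipeline case as I understand the docs, assuming `quarkus-smallrye-context-propagation` is on the classpath and that this method lives on the same bean as the sketch above (so `dataSource` is the injected `AgroalDataSource`). The point is only that the transaction opened by `@Transactional` is supposed to stay open until the returned `Uni` terminates, not until the method returns:

```kotlin
import io.smallrye.mutiny.Uni
import io.smallrye.mutiny.infrastructure.Infrastructure
import jakarta.transaction.Transactional

// Untested sketch: per the docs, the JTA transaction opened here is propagated
// along the pipeline and only committed/rolled back once the Uni terminates.
@Transactional
fun insertViaPipeline(name: String): Uni<Void> =
    Uni.createFrom().item(name)
        // JDBC is blocking, so hop off the caller's thread onto a worker pool.
        .emitOn(Infrastructure.getDefaultWorkerPool())
        .invoke { n ->
            dataSource.connection.use { conn ->
                conn.prepareStatement("INSERT INTO people(name) VALUES (?)").use { stmt ->
                    stmt.setString(1, n)
                    stmt.executeUpdate()
                }
            }
        }
        .replaceWithVoid()
```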
Another source of confusion: for the reactive client specifically, if you want to perform multiple operations within the reactive transaction, you need to manually pass the connection around - unlike with JDBC (and everything that builds on it, such as JPA, Hibernate, etc.), there is no behind-the-scenes magic that pulls the connection out of some context. I think this could be done in theory, but it isn't done in practice, and this key difference is not really emphasized in the docs.
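Here is a minimal sketch of what that threading-through looks like in practice (table and helper names are made up). Every helper that should participate in the transaction has to take the `SqlConnection` as an explicit parameter:

```kotlin
import io.smallrye.mutiny.Uni
import io.vertx.mutiny.pgclient.PgPool
import io.vertx.mutiny.sqlclient.SqlConnection
import io.vertx.mutiny.sqlclient.Tuple

// Every helper takes the SqlConnection explicitly; nothing pulls the
// "current" connection out of a context for you.
fun insertPerson(conn: SqlConnection, name: String): Uni<Long> =
    conn.preparedQuery("INSERT INTO people(name) VALUES ($1) RETURNING id")
        .execute(Tuple.of(name))
        .map { rows -> rows.iterator().next().getLong("id") }

fun insertAddress(conn: SqlConnection, personId: Long, street: String): Uni<Void> =
    conn.preparedQuery("INSERT INTO addresses(person_id, street) VALUES ($1, $2)")
        .execute(Tuple.of(personId, street))
        .replaceWithVoid()

fun createPersonWithAddress(pool: PgPool, name: String, street: String): Uni<Void> =
    pool.withTransaction { conn ->
        // Both statements run on the same connection, hence in the same transaction.
        insertPerson(conn, name)
            .flatMap { id -> insertAddress(conn, id, street) }
    }
```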
Given that, the answers to my questions are:
1. If I want to use the reactive clients, it would be somewhere between cumbersome and impossible to return a `Multi`, since I have to use `.withTransaction { }`. I could, theoretically, use `connection.begin()` instead, but then the caller would need to call `.commit()` manually, which would make the API pretty cumbersome (see the sketch after this list). I haven't tried exposing a `Multi` with plain JDBC, but my gut says that should be doable given the built-in context propagation.
2. Testing it via an `INSERT` is fine, as long as that `INSERT` is executed on the same connection as the one opened in the previous step, which implies using the same mechanism as the previous point describes (reactive or JDBC). For the reactive clients, that additionally means passing the `Connection` along; for JDBC, this can be taken care of e.g. via `@Transactional` annotations.
3. No, I cannot support both, at least not via a single API. I need to either go full reactive client or full JDBC. As stated in the previous point, that dictates how I have to do the `INSERT`.
4. Yes, I am constrained in how I do this - either full reactive or full JDBC, as explained in the previous points.
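For completeness, here is a hypothetical sketch of the awkwardness mentioned in point 1: to stream a `Multi` out of a reactive-client transaction, the transaction has to be opened manually with `connection.begin()`, because `withTransaction` commits as soon as the `Uni` returned from its lambda completes. Rollback and error handling are deliberately simplified here; table and function names are made up:

```kotlin
import io.smallrye.mutiny.Multi
import io.vertx.mutiny.pgclient.PgPool
import io.vertx.mutiny.sqlclient.Row

// Hypothetical sketch: streaming rows from inside a manually managed reactive
// transaction. The commit/close bookkeeping has to follow the stream's lifecycle,
// which is exactly what makes exposing a Multi from such an API cumbersome.
// (A real version would also roll back explicitly on failure.)
fun streamPeople(pool: PgPool): Multi<Row> =
    pool.connection.toMulti().flatMap { conn ->
        conn.begin().toMulti().flatMap { tx ->
            conn.query("SELECT id, name FROM people").execute()
                .onItem().transformToMulti { rows -> Multi.createFrom().iterable(rows) }
                .onCompletion().call { -> tx.commit() }      // commit only once the stream finishes
                .onTermination().call { -> conn.close() }    // always return the connection to the pool
        }
    }
```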
Hope this helps any wanderers who stumble upon this.