It looks like you're using the API intended for interactive queries.
druid.server.http.maxSubqueryRows
is a guardrail on the interactive API that protects the Broker from being flooded with results from child query processes (such as Historicals). That flood happens when those child processes each return more than the configured number of rows — for example, when doing a GROUP BY on a high-cardinality column.
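As a sketch, the guardrail is set in the Broker's runtime configuration (the value below is illustrative; check your own deployment's setting before changing it):

```properties
# Broker runtime.properties (illustrative value)
# Maximum number of rows a subquery may materialize on the Broker
# before the query is rejected.
druid.server.http.maxSubqueryRows=100000
```

Raising the limit only postpones the problem, since all subquery rows still have to fit on the Broker — which is why switching engines is usually the better fix.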
You may want to see this video on how this engine works.
I'd recommend switching to the other API for this query: it runs asynchronous tasks that query storage directly. The API you're using instead pre-fetches data to Historicals and fans out from / fans in to the Broker process — which is where you're hitting this issue.
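As a minimal sketch of the difference, here are the two SQL endpoints side by side (the server address, datasource name, and query are hypothetical placeholders — adjust them for your deployment):

```python
import json

# Hypothetical Router address -- replace with your own.
DRUID = "http://localhost:8888"

# Interactive SQL API: results fan in through the Broker, so
# subquery results count against druid.server.http.maxSubqueryRows.
interactive_endpoint = f"{DRUID}/druid/v2/sql"

# Asynchronous SQL API: the query runs as a task rather than
# fanning in through the Broker, avoiding that guardrail.
async_endpoint = f"{DRUID}/druid/v2/sql/statements"

# The same SQL payload can be POSTed to either endpoint
# (e.g. with requests.post(endpoint, data=payload,
#  headers={"Content-Type": "application/json"})).
payload = json.dumps({
    "query": "SELECT channel, COUNT(*) AS edits "
             "FROM my_datasource GROUP BY channel"
})

print(async_endpoint)
```

With the asynchronous API you then poll the returned query/task ID for status and fetch results when it completes, instead of holding a connection open.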
You can see an example in the Python notebook here.
(Also worth noting: Druid 31 includes an experimental new engine called Dart. It's not GA yet.)