Yes, ~1.2 minutes to read ~10M rows into Python is normal, and you're already close to the practical limit. At that scale the bottleneck isn't the database but the combination of network transfer and Python having to allocate millions of objects: one tuple per row, plus one Python object per column value.
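For concreteness, here's a minimal sketch of the pattern being timed, assuming PostgreSQL and psycopg2 (the connection string and table name are made up):

```python
import psycopg2

# Hypothetical connection details
conn = psycopg2.connect("dbname=mydb user=me host=db.example.com")
cur = conn.cursor()

# A default (client-side) cursor buffers the full result set, and
# fetchall() then materializes ~10M tuples plus one Python object per
# column value; that allocation, not the query itself, dominates.
cur.execute("SELECT * FROM big_table")
rows = cur.fetchall()

cur.close()
conn.close()
```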
You can shave some time off with driver tweaks (a server-side cursor, larger fetch sizes, the binary protocol), but you won't get a 5×–10× speedup unless you change the data-handling model entirely, e.g. Arrow/Polars/NumPy to avoid Python object creation. Both routes are sketched below.
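The driver-tweak route looks roughly like this, again assuming psycopg2; the cursor name and batch size are illustrative, not recommendations:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me host=db.example.com")

# Passing a name creates a server-side (named) cursor: rows stream from
# the server in batches instead of being buffered client-side all at once.
cur = conn.cursor(name="stream_big_table")
cur.itersize = 50_000  # rows fetched per round trip (default is 2000)

cur.execute("SELECT * FROM big_table")

n = 0
for row in cur:  # iterates batch by batch, capping peak memory
    n += 1       # stand-in for real per-row work
print(n)

cur.close()
conn.close()
```

This mostly helps peak memory and time-to-first-row; the per-row object allocation is still there, which is why total runtime barely moves.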
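The bigger win comes from decoding straight into columnar memory. One way to do that (an illustration, assuming the connectorx package is installed and using a hypothetical URI):

```python
import connectorx as cx

# Hypothetical PostgreSQL URI
uri = "postgresql://me:secret@db.example.com:5432/mydb"

# connectorx decodes the result set into Arrow buffers in native code,
# so Python never builds per-row tuples or per-value objects.
df = cx.read_sql(uri, "SELECT * FROM big_table", return_type="polars")

print(df.shape)  # ~10M rows arrive as columnar buffers, not tuples
```

Recent Polars versions expose the same engine as `pl.read_database_uri(query, uri)`, so either entry point gets you the columnar path. This is where the 5×–10× typically comes from.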