When reading a Delta table, Spark creates roughly one task per file split, so the number of input partitions bounds your read parallelism. A good starting point is to size executors at around 4–5 cores each, then set spark.sql.shuffle.partitions so each shuffle partition processes a manageable amount of data (commonly on the order of 100–200 MB) rather than leaving it at the default of 200. Also watch for tiny or skewed partitions: tiny ones waste scheduling overhead, and skewed ones create straggler tasks that slow the whole stage down even when autoscaling is enabled.
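As a rough sketch of the sizing logic above, the heuristic below picks a shuffle partition count from cluster size and shuffle volume. The ~128 MB target and the "round up to a multiple of total cores" rule are assumptions for illustration, not an official Spark formula; tune them for your workload.

```python
def suggest_shuffle_partitions(num_executors: int,
                               cores_per_executor: int,
                               shuffle_bytes: int,
                               target_bytes: int = 128 * 1024**2) -> int:
    """Heuristic (assumed numbers): aim for ~128 MB per shuffle partition,
    rounded up to a multiple of total cores so no core sits idle."""
    total_cores = num_executors * cores_per_executor
    # ceil division: partitions needed to hit the per-partition size target
    by_size = -(-shuffle_bytes // target_bytes)
    # round up to the nearest multiple of total cores
    return max(total_cores, -(-by_size // total_cores) * total_cores)

# Example: 10 executors x 4 cores, ~100 GB of shuffle data
print(suggest_shuffle_partitions(10, 4, 100 * 1024**3))  # → 800
```

You would then apply the result with `spark.conf.set("spark.sql.shuffle.partitions", n)` before the shuffle-heavy stage, or rely on Spark's adaptive query execution (`spark.sql.adaptive.enabled`) to coalesce small shuffle partitions for you.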