Good question. Let's say you have a table with a billion rows inserted over the last 15 years, but your application only ever needs the last few months' worth of data. If you divide the table into partitions and run your queries against just the one holding the most recent data, you shrink the working set dramatically. An index helps you find individual rows, but a wide range-based query on the full table still has to traverse a very large index (or fall back to a full scan), and even an indexed WHERE clause involves extra I/O: you're pulling index pages from disk and scanning them. With partitioning, the planner can skip ("prune") every partition whose range doesn't overlap the query, so far less data ever has to be read from disk. In the example you know up front that you only care about the most recent few months, so this is a big win.

Partitioning also improves cache utilization, since your queries keep pulling the same small subset of data into cache. Each partition's indexes are smaller too, so they're faster to traverse. Remember that an index still has to be loaded into memory, and for a very large table (like a billion rows) just walking the index can be a substantial amount of I/O. In fact, you can set up local indexes on particular partitions.

Finally, it makes backup, restore, archiving, and data deletion much easier: you can simply drop an old partition, which takes minimal resources compared with deleting all the individual rows one by one.
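To make the pruning idea concrete, here's a minimal Python sketch (not tied to any particular database engine; `PartitionedTable` and `month_key` are made-up names for illustration). Rows are grouped into per-month partitions, a date-range query only scans partitions whose range overlaps the query, and dropping an old partition is cheap bookkeeping rather than row-by-row deletion:

```python
from collections import defaultdict
from datetime import date

def month_key(d):
    """Partition key: one partition per (year, month)."""
    return (d.year, d.month)

class PartitionedTable:
    """Toy range-partitioned table, partitioned by month."""

    def __init__(self):
        self.partitions = defaultdict(list)  # month key -> list of rows
        self.rows_scanned = 0                # counter to show the pruning effect

    def insert(self, row_date, payload):
        self.partitions[month_key(row_date)].append((row_date, payload))

    def query_range(self, start, end):
        """Return payloads with start <= date <= end, scanning only
        partitions whose month range overlaps [start, end]."""
        self.rows_scanned = 0
        out = []
        for key, rows in self.partitions.items():
            # Partition pruning: skip partitions entirely outside the range.
            if key < month_key(start) or key > month_key(end):
                continue
            for row_date, payload in rows:
                self.rows_scanned += 1
                if start <= row_date <= end:
                    out.append(payload)
        return out

    def drop_partition(self, key):
        # Dropping a whole partition is constant-time bookkeeping,
        # versus deleting every individual row in it.
        self.partitions.pop(key, None)

t = PartitionedTable()
# 12 months of data, 100 rows per month (1,200 rows total).
for m in range(1, 13):
    for i in range(100):
        t.insert(date(2023, m, 1), f"row-{m}-{i}")

# A "last three months" query touches only 3 of the 12 partitions.
recent = t.query_range(date(2023, 10, 1), date(2023, 12, 31))
print(len(recent), t.rows_scanned)  # 300 rows returned, only 300 scanned

# Archiving: drop January 2023 in one cheap operation.
t.drop_partition((2023, 1))
```

A real engine does the same pruning inside the planner using the table's declared partition bounds, so the query never even opens the irrelevant partitions' files or indexes.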