I faced a similar problem when planning downtime, but had to go ahead with it anyway. My DB has relatively low workloads.
With a table of 22 columns, 300 GB of data, and 25 million rows, this took about 20 minutes for me on an AWS Aurora PostgreSQL db.t3.large instance.
A similar run in a DEV environment with about 200k rows, on a db.t3.medium instance, took 8 seconds.
I also had to temporarily kill nearly all connections to the DB to ensure the command ran without problems. Hope that provides a small benchmark.
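For reference, one way to drop the other connections is PostgreSQL's `pg_terminate_backend` against `pg_stat_activity`. This is a sketch, not the exact statement I ran; it assumes you connect as a role with sufficient privileges (e.g. `rds_superuser` on Aurora/RDS):

```sql
-- Terminate every other session connected to the current database,
-- leaving only this session (pg_backend_pid()) alive.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = current_database()
  AND pid <> pg_backend_pid();
```

You can add a filter such as `AND usename <> 'replication_user'` if there are sessions you need to keep.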