79827024

Date: 2025-11-21 23:46:29
Score: 3
Natty:
Report link

I faced a similar problem when planning downtime, but had to go ahead with it anyway. My DB has a relatively low workload.

With a table of 22 columns, 300 GB of data, and 25 million rows, this took about 20 minutes for me on an AWS Aurora RDS PostgreSQL db.t3.large instance.

In a similar setup on a DEV instance with about 200k rows, on a db.t3.medium instance, it took 8 seconds.

I also had to kill nearly all the connections to the DB temporarily to ensure the command ran without problems. Hope that provides a small benchmark.
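
For terminating the other connections, a sketch of the standard PostgreSQL approach (assuming you have sufficient privileges, e.g. the `rds_superuser` role on RDS/Aurora) looks like this:

```sql
-- Terminate every other session connected to the current database,
-- leaving only your own backend alive before running the DDL command.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = current_database()
  AND pid <> pg_backend_pid();
```

Note that applications with connection pools will typically reconnect immediately, so you may also want to pause the application or temporarily restrict connections while the command runs.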

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Me too answer (2.5): facing a similar problem
  • Low reputation (0.5):
Posted by: stanley_manley