Azure Database - I'm including here SQL Database, SQL Elastic Pool and MySQL Flexible Server - scaling cannot be performed in real time because it incurs downtime. That downtime can range from a few seconds to a few hours depending on the size of your workload (Microsoft expresses it in terms of "minutes per GB" in some of their articles).
See this post from 2017 where they describe downtimes of up to 6 hours with ~250 GB databases:
How do you automatically scale-up and down a single Azure database?
You probably see where I'm going with this: you scale up and down on your own. You either build your own tooling or do it manually. There is no built-in support for this (and with good reason).
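As an illustration of the do-it-yourself approach, here is a minimal Python wrapper around the Azure CLI's `az sql db update` command for scaling a single database. Resource names are placeholders and error handling is omitted; this is a sketch, not a full tool:

```python
import subprocess

def scale_sql_db(resource_group: str, server: str, database: str, tier: str) -> None:
    """Scale a single Azure SQL database by shelling out to the Azure CLI.

    The update blocks until the operation completes; expect a brief
    connectivity blip near the end of the scale operation.
    """
    subprocess.run(
        [
            "az", "sql", "db", "update",
            "--resource-group", resource_group,
            "--server", server,
            "--name", database,
            "--service-objective", tier,  # e.g. S0, S3, P1 (DTU-based tiers)
        ],
        check=True,
    )
```

Wiring this to a scheduler (cron, an Azure Function on a timer, etc.) already gets you time-based scaling; reacting to load requires reading metrics as well.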
I have to say that lately, for Azure SQL elastic pools, we are seeing extremely fast tier scaling (i.e. < 1 min) with databases in the 100-200 GB range, so the Azure team has probably come a long way in improving tier changes since 2017...
For MySQL Flexible Server, I've seen that scaling almost never takes less than 4-5 minutes, even for small servers. But this is a very new service; I am sure it will get better with time.
This downtime is probably why Azure did not add out-of-the-box autoscaling, and instead provides users with metrics and APIs so they can choose when and how to scale according to their business needs and applications. Again, depending on your business case and workload, those downtimes might be tolerable if properly handled (at specific times of the day, etc.)
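To make that concrete: the core of a homegrown autoscaler built on those metrics is just a threshold check over recent samples. A minimal sketch of the decision step (the tier list and thresholds are illustrative; fetching `dtu_consumption_percent` from Azure Monitor and applying the change through the management API are left out):

```python
from statistics import mean

SCALE_UP_PCT = 85    # scale up when recent average DTU usage exceeds this
SCALE_DOWN_PCT = 60  # scale down when recent average DTU usage drops below this
# Subset of valid DTU sizes for a Standard elastic pool, for illustration.
DTU_TIERS = [50, 100, 200, 300, 400, 800]

def decide(current_dtu: int, recent_dtu_pct: list[float]) -> int:
    """Return the DTU size the pool should move to (or stay at)."""
    idx = DTU_TIERS.index(current_dtu)
    # Sustained high load (last 3 samples): step one tier up.
    if mean(recent_dtu_pct[-3:]) > SCALE_UP_PCT and idx + 1 < len(DTU_TIERS):
        return DTU_TIERS[idx + 1]
    # Sustained low load (last 5 samples): step one tier down.
    if mean(recent_dtu_pct[-5:]) < SCALE_DOWN_PCT and idx > 0:
        return DTU_TIERS[idx - 1]
    return current_dtu  # stay put
```

A real tool also needs cooldowns (so you don't flap between tiers) and time windows (so scale-downs don't hit peak hours) - which is exactly what the configuration further down expresses.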
For example, for our development and staging environments we are using this (disclaimer: I built it):
https://github.com/david-garcia-garcia/azureautoscalerapp
and have set up rules that cater to our staging environment needs: the pool scales automatically between 20 DTU and 800 DTU according to real usage. DTUs are kept at a minimum of 50 between 6:00 and 18:00 to reduce disruption. Provisioned storage also scales up and down automatically (in the staging pools, databases get added and removed automatically all the time; some are small, others several hundred GB).
It does have a downtime, but it is so small that properly educating our QA team allowed us to cut our MSSQL costs by more than half.
```yaml
Resources:
  myserver_pools:
    ResourceId: "/subscriptions/xxx/resourceGroups/mygroup/providers/Microsoft.Sql/servers/myserver/pool/{.*}"
    Frequency: 5m
    ScalingConfigurations:
      Baseline:
        ScaleDownLockWindowMinutes: 50
        ScaleUpAllowWindowMinutes: 50
        Metrics:
          dtu_consumption_percent:
            Name: dtu_consumption_percent
            Window: 00:05
          storage_used:
            Name: storage_used
            Window: 00:05
        TimeWindow:
          Days: All
          Months: All
          StartTime: "00:00"
          EndTime: "23:59"
          TimeZone: Romance Standard Time
        ScalingRules:
          autoadjust:
            ScalingStrategy: Autoadjust
            Dimension: Dtu
            ScaleUpCondition: "(data) => data.Metrics[\"dtu_consumption_percent\"].Values.Select(i => i.Average).Take(3).Average() > 85" # Average DTU > 85% for 3 minutes
            ScaleDownCondition: "(data) => data.Metrics[\"dtu_consumption_percent\"].Values.Select(i => i.Average).Take(5).Average() < 60" # Average DTU < 60% for 5 minutes
            ScaleUpTarget: "(data) => data.NextDimensionValue(1)" # You could also specify a DTU number manually, and the system will find the closest valid tier
            ScaleDownTarget: "(data) => data.PreviousDimensionValue(1)" # You could also specify a DTU number manually, and the system will find the closest valid tier
            ScaleUpCooldownSeconds: 180
            ScaleDownCoolDownSeconds: 3600
            DimensionValueMax: "800"
            DimensionValueMin: "50"
      OfficeHours: # second configuration, scoped to office hours
        TimeWindow:
          Days: ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
          Months: All
          StartTime: "06:00"
          EndTime: "17:00"
          TimeZone: Romance Standard Time
        ScalingRules:
          # Warm up things for office hours
          minimum_office_hours:
            ScalingStrategy: Fixed
            Dimension: Dtu
            ScaleTarget: "(data) => (50).ToString()"
          # Always keep 100 GB or 25% extra space, whichever is greater.
          fixed:
            ScalingStrategy: Fixed
            Dimension: MaxDataBytes
            ScaleTarget: "(data) => (Math.Max(data.Metrics[\"storage_used\"].Values.First().Average.Value + (100.1*1024*1024*1024), data.Metrics[\"storage_used\"].Values.First().Average.Value * 1.25)).ToString()"
```
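For clarity, that last `ScaleTarget` expression keeps whichever storage headroom is larger: roughly 100 GB flat, or 25% of the space currently used. A Python equivalent of the C# lambda:

```python
GIB = 1024 ** 3  # bytes per GiB, matching the 1024*1024*1024 in the expression

def storage_target_bytes(used_bytes: float) -> float:
    """Target provisioned storage: used + ~100 GB, or used * 1.25,
    whichever gives the larger margin (mirrors the ScaleTarget above)."""
    return max(used_bytes + 100.1 * GIB, used_bytes * 1.25)
```

The flat ~100 GB margin dominates for smaller databases; past roughly 400 GB of used space, the 25% rule takes over.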