Yes, this setting is simple and effective.
The only concern is that the chosen ratio may be too small, so the final learning rate ends up close to 0 and the model stops improving too early. For example, start_lr = 1e-2, ratio = 1e-4 ➜ final_lr = 1e-6.
Perhaps you could widen the range of ratio a bit, for example:
ratio = trial.suggest_loguniform("lr_ratio", 1e-2, 0.5)
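For reference, here is a minimal sketch of how the sampled ratio could be wired into a study, assuming Optuna with a PyTorch CosineAnnealingLR scheduler where the final learning rate is realized as eta_min = start_lr * ratio; the model, training loop, and returned value are only placeholders, not your actual setup.

```python
import optuna
import torch


def objective(trial):
    # Sample the initial learning rate and the ratio on a log scale.
    start_lr = trial.suggest_loguniform("start_lr", 1e-4, 1e-1)
    # Widened range so final_lr = start_lr * ratio does not collapse to ~0.
    ratio = trial.suggest_loguniform("lr_ratio", 1e-2, 0.5)

    model = torch.nn.Linear(10, 1)  # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=start_lr)
    # The final learning rate is start_lr * ratio via eta_min.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=100, eta_min=start_lr * ratio
    )

    for _ in range(100):  # placeholder training loop
        optimizer.step()
        scheduler.step()

    return 0.0  # placeholder objective value


study = optuna.create_study()
study.optimize(objective, n_trials=20)
```

(Newer Optuna versions also offer trial.suggest_float("lr_ratio", 1e-2, 0.5, log=True) as an equivalent way to write this.)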
You can adjust the range to suit your experiments.
Please take this as a reference. Thank you.