If I remember correctly, this is what helped me:
If I delete the model, the GPU memory can be reused for the second model:
# model_1 training
del model_1
# model_2 training works
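A minimal runnable sketch of that pattern (the nn.Linear models and sizes are placeholders, not the original models): deleting the only reference to model_1 lets PyTorch's caching allocator reuse its memory for model_2.
import gc
import torch
import torch.nn as nn

device = "cuda"

model_1 = nn.Linear(4096, 4096).to(device)   # hypothetical large model
# ... model_1 training ...

del model_1                  # drop the last reference to the parameters
gc.collect()                 # make sure Python actually frees them
torch.cuda.empty_cache()     # optional: return cached blocks to the driver

model_2 = nn.Linear(4096, 4096).to(device)   # now fits in the freed memory
# ... model_2 training works ...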
If I instead try to keep the model with a deep copy, the copy still holds its parameters on the GPU, so that memory cannot be reused:
import copy
# model_1 training
model_1_save = copy.deepcopy(model_1)
del model_1
# model_2 training memory error
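You can see why with torch.cuda.memory_allocated(): copy.deepcopy copies the parameters on the same device, so the allocation survives deleting the original. The layer size below is just an arbitrary example.
import copy
import gc
import torch
import torch.nn as nn

model_1 = nn.Linear(4096, 4096).to("cuda")
print(torch.cuda.memory_allocated())           # ~64 MB for the weight alone

model_1_save = copy.deepcopy(model_1)          # copy's parameters are also on the GPU
del model_1
gc.collect()

print(torch.cuda.memory_allocated())           # still ~64 MB: the copy holds it
print(next(model_1_save.parameters()).device)  # cuda:0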
If I want to keep the first model for later use and still train a second model on the GPU, I move it to the CPU first:
# model_1 training
model_1.to("cpu")
# model_2 training works
model_2.to("cpu")
model_1.to("cuda")
# model_1 continuing training works
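Here is a sketch of that "park it on the CPU" pattern with placeholder models and training steps. One caveat I would watch for: if you also keep an optimizer for model_1, its state tensors stay on the GPU unless you move or rebuild the optimizer as well.
import torch
import torch.nn as nn

device = "cuda"

model_1 = nn.Linear(4096, 4096).to(device)
# ... model_1 training ...

model_1.to("cpu")                # parameters leave the GPU
torch.cuda.empty_cache()         # optional: shrink the cached pool

model_2 = nn.Linear(4096, 4096).to(device)
# ... model_2 training works ...

model_2.to("cpu")                # park model_2 in turn
model_1.to(device)               # bring model_1 back
# ... model_1 continues training ...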