Based on the information provided, it is difficult to draw any definitive conclusions, but several factors could contribute to this issue.
One possible reason is overfitting: the model may be too specialized to the training set, which leads to poor generalization. Note, though, that the L2 regularization term appears relatively high for such a simple model, and heavy regularization pushes in the opposite direction, toward underfitting, so it is worth checking which side of that trade-off you are on.
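One way to tell the two apart is to sweep the regularization strength and compare train vs. validation scores. The sketch below is purely illustrative (synthetic data, a logistic regression stand-in for your model); a large train-validation gap suggests overfitting, while low scores on both sides suggest over-regularization/underfitting.

```python
# Illustrative only: synthetic data and model, not the original setup.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# In scikit-learn, C is the INVERSE of L2 strength: small C = heavy regularization.
for C in [0.001, 0.1, 10.0]:
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
    gap = clf.score(X_tr, y_tr) - clf.score(X_va, y_va)
    print(f"C={C}: train acc={clf.score(X_tr, y_tr):.3f}, "
          f"val acc={clf.score(X_va, y_va):.3f}, gap={gap:+.3f}")
```

If the heavily regularized setting (small `C`) scores poorly on both splits, the high L2 term is the more likely culprit than overfitting.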
Another potential cause is that the training data is unrepresentative of what the model sees at evaluation time.
A closely related issue is distribution shift between the train and test sets.
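A quick, hedged way to check for such a shift is "adversarial validation": train a classifier to distinguish train rows from test rows. If it does much better than chance (AUC well above 0.5), the two distributions differ. Everything below is a synthetic sketch, including the artificial mean shift.

```python
# Hypothetical sketch: the shifted mean in X_test simulates drift.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(400, 5))
X_test = rng.normal(0.8, 1.0, size=(400, 5))  # shifted mean = simulated drift

# Label rows by origin (0 = train, 1 = test) and try to tell them apart.
X = np.vstack([X_train, X_test])
y = np.r_[np.zeros(len(X_train)), np.ones(len(X_test))]

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"train-vs-test AUC: {auc:.3f}")  # ~0.5 means no detectable shift
```

On your real data, an AUC near 0.5 would let you rule this explanation out; a high AUC would point at which features drifted (via the classifier's coefficients).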
Ultimately, it is challenging to determine the exact cause due to the limited amount of information shared.