I would recommend freezing the parameters of the base_model, otherwise they will be modified by the training process. Since your last layers are not trained yet, their gradients would probably have a negative impact on the pretrained weights.
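Here is a minimal sketch of what that could look like, assuming a Keras application model (ResNet50 here, as a stand-in for whatever base_model you use) with a small binary classification head on top; the names and shapes are placeholders for your own setup:

```python
import tensorflow as tf
from tensorflow import keras

# Pretrained backbone; swap in whatever base_model you are actually using.
base_model = keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base_model.trainable = False  # freeze all pretrained weights

inputs = keras.Input(shape=(224, 224, 3))
x = base_model(inputs, training=False)  # keep BatchNorm layers in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)  # e.g. tumor / no tumor
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=10, validation_data=val_ds)
```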
If you want to finetune them after training your custom layers, you can unfreeze the base_model layers and train for one or two epochs with a very low learning rate.
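Continuing the sketch above (again, hyperparameters are just illustrative), the fine-tuning step would roughly be:

```python
# Unfreeze the pretrained weights and re-compile with a much lower learning
# rate, so the short fine-tuning run only nudges them slightly.
base_model.trainable = True

model.compile(optimizer=keras.optimizers.Adam(1e-5),  # very low learning rate
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=2, validation_data=val_ds)
```

Re-compiling after changing `trainable` is important, because the frozen/unfrozen state is baked in at compile time.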
Anyway, it is possible that the base_model does not work well with your data, as it was trained for a completely different task on data from a different distribution, namely natural RGB photos rather than brain tumor images.
See: https://keras.io/guides/transfer_learning/#transfer-learning-amp-finetuning