I saw from your post that you haven't tried regularization techniques; regularization can help reduce the bias in your model. Beyond that, the following intuition should give you a better understanding:
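As a minimal sketch of what regularization does in practice, here is L2 (ridge) regularization for a linear model, using the closed-form solution. The data, the penalty strength `lam`, and the helper `ridge_fit` are all illustrative assumptions, not anything from your setup:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_plain = ridge_fit(X, y, 0.0)   # ordinary least squares
w_reg = ridge_fit(X, y, 5.0)     # with an L2 penalty

# The penalty shrinks the coefficients toward zero.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_plain))
```

The same idea carries over to neural networks as weight decay; only the optimization method changes.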
Data imbalance is more about feature imbalance than class imbalance.
The figure (showing a model on a 1-D feature vector) illustrates that oversampling, or generating data points by k-nearest-neighbour interpolation (SMOTE), only adds points in the vicinity of the already known data space and does not cover the unknown sample space shown.
Based on the above intuition, there are a number of things you can do to decrease your model's bias:
You can generate artificial data points from the sample space using SMOTE (with a very large value of k), but to assign them labels, train a separate model on the real data, ideally a heavy neural network such as ResNet18, and use it to label the artificial points. (Heavy models are able to capture complex patterns.)
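The step above can be sketched in plain numpy. Here `smote_like` is a hand-rolled stand-in for SMOTE's neighbour interpolation (not the `imblearn` implementation), and a nearest-centroid classifier stands in for the heavy labelling model such as ResNet18; the toy data and every parameter value are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 2-class data: class 0 is the majority, class 1 the minority.
X_major = rng.normal(loc=0.0, size=(100, 2))
X_minor = rng.normal(loc=3.0, size=(15, 2))
X = np.vstack([X_major, X_minor])
y = np.array([0] * 100 + [1] * 15)

def smote_like(X_min, n_new, k, rng):
    # SMOTE-style interpolation: pick a minority point, pick one of its
    # k nearest minority neighbours, and sample a point on the segment
    # between them. A large k lets synthetic points range further from
    # each seed point, covering more of the sample space.
    dists = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    neighbours = np.argsort(dists, axis=1)[:, 1:k + 1]  # skip self
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = rng.choice(neighbours[i])
        t = rng.random()
        new.append(X_min[i] + t * (X_min[j] - X_min[i]))
    return np.array(new)

X_synth = smote_like(X_minor, n_new=50, k=10, rng=rng)

# Stand-in for the "heavy model": a nearest-centroid classifier trained
# on the real data, used only to relabel the synthetic points.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
y_synth = np.argmin(
    np.linalg.norm(X_synth[:, None] - centroids[None, :], axis=2), axis=1)
print(X_synth.shape, y_synth.shape)
```

In a real pipeline you would swap the centroid classifier for the trained network and feed the relabelled synthetic points back into training.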
Under-sampling: remove the data at the far boundary of the feature space. Compute an outlier score for each data point as its average Euclidean distance to all other points, and remove the points with the highest scores. This will decrease the bias in the dataset.
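This under-sampling rule is short enough to show directly; the data and the 5% drop fraction here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4))

# Outlier score: average Euclidean distance from each point to all others.
dists = np.linalg.norm(X[:, None] - X[None, :], axis=2)
scores = dists.mean(axis=1)

# Drop the 5% of points with the highest scores (furthest from the rest).
n_drop = int(0.05 * len(X))
keep = np.argsort(scores)[:len(X) - n_drop]
X_trimmed = X[keep]
print(X_trimmed.shape)  # (190, 4)
```

Note the pairwise-distance matrix is O(n^2) in memory, so for large datasets you would compute the scores in chunks or use an approximate neighbour structure instead.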