As established when this question was posted, XGBoost's built-in feature importance cannot provide this information. However, it is possible to use an external explainer to extract it.
I have used the SHAP TreeExplainer this way:
1. Train the XGBClassifier on all cohorts.
2. Pass the trained classifier to a SHAP TreeExplainer.
3. Run the explainer on a test dataset that contains only one class.
You still have to extract the feature importance for each class separately, using a separate test dataset per class, but the model itself remains a single multi-class classifier.
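Here is a minimal, self-contained sketch of the idea. The synthetic dataset, the column names, the `class_importance` helper, and the use of mean absolute SHAP value as the importance summary are illustrative choices, not part of the original setup.

    import numpy as np
    import pandas as pd
    import shap
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    # Synthetic multi-class ("multi-cohort") data stands in for the real dataset.
    X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                               n_classes=3, random_state=0)
    X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # 1. Train the XGBClassifier on all cohorts (it stays a multi-classifier).
    model = XGBClassifier().fit(X_train, y_train)

    # 2. Pass the trained classifier to a SHAP TreeExplainer.
    explainer = shap.TreeExplainer(model)

    def class_importance(target_class):
        # 3. Run the explainer on a test subset containing only one class.
        X_single = X_test[y_test == target_class]
        shap_values = explainer.shap_values(X_single)
        # Multi-class output is a list of per-class arrays in older SHAP
        # releases, or a 3-D array (samples, features, classes) in newer ones.
        if isinstance(shap_values, np.ndarray) and shap_values.ndim == 3:
            shap_values = [shap_values[:, :, k]
                           for k in range(shap_values.shape[2])]
        # Mean absolute SHAP value per feature, for the class we filtered on.
        return np.abs(shap_values[target_class]).mean(axis=0)

    # One importance column per class, computed from that class's own test subset.
    per_class = pd.DataFrame({c: class_importance(c) for c in np.unique(y_test)},
                             index=X_test.columns)
    print(per_class)

The resulting table lets you compare which features drive each class, even though the underlying model was trained on all classes at once.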
This works because the SHAP values are computed on the test dataset: if that dataset contains only one class, the aggregated feature importance reflects that class. Any explainer that is evaluated on a test dataset should work the same way.