79567141

Date: 2025-04-10 16:16:52
Score: 1.5
Natty:
Report link

As established when this question was posted, the built-in feature importance cannot provide this information. However, it is possible to use external explainers to extract it.

I have used the SHAP TreeExplainer this way:

  1. Train an XGBClassifier on several cohorts (classes).

  2. Pass the trained classifier to a SHAP TreeExplainer.

  3. Run the explainer on a test dataset that contains only one class.

You still have to extract the feature importance for each class separately, using a separate test subset per class, but the model itself remains a single multi-class classifier.

This works because the importance values are computed on the test dataset: if that dataset contains only one class, the resulting importances relate to that class. Any explainer that is evaluated on a test dataset should work the same way.
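A minimal sketch of the three steps above, assuming a multi-class XGBClassifier trained on a pandas DataFrame; the Iris data, the filtered class label, and the importance aggregation are placeholders, not part of the original answer:

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the multi-class model on all cohorts/classes.
model = xgb.XGBClassifier(n_estimators=100)
model.fit(X_train, y_train)

# 2. Pass the trained classifier to a SHAP TreeExplainer.
explainer = shap.TreeExplainer(model)

# 3. Run the explainer on a test subset containing only one class
#    (here class 1); repeat with a different subset for each class.
X_one_class = X_test[y_test == 1]
shap_values = explainer.shap_values(X_one_class)

# Depending on the shap version, shap_values is either a list with one
# array per class or a single array with a trailing class dimension.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Mean absolute SHAP value per feature = importance for class 1.
importance = np.abs(vals).mean(axis=0)
print(dict(zip(X.columns, importance)))
```

Running the same loop with `y_test == 0` and `y_test == 2` gives the per-class importances for the other classes while keeping the single multi-class model unchanged.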

Reasons:
  • Long answer (-0.5):
  • No code block (0.5):
  • Unregistered user (0.5):
  • Low reputation (1):
Posted by: MaRoWi