MLflow provides a module called mlflow.shap that can automatically compute SHAP explanations and log a Shapley feature importance plot using the SHAP library. The mlflow.shap.log_explanation function takes a predict function and a sample of input data, and computes the SHAP values and base values for the model's predictions. It then logs the following artifacts to the current run:
- Base values
- SHAP values (computed using shap.KernelExplainer)
- Summary bar plot, which shows the average impact of each feature on the model output

mlflow.shap.log_explanation can be used with any model that exposes a predict function, such as scikit-learn or XGBoost models; see the sketch below.
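Here is a minimal sketch of how that might look, assuming a scikit-learn regressor trained on the diabetes toy dataset; the model, dataset, and sample size are illustrative choices, not prescribed by MLflow:

```python
import mlflow
import mlflow.shap  # requires the shap package to be installed
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train any model that exposes a predict function.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

with mlflow.start_run():
    # log_explanation runs shap.KernelExplainer under the hood, which is
    # slow on large inputs, so pass only a small sample of rows.
    mlflow.shap.log_explanation(model.predict, X.sample(50, random_state=0))
```

After the run completes, the base values, SHAP values, and summary bar plot are stored as artifacts of that run and can be browsed in the MLflow UI.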
References:
- mlflow.shap — MLflow 2.9.1 documentation
- Extract feature importance from a mlflow 1.9 PyFuncModel model
- load MLFlow model and plot feature importance with feature names