
Model interpretability


What are some ways you can provide reasoning for ML outcomes (i.e., interpret ML predictions)?

Some of the things we could do include:

  • Choose simpler, inherently interpretable models (e.g., logistic regression coefficients, shallow decision trees); see the first sketch after this list.

  • Partial dependence plots, which estimate the marginal effect of a feature on the prediction while averaging over the remaining features (see the second sketch after this list).

  • Use SHAP values to explain how specific predictions are made; these are particularly useful for observation-level (local) interpretability (see the third sketch after this list).
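
A minimal sketch of the first approach: fit a simple linear model and read its coefficients. The dataset and the choice to show the top five coefficients are illustrative, not part of the original answer.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Standardize so coefficient magnitudes are comparable across features.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]

# Largest absolute coefficients = strongest effect on the log-odds.
top = np.argsort(np.abs(coefs))[::-1][:5]
for i in top:
    print(f"{X.columns[i]:<25} {coefs[i]:+.3f}")
```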

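A sketch of a partial dependence plot, assuming scikit-learn >= 1.0 (which provides PartialDependenceDisplay.from_estimator); the dataset, model, and the "bmi"/"bp" feature choices are illustrative.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Marginal effect of "bmi" and "bp" on the prediction,
# averaging over the remaining features.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
plt.show()
```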

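A sketch of observation-level explanations with SHAP, assuming the shap package is installed (pip install shap); the model and dataset are again illustrative.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Per-feature contributions for the first observation; together with the
# expected value (baseline) they add up to that observation's prediction.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name:<6} {value:+.2f}")
print("baseline (expected value):", explainer.expected_value)
```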