
What are some ways you can provide reasoning for ML outcomes (i.e., interpret ML predictions)?

Some of the things we could do include:

Choose inherently interpretable models (e.g., logistic regression with its coefficients, shallow decision trees, etc.)
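As a minimal sketch of the first option (assuming scikit-learn and its bundled breast-cancer dataset), the fitted coefficients of a logistic regression can be read directly as per-feature effects on the log-odds:

```python
# Sketch: interpreting logistic regression coefficients (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Standardize so coefficient magnitudes are comparable across features.
X_scaled = StandardScaler().fit_transform(X)
model = LogisticRegression(max_iter=1000).fit(X_scaled, y)

# Each coefficient is the change in log-odds per one-standard-deviation
# increase in that feature, holding the others fixed.
top = sorted(zip(X.columns, model.coef_[0]), key=lambda t: abs(t[1]), reverse=True)
for name, coef in top[:5]:
    print(f"{name}: {coef:+.3f}")
```

Standardizing first matters: without it, coefficient sizes reflect feature scales rather than importance.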

Partial dependence plots (plots that estimate the marginal effect of one feature on the prediction by sweeping it over a grid while averaging over the observed values of the other features).

Use SHAP values to explain how specific predictions are made (particularly useful for observation-level interpretability).
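In practice one would use the `shap` library, but the idea behind it can be illustrated with a brute-force exact Shapley computation on a toy linear model (everything below is an illustrative assumption, not the library's implementation; "absent" features are replaced by their background means):

```python
# Sketch: exact Shapley values for one prediction, computed by brute force.
# The `shap` library approximates this efficiently; this shows the definition.
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(predict, x, background):
    """Shapley value per feature: the weighted average marginal contribution
    of adding that feature to every possible coalition of the others."""
    n = len(x)
    base = background.mean(axis=0)

    def payoff(coalition):
        # Features in the coalition take the instance's values; the rest
        # are filled with background means.
        z = base.copy()
        z[list(coalition)] = x[list(coalition)]
        return predict(z.reshape(1, -1))[0]

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (payoff(S + (i,)) - payoff(S))
    return phi

# Toy linear model: here Shapley values reduce to coef * (x - background mean).
coefs = np.array([2.0, -1.0, 0.5])
predict = lambda X: X @ coefs
background = np.random.default_rng(0).normal(size=(100, 3))
x = np.array([1.0, 2.0, -1.0])

phi = shapley_values(predict, x, background)
# Efficiency property: the contributions sum to prediction minus baseline.
baseline = predict(background.mean(axis=0).reshape(1, -1))[0]
assert np.isclose(phi.sum(), predict(x.reshape(1, -1))[0] - baseline)
```

The brute force is exponential in the number of features, which is exactly why libraries use model-specific shortcuts (e.g., TreeSHAP for tree ensembles).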
