
SHAP & LIME

SHAP theory (Shapley values from game theory), TreeSHAP, KernelSHAP, DeepSHAP, LIME (local linear approximations), anchors, and when to use each

~50 min

SHAP & LIME: Explaining Individual Predictions

Feature importance tells you which features matter globally, but it does not explain why the model made a specific prediction. SHAP and LIME are the two most important methods for local (instance-level) explanations.

This lesson covers both methods in depth, their theoretical foundations, variants, and practical guidance on when to use each.

Global vs Local Explanations

Global explanations (PDP, permutation importance) describe the model's overall behavior. Local explanations (SHAP, LIME) explain a single prediction. Both are essential: global methods reveal patterns; local methods justify individual decisions. SHAP uniquely bridges both by providing local explanations that aggregate to global importance.

SHAP: SHapley Additive exPlanations

SHAP is based on Shapley values from cooperative game theory, introduced by Lloyd Shapley in 1953 (Nobel Memorial Prize in Economics, 2012). The idea: treat features as "players" in a cooperative game and fairly distribute the "payout" (the prediction) among them.

Shapley Value Properties (Axioms)

Shapley values are the only attribution method that satisfies all four desirable properties:

| Property | Meaning |
| --- | --- |
| Efficiency | Feature contributions sum exactly to the prediction minus the baseline |
| Symmetry | Features with identical contributions get equal attribution |
| Dummy | Features that never change the prediction get zero attribution |
| Linearity | For a sum of models, Shapley values add linearly |

Computing Shapley Values

For each feature i, the Shapley value is the average marginal contribution of feature i across all possible subsets of other features:

phi_i = sum over all subsets S not containing i:
    [|S|! * (|F|-|S|-1)! / |F|!] * [f(S union {i}) - f(S)]

This is exponentially expensive (2^n subsets for n features), so SHAP uses efficient approximations.
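For a handful of features, the sum above can be evaluated directly. The sketch below brute-forces the formula; the `exact_shapley` helper is hypothetical, and replacing out-of-coalition features with background (mean) values is just one common way to define f(S). A toy linear model makes the answer checkable, since its Shapley values are known in closed form:

```python
import itertools
import math
import numpy as np

def exact_shapley(predict_fn, x, background, n_features):
    """Brute-force Shapley values via the subset formula.
    Features outside a coalition S are replaced by background
    (mean) values -- one common way to define f(S) for a model.
    """
    def value(subset):
        z = background.copy()
        idx = list(subset)
        z[idx] = x[idx]
        return predict_fn(z.reshape(1, -1))[0]

    phi = np.zeros(n_features)
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for S in itertools.combinations(others, size):
                # |S|! * (|F|-|S|-1)! / |F|!
                weight = (math.factorial(size)
                          * math.factorial(n_features - size - 1)
                          / math.factorial(n_features))
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# Toy linear model: Shapley values are w_i * (x_i - background_i)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
w = np.array([2.0, -1.0, 0.5, 0.0])  # last feature is a "dummy"
predict = lambda Z: Z @ w

x, background = X[0], X.mean(axis=0)
phi = exact_shapley(predict, x, background, 4)
print(phi)
# Efficiency: sum(phi) == f(x) - f(background); Dummy: phi[3] == 0
```

With 4 features this is only 8 subsets per feature; with 40 features it would be over a trillion, which is exactly why TreeSHAP and KernelSHAP exist.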

SHAP Variants

| Variant | Best for | Speed | Exactness |
| --- | --- | --- | --- |
| TreeSHAP | Tree models (RF, XGBoost, LightGBM) | Very fast (polynomial) | Exact |
| KernelSHAP | Any model | Slow (sampling-based) | Approximate |
| DeepSHAP | Deep learning (PyTorch/TF) | Fast (backprop-based) | Approximate |
| LinearSHAP | Linear models | Instant | Exact |

```python
import numpy as np
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# --- Setup ---
housing = fetch_california_housing()
X, y = housing.data, housing.target
feature_names = housing.feature_names

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

gbr = GradientBoostingRegressor(
    n_estimators=200, max_depth=4, random_state=42
)
gbr.fit(X_train, y_train)

# --- TreeSHAP (exact, fast for tree models) ---
explainer = shap.TreeExplainer(gbr)
shap_values = explainer.shap_values(X_test[:100])

print("=== SHAP Analysis ===")
print(f"Base value (E[f(x)]): {explainer.expected_value:.4f}")
print(f"SHAP values shape: {shap_values.shape}")

# --- Explain a single prediction ---
idx = 0
prediction = gbr.predict(X_test[idx:idx+1])[0]
print(f"\n--- Explaining prediction for sample {idx} ---")
print(f"Predicted price: ${prediction * 100000:.0f}")
print(f"Base value:      ${explainer.expected_value * 100000:.0f}")
print("\nFeature contributions:")

contributions = list(zip(feature_names, shap_values[idx], X_test[idx]))
contributions.sort(key=lambda x: abs(x[1]), reverse=True)

for name, shap_val, feat_val in contributions:
    # the '+' format flag already prints the contribution's sign
    print(f"  {name:>12} = {feat_val:>8.2f}  ->  "
          f"{shap_val * 100000:>+10.0f}")

# Verify efficiency property: contributions sum to prediction - base
total = sum(sv for _, sv, _ in contributions)
print(f"\nSum of SHAP values: {total:.4f}")
print(f"Prediction - base:  {prediction - explainer.expected_value:.4f}")
print(f"Match: {abs(total - (prediction - explainer.expected_value)) < 0.001}")

# --- Global importance from SHAP ---
print("\n=== Global Feature Importance (mean |SHAP|) ===")
global_importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, global_importance),
                        key=lambda x: x[1], reverse=True):
    bar = "#" * int(imp * 30)
    print(f"  {name:>12}: {imp:.4f}  {bar}")
```

LIME: Local Interpretable Model-agnostic Explanations

LIME explains individual predictions by fitting a simple, interpretable model (usually linear regression) in the local neighborhood of the instance being explained.

How LIME Works

1. Perturb the instance: generate many similar samples by slightly modifying features
2. Weight the perturbed samples by their proximity to the original instance (using a kernel)
3. Predict with the black-box model on all perturbed samples
4. Fit a simple linear model on the weighted (perturbed input, black-box prediction) pairs
5. The linear model's coefficients are the local feature attributions
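These steps fit in a few lines of NumPy and scikit-learn. The `lime_sketch` function below is an illustrative reimplementation for tabular regression under simplifying assumptions (Gaussian perturbations scaled by the training data, an RBF proximity kernel, a ridge surrogate), not the lime library itself:

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(predict_fn, x, X_train, n_samples=2000, kernel_width=None, seed=0):
    """Minimal LIME-style local surrogate for tabular regression."""
    rng = np.random.default_rng(seed)
    scale = X_train.std(axis=0)
    if kernel_width is None:
        kernel_width = np.sqrt(x.shape[0]) * 0.75  # heuristic default
    # 1. perturb the instance
    Z = x + rng.normal(size=(n_samples, x.shape[0])) * scale
    # 2. weight samples by proximity (RBF kernel on standardized distance)
    dist = np.sqrt((((Z - x) / scale) ** 2).sum(axis=1))
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 3. query the black box on the perturbed samples
    yz = predict_fn(Z)
    # 4. fit a weighted linear surrogate
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, yz, sample_weight=weights)
    # 5. coefficients are the local attributions
    return surrogate.coef_, surrogate.intercept_

# Nonlinear black box: the local slope at x should be about [2*x0, 3]
black_box = lambda Z: Z[:, 0] ** 2 + 3 * Z[:, 1]
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 2))
x = np.array([1.0, 0.5])
coef, intercept = lime_sketch(black_box, x, X_train)
print(coef)
```

On this quadratic-plus-linear function the surrogate's coefficients approximate the local gradient at x, which is exactly what a LIME explanation claims to be.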

LIME Strengths and Weaknesses

| Strengths | Weaknesses |
| --- | --- |
| Truly model-agnostic | Explanations can be unstable (vary across runs) |
| Intuitive (linear approximation) | Sensitive to kernel width and neighborhood size |
| Works with any data type (tabular, text, images) | Does not satisfy Shapley axioms |
| Fast for single explanations | Cannot guarantee consistency |
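The kernel-width sensitivity is easy to demonstrate: fit the same weighted linear surrogate to a curved function with a narrow and a wide kernel, and the "local" slope changes substantially. A self-contained sketch (the function f(z) = z**3 and the `local_slope` helper are illustrative, not taken from lime):

```python
import numpy as np

def local_slope(x0, width, n=4000, seed=0):
    """Weighted least-squares slope of f(z) = z**3 around x0,
    using an RBF proximity kernel of the given width."""
    rng = np.random.default_rng(seed)
    z = x0 + rng.normal(size=n) * 2.0           # wide cloud of perturbations
    w = np.exp(-((z - x0) ** 2) / width ** 2)   # proximity weights
    y = z ** 3                                  # "black-box" predictions
    zm, ym = np.average(z, weights=w), np.average(y, weights=w)
    return (np.average((z - zm) * (y - ym), weights=w)
            / np.average((z - zm) ** 2, weights=w))

# True derivative of z**3 at z = 1 is 3; a wide kernel drifts far from it
narrow, wide = local_slope(1.0, width=0.5), local_slope(1.0, width=4.0)
print(f"narrow kernel: {narrow:.2f}, wide kernel: {wide:.2f}")
```

Both numbers are "the" LIME-style explanation of the same prediction; only the neighborhood definition changed.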

Anchors: Rule-Based Explanations

Anchors extend LIME by finding if-then rules that "anchor" the prediction. An anchor is a sufficient condition: if the anchor conditions hold, the prediction is (almost) guaranteed to be the same, regardless of other features.

Example: "IF income > $50k AND employment = 'full-time' THEN loan approved (precision: 97%)"
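The precision of a candidate anchor can be estimated by sampling: pin the anchored features to the instance's values, resample everything else, and measure how often the prediction stays the same. A minimal sketch (the `anchor_precision` helper and toy classifier are illustrative; the real anchors algorithm additionally searches for the smallest rule meeting a precision threshold):

```python
import numpy as np

def anchor_precision(predict_fn, x, anchor_idx, X_background, n_samples=1000, seed=0):
    """Estimate an anchor rule's precision by perturbation:
    features in anchor_idx stay fixed at x's values, the rest
    are resampled from background data."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, len(X_background), size=n_samples)
    Z = X_background[rows].copy()
    Z[:, anchor_idx] = x[anchor_idx]          # hold anchored features fixed
    target = predict_fn(x.reshape(1, -1))[0]
    return float(np.mean(predict_fn(Z) == target))

# Toy classifier that only looks at feature 0
clf = lambda Z: (Z[:, 0] > 0).astype(int)
rng = np.random.default_rng(0)
X_bg = rng.normal(size=(500, 3))
x = np.array([1.5, -2.0, 0.3])

strong = anchor_precision(clf, x, [0], X_bg)  # anchoring the decisive feature
weak = anchor_precision(clf, x, [1], X_bg)    # anchoring an irrelevant one
print(f"anchor on feature 0: precision {strong:.2f}")
print(f"anchor on feature 1: precision {weak:.2f}")
```

Anchoring the feature the model actually uses yields precision near 1.0; anchoring an irrelevant feature leaves the prediction at the mercy of the resampled features, so precision collapses toward the base rate.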

```python
import numpy as np
import lime
import lime.lime_tabular
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# --- Setup ---
housing = fetch_california_housing()
X, y = housing.data, housing.target
feature_names = list(housing.feature_names)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

gbr = GradientBoostingRegressor(
    n_estimators=200, max_depth=4, random_state=42
)
gbr.fit(X_train, y_train)

# --- Create LIME explainer ---
lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    training_data=X_train,
    feature_names=feature_names,
    mode="regression",
    random_state=42,
)

# --- Explain a single prediction ---
idx = 0
instance = X_test[idx]
prediction = gbr.predict(instance.reshape(1, -1))[0]

explanation = lime_explainer.explain_instance(
    instance,
    gbr.predict,
    num_features=8,
    num_samples=5000,
)

print(f"=== LIME Explanation for Sample {idx} ===")
print(f"Predicted price: ${prediction * 100000:.0f}")
print(f"\nLocal model intercept: {explanation.intercept[0]:.4f}")
print(f"Local model R2: {explanation.score:.4f}")
print("\nFeature contributions:")

for feature, weight in explanation.as_list():
    direction = "INCREASES" if weight > 0 else "DECREASES"
    print(f"  {feature:<35} -> {direction} price by "
          f"${abs(weight) * 100000:.0f}")

# --- Compare LIME stability ---
print("\n=== LIME Stability Check ===")
explanations = []
for seed in range(5):
    lime_exp_i = lime.lime_tabular.LimeTabularExplainer(
        X_train, feature_names=feature_names,
        mode="regression", random_state=seed
    )
    exp_i = lime_exp_i.explain_instance(
        instance, gbr.predict, num_features=5, num_samples=2000
    )
    top_features = [f for f, w in exp_i.as_list()[:3]]
    explanations.append(top_features)
    print(f"  Run {seed}: {top_features}")

# Check consistency
all_top1 = [e[0] for e in explanations]
consistent = len(set(all_top1)) == 1
print(f"\nTop feature consistent across runs: {consistent}")
print("Note: LIME can produce different explanations across runs.")
print("SHAP is more stable due to its theoretical guarantees.")
```

SHAP vs LIME: When to Use Each

Use SHAP when you need theoretically grounded, consistent explanations and have tree-based models (TreeSHAP is fast and exact). Use LIME when you need quick, intuitive explanations for any model type, especially for text and image data where LIME's perturbation approach is natural. Use Anchors when stakeholders prefer simple if-then rules.

SHAP Visualization Types

SHAP provides several powerful visualizations:

Summary Plot (Beeswarm)

Shows SHAP values for all features across all samples. Each dot is one sample; x-axis is SHAP value; color indicates feature value. Reveals both importance and effect direction.

Dependence Plot

Scatter plot of one feature's value (x-axis) vs its SHAP value (y-axis), colored by an interacting feature. Reveals non-linear effects and interactions.

Waterfall Plot

For a single prediction, shows how each feature pushes the prediction from the base value to the final prediction. The most intuitive plot for explaining individual decisions.
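When a plotting backend is unavailable, the waterfall logic can be emulated in text: start from the base value and accumulate one contribution at a time, largest magnitude first. A small sketch with made-up contribution values (not computed from the dataset above):

```python
def text_waterfall(base_value, contributions):
    """Print a text waterfall: running total from base value to prediction.
    `contributions` is a list of (feature_name, shap_value) pairs,
    shown largest-magnitude first as in SHAP's waterfall plot."""
    running = base_value
    lines = [f"base value            {running:>8.3f}"]
    for name, val in sorted(contributions, key=lambda p: abs(p[1]), reverse=True):
        running += val
        lines.append(f"{name:<12} {val:>+8.3f} -> {running:>8.3f}")
    lines.append(f"prediction            {running:>8.3f}")
    return lines, running

# Illustrative values only
lines, pred = text_waterfall(
    2.0, [("MedInc", 0.8), ("Latitude", -0.3), ("AveRooms", 0.1)]
)
print("\n".join(lines))
```

By the efficiency axiom, the final running total always equals the model's prediction for the instance, which is what makes the waterfall reading valid.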

Force Plot

Compact visualization showing features pushing the prediction higher (red) or lower (blue) from the base value. Useful for dashboards and reports.