
SHAP Model Interpretability

Skill · Verified · Active

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

Purpose

To enable users to understand and explain machine learning model predictions and behavior using SHAP values, facilitating debugging, bias analysis, and transparent AI implementation.

Features

  • Compute SHAP values for diverse model types
  • Generate various SHAP visualizations (waterfall, beeswarm, etc.)
  • Provide workflows for debugging, fairness analysis, and feature engineering
  • Explain model predictions and feature importance
  • Integrate with MLOps tools and production deployment
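
The right explainer depends on the model family. Below is a minimal selection sketch; the model objects (xgb_model, torch_model, predict_fn) and background data are placeholders standing in for whatever pre-trained model and reference sample you actually have:

  import shap

  # Tree ensembles (XGBoost, LightGBM, Random Forest): fast, exact TreeExplainer
  tree_explainer = shap.TreeExplainer(xgb_model)

  # Deep learning models (TensorFlow / PyTorch): DeepExplainer with a background sample
  deep_explainer = shap.DeepExplainer(torch_model, background_tensor)

  # Any black-box model exposing only a predict function: model-agnostic KernelExplainer
  kernel_explainer = shap.KernelExplainer(predict_fn, background_df)

  # Or let SHAP choose an algorithm automatically
  auto_explainer = shap.Explainer(xgb_model, background_df)

TreeExplainer is typically far faster than the model-agnostic KernelExplainer, so prefer it whenever the model type allows.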

Use Cases

  • Explaining why a model made a specific prediction
  • Visualizing overall feature importance and impact
  • Debugging model behavior and identifying errors
  • Analyzing model fairness and bias across different groups
  • Improving features based on interpretability insights
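
For the fairness use case, one common pattern is to compare mean absolute SHAP attributions between groups defined by a sensitive attribute. The sketch below is hypothetical: the synthetic data, the column names (age, income, group), and the random-forest model are illustrative assumptions, not part of the skill:

  import numpy as np
  import pandas as pd
  import shap
  from sklearn.ensemble import RandomForestClassifier

  # Synthetic data with a hypothetical sensitive attribute "group" (0/1)
  rng = np.random.default_rng(0)
  X = pd.DataFrame({
      "age": rng.integers(18, 70, 500),
      "income": rng.normal(50_000, 15_000, 500),
      "group": rng.integers(0, 2, 500),
  })
  y = (X["income"] + 500 * X["group"] + rng.normal(0, 5_000, 500) > 55_000).astype(int)

  model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
  explainer = shap.TreeExplainer(model)
  shap_values = explainer(X)

  # For binary classifiers the SHAP values may carry one slice per class;
  # keep the positive class if so.
  vals = shap_values.values
  if vals.ndim == 3:
      vals = vals[:, :, 1]

  # Mean absolute attribution per feature, split by group membership
  attrib = pd.DataFrame(np.abs(vals), columns=X.columns)
  print(attrib.groupby(X["group"]).mean())

Large gaps between the two group rows in the printed table point at features whose influence on predictions differs across the sensitive attribute and therefore deserve a closer look.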

Non-Goals

  • Training models (focus is on explaining pre-trained models)
  • Performing model deployment beyond providing explanation strategies
  • Providing alternative explainability methods beyond SHAP
  • Automating model debugging without user intervention

Workflow

  1. Select the appropriate SHAP explainer based on model type
  2. Compute SHAP values for the model and dataset
  3. Visualize results using appropriate plots (e.g., waterfall for individual, beeswarm for global)
  4. Interpret explanations to understand feature impact, interactions, or bias
  5. Apply insights for debugging, feature engineering, or model comparison
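
A minimal end-to-end sketch of this workflow on a tree-based regressor; the sklearn diabetes dataset and the XGBoost hyperparameters are chosen purely for illustration:

  import shap
  import xgboost
  from sklearn.datasets import load_diabetes

  # 1. Train (or load) a model and pick the matching explainer
  X, y = load_diabetes(return_X_y=True, as_frame=True)
  model = xgboost.XGBRegressor(n_estimators=200).fit(X, y)
  explainer = shap.TreeExplainer(model)      # tree-based model -> TreeExplainer

  # 2. Compute SHAP values for the dataset
  shap_values = explainer(X)

  # 3. Visualize: waterfall for one prediction, beeswarm for global importance
  shap.plots.waterfall(shap_values[0])
  shap.plots.beeswarm(shap_values)

  # 4./5. Read off which features push individual predictions up or down and
  #       feed those insights back into debugging or feature engineering.

The waterfall plot explains a single row while the beeswarm summarizes the whole dataset, matching the local/global split in step 3.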

Practices

  • Model Interpretability
  • Explainable AI
  • Machine Learning Debugging
  • Fairness Analysis

Prerequisites

  • Python 3.11+ (3.12+ recommended)
  • uv package manager
  • SHAP Python library
  • Relevant ML library (e.g., XGBoost, TensorFlow, PyTorch)

Installation

npx skills add K-Dense-AI/claude-scientific-skills

Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repository follows the agentskills.io format.

Quality Score

Verified
100/100
Analyzed 1 day ago

Trust Signals

Last commit: 3 days ago
Stars: 21k
License: MIT

Similar Extensions

AlterLab SHAP

99

Part of the AlterLab Academic Skills suite. Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

Skill
AlterLab-IEU

TimesFM Forecasting

100

Zero-shot time series forecasting with Google's TimesFM foundation model. Use for any univariate time series (sales, sensors, energy, vitals, weather) without training a custom model. Supports CSV/DataFrame/array inputs with point forecasts and prediction intervals. Includes a preflight system checker script to verify RAM/GPU before first use.

Skill
K-Dense-AI

Molfeat

99

Molecular featurization for ML (100+ featurizers). ECFP, MACCS, descriptors, pretrained models (ChemBERTa), convert SMILES to features, for QSAR and molecular ML.

Skill
K-Dense-AI

OraClaw Forecast

100

Time series forecasting for AI agents. ARIMA and Holt-Winters forecasts with confidence intervals. Forecast revenue, traffic, prices, or any sequential data. Inference under 5 ms.

Skill
Whatsonyourmind

Arize Evaluator

100

Handles LLM-as-judge evaluation workflows on Arize including creating/updating evaluators, running evaluations on spans or experiments, managing tasks, trigger-run operations, column mapping, and continuous monitoring. Use when the user mentions create evaluator, LLM judge, hallucination, faithfulness, correctness, relevance, run eval, score spans, score experiment, trigger-run, column mapping, continuous monitoring, or improve evaluator prompt.

Skill
github

PyTorch Lightning

100

Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.

Skill
K-Dense-AI