
SHAP Model Interpretability

Skill · Verified · Active

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

Purpose

To enable users to understand and explain machine learning model predictions and behavior using SHAP values, facilitating debugging, bias analysis, and transparent AI implementation.

Capabilities

  • Compute SHAP values for diverse model types
  • Generate various SHAP visualizations (waterfall, beeswarm, etc.)
  • Provide workflows for debugging, fairness analysis, and feature engineering
  • Explain model predictions and feature importance
  • Integrate with MLOps tools and production deployment
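
The first capability maps roughly onto SHAP's explainer classes. A minimal sketch of explainer selection, assuming shap, scikit-learn, and numpy are installed; the synthetic data and the Random Forest model are placeholders for illustration, not part of this skill:

  import numpy as np
  import shap
  from sklearn.ensemble import RandomForestRegressor

  # Small synthetic regression problem standing in for a real dataset
  rng = np.random.default_rng(0)
  X = rng.normal(size=(200, 5))
  y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)
  model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

  # Tree ensembles (XGBoost, LightGBM, Random Forest): fast, exact TreeExplainer
  tree_values = shap.TreeExplainer(model)(X)

  # Any black-box callable: model-agnostic KernelExplainer over a background sample (slower)
  kernel_explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
  kernel_values = kernel_explainer.shap_values(X[:10])

  # Or let SHAP pick an explainer automatically from the model object
  auto_values = shap.Explainer(model, X)(X)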

Use Cases

  • Explaining why a model made a specific prediction
  • Visualizing overall feature importance and impact
  • Debugging model behavior and identifying errors
  • Analyzing model fairness and bias across different groups
  • Improving features based on interpretability insights
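
For the fairness use case in particular, a common pattern is to compare feature attributions between groups defined by a sensitive attribute. A minimal sketch, assuming an XGBoost classifier on SHAP's bundled adult-census dataset; the model, the 2,000-row sample, and the choice of "Sex" as the grouping column are illustrative assumptions, not prescribed by this skill:

  import numpy as np
  import pandas as pd
  import shap
  import xgboost

  # Fit a simple classifier on the bundled census-income dataset
  X, y = shap.datasets.adult()
  model = xgboost.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y.astype(int))

  # Attributions for a sample of rows (log-odds contributions for XGBoost binary models)
  X_sample = X.iloc[:2000]
  shap_values = shap.TreeExplainer(model)(X_sample)

  # Mean |SHAP| per feature, split by group: large gaps flag features whose
  # influence on predictions differs systematically between groups
  attributions = pd.DataFrame(np.abs(shap_values.values), columns=X.columns)
  print(attributions.groupby(X_sample["Sex"].values).mean().T)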

Non-Goals

  • Training models (focus is on explaining pre-trained models)
  • Performing model deployment beyond providing explanation strategies
  • Providing alternative explainability methods beyond SHAP
  • Automating model debugging without user intervention

Workflow

  1. Select the appropriate SHAP explainer based on model type
  2. Compute SHAP values for the model and dataset
  3. Visualize results using appropriate plots (e.g., waterfall for individual, beeswarm for global)
  4. Interpret explanations to understand feature impact, interactions, or bias
  5. Apply insights for debugging, feature engineering, or model comparison
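
A minimal end-to-end sketch of steps 1 through 3, assuming an XGBoost regressor trained on scikit-learn's California-housing data; the dataset and model choices are illustrative. Steps 4 and 5 are then read off the resulting plots:

  import shap
  import xgboost
  from sklearn.datasets import fetch_california_housing

  # A pre-trained model to explain; any fitted model would do
  data = fetch_california_housing(as_frame=True)
  X, y = data.data, data.target
  model = xgboost.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)

  # 1. Select an explainer suited to the model type (TreeExplainer for gradient-boosted trees)
  explainer = shap.TreeExplainer(model)

  # 2. Compute SHAP values; the result is an Explanation object (rows x features)
  shap_values = explainer(X)

  # 3. Visualize: waterfall for one prediction, beeswarm for the global picture
  shap.plots.waterfall(shap_values[0])   # local: why did row 0 get its prediction?
  shap.plots.beeswarm(shap_values)       # global: feature impact across the dataset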

Practices

  • Model Interpretability
  • Explainable AI
  • Machine Learning Debugging
  • Fairness Analysis

Prerequisites

  • Python 3.11+ (3.12+ recommended)
  • uv package manager
  • SHAP Python library
  • Relevant ML library (e.g., XGBoost, TensorFlow, PyTorch)

Installation

npx skills add K-Dense-AI/claude-scientific-skills

Runs the Vercel skills CLI (skills.sh) via npx; requires a local Node.js installation and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.). The repository must follow the agentskills.io format.

Quality Score

Verified
100/100
Analyzed about 14 hours ago

Trust Signals

Last commit: 3 days ago
Stars: 21k
License: MIT
Status: Active

Similar Extensions

AlterLab SHAP

99

Part of the AlterLab Academic Skills suite. Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

Skill
AlterLab-IEU

TimesFM Forecasting

100

Zero-shot time series forecasting with Google's TimesFM foundation model. Use for any univariate time series (sales, sensors, energy, vitals, weather) without training a custom model. Supports CSV/DataFrame/array inputs with point forecasts and prediction intervals. Includes a preflight system checker script to verify RAM/GPU before first use.

Skill
K-Dense-AI

Molfeat

99

Molecular featurization for ML (100+ featurizers). ECFP, MACCS, descriptors, pretrained models (ChemBERTa), convert SMILES to features, for QSAR and molecular ML.

Skill
K-Dense-AI

OraClaw Forecast

100

Time series forecasting for AI agents. ARIMA and Holt-Winters forecasting with confidence intervals. Forecast revenue, traffic, prices, or any series data. Inference latency under 5 ms.

Skill
Whatsonyourmind

Arize Evaluator

100

Handles LLM-as-judge evaluation workflows on Arize including creating/updating evaluators, running evaluations on spans or experiments, managing tasks, trigger-run operations, column mapping, and continuous monitoring. Use when the user mentions create evaluator, LLM judge, hallucination, faithfulness, correctness, relevance, run eval, score spans, score experiment, trigger-run, column mapping, continuous monitoring, or improve evaluator prompt.

Skill
github

PyTorch Lightning

100

Deep learning framework (PyTorch Lightning). Organize PyTorch code into LightningModules, configure Trainers for multi-GPU/TPU, implement data pipelines, callbacks, logging (W&B, TensorBoard), distributed training (DDP, FSDP, DeepSpeed), for scalable neural network training.

Skill
K-Dense-AI