Review Data Analysis
Skill · Verified · Active
Review a data analysis for quality, correctness, and reproducibility. Covers data quality assessment, assumption checking, model validation, data leakage detection, and reproducibility verification. Use when reviewing a colleague's analysis before publication, validating an ML pipeline before production deployment, auditing a report for regulatory or business decision-making, or performing a second-analyst review in a regulated environment.
Purpose
To ensure the quality, correctness, and reproducibility of data analyses by providing a comprehensive checklist and procedural guidance for reviewers.
Features
- Data quality assessment (see the sketch after this list)
- Assumption checking for statistical methods
- Data leakage detection patterns
- Model performance validation
- Reproducibility verification checklist
- Constructive review feedback generation
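The first feature above, data quality assessment, typically reduces to a few mechanical checks on the input data. A minimal sketch in Python with pandas; the function name, report structure, and plausibility bounds are illustrative assumptions, not part of the skill's defined interface:

```python
import pandas as pd

def data_quality_report(df, bounds=None):
    """Summarize missingness, duplication, and out-of-range values.

    `bounds` maps column names to (low, high) plausibility limits.
    Structure and names are illustrative, not the skill's interface.
    """
    report = {
        "n_rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_per_column": df.isna().sum().to_dict(),
    }
    if bounds:
        # NaN comparisons are False, so missing values count as out of range here.
        report["out_of_range"] = {
            col: int((~df[col].between(low, high)).sum())
            for col, (low, high) in bounds.items()
            if col in df.columns
        }
    return report

# Hypothetical usage: flag implausible ages and negative revenue up front.
# df = pd.read_csv("analysis_input.csv")
# print(data_quality_report(df, bounds={"age": (0, 120), "revenue": (0, 1e9)}))
```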
Use Cases
- Reviewing a colleague's analysis before publication
- Validating ML pipelines before production
- Auditing reports for regulatory or business decisions
- Performing second-analyst reviews in regulated environments
Non-Goals
- Performing the data analysis itself
- Automated fixing of analysis issues (focus is on review and feedback)
- Reviewing non-data-driven reports or documents
Workflow
- Assess Data Quality
- Check Assumptions
- Detect Data Leakage (illustrated after this list)
- Validate Model Performance
- Assess Reproducibility
- Write the Review
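For step 3, the most common leakage pattern a reviewer scans for is preprocessing fit on the full dataset before the train/test split. A sketch of the leaky pattern and its fix using scikit-learn; the variable names are hypothetical:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# LEAKY: the scaler sees test-fold statistics before evaluation.
# X_scaled = StandardScaler().fit_transform(X)
# X_train, X_test, y_train, y_test = train_test_split(X_scaled, y)

# SAFE: preprocessing inside a Pipeline is refit on each training fold only.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# scores = cross_val_score(model, X, y, cv=5)
```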
Practices
- Data Quality Assessment
- Statistical Assumption Checking (sketched after this list)
- Data Leakage Detection
- Model Validation
- Reproducibility Verification
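Statistical assumption checking in practice means running the diagnostics appropriate to the method under review. A minimal sketch for a two-sample t-test using scipy.stats; the helper name, alpha level, and suggested fallbacks are illustrative choices, not mandated by the skill:

```python
import numpy as np
from scipy import stats

def check_ttest_assumptions(a, b, alpha=0.05):
    """Check approximate normality and equal variances for a two-sample t-test."""
    _, p_norm_a = stats.shapiro(a)
    _, p_norm_b = stats.shapiro(b)
    _, p_var = stats.levene(a, b)
    return {
        # If normality fails, a reviewer might suggest Mann-Whitney U;
        # if only variances differ, Welch's t-test.
        "normality_ok": p_norm_a > alpha and p_norm_b > alpha,
        "equal_variance_ok": p_var > alpha,
    }

# rng = np.random.default_rng(0)
# print(check_ttest_assumptions(rng.normal(size=50), rng.normal(1, 2, size=50)))
```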
Prerequisites
- Analysis code (scripts, notebooks, or pipeline definitions; used in the reproducibility sketch after this list)
- Analysis output (results, tables, figures, model metrics)
- Optional: Raw data or data dictionary
- Optional: Analysis plan or protocol
- Optional: Target audience and decision context
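Given the analysis code and its outputs listed above, a quick reproducibility spot-check is to rerun the pipeline and compare regenerated artifacts against the submitted ones. A sketch, with hypothetical script and file names; byte-identical hashes are a strict bar, and in practice a reviewer may instead compare key metrics within a tolerance:

```python
import hashlib
import subprocess
from pathlib import Path

def sha256(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Rerun the analysis, then compare regenerated output against the submitted one.
# Script and file names are hypothetical placeholders.
# subprocess.run(["python", "analysis.py"], check=True)
# assert sha256("results_submitted.csv") == sha256("results.csv"), \
#     "outputs differ: not reproducible as-is"
```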
Documentation
- Configuration & parameter reference: The SKILL.md details inputs and the procedure, but does not document default values or a precedence order for configuration parameters; none appear to be defined.
Practical Utility
- Usage examples: The SKILL.md outlines detailed checks and expected outcomes, but lacks ready-to-use end-to-end examples demonstrating input, invocation, and output for a complete review scenario.
Installation
/plugin install agent-almanac@pjt222-agent-almanac
Similar Extensions
Review Pull Request (score: 100)
Review a pull request end-to-end using GitHub CLI. Covers diff analysis, commit history review, CI/CD check verification, severity-leveled feedback (blocking/suggestion/nit/praise), and gh pr review submission. Use when a pull request is assigned for review, performing a self-review before requesting others' input, conducting a second review after feedback is addressed, or auditing a merged PR for post-merge quality assessment.
PM Strategic Review (score: 100)
End-of-quarter strategic review in narrative style with a bets scorecard. Use when someone says "quarter review", "strategic review", "what happened last quarter", "quarterly retro", "bets scorecard", "review our bets", "end of quarter report".
PM Review (score: 100)
Pre-review quality gate that checks any PM artifact (PRD, strategy doc, one-pager, brief) against Head of Product standards. Scores problem clarity, metrics quality, scope discipline, and compliance awareness. Acts as "the HoP reviewing your work before the real HoP sees it." Use when someone says "review this", "check this PRD", "is this ready for review", "quality check", "does this meet the bar", or "pre-review my spec".
Karpathy Coder (score: 100)
Use when writing, reviewing, or committing code to enforce Karpathy's 4 coding principles: surface assumptions before coding, keep it simple, make surgical changes, define verifiable goals. Triggers on "review my diff", "check complexity", "am I overcomplicating this", "karpathy check", "before I commit", or any code quality concern where the LLM might be overcoding.
Fit Drift Diffusion Model (score: 100)
Fit cognitive drift-diffusion models (Ratcliff DDM) to reaction time and accuracy data with parameter estimation (drift rate, boundary separation, non-decision time), model comparison, and parameter recovery validation. Use when modeling binary decision-making with reaction time data, estimating cognitive parameters from experimental data, comparing sequential sampling model variants, or decomposing speed-accuracy tradeoff effects into latent cognitive components.
Product Analytics Setup (score: 99)
How to actually instrument product analytics correctly. Event taxonomy, property design, naming conventions, schema versioning, identity stitching, funnel design, retention cohorts, North Star metric selection, dashboard hygiene, instrumentation debt, and the failure modes that produce data nobody trusts. Triggers on product analytics setup, event taxonomy, tracking plan, instrumentation, schema versioning, North Star metric, retention cohorts, funnel design, naming conventions, instrument new feature, audit existing analytics, dashboard reconciliation, instrumentation debt, Mixpanel setup, Amplitude setup, PostHog setup, warehouse-native analytics. Also triggers when the team has data but cannot trust it, or when designing instrumentation for a new feature, or when auditing an existing setup that has drifted.