Measure Experiment Design
Skill · Verified · Active
Designs an A/B test or experiment with clear hypothesis, variants, success metrics, sample size, and duration. Use when planning experiments to validate product changes or test hypotheses.
Features
- Designs A/B tests with clear hypotheses
- Defines variants (control and treatment)
- Selects primary and secondary metrics
- Calculates sample size and estimates duration
- Documents targeting, allocation, and success criteria
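The sample-size and duration steps above can be sketched with the standard two-proportion z-test approximation. This is a minimal illustration, not the skill's actual implementation; the function names and the default alpha/power values are assumptions.

```python
import math
from statistics import NormalDist  # stdlib, Python 3.8+

def sample_size_per_variant(p_control, mde_abs, alpha=0.05, power=0.8):
    """Approximate per-variant sample size for a two-proportion z-test.

    p_control: baseline conversion rate (e.g. 0.10 for 10%)
    mde_abs:   minimum detectable effect, absolute (e.g. 0.02 for +2pp)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_treat = p_control + mde_abs
    p_bar = (p_control + p_treat) / 2              # pooled rate
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / mde_abs ** 2
    return math.ceil(n)

def duration_days(n_per_variant, n_variants, daily_eligible_traffic):
    """Days needed to fill all variants at the given eligible-traffic level."""
    return math.ceil(n_per_variant * n_variants / daily_eligible_traffic)
```

For example, detecting a 2-percentage-point lift over a 10% baseline at alpha = 0.05 and 80% power needs roughly 3,800 users per variant; at 1,000 eligible users per day, a two-variant test would run a bit over a week.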
Use cases
- Use when planning experiments to validate product changes
- Use when testing hypotheses that require quantitative validation
- Use to establish data-driven decision-making culture
- Use to define experiment parameters before launch
Non-goals
- Running the actual A/B test
- Analyzing live experiment data
- Implementing the product changes being tested
Installation
Add the marketplace first:
/plugin marketplace add product-on-purpose/pm-skills
/plugin install pm-skills@pm-skills-marketplace
Quality score: Verified
Similar extensions
Experiment Design
Quality score 99. A discipline for designing experiments (A/B tests, multivariate, holdouts) so the results actually answer the question you asked. Hypothesis writing, sample size, duration, segment analysis, interpretation, decision-making, and the common failure modes that produce confidently wrong shipping decisions.
Brainstorm Experiments (New)
Quality score 100. Design lean startup experiments (pretotypes) for a new product. Creates XYZ hypotheses and suggests low-effort validation methods like landing pages, explainer videos, and pre-orders. Use when validating a new product idea, creating pretotypes, or testing market demand.
OraClaw Bandit
Quality score 99. A/B testing and feature optimization for AI agents. Uses multi-armed and contextual bandits (LinUCB) to automatically select the best option. No data warehouse needed; runs directly from requests.
Experiment Designer
Quality score 99. Use when planning product experiments, writing testable hypotheses, estimating sample size, prioritizing tests, or interpreting A/B outcomes with practical statistical rigor.
Experimentation Platform Orchestrator
Quality score 98. A platform decision framework for experimentation. When to use Statsig vs PostHog vs GrowthBook vs Optimizely vs Amplitude vs Eppo vs Kameleoon. How to migrate between them. How to coordinate when multi-platform is genuinely warranted. The decisions that compound for years and the ones you can defer. Triggers on "which experimentation platform," "choose Statsig vs PostHog," "evaluate experimentation tools," "switch experimentation platform," "migrate from Optimizely," "consolidate experimentation tools," "multi-platform experimentation," "experimentation platform decision," "ab test platform selection," "feature flag platform vs experiment platform," "warehouse-native experiments," "vendor lock-in experimentation." Also triggers when a team is asking about cost, governance, or migration cost across experimentation tools, or when an evaluation is starting.
Ab Test Setup
Quality score 98. When the user wants to plan, design, or implement an A/B test or experiment, or build a growth experimentation program. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," "hypothesis," "should I test this," "which version is better," "test two versions," "statistical significance," "how long should I run this test," "growth experiments," "experiment velocity," "experiment backlog," "ICE score," "experimentation program," or "experiment playbook." Use this whenever someone is comparing two approaches and wants to measure which performs better, or when they want to build a systematic experimentation practice. For tracking implementation, see analytics-tracking. For page-level conversion optimization, see page-cro.