A/B Test Setup
When the user wants to plan, design, or implement an A/B test or experiment, or build a growth experimentation program. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," "hypothesis," "should I test this," "which version is better," "test two versions," "statistical significance," "how long should I run this test," "growth experiments," "experiment velocity," "experiment backlog," "ICE score," "experimentation program," or "experiment playbook." Use this whenever someone is comparing two approaches and wants to measure which performs better, or when they want to build a systematic experimentation practice. For tracking implementation, see analytics-tracking. For page-level conversion optimization, see page-cro.
To empower users to conduct effective A/B tests and build systematic growth experimentation programs that yield statistically valid and actionable results.
Features
- Structured hypothesis framework
- Guidance on test types and sample size calculation
- Methodology for metric selection and variant design
- Step-by-step instructions for running and analyzing tests (see the analysis sketch after this list)
- Framework for building a continuous experimentation program
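For the running-and-analyzing step above, the analysis of a simple conversion test often reduces to a two-proportion z-test once the planned sample is reached. A minimal sketch in Python; the counts are hypothetical and the statsmodels dependency is an assumption, not something this skill prescribes:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results after the planned sample size was reached
conversions = [412, 467]    # [control, variant]
visitors = [8200, 8150]     # exposures per arm

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"control: {conversions[0] / visitors[0]:.2%}, "
      f"variant: {conversions[1] / visitors[1]:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

Compare the p-value against the alpha fixed before the test started; peeking repeatedly and stopping at the first p < 0.05 inflates the false-positive rate, which is why duration and sample size belong in the plan up front.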
Use cases
- Planning a new A/B test for a website feature
- Designing variants for a landing page experiment
- Determining the required sample size for a growth experiment (see the sample-size sketch after this list)
- Building a systematic practice for growth experimentation
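For the sample-size use case above, the required traffic can be estimated before any tooling is chosen. A minimal sketch, assuming a two-sided two-proportion z-test with the usual pooled-variance approximation; the function name and its defaults are illustrative:

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(p_baseline: float, mde_abs: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per variant for a two-sided two-proportion z-test.

    p_baseline: current conversion rate (e.g. 0.05 for 5%)
    mde_abs:    minimum detectable effect, absolute (e.g. 0.01 for +1pp)
    """
    p2 = p_baseline + mde_abs
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance
    z_beta = norm.ppf(power)            # desired power
    # Pooled-variance approximation for two independent proportions
    p_bar = (p_baseline + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_baseline * (1 - p_baseline)
                             + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / mde_abs ** 2)

# Example: 5% baseline, detect a +1pp lift at 95% confidence / 80% power
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,000 per variant
```

Since the required n scales with 1/MDE², halving the detectable effect roughly quadruples the sample needed, which is why low-traffic sites often need weeks per test.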
Non-goals
- Implementing analytics tracking for tests
- Performing page-level conversion optimization outside of A/B testing scope
- Focusing on ad creative generation or SEO audits
Installation
First, add the marketplace:
/plugin marketplace add coreyhaines31/marketingskills
Then install the plugin:
/plugin install marketingskills@marketingskills
Similar extensions
Measure Experiment Design
Score 100. Designs an A/B test or experiment with clear hypothesis, variants, success metrics, sample size, and duration. Use when planning experiments to validate product changes or test hypotheses.
Experiment Design
Score 99. A discipline for designing experiments (A/B tests, multivariate, holdouts) so the results actually answer the question you asked. Hypothesis writing, sample size, duration, segment analysis, interpretation, decision-making, and the common failure modes that produce confidently wrong shipping decisions.
Experimentation Platform Orchestrator
Score 98. A platform decision framework for experimentation. When to use Statsig vs PostHog vs GrowthBook vs Optimizely vs Amplitude vs Eppo vs Kameleoon. How to migrate between them. How to coordinate when multi-platform is genuinely warranted. The decisions that compound for years and the ones you can defer. Triggers on which experimentation platform, choose Statsig vs PostHog, evaluate experimentation tools, switch experimentation platform, migrate from Optimizely, consolidate experimentation tools, multi-platform experimentation, experimentation platform decision, ab test platform selection, feature flag platform vs experiment platform, warehouse-native experiments, vendor lock-in experimentation. Also triggers when a team is asking about cost, governance, or migration cost across experimentation tools, or when an evaluation is starting.
Data Warehouse Experimentation
Score 97. Running experiments out of the data warehouse instead of via dedicated experiment platforms. SQL-based assignment, exposure logging discipline, metric definitions in dbt models, statistical analysis in SQL or Python, variance reduction with CUPED, sequential testing, and the operational tradeoffs vs platforms like Statsig and Optimizely. Triggers on warehouse-native experimentation, run experiments in BigQuery, run experiments in Snowflake, dbt experiments, SQL t-test, CUPED variance reduction, exposure log, sample ratio mismatch, sequential testing, mSPRT, doubly robust estimation, build vs buy experimentation. Also triggers when the team is choosing between platform and warehouse, building warehouse-native experiment infrastructure, auditing one, or running an experiment with a custom metric the platform cannot handle. A minimal sample ratio mismatch check is sketched after this list.
Agent Analytics
Score 97. Analytics your AI agent can actually use. Track, analyze, run A/B experiments, and optimize across all your projects via CLI. Includes a growth playbook so your agent knows HOW to grow, not just what to track.
Game Analytics Setup
Score 100. Invoke when the user needs to set up analytics, define telemetry events, establish KPIs, build dashboards, configure A/B testing, or implement data-driven design capabilities. Triggers on: "analytics", "telemetry", "KPIs", "metrics", "player data", "retention", "DAU", "dashboard", "A/B testing", "funnel analysis". Do NOT invoke for balance tuning (use game-balance-check) or economy design (use game-economy-designer). Part of the AlterLab GameForge collection.
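The sample ratio mismatch check mentioned under Data Warehouse Experimentation is a guardrail worth running on any test, warehouse-native or not. A minimal sketch using a chi-square goodness-of-fit test, assuming an intended 50/50 split; the counts and the p < 0.001 alarm threshold are illustrative conventions, not part of any listed extension:

```python
from scipy.stats import chisquare

# Hypothetical exposure counts for an intended 50/50 split
observed = [50210, 49490]
expected_share = [0.5, 0.5]
total = sum(observed)

stat, p = chisquare(observed, f_exp=[s * total for s in expected_share])
if p < 0.001:   # commonly used SRM alarm threshold
    print(f"Possible sample ratio mismatch (p = {p:.2e}); "
          "audit assignment and exposure logging before trusting results.")
else:
    print(f"No SRM detected (p = {p:.3f}).")
```

An SRM failure means the assignment or exposure logging is broken, so any lift measured on top of it is suspect regardless of its p-value.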