Creating Experiments
Guides agents through the 3-step experiment creation flow: defining the hypothesis, configuring rollout, and setting up analytics. Delegates rollout decisions to `configuring-experiment-rollout` and metric setup to `configuring-experiment-analytics`. TRIGGER when: the user asks to create a new experiment or A/B test, OR when you are about to call `experiment-create`. DO NOT TRIGGER when: the user is updating an existing experiment, managing its lifecycle, or only browsing experiments.
The skill streamlines the creation of A/B tests in PostHog by providing a structured, step-by-step process for defining experiments and their configurations.
Features
- Guides through 3-step experiment creation flow
- Delegates rollout configuration to `configuring-experiment-rollout`
- Delegates analytics setup to `configuring-experiment-analytics`
- Creates experiment drafts quickly
- Supports specifying experiment name, hypothesis, and feature flag key
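As a rough illustration, the fields above could be collected into a draft payload like the following. This is a hypothetical sketch: the actual `experiment-create` tool schema is not documented on this page, so every field name here is an assumption.

```python
# Hypothetical draft-experiment payload; field names are illustrative,
# not the documented experiment-create schema.
def build_experiment_draft(name: str, hypothesis: str, feature_flag_key: str) -> dict:
    """Assemble the minimal fields the skill gathers before creating a draft."""
    return {
        "name": name,
        "hypothesis": hypothesis,
        "feature_flag_key": feature_flag_key,
        # Rollout and metrics are deliberately absent: they are configured
        # later by configuring-experiment-rollout and
        # configuring-experiment-analytics.
        "draft": True,
    }

draft = build_experiment_draft(
    "New onboarding CTA",
    "A shorter signup form increases completion rate",
    "onboarding-cta-test",
)
```

Keeping rollout and metric settings out of the initial draft mirrors the skill's delegation model: the creation step captures only name, hypothesis, and flag key, leaving the rest to the delegate skills.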
Use Cases
- When a user asks to create a new experiment or A/B test
- When preparing to call the `experiment-create` tool
- To quickly create a draft experiment for later refinement
Non-Goals
- Updating an existing experiment
- Managing experiment lifecycle (beyond creation)
- Browsing existing experiments
- Configuring metrics directly during creation
Trust
- Warning: there are 544 open issues and 163 closed issues in the last 90 days, a closure rate of roughly 23%, indicating a significant number of unaddressed issues.
Installation
`npx skills add PostHog/posthog`
Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Similar Extensions
OraClaw Bandit (score 99)
A/B testing and feature optimization for AI agents. Pick the best option automatically using Multi-Armed Bandits and Contextual Bandits (LinUCB). No data warehouse needed; works from request
Copying Flags Across Projects (score 95)
Copy a feature flag from one PostHog project to one or more target projects in the same organization. Use when the user wants to duplicate a flag, promote a flag from staging to production, sync flags across projects, or replicate a flag configuration in a different workspace. Covers cohort remapping, scheduled-change handling, encrypted payloads, and the safe defaults (disabled in target, no scheduled changes).
Managing Experiment Lifecycle (score 93)
Guides experiment state transitions: launching, pausing, resuming, ending, shipping variants, archiving, resetting, and duplicating. Covers preconditions, implications for variant assignment and analysis, and the decision framework for when to use each action. TRIGGER when: user asks to launch, pause, resume, end, ship, archive, reset, or duplicate an experiment. DO NOT TRIGGER when: user is creating an experiment (use creating-experiments), configuring rollout (use configuring-experiment-rollout), or setting up metrics (use configuring-experiment-analytics).
Measure Experiment Design (score 100)
Designs an A/B test or experiment with clear hypothesis, variants, success metrics, sample size, and duration. Use when planning experiments to validate product changes or test hypotheses.
Azure App Configuration SDK for Python (score 100)
Use for centralized configuration management, feature flags, and dynamic settings. Triggers: "azure-appconfiguration", "AzureAppConfigurationClient", "feature flags", "configuration", "key-value settings".
Experiment Designer (score 99)
Use when planning product experiments, writing testable hypotheses, estimating sample size, prioritizing tests, or interpreting A/B outcomes with practical statistical rigor.