
Managing Experiment Lifecycle

Skill Active

Guides experiment state transitions: launching, pausing, resuming, ending, shipping variants, archiving, resetting, and duplicating. Covers preconditions, implications for variant assignment and analysis, and the decision framework for when to use each action. TRIGGER when: user asks to launch, pause, resume, end, ship, archive, reset, or duplicate an experiment. DO NOT TRIGGER when: user is creating an experiment (use creating-experiments), configuring rollout (use configuring-experiment-rollout), or setting up metrics (use configuring-experiment-analytics).

Purpose

To provide clear guidance and execute actions for managing the entire lifecycle of product experiments, ensuring correct state transitions and understanding their impact on users and analysis.

Features

  • Guides experiment state transitions (launch, pause, resume, end, ship, archive, reset, duplicate)
  • Covers preconditions for each action
  • Explains implications for variant assignment
  • Details impact on statistical analysis
  • Provides a decision framework for selecting actions

Use Cases

  • When a user asks to launch, pause, resume, end, ship, archive, reset, or duplicate an experiment.
  • When needing to understand the consequences of an experiment state change on user assignment or data collection.
  • When a user needs to clean up or restart an experiment's configuration.

Non-Goals

  • Creating new experiments (use `creating-experiments`)
  • Configuring experiment rollout (use `configuring-experiment-rollout`)
  • Setting up experiment metrics (use `configuring-experiment-analytics`)

Workflow

  1. Identify the desired experiment action (launch, pause, end, etc.).
  2. Determine the current state and necessary preconditions.
  3. Execute the corresponding tool with any required parameters (e.g., `variant_key`).
  4. Review the outcome or error message provided by the tool.

Practices

  • Experiment lifecycle management
  • Product analytics
  • A/B testing
  • Feature flagging

Prerequisites

  • An experiment ID to operate on

Trust

  • Warning, issues attention: With 544 open issues and 163 closed issues in the last 90 days, the closure rate is low (approx. 23%), indicating slow maintainer response to open issues.

Installation

npx skills add PostHog/posthog

Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.

Quality Score

93/100
Analyzed about 22 hours ago

Trust Signals

Last commit: about 22 hours ago
Stars: 34.5k
License: MIT-0
Status
View source code

Similar Extensions

Measure Experiment Design

100

Designs an A/B test or experiment with clear hypothesis, variants, success metrics, sample size, and duration. Use when planning experiments to validate product changes or test hypotheses.

Skill
product-on-purpose

Brainstorm Experiments New

100

Design lean startup experiments (pretotypes) for a new product. Creates XYZ hypotheses and suggests low-effort validation methods like landing pages, explainer videos, and pre-orders. Use when validating a new product idea, creating pretotypes, or testing market demand.

Skill
phuryn

Experiment Design

99

A discipline for designing experiments (A/B tests, multivariate, holdouts) so the results actually answer the question you asked. Hypothesis writing, sample size, duration, segment analysis, interpretation, decision-making, and the common failure modes that produce confidently wrong shipping decisions.

Skill
rampstackco

OraClaw Bandit

99

A/B testing and feature optimization for AI agents. Automatically picks the best option using multi-armed bandits and contextual bandits (LinUCB). No data warehouse required; works from the first request.

Skill
Whatsonyourmind

Experimentation Platform Orchestrator

98

A platform decision framework for experimentation. When to use Statsig vs PostHog vs GrowthBook vs Optimizely vs Amplitude vs Eppo vs Kameleoon. How to migrate between them. How to coordinate when multi-platform is genuinely warranted. The decisions that compound for years and the ones you can defer. Triggers on which experimentation platform, choose Statsig vs PostHog, evaluate experimentation tools, switch experimentation platform, migrate from Optimizely, consolidate experimentation tools, multi-platform experimentation, experimentation platform decision, ab test platform selection, feature flag platform vs experiment platform, warehouse-native experiments, vendor lock-in experimentation. Also triggers when a team is asking about cost, governance, or migration cost across experimentation tools, or when an evaluation is starting.

Skill
rampstackco

Ab Test Setup

98

When the user wants to plan, design, or implement an A/B test or experiment, or build a growth experimentation program. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," "hypothesis," "should I test this," "which version is better," "test two versions," "statistical significance," "how long should I run this test," "growth experiments," "experiment velocity," "experiment backlog," "ICE score," "experimentation program," or "experiment playbook." Use this whenever someone is comparing two approaches and wants to measure which performs better, or when they want to build a systematic experimentation practice. For tracking implementation, see analytics-tracking. For page-level conversion optimization, see page-cro.

Skill
coreyhaines31