Pro Workflow
Complete AI coding workflow system. Self-correcting memory + persistent FTS5-indexed research wikis + auto-research loop + multi-LLM council on a single SQLite store. 34 skills, 8 agents, 22 commands, 37 hook scripts across 24 events. Cross-agent via SkillKit.
Enhances AI coding productivity by automating learning, organizing knowledge, managing context, and enforcing quality through a suite of integrated tools and patterns.
Features
- Self-correcting memory
- Persistent FTS5-indexed research wikis
- Auto-research loop
- Multi-LLM deliberation
- Parallel session management via worktrees
- LLM-powered quality gates and hooks
Use Cases
- Automating learning from AI corrections
- Building and querying persistent knowledge bases
- Managing complex features with phased development
- Improving code quality through AI-driven reviews and checks
- Optimizing context window usage and session costs
Non-Goals
- Replacing core IDE functionality
- Providing a general-purpose scripting environment
- Handling user data beyond what's necessary for session context
Practices
- Self-Correction Loop
- Pre-Flight Discipline
- Parallel Worktrees
- Wrap-Up Ritual
- 80/20 Review
- Context Engineering
- Thoroughness Scoring
Prerequisites
- Node.js environment
- git
- Claude Code or compatible AI coding environment
- SQLite (if building components from source)
Scope
- info: Single responsibility principle. While the core focus is workflow improvement, the sheer breadth of 34 skills, 8 agents, and 22 commands indicates a large aggregation of capabilities.
- warning: Tool surface size. The plugin exposes 34 skills, 8 agents, and 22 commands, significantly exceeding the recommended maximum of 10 tools.
Invocation
- info: Overlapping near-synonym tools. Most commands and skills cover distinct functions within the overall workflow system, with some potential for functional overlap between commands like `/learn` and `/learn-rule`.
- info: Name collisions. Command and skill names are generally distinct; the overlap in purpose between `/learn` and `/learn-rule` might cause minor routing ambiguity.
- info: Hook matcher tightness. Some hook matchers appear broad (e.g., `Bash` for multiple git operations), but specific `if` conditions within prompts often narrow the scope.
Installation
Add the marketplace first, then install the plugin:
/plugin marketplace add rohitg00/pro-workflow
/plugin install pro-workflow@pro-workflow
Contains 34 extensions.
Skills (34)
Coordinate multiple Claude Code sessions as a team — lead + teammates with shared task lists, mailbox messaging, and file-lock claiming. Patterns for team sizing, task decomposition, and when to use teams vs sub-agents vs worktrees.
Auto-configure quality gates, hooks, and settings for a new project. Detects project type and sets up appropriate tooling. Use when onboarding a new codebase.
Decompose large-scale changes into independent units and spawn parallel agents in isolated worktrees. Use for migrations, refactors, codemods, and any change touching 10+ files with the same pattern.
Capture a user-reported defect as a durable GitHub issue written in the project's own domain language. Explores the codebase in parallel for context but never leaks file paths or line numbers into the issue. Use when the user reports a bug conversationally, runs a QA pass, or says "file an issue", "log this as a bug", "capture this".
Smart context compaction with state preservation. Saves critical files, task progress, and working state before compaction, restores after. Use before manual compact or when auto-compact triggers.
Master the four operations of context engineering — Write, Select, Compress, Isolate. Manage token budgets, compaction strategies, and context partitioning to keep AI sessions sharp and efficient.
Optimize token usage and context management. Use when sessions feel slow, context is degraded, or you're running out of budget.
Track session costs, set budget alerts, and optimize token spend. Use to check costs mid-session or set spending limits.
Remove AI-generated code slop, unnecessary comments, and over-engineering from the current branch diff. Cleans up boilerplate, simplifies abstractions, and strips defensive code. Use when cleaning up code, simplifying, removing boilerplate, or before committing.
Configure file watching hooks to auto-react to config changes, env file updates, and dependency modifications. Use to set up reactive workflows.
Show session analytics, learning patterns, correction trends, heatmaps, and productivity metrics. Computes stats from project memory and session history. Use when asking for stats, statistics, progress, how am I doing, coding history, or dashboard.
Capture a correction or lesson as a persistent learning rule with category, mistake, and correction. Stores, categorizes, and retrieves rules for future sessions. Use after mistakes or when the user says "remember this", "don't forget", "note this", or "learn from this".
Provider-agnostic multi-LLM deliberation. Three phases — independent responses, cross-model anonymized ranking, chairman synthesis. Provider config from env (OPENAI/ANTHROPIC/FIREWORKS/OPENROUTER/custom OpenAI-compatible base URL). Persists transcript to a wiki page when --wiki <slug> is passed. Use when the user wants multiple AI perspectives, consensus-building, or the "LLM Council" approach for high-stakes reviews, plan critique, or contested learning rules.
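The three-phase flow above can be sketched as follows. This is a minimal illustration, not the plugin's actual interface: the provider objects and `chairman` callable are hypothetical stand-ins for real API clients to the configured backends.

```python
import random

def council(prompt, providers, chairman, seed=0):
    """Three-phase deliberation sketch: independent answers,
    anonymized cross-ranking, chairman synthesis."""
    rng = random.Random(seed)
    # Phase 1: every model answers independently.
    answers = [p["answer"](prompt) for p in providers]
    # Phase 2: shuffle so rankers cannot tell which model wrote what,
    # then sum each answer's rank position across all rankers.
    anon = answers[:]
    rng.shuffle(anon)
    totals = [0] * len(anon)
    for p in providers:
        # rank() returns indices into the anonymized list, best first.
        for pos, idx in enumerate(p["rank"](prompt, anon)):
            totals[idx] += pos
    best = anon[min(range(len(anon)), key=totals.__getitem__)]
    # Phase 3: the chairman synthesizes from the winner plus all drafts.
    return chairman(prompt, best, anon)
```

In the real skill each phase would be an API round-trip to the providers named in the environment config; here the callables are local stubs so the control flow is visible.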
LLM-powered quality verification using prompt hooks. Validates commit messages, code patterns, and conventions using AI before allowing operations. Use to set up intelligent guardrails.
Audit connected MCP servers for token overhead, redundancy, and security. Use when sessions feel slow or before adding new MCPs.
Produce a one-screen map of an unfamiliar area of the codebase: entry points, modules, data flow, callers. Designed to be read in fifteen seconds. Use when the user says "I do not know this area", "give me the map", "zoom out", "orient me".
Wire Commands, Agents, and Skills together for complex features. Use when building features that need research, planning, and implementation phases.
Create and manage git worktrees for parallel coding sessions with zero dead time. Use when blocked on tests, builds, wanting to work on multiple branches, context switching, or exploring multiple approaches simultaneously.
Analyze permission denial patterns and generate optimized alwaysAllow and alwaysDeny rules. Use when permission prompts are slowing you down or after sessions with many denials.
Stress-test a plan by walking its decision tree one question at a time. Use when the user wants to pressure-test a design before implementation.
Complete AI coding workflow system. Orchestration patterns, 18 hook events, 5 agents, cross-agent support, reference guides, and searchable learnings. Works with Claude Code, Cursor, and 32+ agents.
Surface past learnings relevant to the current task before starting work. Searches correction history, recalls past mistakes, and applies prior patterns. Use when starting a task, saying "what do I know about", "previous mistakes", "lessons learned", or "remind me about".
Prevent destructive operations using Claude Code hooks. Three modes — cautious (warn on dangerous commands), lockdown (restrict edits to one directory), and clear (remove restrictions). Uses PreToolUse matchers for Bash, Edit, and Write.
Generate a structured handoff document capturing current progress, open tasks, key decisions, and context needed to resume work. Use when ending a session, saying "continue later", "save progress", "session summary", or "pick up where I left off".
Run quality gates, review staged changes for issues, and create a well-crafted conventional commit. Use when saying "commit", "git commit", "save my changes", or ready to commit after making changes.
Track parallel work sessions and prevent confusion across multiple Claude Code instances. Every major step ends with a status line. Every question re-states project, branch, and task.
Compile a structured literature survey on any AI/ML topic. Agent curates a research bundle (taxonomy + sections + bibliography of real papers) from a public anchor resource, then a chosen LLM generates the survey artifact. Output target is a wiki page (markdown), not a one-off HTML — survey lands in `<wiki>/derived/surveys/<slug>.md` with full bibliography rows in `sources.md`. Provider-agnostic (Anthropic/OpenAI/OpenRouter/Fireworks/custom OpenAI-compat). Use when the user asks for a "survey", "literature review", "lit review", or "deep dive" on a technical topic.
Score every decision point with a Thoroughness Rating (1-10). AI makes the marginal cost of doing things properly near-zero — pick the higher-rated option every time. Includes scope checks to distinguish contained vs unbounded work.
Reduce token waste by 40-60% through anti-sycophancy rules, tool-call budgets, one-pass coding, task profiles, and read-before-write enforcement. Inspired by drona23/claude-token-efficient.
Start, structure, and grow a persistent research wiki indexed in pro-workflow's SQLite knowledge base. Each wiki is a folder of markdown pages with provenance, plus a shadow FTS5 index so any session can recall it. Use when the user says "start a wiki", "add to wiki", "compile a page", "wiki on X", or wants a long-lived knowledge base on a topic, paper, product, person, project, or codebase.
Query pro-workflow wikis via SQLite FTS5 BM25 retrieval. Returns top-K passages with citations. Use when answering a question that any of the user's wikis already covers, when the user says "what does the wiki say about X", "ask wiki", "search wikis", or before drafting a new wiki page (to avoid duplication).
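As a minimal sketch of this style of retrieval, SQLite's built-in FTS5 module exposes BM25 ranking directly; the `passages` table schema here is illustrative, not the plugin's actual schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE passages USING fts5(page, body)")
db.executemany("INSERT INTO passages VALUES (?, ?)", [
    ("ranking", "FTS5 scores matches with the BM25 algorithm by default"),
    ("worktrees", "git worktree checks out several branches in parallel"),
])

def top_k(query, k=3):
    # bm25() returns smaller (more negative) values for better matches,
    # so ascending order yields the best passages first.
    return db.execute(
        "SELECT page, snippet(passages, 1, '[', ']', '…', 8)"
        " FROM passages WHERE passages MATCH ? ORDER BY bm25(passages) LIMIT ?",
        (query, k),
    ).fetchall()
```

A query like `top_k("bm25")` returns the matching page name alongside a highlighted snippet, which is roughly the "top-K passages with citations" shape the skill describes.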
Auto-grow a pro-workflow wiki by running a budget-capped BFS research loop over pluggable source fetchers (web, arXiv, GitHub). Each iteration pops a seed from the queue, fetches sources, drafts a wiki page, dedupes claims against existing pages, enqueues follow-up seeds. Halts on budget cap, depth cap, or convergence. Use when the user says "research <topic>", "grow the <slug> wiki", "auto-research", or wants a knowledge base that builds itself overnight.
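The loop's control flow (budget cap, depth cap, dedupe, follow-up seeding) can be illustrated with a toy in-memory version; the `links` dict stands in for the real web/arXiv/GitHub fetchers, and the stored string stands in for a drafted wiki page.

```python
from collections import deque

def research(root, links, budget=10, max_depth=2):
    queue, pages, spent = deque([(root, 0)]), {}, 0
    while queue and spent < budget:             # halt on budget cap or empty queue
        seed, depth = queue.popleft()
        if seed in pages or depth > max_depth:  # dedupe / depth cap
            continue
        spent += 1
        pages[seed] = f"notes on {seed}"        # stands in for drafting a page
        for nxt in links.get(seed, []):         # enqueue follow-up seeds
            queue.append((nxt, depth + 1))
    return pages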
Render a self-contained HTML viewer for a pro-workflow wiki. Pages, sources, claims, seed queue, page-link graph and full-text search all in one file. No external dependencies, no JS framework, S3-uploadable. Use when the user wants to browse a wiki visually, share its current state with someone, audit research progress, or hand off a knowledge base. Inspired by Thariq Shihipar's "Unreasonable Effectiveness of HTML" — favors information density and shareability over markdown-only outputs.
End-of-session ritual that audits changes, runs quality checks, captures learnings, and produces a session summary. Use when saying "wrap up", "done for the day", "finish coding", or ending a coding session.
Similar Extensions
Plugin Development Toolkit
(99) Comprehensive toolkit for developing Claude Code plugins. Includes 7 expert skills covering hooks, MCP integration, commands, agents, and best practices. AI-assisted plugin creation and validation.
Kanban
(100) Markdown-based kanban board managed by Claude Code. Cards live as .md files: no database, no server.
Obey
(100) Make Claude actually follow your rules. Save rules in natural language, enforce them via hooks, and remember them across sessions.
Ag2 Agent Builder
(100) Build AG2 (AutoGen) multi-agent systems with slash commands: scaffold agents, wire workflows, create tools, and review code.
Agents Design Experience
(99) Agents for UI/UX design, accessibility, and user experience optimization.
Cc Safe Setup
(99) 734 safety hooks for Claude Code: prevents file deletion, credential leaks, git disasters, and token waste during autonomous AI coding sessions.