Agentdb Learning
Skill · Verified · Active

Create and train AI learning plugins with AgentDB's 9 reinforcement learning algorithms. Includes Decision Transformer, Q-Learning, SARSA, Actor-Critic, and more. Use when building self-learning agents, implementing RL, or optimizing agent behavior through experience.
To empower developers to build and optimize self-learning AI agents by providing a robust toolkit for creating and training reinforcement learning plugins.
Features
- Create AI learning plugins with any of 9 RL algorithms
- Train models via CLI or API
- Support Decision Transformer, Q-Learning, SARSA, Actor-Critic, and more
- Include WASM-accelerated inference for faster training
- Manage plugins with list, info, and template-listing commands
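To make the algorithm list above concrete, here is a minimal tabular Q-Learning update. This is a generic sketch of the algorithm, not AgentDB's API; the `q_update` helper, the toy state names, and the hyperparameter defaults are all hypothetical.

```python
# Minimal tabular Q-learning step (generic illustration, not AgentDB's API).
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Apply one update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    td_target = reward + gamma * best_next
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q

# Toy example: one transition from state "s0" to "s1" with reward 1.0.
Q = defaultdict(lambda: defaultdict(float))
Q = q_update(Q, "s0", "a0", reward=1.0, next_state="s1")
print(Q["s0"]["a0"])  # 0.5 * (1.0 + 0.9 * 0 - 0) = 0.5
```

SARSA, also listed above, differs only in the target: it uses the Q-value of the action actually taken next instead of the max over actions.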
Use Cases
- Building self-learning agents
- Implementing reinforcement learning in AI
- Optimizing agent behavior through experience
- Learning from logged historical data (Decision Transformer)
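The last use case above relies on a key Decision Transformer idea: conditioning on returns-to-go computed from logged trajectories. A minimal sketch of that preprocessing step, under assumed inputs (a plain list of per-step rewards; this is not AgentDB's data format):

```python
# Compute undiscounted returns-to-go for a logged trajectory, as used to
# condition a Decision Transformer. Generic sketch; input format is assumed.
def returns_to_go(rewards):
    """For each timestep t, return the sum of rewards from t to the end."""
    rtg, running = [], 0.0
    for r in reversed(rewards):
        running += r
        rtg.append(running)
    return list(reversed(rtg))

print(returns_to_go([1.0, 0.0, 2.0]))  # [3.0, 2.0, 2.0]
```

At training time each (return-to-go, state, action) triple becomes one token group in the sequence the model learns to complete.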
Non-Goals
- Providing general AI agent orchestration beyond learning plugin development
- Replacing core AI development frameworks
- Handling real-time inference for deployed models (focus is on training)
Installation
npx skills add ruvnet/ruflo

Runs the Vercel skills CLI (skills.sh) via npx. Requires a local Node.js installation and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.). Assumes the repository follows the agentskills.io format.
Quality Score
Verified

Similar Extensions
Agentdb Memory Patterns
Score 99 · Implement persistent memory patterns for AI agents using AgentDB. Includes session memory, long-term storage, pattern learning, and context management. Use when building stateful agents, chat systems, or intelligent assistants.
Mcp Setup
Score 100 · Configure popular MCP servers for enhanced agent capabilities
Deepinit
Score 100 · Deep codebase initialization with hierarchical AGENTS.md documentation
Agent Worker Specialist
Score 100 · Agent skill for worker-specialist - invoke with $agent-worker-specialist
Orchestrate
Score 100 · Wire Commands, Agents, and Skills together for complex features. Use when building features that need research, planning, and implementation phases.
Context Compression
Score 100 · This skill should be used when the user asks to "compress context", "summarize conversation history", "implement compaction", "reduce token usage", or mentions context compression, structured summarization, tokens-per-task optimization, or long-running agent sessions exceeding context limits.