Component Search
This skill should be used when users need to discover Redpanda Connect components for their streaming pipelines. Trigger when users ask about finding inputs, outputs, processors, or other components, when they mention specific technologies such as "kafka consumer", "postgres output", or "http server", or when they ask "which component should I use for X".
The goal is to help users find and understand the appropriate Redpanda Connect components for their streaming data pipelines, based on natural-language queries.
Features
- Discover Redpanda Connect components by category
- Retrieve detailed configuration schemas for components
- Link to official online documentation for components
- Identify required and optional fields for component setup
Use Cases
- When users need to find inputs, outputs, or processors for their streaming pipelines.
- When users mention specific technologies like 'kafka consumer' or 'postgres output' and need a suitable component.
- When users ask 'which component should I use for X' to build a data pipeline.
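As a concrete illustration of what the skill points users toward, a minimal Redpanda Connect pipeline that reads from Kafka and writes to S3 might look like the sketch below. The component names (`kafka` input, `mapping` processor, `aws_s3` output) follow the public Redpanda Connect documentation, but the broker address, topic, and bucket are placeholders; always verify required fields against the schema the skill retrieves.

```yaml
# Sketch only: addresses, topics, and bucket names are placeholder values.
input:
  kafka:
    addresses: ["localhost:9092"]   # placeholder broker list
    topics: ["orders"]
    consumer_group: "orders-reader"

pipeline:
  processors:
    # Bloblang mapping that passes the message through and stamps it.
    - mapping: |
        root = this
        root.processed_at = now()

output:
  aws_s3:
    bucket: "my-archive-bucket"     # placeholder bucket
    path: 'orders/${! timestamp_unix() }.json'
```

The skill's role ends at discovery: it identifies that `kafka` and `aws_s3` are suitable components and surfaces their required fields, while writing and validating the full configuration is handled elsewhere (see Non-Goals below).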
Non-Goals
- Configuring or deploying Redpanda Connect pipelines.
- Writing complex pipeline logic within the skill itself.
- Replacing the official Redpanda Connect documentation entirely.
Trust
- Issues opened (last 90 days): 17; issues closed (last 90 days): 16 — a closure rate of roughly 94%, indicating an active maintenance response.
Installation
First, add the marketplace, then install the plugin:

`/plugin marketplace add redpanda-data/connect`

`/plugin install redpanda-connect@redpanda-connect-plugins`
Similar Extensions
Pipeline Assistant
Score: 96. This skill should be used when users need to create or fix Redpanda Connect pipeline configurations. Trigger when users mention "config", "pipeline", "YAML", "create a config", "fix my config", "validate my pipeline", or describe a streaming-pipeline need such as "read from Kafka and write to S3".
Analyze Codebase Workflow
Score: 99. Analyze an arbitrary codebase to auto-detect workflows, data pipelines, and file dependencies using putior's put_auto() engine. Produces an annotation plan that maps detected I/O patterns to source files across 30+ supported languages with 902 auto-detection patterns. Use when onboarding onto an unfamiliar codebase to understand data flow, starting putior integration in a project without existing annotations, auditing a project's data pipeline before documentation, or preparing an annotation plan before running annotate-source-files.
Monitor Stream
Score: 99. Stream live swarm events using the Monitor tool for real-time observability.
Implement A2a Server
Score: 99. Implement a JSON-RPC 2.0 A2A server with full task lifecycle management (submitted/working/completed/failed/canceled/input-required), SSE streaming, and push notifications. Use when implementing an agent that participates in multi-agent A2A workflows, building a backend for an Agent Card, adding A2A protocol support to an existing agent or service, or deploying an agent that must interoperate with other A2A-compliant agents.
Blucli
Score: 99. BluOS CLI (`blu`) for discovery, playback, grouping, and volume.
Lark Event Consumer CLI
Score: 99. Lark/Feishu real-time event listening, subscribing, and consuming: stream events as NDJSON via `lark-cli event consume <EventKey>` (covers IM message receipt, reactions, chat member changes, etc.). Use for Lark bots, real-time message processing, long-running subscribers, and streaming webhook/push handlers. Supports `--max-events` / `--timeout` bounded runs and a stderr ready-marker contract — designed for AI agents running as subprocesses.