Correlate Observability Signals
Unify metrics, logs, and traces for cohesive debugging. Implement exemplars for log-to-trace linking, build unified dashboards using RED/USE methods, and enable rapid root cause analysis across observability signals. Use when investigating complex incidents spanning multiple systems, reducing mean time to resolution, implementing distributed tracing, or moving from siloed tools to a unified observability platform.
Purpose
To enable cohesive debugging and rapid root cause analysis by unifying metrics, logs, and traces into a single observability view.
Features
- Implement trace context propagation in logs and metrics
- Configure Prometheus exemplars for log-to-trace linking
- Build unified dashboards using RED and USE methods
- Link logs to traces in Loki for cohesive debugging
- Provide step-by-step guidance for incident investigation workflows
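The first feature, propagating trace context into logs, can be sketched with stdlib Python. The `ContextVar` below is a hypothetical stand-in for a real trace context; in practice an OpenTelemetry propagator would populate it from incoming requests.

```python
import logging
import contextvars

# Hypothetical stand-in for the active trace context; in a real setup
# OpenTelemetry would extract this from incoming request headers.
current_trace_id = contextvars.ContextVar("trace_id", default="none")

class TraceContextFilter(logging.Filter):
    """Inject the current trace id into every log record so log lines
    can later be joined with spans in the tracing backend."""
    def filter(self, record):
        record.trace_id = current_trace_id.get()
        return True

def build_logger():
    logger = logging.getLogger("app")
    handler = logging.StreamHandler()
    handler.setFormatter(
        logging.Formatter("%(levelname)s trace_id=%(trace_id)s %(message)s")
    )
    logger.addHandler(handler)
    logger.addFilter(TraceContextFilter())
    logger.setLevel(logging.INFO)
    return logger

logger = build_logger()
current_trace_id.set("4bf92f3577b34da6a3ce929d0e0e4736")
logger.info("payment processed")
```

Emitting the trace id under a stable key (here `trace_id=`) is what makes the Loki-side regex extraction in later steps possible.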
Use Cases
- Investigating complex incidents spanning multiple systems
- Reducing mean time to resolution (MTTR)
- Building unified observability dashboards
- Implementing distributed tracing across services
Non-Goals
- Configuring the underlying observability backends (Prometheus, Loki, Tempo)
- Writing application code beyond instrumentation for trace propagation
- Replacing existing monitoring and alerting tools
Workflow
- Implement Trace Context Propagation
- Configure Exemplars in Prometheus
- Build Unified Dashboard with RED Method
- Implement USE Method for Resources
- Link Logs to Traces in Loki
- Create Unified Incident View
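As a sketch of the RED-method dashboard step, the three panels are typically backed by PromQL queries along these lines; the metric names (`http_requests_total`, `http_request_duration_seconds_bucket`) and the `service`/`status` labels are assumptions about your instrumentation.

```promql
# Rate: requests per second, per service
sum by (service) (rate(http_requests_total[5m]))

# Errors: fraction of requests returning 5xx
sum by (service) (rate(http_requests_total{status=~"5.."}[5m]))
  / sum by (service) (rate(http_requests_total[5m]))

# Duration: 95th-percentile latency from a histogram
histogram_quantile(0.95,
  sum by (service, le) (rate(http_request_duration_seconds_bucket[5m])))
```

With exemplar storage enabled in Prometheus (`--enable-feature=exemplar-storage`), histogram panels can surface exemplar dots that deep-link from a latency spike straight to the trace that caused it.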
Practices
- Observability
- Distributed Tracing
- Debugging
- Incident Response
Prerequisites
- Prometheus (metrics)
- Log aggregation system (Loki, Elasticsearch, CloudWatch)
- Distributed tracing backend (Tempo, Jaeger, Zipkin)
- Optional: Grafana for unified visualization
- Optional: OpenTelemetry instrumentation
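When Grafana, Loki, and Tempo are all in place, the log-to-trace link is usually wired through a derived field in Grafana's datasource provisioning. A minimal sketch, assuming the log format and Tempo datasource UID shown here:

```yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    jsonData:
      derivedFields:
        # Extract the trace id from log lines like "... trace_id=<hex> ..."
        - name: TraceID
          matcherRegex: 'trace_id=(\w+)'
          # $$ escapes $ so provisioning does not treat it as an env variable
          url: '$${__value.raw}'
          datasourceUid: tempo   # UID of your Tempo datasource (assumed)
```

Each matching log line then renders a clickable TraceID that opens the corresponding trace in Tempo.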
Installation
/plugin install agent-almanac@pjt222-agent-almanac
Similar Extensions
Service Mesh Observability
Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debugging latency issues, or implementing SLOs for service communication.
Grafana Dashboards
Create and manage production Grafana dashboards for real-time visualization of system and application metrics. Use when building monitoring dashboards, visualizing metrics, or creating operational observability interfaces.
Observability Gap Hunt
Inspects services, jobs, and code paths for missing or weak logs, metrics, traces, alerts, dashboards, or deployment-linked telemetry, then returns a tightly scoped backlog of observability gaps. Use when a user says `find observability gaps`, `audit telemetry coverage`, `what logs or metrics are missing`, `check alerting coverage`, or asks for a recurring telemetry review. Do NOT use for live incident response, root-cause analysis, generic performance tuning, or a broad code review.
Azure Monitor Query Py
Azure Monitor Query SDK for Python. Use for querying Log Analytics workspaces and Azure Monitor metrics. Triggers: "azure-monitor-query", "LogsQueryClient", "MetricsQueryClient", "Log Analytics", "Kusto queries", "Azure metrics".
Query Netdata Cloud
Query Netdata Cloud via its REST API: metrics, logs (systemd-journal / windows-events / otel-logs), topology graphs (topology:snmp), network flows (flows:netflow), alerts, dynamic configuration (DynCfg), and generic Functions on a node. Use when the user asks about querying Netdata Cloud, fetching metrics from the cloud, querying logs / topology / netflow / sflow / ipfix through Cloud, listing or modifying configurations via DynCfg, calling agent Functions through Cloud, listing spaces/rooms/nodes, or building a curl command against `app.netdata.cloud`. Pairs with the `query-netdata-agents` skill when direct-agent access is needed.
LangSmith Observability
LLM observability platform for tracing, evaluation, and monitoring. Use when debugging LLM applications, evaluating model outputs against datasets, monitoring production systems, or building systematic testing pipelines for AI applications.