Cost Export
Export cost-tracking telemetry in Prometheus textfile or webhook JSON formats for external observability (Grafana, Datadog, custom dashboards).
To make internal cost data accessible to external observability platforms, enabling unified monitoring and alerting across infrastructure and AI agent usage.
Features
- Export cost telemetry in Prometheus textfile format
- Export cost telemetry in JSON webhook format
- Support for Prometheus node_exporter textfile collector
- Configurable via environment variables
- Clear output metrics for costs, tiers, sessions, and budgets
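To illustrate the textfile output, here is a minimal sketch of rendering cost records in the Prometheus exposition format. The metric names, label sets, and record shapes are illustrative assumptions, not the skill's actual output schema.

```javascript
// Sketch: render cost metrics in Prometheus exposition format.
// Metric and label names below are assumed for illustration.
function renderPrometheus(sessions, budget) {
  const lines = [
    "# HELP agent_cost_usd_total Cumulative cost per session in USD.",
    "# TYPE agent_cost_usd_total counter",
  ];
  for (const s of sessions) {
    lines.push(`agent_cost_usd_total{session="${s.id}",tier="${s.tier}"} ${s.costUsd}`);
  }
  lines.push("# HELP agent_budget_limit_usd Configured budget ceiling in USD.");
  lines.push("# TYPE agent_budget_limit_usd gauge");
  lines.push(`agent_budget_limit_usd ${budget.limitUsd}`);
  // The textfile collector expects a trailing newline.
  return lines.join("\n") + "\n";
}

const text = renderPrometheus(
  [{ id: "session-1", tier: "sonnet", costUsd: 0.42 }],
  { limitUsd: 10 }
);
console.log(text);
```

Each `# HELP`/`# TYPE` pair documents a metric family; the label set keys each series by session and tier.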
Use Cases
- Refreshing dashboards in Grafana, Datadog, or Prometheus after cost tracking runs.
- Keeping external dashboards near real-time by running the export on a schedule.
- Sending cost data to Slack or custom endpoints via webhooks for ad-hoc reporting.
- Monitoring budget utilization and alerting on cost overruns.
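For the webhook path, a payload might be assembled and POSTed as sketched below. The payload fields and the `WEBHOOK_URL` environment variable name are assumptions for illustration; the skill's actual JSON schema may differ.

```javascript
// Sketch: assemble a webhook JSON payload from cost records and POST it.
// Field names and the WEBHOOK_URL variable are assumed, not the skill's schema.
function buildPayload(sessions, budget) {
  const totalUsd = sessions.reduce((sum, s) => sum + s.costUsd, 0);
  return {
    exportedAt: new Date().toISOString(),
    totalUsd,
    budgetLimitUsd: budget.limitUsd,
    budgetUtilization: totalUsd / budget.limitUsd,
    sessions,
  };
}

async function postWebhook(payload) {
  const url = process.env.WEBHOOK_URL;
  if (!url) throw new Error("WEBHOOK_URL is not set");
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`webhook rejected export: ${res.status}`);
}

const payload = buildPayload([{ id: "session-1", costUsd: 2.5 }], { limitUsd: 10 });
```

Including a utilization ratio in the payload lets receivers such as Slack bots alert on budget overruns without re-deriving it.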
Non-Goals
- Collecting or processing cost data; it relies on the `cost-track` skill.
- Configuring external observability systems (Grafana, Datadog, etc.).
- Acting as a data storage layer for cost information.
Workflow
- Retrieve cost-tracking records (`session-*`, `budget-config-*`)
- Format records into Prometheus textfile or JSON webhook payload
- Emit formatted data to specified Prometheus textfile path or webhook URL
- Optionally suppress confirmation output via `EXPORT_QUIET=1`
Practices
- Observability
- Cost Management
- Data Export
Prerequisites
- Node.js runtime
- The `cost-tracking` namespace data must exist
Scope
- Dry-run preview: while not strictly necessary for an export function, a dry-run mode to preview the telemetry output before sending it to a webhook could be a useful addition.
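Such a dry-run mode could look like the sketch below. The skill does not define one; `EXPORT_DRY_RUN` and the function shape are entirely hypothetical.

```javascript
// Hypothetical dry-run gate: print the payload instead of sending it.
// EXPORT_DRY_RUN is an assumed variable name the skill does not define.
function maybeSend(payload, send) {
  if (process.env.EXPORT_DRY_RUN === "1") {
    console.log(JSON.stringify(payload, null, 2)); // preview only
    return "previewed";
  }
  send(payload);
  return "sent";
}

process.env.EXPORT_DRY_RUN = "1";
const outcome = maybeSend({ costUsd: 1 }, () => {});
```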
Installation
First, add the marketplace:
/plugin marketplace add ruvnet/ruflo
/plugin install ruflo-cost-tracker@ruflo
Similar Extensions
Grafana Dashboards (99)
Create and manage production Grafana dashboards for real-time visualization of system and application metrics. Use when building monitoring dashboards, visualizing metrics, or creating operational observability interfaces.
Service Mesh Observability (98)
Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debugging latency issues, or implementing SLOs for service communication.
Plan Capacity (99)
Perform capacity planning using historical metrics and growth models. Use predict_linear for forecasting, identify resource constraints, calculate headroom, and recommend scaling actions before saturation. Use before seasonal traffic spikes or product launches, during quarterly capacity reviews, when resource utilization trends upward, or before budget planning cycles.
Define SLO/SLI/SLA (99)
Establish Service Level Objectives (SLO), Service Level Indicators (SLI), and Service Level Agreements (SLA) with error budget tracking, burn rate alerts, and automated reporting using Prometheus and tools like Sloth or Pyrra. Use when defining reliability targets for customer-facing services, balancing feature velocity against system reliability through error budgets, migrating from arbitrary uptime goals to data-driven metrics, or implementing Site Reliability Engineering practices.
Conduct Empirical Wire Capture (99)
Capture outbound HTTP and telemetry from a CLI harness at runtime. Covers capture-channel selection (transcript file vs verbose-fetch stderr vs outbound proxy vs on-disk state), hook-driven per-event capture vs long-running session capture, JSONL output format for diff-friendly artifacts, and the observability table that maps each target to the cheapest channel that captures it. Use when a static finding needs runtime confirmation, when a payload shape is needed for a client re-implementation, or when dark-vs-live disambiguation requires watching what the binary actually sends.
Correlate Observability Signals (97)
Unify metrics, logs, and traces for cohesive debugging. Implement exemplars for log-to-trace linking, build unified dashboards using RED/USE methods, and enable rapid root cause analysis across observability signals. Use when investigating complex incidents spanning multiple systems, reducing mean time to resolution, implementing distributed tracing, or moving from siloed tools to a unified observability platform.