Oraclaw Calibrate

Prediction quality scoring for AI agents. Brier score, log score, and multi-source convergence analysis. Know if your forecasts are accurate and if your data sources agree.

Purpose

To give AI agents precise, mathematical tools for evaluating the accuracy of their predictions and the agreement between different information sources.

Features

  • Score prediction accuracy (Brier, log score)
  • Analyze multi-source agreement/convergence
  • Detect outlier data sources
  • Provide deterministic algorithmic answers
  • Available as an MCP server, a REST API, and an SDK
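The scoring rules named above are standard forecasting metrics. As an illustration of the underlying math only (the function names here are hypothetical, not Oraclaw's actual API), the Brier score is the mean squared error between forecast probabilities and binary outcomes, and the log score is the mean negative log-likelihood; lower is better for both:

```python
import math

def brier_score(forecasts, outcomes):
    # Mean squared error between probabilities and 0/1 outcomes.
    # 0 is a perfect forecast; 1 is maximally wrong.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def log_score(forecasts, outcomes):
    # Mean negative log-likelihood; confident misses are penalized heavily.
    eps = 1e-12  # clamp probabilities to avoid log(0)
    total = 0.0
    for f, o in zip(forecasts, outcomes):
        p = min(max(f, eps), 1 - eps)
        total += -math.log(p if o == 1 else 1 - p)
    return total / len(forecasts)

print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ≈ 0.0467
```

Note how the Brier score rewards calibration: predicting 0.9 for an event that occurs contributes only 0.01, while predicting 0.9 for one that does not would contribute 0.81.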

Use Cases

  • Score how accurate past predictions were
  • Check if multiple data sources agree on forecasts
  • Find the outlier source that disagrees with consensus
  • Compare forecast quality across different models
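Convergence checking and outlier detection can be sketched in a few lines. This is an illustrative implementation under simple assumptions (consensus = median forecast, outlier = largest absolute deviation), not the skill's actual algorithm:

```python
import statistics

def convergence(source_probs):
    # Crude agreement measure: 1 minus the population standard
    # deviation of the sources' forecast probabilities.
    return 1 - statistics.pstdev(source_probs.values())

def outlier(source_probs):
    # Source whose forecast deviates most from the consensus median.
    med = statistics.median(source_probs.values())
    return max(source_probs, key=lambda s: abs(source_probs[s] - med))

probs = {"model_a": 0.62, "model_b": 0.58, "model_c": 0.91}
print(outlier(probs))  # → model_c
```

Here `model_a` and `model_b` roughly agree, so `model_c` is flagged as the source disagreeing with consensus.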

Non-Goals

  • Providing the predictions themselves
  • Performing generic AI agent reasoning
  • Acting as a replacement for primary LLM capabilities

Installation

npx skills add Whatsonyourmind/oraclaw

This command runs the Vercel skills CLI (skills.sh) via npx. It requires Node.js installed locally and at least one skills-compatible agent (Claude Code, Cursor, Codex, …), and it assumes the repository follows the agentskills.io format.

Quality Score

Verified: 97/100 (analyzed about 14 hours ago)

Trust Signals

  • Last commit: 12 days ago
  • Stars: 8
  • License: MIT

© 2025 SkillRepo · Find the right skill, skip the noise.