
OpenAI Whisper

Skill · Verified · Active

Local speech-to-text with the Whisper CLI (no API key).

Purpose

To provide highly accurate and private speech-to-text transcription locally, without relying on external APIs or requiring an API key, by leveraging the Whisper CLI.

Features

  • Local speech-to-text transcription
  • No API key required
  • Leverages Whisper CLI
  • Supports multiple Whisper models
  • Configurable output formats and directories

Use cases

  • Transcribing audio files locally for privacy
  • Converting audio to text for further processing
  • Translating audio from other languages
  • Utilizing advanced speech recognition without cloud costs
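The use cases above map directly onto flags of the openai-whisper CLI (`--model`, `--task`, `--output_format`, `--output_dir`, `--language`). A minimal sketch of assembling such an invocation; the audio file names are placeholders, not part of the skill:

```python
import shlex

def build_whisper_cmd(audio_path, model="small", task="transcribe",
                      output_format="txt", output_dir="transcripts",
                      language=None):
    """Build an openai-whisper CLI invocation as an argument list.

    Flag names are from the whisper CLI itself; the default values
    chosen here are assumptions for illustration.
    """
    cmd = ["whisper", audio_path,
           "--model", model,
           "--task", task,                  # "transcribe" or "translate"
           "--output_format", output_format,
           "--output_dir", output_dir]
    if language is not None:
        cmd += ["--language", language]
    return cmd

# Local transcription; no API key involved.
print(shlex.join(build_whisper_cmd("interview.mp3")))

# Translating speech from another language into English text.
print(shlex.join(build_whisper_cmd("talk.mp3", task="translate",
                                   language="Japanese")))
```

To actually run the command, pass the returned list to something like `subprocess.run(cmd, check=True)`; keeping it as a list avoids shell-quoting issues with audio file names.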

Non-goals

  • Providing cloud-based transcription services
  • Requiring an API key for operation
  • Handling real-time streaming transcription (focus is on file-based processing)

Documentation

  • info (README): The README.md is very large and primarily describes the OpenClaw framework, with minimal specific information about the Whisper CLI skill itself.

Installation

npx skills add steipete/clawdis

Runs the Vercel skills CLI (skills.sh) via npx. Requires a local Node.js install and at least one skills-compatible agent (Claude Code, Cursor, Codex, etc.), and assumes the repository follows the agentskills.io format.

Quality score

Verified
99/100
Analyzed 1 day ago

Trust signals

Last commit: 1 day ago
Stars: 371.6k
License: MIT