Google Veo
Generate videos with Google Veo models via inference.sh CLI. Models: Veo 3.1, Veo 3.1 Fast, Veo 3, Veo 3 Fast, Veo 2. Capabilities: text-to-video, cinematic output, high quality video generation. Triggers: veo, google veo, veo 3, veo 2, veo 3.1, vertex ai video, google video generation, google video ai, veo model, veo video
Generate high-quality videos from text prompts using advanced Google Veo AI models through a command-line interface.
Features
- Generate videos with Google Veo models
- Supports multiple Veo model versions (3.1, 3.1 Fast, 3, 3 Fast, 2)
- Text-to-video generation
- Cinematic output and high-quality video production
- Operates via inference.sh CLI
Use Cases
- Creating marketing or explainer videos from text descriptions
- Generating cinematic visuals for creative projects
- Rapidly prototyping video content with AI
- Exploring different AI video generation models and their quality
Non-Goals
- Directly interacting with Google Cloud Vertex AI APIs
- Editing or post-processing generated videos
- Providing a GUI for video generation
License
- info (License usability): The README mentions an MIT license with a badge, but there is no dedicated LICENSE file or SPDX identifier in the manifest.
Versioning
- warning (Release management): The README mentions installing from `main`, but there is no clear versioning signal (e.g., a tag or manifest version) for the specific skill or the repository.
Code Execution
- info (Validation): Input is a JSON object with a `prompt` field; the prompt string itself may not be validated, but the overall structure is expected.
- info (Error handling): Error-handling details for the `belt` CLI or specific Veo model failures are not explicitly documented in the skill's markdown.
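The documented input shape can be sketched as a minimal JSON payload. Only the `prompt` field is shown because no other fields are documented by the skill; anything beyond it would be an assumption:

```python
import json

# Minimal sketch of the documented input: a JSON object with a
# 'prompt' field. Other fields (model choice, duration, etc.) are
# not documented by the skill, so they are omitted here.
payload = {"prompt": "A cinematic drone shot over a misty coastline at sunrise"}

# Basic structural check before handing the JSON to the CLI:
# the prompt must be a non-empty string.
assert isinstance(payload.get("prompt"), str) and payload["prompt"].strip()

request_json = json.dumps(payload)
```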
Errors
- info (Actionable error messages): Error messages would likely come from the `belt` CLI, but the skill itself does not detail specific error handling or recovery steps.
Practical Utility
- info (Edge cases): Happy-path examples are provided, but specific edge cases and failure modes (e.g., invalid prompts, rate limits) are not documented with recovery steps.
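Because neither the skill nor the `belt` CLI documents error formats, a caller can at least surface failures explicitly. The `belt run google-veo` invocation below is a hypothetical placeholder (the real subcommand and flags are not documented); the sketch simply pipes the JSON payload to a command on stdin and raises on a non-zero exit:

```python
import json
import subprocess

def generate_video(prompt: str, cmd=("belt", "run", "google-veo")) -> str:
    """Send a prompt payload to a video-generation CLI on stdin.

    The default `belt run google-veo` command is a hypothetical
    placeholder; the skill docs do not specify the real invocation.
    """
    payload = json.dumps({"prompt": prompt})
    result = subprocess.run(list(cmd), input=payload, capture_output=True, text=True)
    if result.returncode != 0:
        # Error formats are undocumented, so pass stderr through
        # verbatim instead of guessing at recovery steps.
        raise RuntimeError(f"{cmd[0]} failed: {result.stderr.strip()}")
    return result.stdout
```

Taking the command as a parameter keeps the sketch testable without the real CLI installed.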
Installation
`npx skills add inferen-sh/skills` runs the Vercel skills CLI (skills.sh) via npx. It requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …), and assumes the repo follows the agentskills.io format.
Similar Extensions
Ai Video Generation
98 · Generate AI videos with Google Veo, Seedance 2.0, HappyHorse, Wan, Grok and 40+ models via inference.sh CLI. Models: Veo 3.1, Veo 3, Seedance 2.0, HappyHorse 1.0, Wan 2.5, Grok Imagine Video, OmniHuman, Fabric, HunyuanVideo. Capabilities: text-to-video, image-to-video, reference-to-video, video editing, lipsync, avatar animation, video upscaling, foley sound. Use for: social media videos, marketing content, explainer videos, product demos, AI avatars. Triggers: video generation, ai video, text to video, image to video, veo, animate image, video from image, ai animation, video generator, generate video, t2v, i2v, ai video maker, create video with ai, runway alternative, pika alternative, sora alternative, kling alternative, seedance, happyhorse
Seedance Video Generation
98 · Generate videos with ByteDance Seedance 2.0 via inference.sh CLI. Unified model for text-to-video, image-to-video, and reference-to-video with synchronized audio, up to 1080p, 4-15s duration. Pro and Fast variants. Studio variants with private asset library for portrait consistency. Use for: social media videos, music videos, product demos, animated content, AI video with sound. Triggers: seedance, seedance 2, bytedance video, seedance t2v, seedance i2v, seedance r2v, video with audio, seedance 2.0, bytedance seedance, seedance studio
HappyHorse Video Generation
98 · Generate and edit videos with Alibaba HappyHorse 1.0 models via inference.sh CLI. Models: HappyHorse T2V, I2V, R2V, Video Edit. Capabilities: text-to-video, image-to-video, reference-to-video, video editing with natural language, character preservation, 720P/1080P, up to 15 seconds. Use for: physically realistic video, video editing, character-consistent content, product demos, social media. Triggers: happyhorse, happy horse, alibaba video, happyhorse 1.0, dashscope video, alibaba happyhorse, video editing ai, ai video editor
Trader Regime
100 · Detect current market regime using npx neural-trader — bull/bear/ranging/volatile classification with recommended strategy
Setup
100 · Use first for install/update routing — sends setup, doctor, or MCP requests to the correct OMC setup flow
Project Session Manager
100 · Worktree-first dev environment manager for issues, PRs, and features with optional tmux sessions