Ltx2
AI video generation with LTX-2.3 22B — text-to-video and image-to-video clips for video production. Use when generating video clips, animating images, creating b-roll, animated backgrounds, or motion content. Triggers include: video generation, animate image, b-roll, motion, video clip, text-to-video, image-to-video.
Purpose
To generate AI-powered video clips from text prompts or existing images for video production needs such as b-roll, animated backgrounds, or motion content.
Features
- Text-to-video generation
- Image-to-video animation
- Customizable video parameters (resolution, duration, quality)
- Style LoRA application for aesthetic control
- Integration with cloud GPU platforms (Modal, RunPod)
- Detailed prompting guide and use case examples
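The customizable parameters above (resolution, duration, quality) are typically passed to the generation endpoint as a JSON body. The sketch below shows one plausible shape for such a request; the field names (`width`, `height`, `num_frames`, `image_url`) are assumptions for illustration, not the documented schema of the Modal/RunPod deployment — check your endpoint's actual API.

```python
import json

def build_ltx2_request(prompt, width=1216, height=704,
                       num_frames=121, image_url=None):
    """Build a request body for a hosted LTX-2 endpoint.

    Field names here are hypothetical -- verify them against the
    deployment behind MODAL_LTX2_ENDPOINT_URL before use.
    """
    body = {
        "prompt": prompt,
        "width": width,          # output resolution
        "height": height,
        "num_frames": num_frames,  # duration = num_frames / fps
    }
    if image_url is not None:
        # image-to-video: animate an existing still instead of text-to-video
        body["image_url"] = image_url
    return body

print(json.dumps(build_ltx2_request("slow dolly shot of fog rolling over pines")))
```

With a payload builder like this, switching between text-to-video and image-to-video is just a matter of supplying or omitting the source image.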
Use Cases
- Generating atmospheric b-roll clips
- Animating still images for motion content
- Creating animated slide backgrounds
- Generating stylized character cameos
- Producing branded intro/outro motion backgrounds
Non-Goals
- Generating photorealistic human lip-sync where precision is critical
- Producing videos longer than ~8 seconds without stitching
- Generating readable text directly within video frames
- Providing voiceover or music directly (requires integration with other tools)
Security
- Warning (secret management): The SKILL.md requires `MODAL_LTX2_ENDPOINT_URL` in `.env`, and the setup instructions mention a HuggingFace token and Gemma 3 license acceptance, but there is no explicit guidance on how these secrets are securely handled or logged.
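In the absence of explicit guidance, a minimal precaution is to keep `.env` out of version control and load it without echoing values. This is a sketch assuming the skill reads `MODAL_LTX2_ENDPOINT_URL` (and any tokens) from a `.env` file in the project root:

```shell
# Ensure .env is ignored so the endpoint URL and tokens are never committed
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# Load the variables into the current shell without printing their values
if [ -f .env ]; then
  set -a        # export every variable assigned while sourcing
  . ./.env
  set +a
fi
```

Avoid passing secrets as command-line arguments, since those can appear in shell history and process listings.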
Installation
`npx skills add digitalsamba/claude-code-video-toolkit`

Runs the Vercel skills CLI (skills.sh) via npx; requires Node.js locally and at least one installed skills-compatible agent (Claude Code, Cursor, Codex, …). Assumes the repo follows the agentskills.io format.
Similar Extensions
- Videoagent Video Studio (score 100): Generate short AI videos from text or images — text-to-video, image-to-video, and reference-based generation — with zero API key setup. Use when the user wants to create a video clip, animate an image, or generate video from a description.
- Openclaw Video Toolkit (score 99): Create professional videos autonomously using claude-code-video-toolkit — AI voiceovers, image generation, music, talking heads, and Remotion rendering.
- Video Prompting Skill (score 95): Draft and refine prompts for video generation models (text-to-video and image-to-video), and create character-sheet prompts for image models when the goal is character consistency before image-to-video. Use when a user asks for a "video prompt", a model-specific prompt such as Seedance 2.0, Ovi, Sora, Veo 3, Wan 2.2, LTX-2, or LTX-2.3, or a consistent-character prompt such as "character sheet prompt", "character turnaround", "character reference sheet", or "photographic identity sheet".
- Video (score 100): When the user wants to create, generate, or produce video content using AI tools or programmatic frameworks. Also use when the user mentions 'video production,' 'AI video,' 'Remotion,' 'Hyperframes,' 'HeyGen,' 'Synthesia,' 'Veo,' 'Runway,' 'Kling,' 'Pika,' 'video generation,' 'AI avatar,' 'talking head video,' 'programmatic video,' 'video template,' 'explainer video,' 'product demo video,' 'video pipeline,' or 'make me a video.' Use this for video creation, generation, and production workflows. For video content strategy and what to post, see social-content. For paid video ad creative, see ad-creative.
- Compose Sacred Music (score 100): Compose or analyze sacred music in Hildegard von Bingen's distinctive modal style. Covers modal selection, melodic contour (wide-range melodies), text-setting (syllabic and melismatic), neumatic notation, and liturgical context for antiphons, sequences, and responsories. Use when composing a new piece in Hildegardian style, analyzing an existing chant for structure and mode, researching medieval modal music, preparing to perform or teach Hildegard's music, or setting Latin sacred texts.
- Openclaw (score 100): Generate images and videos from text with multi-provider routing — supports GPT Image 2.0 (near-perfect text rendering), Nanobanana 2, Seedream 5.0, Midjourney V8.1 (unified photorealistic + anime), Flux 2 Klein (cheap drafts), Seedance 2.0 / Happyhorse 1.0 / Veo 3.1 video, and local ComfyUI workflows. Includes 1,446 curated prompts and style-aware prompt enhancement. Use when users want to create images/videos, design assets, animate photos, enhance prompts, or manage AI art workflows. NOT for: generic chat, code generation, document writing, video editing of existing footage, audio/TTS, or any task unrelated to AI image/video creation.