happy-image-gen
Verified Skill

Universal AI image generation supporting OpenAI DALL·E / gpt-image, Google Gemini Image / Imagen, Replicate (Flux / SDXL / any model), Stability AI, FAL, Ark (Seedream 4.5), Bailian (qwen-image / wanx), and SiliconFlow. Use this skill whenever the user asks to generate, create, draw, illustrate, render, or synthesize images from text prompts or reference images. Typical phrases include "draw a ...", "generate an image of ...", "画一张 ..." (draw a ...), "给我来张图" (give me an image), "make a poster of ...", "create an illustration ...", or any mention of image-generation model families like DALL·E, gpt-image, Flux, SDXL, Seedream, Imagen, Gemini image, Kolors, or Wanx. Always use this skill even if the user does not name a specific model; pick a provider based on their EXTEND.md defaults or available API keys in the environment. Do NOT use this skill when the user explicitly mentions 即梦 / Dreamina / Jimeng; those go to happy-dreamina instead.
This skill provides a unified interface for interacting with various AI image generation services via a single CLI tool. It handles text-to-image and image-to-image workflows, supports numerous providers like OpenAI, Google, Replicate, and more, and manages API keys and configurations through environment variables or an EXTEND.md file.
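As a sketch of how key-based provider selection might work (the environment-variable names below follow each provider's common conventions and are assumptions, not the skill's documented configuration):

```typescript
// Hypothetical sketch: pick the first provider whose API key is present in
// the environment. The lookup order and env var names are assumptions; the
// skill's actual defaults come from EXTEND.md.
const PROVIDER_KEYS: Array<[provider: string, envVar: string]> = [
  ["openai", "OPENAI_API_KEY"],
  ["google", "GEMINI_API_KEY"],
  ["replicate", "REPLICATE_API_TOKEN"],
  ["stability", "STABILITY_API_KEY"],
  ["fal", "FAL_KEY"],
];

export function pickProvider(env: Record<string, string | undefined>): string | null {
  for (const [provider, envVar] of PROVIDER_KEYS) {
    if (env[envVar]) return provider; // first configured provider wins
  }
  return null; // no keys available: caller should surface a setup error
}
```

In practice, explicit EXTEND.md defaults would take precedence over this fallback ordering.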
Practical Utility
- warning (Production readiness): Core functionality appears implemented for OpenAI and basic configurations, but many providers are marked "planned" or "not wired in MVP", so support falls short of the stated goal of universal image generation.
- warning (Edge cases): The "Step 4: Handle errors" section of SKILL.md lists common errors and some fixes, but does not cover all edge cases, e.g. malformed aspect ratios beyond basic checks, provider-specific model limitations, or rate-limiting strategies.
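For illustration, a stricter aspect-ratio check than simple string splitting might look like the following; the allow-list of accepted ratios is hypothetical, since the skill's real constraints are provider-specific:

```typescript
// Illustrative only: validate an aspect-ratio string before it reaches
// provider-specific logic. The ALLOWED_RATIOS set is an assumption, not the
// skill's actual constraint table.
const ALLOWED_RATIOS = new Set(["1:1", "16:9", "9:16", "4:3", "3:4"]);

export function parseAspectRatio(input: string): { w: number; h: number } {
  const m = /^(\d+):(\d+)$/.exec(input.trim());
  if (!m) {
    throw new Error(`Malformed aspect ratio "${input}" (expected W:H, e.g. 16:9)`);
  }
  const w = Number(m[1]);
  const h = Number(m[2]);
  if (w === 0 || h === 0) {
    throw new Error("Aspect ratio sides must be non-zero");
  }
  if (!ALLOWED_RATIOS.has(`${w}:${h}`)) {
    throw new Error(`Unsupported ratio ${w}:${h}; accepted: ${[...ALLOWED_RATIOS].join(", ")}`);
  }
  return { w, h };
}
```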
Maintenance
- warning (Commit recency): There are no recent commits on the default branch, suggesting the project may be unmaintained. The most recent commit is f49e7782a551759c9f9e0a4d4417ff053f0a86fd.
Code Execution
- warning (Validation): CLI arguments are parsed, but input values (e.g. prompt content, aspect-ratio formats beyond basic parsing) are not rigorously validated or sanitized before being passed to provider-specific logic.
- warning (Error handling): Error handling is present, with `try/catch` blocks and custom error classes, but some provider implementations may surface raw API errors without user-friendly guidance beyond what the API itself provides.
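The kind of wrapping this concern points at can be sketched as follows; the `ProviderError` class and its hint table are hypothetical, not taken from the skill's source:

```typescript
// Hypothetical pattern: wrap a raw provider error in a custom error class
// that appends user-facing guidance keyed on the HTTP status code.
class ProviderError extends Error {
  constructor(
    readonly provider: string,
    readonly status: number,
    rawMessage: string,
  ) {
    super(`[${provider}] ${rawMessage}${ProviderError.hint(status)}`);
    this.name = "ProviderError";
  }

  // Map common statuses to actionable hints instead of surfacing raw text.
  static hint(status: number): string {
    if (status === 401) return " (check that your API key env var is set and valid)";
    if (status === 429) return " (rate limited: wait and retry, or lower concurrency)";
    if (status >= 500) return " (provider outage: retry later or switch providers)";
    return "";
  }
}
```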
Compliance
- info (GDPR): The skill processes user prompts and, potentially, reference images, which may include personal data. That data is sent to third-party AI providers; although nothing is submitted to an external analytics service, there is no mention of sanitizing personal data before it reaches the image-generation APIs.
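A minimal client-side mitigation, assuming a simple regex pass over the prompt is acceptable (the skill itself does not document any such step), could look like:

```typescript
// Assumption-laden sketch: redact obvious personal data (email addresses and
// phone-like digit runs) from a prompt before it leaves the machine. This is
// a best-effort filter, not a substitute for a real PII policy.
export function redactPrompt(prompt: string): string {
  return prompt
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email]")
    .replace(/\+?\d[\d\s-]{7,}\d/g, "[phone]");
}
```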
Portability
- warning (Runtime stability): The skill explicitly depends on Bun and documents a fallback to `npx -y bun`, but does not address compatibility with other runtimes or shells beyond basic POSIX expectations for commands such as `command -v`.
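Runtime detection inside the tool can only cover part of this; a hedged sketch, assuming detection via the `Bun` global (the actual `npx -y bun` re-exec has to happen in the launching shell, which this snippet can only signal):

```typescript
// Sketch: report whether we are running under Bun. The fallback itself
// (re-launching via `npx -y bun`) must be handled by the caller or a
// wrapper script; this function only detects the situation.
export function detectRuntime(): "bun" | "node-fallback" {
  return typeof (globalThis as { Bun?: unknown }).Bun !== "undefined"
    ? "bun"
    : "node-fallback";
}
```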
Installation
First, add the marketplace:

/plugin marketplace add iamzhihuix/happy-claude-skills

Then install the skill:

/plugin install happy-image-gen@happy-claude-skills

Similar Extensions
Image Generation (98)
Create effective AI image generation prompts for DALL-E, Midjourney, and Stable Diffusion. Generate prompts for various styles and use cases.
happy-video-gen (98)
Universal AI video generation supporting OpenAI Sora, Google Veo 2/3, Runway Gen-3/Gen-4, Pika 2.2, Luma Dream Machine (Ray 2), FAL (Kling / Wan / Veo / Sora wrappers), Ark Seedance 1.5 Pro/Lite, Bailian Wanx (i2v), MiniMax Hailuo-02, and Vidu Q3. Use this skill whenever the user asks to generate, create, make, or synthesize a video from a text prompt or from a first-frame image. Covers text-to-video and image-to-video, with optional last-frame control on providers that support it. Typical phrases include "generate a video of ...", "make a 5-second clip of ...", "animate this image", "生成一段视频" (generate a video), "做个短片" (make a short clip), or any mention of video-generation model families like Sora, Veo, Runway Gen, Kling, Wan, Seedance, Hailuo, Pika, Dream Machine, Vidu. Always use this skill even if the user does not name a specific model; pick a provider from their EXTEND.md defaults or available API keys. Do NOT use this skill when the user explicitly mentions 即梦 / Dreamina / Jimeng; those go to happy-dreamina instead.
Godot Asset Generator (98)
Generate game assets using AI image generation APIs (DALL-E, Replicate, fal.ai) and prepare them for Godot. Covers the full art pipeline from concept art and style guides to final sprites, sprite sheets, and import configuration. This skill should be used when creating game art, generating sprites, making tilesets, creating UI elements, or preparing assets for Godot import. Keywords: game assets, AI art, DALL-E, Replicate, fal.ai, sprite sheet, tileset, Godot, pixel art, character sprite, game art, texture, animation frames.
npx CLI Tool Development (Bun-First) (98)
Build and publish npx-executable CLI tools using Bun as the primary toolchain with npm-compatible output. Use when the user wants to create a new CLI tool, set up a command-line package for npx execution, configure argument parsing and terminal output, or publish a CLI to npm. Covers scaffolding, citty arg parsing, sub-commands, terminal UX, strict TypeScript, Biome + ESLint linting, Vitest testing, Bunup bundling, and publishing workflows. Keywords: npx, cli, command-line, binary, bin, tool, bun, citty, commander, terminal, publish, typescript, biome, vitest.
AI Image Generation (95)
Implement AI image generation capabilities using the z-ai-web-dev-sdk. Use this skill when the user needs to create images from text descriptions, generate visual content, create artwork, design assets, or build applications with AI-powered image creation. Supports multiple image sizes and returns base64 encoded images. Also includes CLI tool for quick image generation.
LibTV AI Agent Skills (95)
agent-im session skill: generate and edit images and videos via liblib.tv's AI capabilities. Covered scenarios include generation (text-to-image, text-to-video, image-to-video, animation, "draw a xxx", "give me a clip of xxx"), editing (replace xxx with yyy, remove xxx, add xxx, change to xxx, adjust xxx, local edits, camera changes), style transfer (style migration, repainting, style swaps), video continuation and extension, recreating videos/TVCs/promos, short-drama and short-comic generation, music MV generation, product ads and showcase videos, storyboard design, and educational or short-form video production. Also trigger when the user mentions liblib, libtv, uploading reference images/videos, or checking generation progress. Key judgment: whenever the request involves AI image or video creation, generation, or editing, regardless of wording (e.g. "draw a cat", "make a poster", "replace the paper boat with a heart", "tweak this video for me", "recreate this video for me", "make an MV with this song", "generate a short drama from one sentence"), this skill must be triggered.