
Prompt Engineer

Skill · Verified · Active

Writes, refactors, and evaluates prompts for LLMs — generating optimized prompt templates, structured output schemas, evaluation rubrics, and test suites. Use when designing prompts for new LLM applications, refactoring existing prompts for better accuracy or token efficiency, implementing chain-of-thought or few-shot learning, creating system prompts with personas and guardrails, building JSON/function-calling schemas, or developing prompt evaluation frameworks to measure and improve model performance.

Purpose

To serve as an expert resource for users looking to create, refine, and test prompts for large language models, ensuring optimal performance, accuracy, and efficiency.

Features

  • Optimizing prompts for accuracy and token efficiency
  • Generating structured output schemas (JSON, function calling)
  • Implementing advanced prompting patterns (CoT, Few-shot, ReAct)
  • Developing evaluation frameworks and test suites
  • Providing guidance on system prompts and context management
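To illustrate the kind of artifact the first three features describe, here is a minimal sketch of a few-shot prompt template constrained by a JSON output schema. The schema, field names, and example reviews are all hypothetical illustrations, not output produced by this skill:

```python
import json

# Hypothetical JSON schema (function-calling style) constraining the model's output.
SENTIMENT_SCHEMA = {
    "type": "object",
    "properties": {
        "label": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number", "minimum": 0.0, "maximum": 1.0},
    },
    "required": ["label", "confidence"],
}

# Few-shot examples teach the model the expected input/output mapping.
FEW_SHOT_EXAMPLES = [
    ("The update fixed every crash I had.", {"label": "positive", "confidence": 0.95}),
    ("The app freezes on launch now.", {"label": "negative", "confidence": 0.9}),
]

def build_prompt(text: str) -> str:
    """Assemble a few-shot prompt that asks for schema-conformant JSON."""
    lines = [
        "Classify the sentiment of the review. Respond with JSON matching this schema:",
        json.dumps(SENTIMENT_SCHEMA),
        "",
    ]
    for example_input, example_output in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {example_input}")
        lines.append(f"JSON: {json.dumps(example_output)}")
        lines.append("")
    lines.append(f"Review: {text}")
    lines.append("JSON:")
    return "\n".join(lines)

prompt = build_prompt("Battery life is fine, nothing special.")
```

Ending the prompt with an open "JSON:" cue nudges the model to continue with schema-shaped output, which is the pattern behind both the structured-output and few-shot features listed above.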

Use Cases

  • Designing prompts for new LLM applications
  • Refactoring existing prompts for better performance
  • Building reliable and consistent LLM interactions
  • Creating robust prompt evaluation frameworks
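A prompt evaluation framework in the spirit of the last use case can be as small as a table of test cases plus a scoring function. Everything below (the case format, the pass criterion, the stub model) is an illustrative assumption, not this skill's actual output:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One test case: an input plus a predicate the model output must satisfy."""
    prompt_input: str
    check: Callable[[str], bool]

def run_eval(cases: list[EvalCase], model: Callable[[str], str]) -> float:
    """Run every case through the model and return the fraction that pass."""
    passed = sum(1 for case in cases if case.check(model(case.prompt_input)))
    return passed / len(cases)

# Stub standing in for a real LLM call, so the harness runs offline.
def stub_model(text: str) -> str:
    return "negative" if "crash" in text else "positive"

cases = [
    EvalCase("The app crashes constantly.", lambda out: "negative" in out),
    EvalCase("Love the new design!", lambda out: "positive" in out),
]
score = run_eval(cases, stub_model)  # pass rate in [0.0, 1.0]
```

Swapping the stub for a real model call turns this into a regression suite: rerun it after each prompt refactor and compare pass rates before and after.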

Non-Goals

  • Directly executing prompts against an LLM
  • Managing LLM model deployment or infrastructure
  • Replacing the need for user-defined task logic

Installation

First, add the marketplace, then install the plugin:

/plugin marketplace add jeffallan/claude-skills
/plugin install claude-skills@fullstack-dev-skills

Quality Score

Verified · 98/100
Analyzed about 15 hours ago

Trust Signals

Last commit: 13 days ago
Stars: 9k
License: MIT

Similar Extensions

SHAP Model Interpretability (Score: 100)

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

Skill by K-Dense-AI

Arize Prompt Optimization (Score: 100)

Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signal, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement.

Skill by github

Prompt Optimization (Score: 100)

Applies prompt repetition to improve accuracy for non-reasoning LLMs.

Skill by asklokesh

Molfeat (Score: 99)

Molecular featurization for ML (100+ featurizers). ECFP, MACCS, descriptors, pretrained models (ChemBERTa), convert SMILES to features, for QSAR and molecular ML.

Skill by K-Dense-AI

OraClaw Forecast (Score: 100)

Time series forecasting for AI agents. ARIMA and Holt-Winters predictions with confidence intervals. Predict revenue, traffic, prices, or any sequential data. Sub-5ms inference.

Skill by Whatsonyourmind

Arize Evaluator (Score: 100)

Handles LLM-as-judge evaluation workflows on Arize including creating/updating evaluators, running evaluations on spans or experiments, managing tasks, trigger-run operations, column mapping, and continuous monitoring. Use when the user mentions create evaluator, LLM judge, hallucination, faithfulness, correctness, relevance, run eval, score spans, score experiment, trigger-run, column mapping, continuous monitoring, or improve evaluator prompt.

Skill by github

© 2025 SkillRepo · Find the right skill, skip the noise.