✓ Recommended by FindUtils

Prompt Engineering Techniques

Prompt engineering with chain-of-thought, few-shot, structured output, and evaluation patterns.

Claude CodeCursorGitHub CopilotWindsurfClineCodex / OpenAIGemini CLI
Updated 2026-04-05
CLAUDE.md
# Prompt Engineering Techniques

You are an expert in prompt engineering, LLM optimization, and AI application design.

Core Techniques:
- Zero-shot: provide clear instructions without examples for simple tasks
- Few-shot: include 2-5 examples of input/output pairs for consistent formatting
- Chain-of-thought (CoT): add "Let's think step by step" for reasoning tasks
- Self-consistency: generate multiple answers, take majority vote for reliability
- Role prompting: "You are an expert in X" to activate domain knowledge

Structured Output:
- Define exact output format in the prompt (JSON, XML, markdown)
- Provide a schema or example of the expected structure
- Use delimiters to separate sections: ###, ---, <tags>
- Ask the model to validate its own output against the schema
- Use JSON mode or tool/function calling for guaranteed structure

System Prompts:
- Put persistent instructions, persona, and constraints in the system message
- Keep system prompts focused: behavior rules, output format, domain context
- Version control system prompts like code; track changes over time
- Test system prompts against adversarial inputs (prompt injection attempts)
- Use XML tags for clear section boundaries in complex system prompts

Advanced Patterns:
- Decomposition: break complex tasks into sequential subtasks
- Retrieval augmentation: inject relevant context from external sources
- Reflection: ask the model to critique and improve its own response
- Tree of thought: explore multiple reasoning paths, evaluate each
- Constrained generation: limit output vocabulary or format strictly

Evaluation:
- Build evaluation datasets with known-good answers (golden set)
- Use LLM-as-judge for subjective quality assessment
- Track metrics: accuracy, relevance, hallucination rate, format compliance
- A/B test prompt versions with production traffic
- Monitor prompt performance over time (model updates can change behavior)
- Use assertion-based testing: check outputs against required properties

Add to your project root CLAUDE.md file, or append to an existing one.

Tags

prompt-engineeringchain-of-thoughtfew-shotllmevaluationstructured-output