4 min read

Prompt Engineering: Patterns That Actually Work

Prompt engineering is where most AI engineers spend their time in practice. This guide covers the patterns interviewers actually test — from reasoning techniques to output control to security — and how to talk about prompting decisions like a senior engineer.

What Is Prompt Engineering and Why Do Interviewers Ask About It

Prompt engineering is designing inputs to language models that reliably produce the outputs you need. It is the most accessible AI skill, but depth reveals how well you understand model behavior, failure modes, and production constraints.

Interview Tip

The interview signal is not whether you know prompting tricks — it is whether you can reason about why certain techniques work, when to use them, and what happens when they fail.

Core Techniques

Chain-of-Thought (CoT)

Asks the model to show reasoning step by step before a final answer. Dramatically improves multi-step logic, math, and reasoning tasks.

VariantHow it works
Zero-shot CoTAdd "Let us think step by step" — simple but effective
Few-shot CoTProvide 2-3 examples with worked-out reasoning — more reliable for structured tasks
Key Insight

CoT increases output tokens (more cost, more latency) in exchange for accuracy. For simple classification where the model already performs well, CoT adds cost without benefit. Always match technique to task complexity.

Few-Shot vs Fine-Tuning

When to use each approach
Few-Shot LearningFine-Tuning
< 100 examples1000+ examples with consistent patterns
Requirements change frequentlyStable, well-defined task
Need to iterate quicklyNeed minimal per-request latency/cost
Base model is close to your needsDomain-specific patterns prompting cannot teach
Zero infrastructureRequires training pipeline + model versioning
Interview Tip

The production answer: most teams start with few-shot, measure quality gaps, and only fine-tune when prompting cannot close the gap. Saying this shows you have seen the real workflow.

Structured Output Control

1
Function calling / tool use (most reliable)
Typed schema, returns structured JSON. Supported by Claude, GPT-4. Use this in production.
2
System instructions + few-shot examples
Specify the exact format, provide examples, state the model should output nothing else. Moderately reliable.
3
Free-text generation (least reliable)
Hoping the model follows format instructions. Always add output validation as a safety net.

Reducing Hallucinations

Ground with context (RAG)
Give source material, instruct to answer only from provided context.
Explicit uncertainty
Tell the model to say "I do not know" rather than fabricate.
Citation requirements
Ask the model to cite which source supports each claim. Does not prevent hallucination but makes it detectable.
Temperature control
Lower temperature (0.0-0.3) reduces creative outputs that are more hallucination-prone.
Key Insight

No technique eliminates hallucination entirely. The engineering answer is layered defenses: ground with context, instruct for honesty, verify with citations, monitor with evaluation.

Prompt Injection Defense

Defense in depth
Input Separation
Sanitization
Output Validation
Monitoring
Input-output separation
Clearly delimit user input from system instructions using XML tags or structured formats.
Output validation
Check outputs against expected schemas and behaviors. Catches successful injections after the fact.
Second model check
For high-security: a separate model evaluates whether output looks injection-influenced.

How to Explain This in an Interview

1
Frame as engineering tradeoffs
Not "I would use CoT" but "CoT improves accuracy on multi-step reasoning, and the token cost increase is justified here. For the simpler classification tasks in this pipeline, I would skip it."
2
Describe systematic development
"Baseline with simple prompt, measure, add techniques incrementally, measure each change. Goal: simplest prompt that meets quality requirements."
3
Mention evaluation and versioning
Interviewers at production companies want to hear about prompt test suites, version control, and iteration — not magic phrases.
Common Mistake

Treating prompt engineering as art rather than engineering discipline is a red flag. Production systems need evaluation, versioning, testing, and iteration.

Common Interview Questions

What to Practice Next

Browse all Prompt Engineering interview questions for detailed problems with walkthroughs.

Next module: AI Agents & Tool Use: Design Patterns for Autonomous Systems

Practice Questions

View all prompt engineering questions

Test your knowledge

Reading is step one. Practice with questions on this topic to reinforce what you've learned.

Browse practice questions