## What Is Few-Shot Prompting?
Few-shot prompting is a technique where you include a small number of complete input/output examples directly in the prompt to demonstrate to the LLM exactly what kind of response you expect. The model performs in-context learning — inferring the task pattern from the examples without any weight updates — and applies that pattern to the new input.
The term comes from the machine learning concept of "few-shot learning," but in the LLM context it refers entirely to prompt design rather than model training.
## Prompting Variants by Example Count
| Variant | Examples in prompt | Use case |
|---|---|---|
| Zero-shot | 0 | Simple, well-defined tasks |
| One-shot | 1 | When you have one canonical example |
| Few-shot | 2–10 | Complex formatting, classification, extraction |
| Many-shot | 10–100+ | Highly specialized tasks, long context models |
## A Practical Few-Shot Example
Zero-shot (may produce inconsistent output):

```
Extract the company name and founding year from this text:
"Apple was founded by Steve Jobs in 1976."
```
Few-shot (consistent, structured output):

```
Extract company name and founding year as JSON.

Text: "Microsoft was founded by Bill Gates in 1975."
Output: {"company": "Microsoft", "founded": 1975}

Text: "Stripe was founded by Patrick Collison in 2010."
Output: {"company": "Stripe", "founded": 2010}

Text: "Apple was founded by Steve Jobs in 1976."
Output:
```
The model now reliably produces `{"company": "Apple", "founded": 1976}`.
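Prompts like this are usually assembled programmatically rather than written by hand. A minimal TypeScript sketch, where the `Example` type and `buildFewShotPrompt` helper are illustrative names rather than any library's API:

```typescript
// Each example pairs an input text with the exact output we want the
// model to imitate.
type Example = { text: string; output: Record<string, unknown> };

const examples: Example[] = [
  { text: "Microsoft was founded by Bill Gates in 1975.",
    output: { company: "Microsoft", founded: 1975 } },
  { text: "Stripe was founded by Patrick Collison in 2010.",
    output: { company: "Stripe", founded: 2010 } },
];

function buildFewShotPrompt(examples: Example[], query: string): string {
  const shots = examples
    .map(e => `Text: "${e.text}"\nOutput: ${JSON.stringify(e.output)}`)
    .join("\n\n");
  // End with a bare "Output:" so the model completes the pattern directly.
  return `Extract company name and founding year as JSON.\n\n${shots}\n\nText: "${query}"\nOutput:`;
}

const prompt = buildFewShotPrompt(
  examples,
  "Apple was founded by Steve Jobs in 1976."
);
```

Keeping examples as data rather than a hard-coded string makes it easy to swap, reorder, or rebalance them while iterating.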
## Why Few-Shot Prompting Works
During training, LLMs absorb enormous amounts of text full of repeated patterns. When you provide a few examples, you activate the model's learned ability to continue a pattern rather than relying on its ability to interpret abstract instructions. This is especially effective for:
- Unusual output formats not covered well by zero-shot instructions alone.
- Edge case handling — showing how to treat missing values, ambiguous inputs, etc.
- Tone calibration — demonstrating the exact level of formality or conciseness you want.
- Classification tasks — showing label boundaries through examples rather than descriptions.
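The classification case is worth seeing concretely: the label boundaries are conveyed entirely by the examples, not by label definitions. A sketch with invented reviews and a three-way sentiment label set:

```typescript
// Few-shot classification prompt: the "mixed" example shows the model
// where the boundary between positive and negative lies.
const classifyPrompt = `Classify the sentiment of each review as positive, negative, or mixed.

Review: "Great battery life, terrible camera."
Label: mixed

Review: "Arrived broken and support never replied."
Label: negative

Review: "Exactly what I needed, works perfectly."
Label: positive

Review: "The screen is gorgeous but it overheats constantly."
Label:`;
```

Without the `mixed` example, a model would plausibly force the last review into `positive` or `negative`; one example defines the third class better than a paragraph of description.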
## Few-Shot Prompting for Structured Extraction
Few-shot prompting is particularly powerful for web content extraction tasks. Rather than writing elaborate instructions, you show the model what "good extraction" looks like:
```typescript
const systemPrompt = `Extract structured product data from web page content.

Example input:
"The ProWidget X1 retails for $299. Available in blue and red. Ships within 3 days."

Example output:
{"name": "ProWidget X1", "price": 299, "colors": ["blue", "red"], "shipping_days": 3}

Now extract from the following page content:`;

// Fetch the page and use its content as the user message.
const { content } = await sdk.scrape("https://example.com/product");
const userMessage = content;
```
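Assuming a chat-style completion API, the few-shot system prompt and the scraped page content typically travel as a system/user message pair. The `ChatMessage` type and `buildMessages` helper below are a generic sketch, not part of KnowledgeSDK or any specific provider's client:

```typescript
// Generic chat-message shape; swap in your provider's real SDK types.
type ChatMessage = { role: "system" | "user"; content: string };

function buildMessages(systemPrompt: string, pageContent: string): ChatMessage[] {
  return [
    // Few-shot examples live in the system slot...
    { role: "system", content: systemPrompt },
    // ...and the new input (the scraped page) in the user slot.
    { role: "user", content: pageContent },
  ];
}

const messages = buildMessages(
  "Extract structured product data from web page content. ...",
  "The ProWidget X1 retails for $299."
);
// `messages` is then passed to your model client's chat-completion call.
```

Keeping the examples in the system message means the per-request token cost of the user message stays small, and the same example set is reused verbatim across requests.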
KnowledgeSDK's /v1/extract endpoint uses this approach internally — providing the model with structured examples of well-extracted knowledge to ensure consistent, clean output across a wide variety of web page formats.
## Tips for Effective Few-Shot Examples
- Use real, representative examples — synthetic edge cases may teach the model the wrong distribution.
- Cover your edge cases — include an example with a missing field, an ambiguous value, or an unusual format.
- Keep examples concise — long examples inflate token count; trim to the essential signal.
- Order matters — the last example before the real input has disproportionate influence; make it your cleanest one.
- Balance your examples — for classification tasks, include roughly equal examples per class.
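The last two tips, ordering and balance, can be checked mechanically before a prompt ships. A hypothetical sketch, with `Shot`, `isBalanced`, and `orderWithCleanestLast` as invented names:

```typescript
type Shot = { input: string; label: string };

// True when no class has more than `tolerance` extra examples over another.
function isBalanced(shots: Shot[], tolerance = 1): boolean {
  const counts = new Map<string, number>();
  for (const s of shots) counts.set(s.label, (counts.get(s.label) ?? 0) + 1);
  const values = Array.from(counts.values());
  return Math.max(...values) - Math.min(...values) <= tolerance;
}

// Move the cleanest example to the end, where it has the most influence
// on the model's completion of the pattern.
function orderWithCleanestLast(shots: Shot[], cleanestIndex: number): Shot[] {
  const rest = shots.filter((_, i) => i !== cleanestIndex);
  return [...rest, shots[cleanestIndex]];
}
```

Checks like these are cheap to run in tests, so a drive-by edit to the example set can't silently skew the class distribution.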
## Few-Shot vs. Fine-tuning
Few-shot prompting is dramatically easier to iterate on than fine-tuning: change an example, test immediately, no training run required. Start with few-shot prompting and only graduate to fine-tuning when you have 100+ examples and need consistent performance at high request volumes.