knowledgesdk.com/glossary/prompt-engineering
LLMsbeginner

Also known as: prompting

Prompt Engineering

The practice of crafting and optimizing instructions given to an LLM to elicit accurate, relevant, and well-formatted responses.

What Is Prompt Engineering?

Prompt engineering is the discipline of designing, testing, and iterating on the text instructions you provide to a large language model in order to reliably produce the output you want. Because LLMs are sensitive to phrasing, ordering, and context, small changes in a prompt can dramatically shift the quality, format, and accuracy of responses.

Think of a prompt as an interface: the clearer and more precise the interface, the more predictably the model behaves.

Why Prompt Engineering Matters

LLMs are general-purpose systems. Without explicit guidance they may:

  • Return verbose, unformatted prose when you need JSON.
  • Hedge endlessly on questions that have clear answers.
  • Misinterpret ambiguous instructions.
  • Hallucinate facts when context is missing.

Good prompt engineering mitigates these failure modes before resorting to fine-tuning or more expensive interventions.

Core Techniques

Zero-Shot Prompting

Provide only an instruction, no examples. Works well for straightforward tasks.

Summarize the following article in three bullet points:
{article_text}

Few-Shot Prompting

Supply 2–5 input/output examples to demonstrate the desired pattern before the real query.

Chain-of-Thought (CoT)

Ask the model to reason step by step before giving a final answer. Dramatically improves performance on multi-step math and logic problems.

Think step by step, then answer: If a train travels 60 mph for 2.5 hours, how far does it go?

Role Prompting

Frame the model with a persona to set tone and domain expertise.

You are a senior software engineer specializing in TypeScript. Review this code for bugs:

Structured Output Prompting

Instruct the model to return data in a specific schema.

Return a JSON object with keys: title (string), summary (string), tags (array of strings).

Prompt Engineering Best Practices

  • Be explicit about format — specify length, structure (JSON, markdown, list), and tone.
  • Use delimiters — wrap dynamic content in <article>...</article> or triple backticks to prevent prompt injection.
  • Separate instruction from data — place the task description before the content, not mixed within it.
  • Iterate with evaluation — treat prompts like code: version-control them and measure output quality.
  • Reduce ambiguity — "Write a short summary" is vague; "Write a 2-sentence summary for a technical audience" is not.

Prompt Engineering and Knowledge Quality

The best prompt in the world cannot compensate for poor input data. When your LLM application scrapes web pages, the raw HTML noise — navigation bars, cookie banners, ads — pollutes the context window.

KnowledgeSDK solves this upstream. Its /v1/scrape endpoint returns clean markdown from any URL, and /v1/extract adds structured metadata. Feeding cleaned content to your LLM means your prompts can focus on reasoning rather than noise filtering.

const { content } = await sdk.scrape("https://docs.example.com/api");
const prompt = `You are a technical writer. Summarize this API documentation:\n\n${content}`;

From Art to Engineering

Prompt engineering has matured from ad hoc experimentation into a structured practice with reproducible patterns. Teams now maintain prompt libraries, run A/B tests on variants, and use automated evaluations (LLM-as-judge) to score output quality at scale.

Related Terms

LLMsbeginner
Few-Shot Prompting
A prompting technique that provides a small number of input-output examples in the prompt to guide the LLM toward the desired response format.
LLMsbeginner
System Prompt
Instructions placed at the start of an LLM conversation that define the model's role, persona, constraints, and output format.
AI Agentsbeginner
Chain of Thought
A prompting technique that encourages LLMs to reason step-by-step before producing a final answer, improving accuracy on complex tasks.
LLMsbeginner
Large Language Model
A neural network trained on vast text corpora that can generate, summarize, translate, and reason about language.
PrecisionProxy Rotation

Try it now

Build with Prompt Engineering using one API.

Extract, index, and search any web content. First 1,000 requests free.

GET API KEY →
← Back to glossary