knowledgesdk.com/glossary/system-prompt
LLMsbeginner

Also known as: system message, system instruction

System Prompt

Instructions placed at the start of an LLM conversation that define the model's role, persona, constraints, and output format.

What Is a System Prompt?

A system prompt (also called a system message or system instruction) is a special piece of text sent to an LLM at the beginning of a conversation that configures the model's behavior for the entire session. Unlike user messages, the system prompt is typically invisible to the end user — it is set by the developer and remains constant across interactions.

The system prompt is your highest-leverage tool for shaping how an LLM behaves. It is where you establish:

  • Role and persona — "You are a helpful customer support agent for Acme Corp."
  • Tone and style — "Always respond in a concise, professional tone."
  • Constraints — "Never discuss competitor products."
  • Output format — "Always respond with valid JSON."
  • Context injection — Dynamic knowledge, user preferences, or retrieved documents.

System Prompt vs. User Message

Property System Prompt User Message
Who writes it Developer End user (or developer on behalf of user)
Persistence Usually constant across a session Changes each turn
Visibility Hidden from end user Visible to end user
Authority High (most models treat it as authoritative) Lower
Typical content Persona, rules, format, injected context Actual question or task

How System Prompts Are Structured (by Provider)

OpenAI:

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content: "You are a technical documentation assistant. Always respond in markdown with code examples."
    },
    {
      role: "user",
      content: "How do I use the KnowledgeSDK search API?"
    }
  ]
});

Anthropic:

const response = await anthropic.messages.create({
  model: "claude-opus-4-6",
  system: "You are a structured data extraction assistant. Return only valid JSON.",
  messages: [{ role: "user", content: pageContent }]
});

Injecting Dynamic Context

One of the most powerful uses of the system prompt is injecting fresh, dynamic context retrieved at query time — the foundation of RAG (Retrieval-Augmented Generation):

const results = await sdk.search({ query: userQuestion, topK: 5 });
const context = results.map(r => r.content).join("\n\n---\n\n");

const systemPrompt = `You are a helpful assistant. Answer questions using only the following knowledge base excerpts:

<context>
${context}
</context>

If the answer is not in the context, say "I don't have information on that."`;

This pattern ensures the model answers from verified, up-to-date knowledge extracted by KnowledgeSDK rather than from potentially stale parametric memory.

Best Practices

  • Place instructions before context — Models attend more reliably to instructions at the start of the system prompt.
  • Use delimiters — Wrap injected content in XML tags (<context>, <document>) to separate it from instructions.
  • Be explicit about format — If you need JSON, say so and provide the schema.
  • Set a fallback behavior — Tell the model what to do when it cannot answer: "Say 'I don't know' rather than guessing."
  • Keep it focused — Overly long system prompts with contradictory rules degrade model performance.
  • Test for prompt injection — User inputs may attempt to override system prompt instructions; use input sanitization.

System Prompt Security

System prompts are not a security boundary — they are instructions, not access controls. A determined user can often extract or override system prompt instructions through adversarial prompting. For truly sensitive logic (auth, data filtering), implement it in your application layer, not the system prompt.

Related Terms

LLMsbeginner
Prompt Engineering
The practice of crafting and optimizing instructions given to an LLM to elicit accurate, relevant, and well-formatted responses.
AI Agentsbeginner
Guardrails
Safety and policy constraints applied to agent inputs and outputs to prevent harmful, off-topic, or undesired behaviors.
LLMsbeginner
Few-Shot Prompting
A prompting technique that provides a small number of input-output examples in the prompt to guide the LLM toward the desired response format.
Structured OutputTemperature

Try it now

Build with System Prompt using one API.

Extract, index, and search any web content. First 1,000 requests free.

GET API KEY →
← Back to glossary