A Guide to Prompt Templates in LangChain

Published on Apr 5, 2024

A LangChain prompt template is a class containing elements you typically need for a Large Language Model (LLM) prompt. At a minimum, these are:

  • A natural language string that serves as the prompt: this can be a simple text string or, for prompts with dynamic content, an f-string or docstring containing placeholders that represent variables.
  • Formatting instructions (optional) that specify how dynamic content should appear in the prompt, e.g., whether it should be italicized, capitalized, etc.
  • Input parameters (optional) that you pass into the prompt class to provide instructions or context for generating prompts. These parameters influence the content, structure, or formatting of the prompt. Often they're variables for the placeholders in the string, whose values resolve to produce the final string that goes to the LLM, via an API call, as the prompt.

A LangChain prompt template defines how prompts for LLMs should be structured, and provides opportunities for reuse and customization. You can extend a template class for new use cases. These classes are called “templates” because they save you time and effort, and simplify the process of generating complex prompts.

The prompts themselves can be as simple or as complex as you need them to be. They can be a single question to the LLM, or they can consist of several parts, like a part explaining context, a part containing examples, etc., to elicit more relevant or nuanced responses. How you structure your prompt will depend on your use case; there's no universal best practice requiring that a prompt contain separate parts like context, roles, and so on.
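For illustration, a multi-part prompt can be assembled in plain Python before any template class enters the picture (the strings and names here are our own, not from LangChain):

```python
# Framework-free sketch: context, few-shot examples, and the actual
# question, joined into a single prompt string.
context = "You are a helpful cooking assistant."
examples = [
    ("What goes well with basil?", "Tomatoes, mozzarella, and olive oil."),
    ("What goes well with dill?", "Salmon, cucumber, and yogurt."),
]
question = "What goes well with rosemary?"

example_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
prompt = f"{context}\n\n{example_block}\n\nQ: {question}\nA:"
print(prompt)
```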

LangChain encourages developers to use their prompt templates to ensure a given level of consistency in how prompts are generated. This consistency, in turn, should achieve reliable and predictable model responses. Consistent prompt structures help to fine-tune model performance over time by reducing variability in inputs, and they support iterative model improvement and optimization.

LangChain's prompt templates are a great solution for creating intricate prompts, and we appreciate their functionality. However, in codebases for production-grade LLM applications, those making on the order of a hundred LLM calls or more, prompting in our experience becomes harder to manage and organize using LangChain's native prompt templates and management system. Mirascope's focus on developer best practices promotes a norm of writing clean code that's easy to read, easy to find, and easy to use.

In this article, we give an overview of how LangChain prompt templates work, with examples. Then we explain how prompting works in Mirascope and highlight its differences from LangChain.

3 Types of LangChain Prompt Templates

When you prompt in LangChain, you’re encouraged (but not required) to use a predefined template class such as:

  • `PromptTemplate` for creating basic prompts.
  • `FewShotPromptTemplate` for few-shot learning.
  • `ChatPromptTemplate` for modeling chatbot interactions.

Prompt types are designed for flexibility, not exclusivity, allowing you to blend their features, like merging a `FewShotPromptTemplate` with a `ChatPromptTemplate`, to suit diverse use cases.


PromptTemplate

LangChain's `PromptTemplate` class creates a dynamic string with variable placeholders:

from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate.from_template(
    "Write a delicious recipe for {dish} with a {flavor} twist."
)

# Formatting the prompt with new content
formatted_prompt = prompt_template.format(dish="pasta", flavor="spicy")

It contains all the elements needed to create the prompt, but doesn’t feature autocomplete for the variables `dish` and `flavor`.

LangChain's templates use Python's `str.format` by default, but for more complex prompts you can also use jinja2 (by passing `template_format="jinja2"`).
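The practical difference is placeholder syntax and power: `str.format` resolves `{variable}` placeholders, while jinja2 uses `{{ variable }}` and adds logic such as loops and conditionals. A minimal sketch of the default behavior, using only plain Python:

```python
# Default templating: Python's str.format resolves {placeholders}.
template = "Write a delicious recipe for {dish} with a {flavor} twist."
formatted = template.format(dish="pasta", flavor="spicy")
print(formatted)

# The equivalent jinja2 template would read:
#   "Write a delicious recipe for {{ dish }} with a {{ flavor }} twist."
# and additionally supports constructs like {% for ... %} loops.
```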


FewShotPromptTemplate

Often, a more useful form of prompting than sending a simple string with a request or question is to include several examples of the outputs you want.

This is few-shot learning, and it's used to guide models to perform new tasks well even when only a limited number of examples is available.

Many real-world use cases benefit from few-shot learning, for instance:

  • An automated fact checking tool where you provide different few-shot examples where the model is shown how to verify information, ask follow-up questions if necessary, and conclude whether a statement is true or false.
  • A technical support and troubleshooting guide that assists users in diagnosing and solving issues with products or software, where the `FewShotPromptTemplate` could contain examples of common troubleshooting steps, including how to ask the user for specific system details, interpret symptoms, and guide them through the solution process.

The `FewShotPromptTemplate` class takes a list of question-and-answer dictionaries as examples, along with a suffix containing the new question:

from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate

examples = [
    {
        "question": "What is the tallest mountain in the world?",
        "answer": "Mount Everest",
    },
    {
        "question": "What is the largest ocean on Earth?",
        "answer": "Pacific Ocean",
    },
    {
        "question": "In which year did the first airplane fly?",
        "answer": "1903",
    },
]

example_prompt = PromptTemplate(
    input_variables=["question", "answer"],
    template="Question: {question}\n{answer}",
)

prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Question: {input}",
    input_variables=["input"],
)

print(prompt.format(input="What is the name of the famous clock tower in London?"))


ChatPromptTemplate

The `ChatPromptTemplate` class focuses on the conversation flow between a user and an AI system, and provides instructions or requests for roles like user, system, and assistant (the exact roles available depend on your LLM provider).

Such roles give deeper context to the LLM and elicit better responses that help the model grasp the situation more holistically. System messages in particular provide implicit instructions or set the scene, informing the LLM of expected behavior.

from langchain_core.prompts import ChatPromptTemplate

# Define roles and placeholders
chat_template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a knowledgeable AI assistant. You are called {name}."),
        ("user", "Hi, what's the weather like today?"),
        ("ai", "It's sunny and warm outside."),
        ("user", "{user_input}"),
    ]
)

messages = chat_template.format_messages(name="Alice", user_input="Can you tell me a joke?")

The roles in this class are:

  • `system`, for a system chat message that sets the stage (e.g., "You are a knowledgeable AI assistant").
  • `user`, which contains the user's specific question or request.
  • `ai`, which contains the LLM's preliminary response or follow-up question.

Once the template object is instantiated, you can use it to generate chat prompts by replacing the placeholders with actual content.

This prompt sets the context, provides the user's question, and typically leaves the AI response blank for the LLM to generate.
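Conceptually, a formatted chat prompt is just a list of role-tagged messages with their placeholders resolved. A framework-free sketch of what `format_messages` produces (the helper below is our own, not LangChain's):

```python
# Each message is a (role, template) pair; formatting resolves the
# placeholders and tags each message with its role.
chat_template = [
    ("system", "You are a knowledgeable AI assistant. You are called {name}."),
    ("user", "{user_input}"),
]

def format_messages(template, **variables):
    return [
        {"role": role, "content": content.format(**variables)}
        for role, content in template
    ]

messages = format_messages(chat_template, name="Alice", user_input="Can you tell me a joke?")
print(messages)
```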

Chaining and Pipelines in LangChain

As its name suggests, LangChain is designed to handle complex pipelines consisting of disparate components, like prompts and output parsers, that get chained together—often through special classes and methods, as well as via the pipe operator (`|`). 

Examples of special LangChain structures for chaining components are classes like `LLMChain`, `SequentialChain`, and `Runnable`, and methods such as `from_prompts`.

Here, however, we’ll highlight LangChain’s use of piping to pass outputs of one component to the next, to achieve an end result.

Chaining with the Pipe Operator

The code below shows an example of a LangChain pipeline joining together the following components: a context (retrieved from a vector store), a prompt template, an LLM interaction, and an output parser function, all in a single statement (at the bottom, under `retrieval_chain`):

from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["Julia is an expert in machine learning"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = ChatOpenAI()

retrieval_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

retrieval_chain.invoke("what is Julia's expertise?")

In the above chain, the inputs are dynamically provided content (i.e., context from the `retriever` and a question passed as is without change), which are fed into a template that formats them into a complete prompt. This formatted prompt is then processed by the model, and the model's output is parsed into a structured format.

Forwarding Data Unchanged

The example also illustrates LangChain’s `RunnablePassthrough`, an object that forwards input data without changes, if these are already in the desired format.

In the example, `RunnablePassthrough` takes the input from the preceding part of the pipeline—namely, the outputs of `retriever` and the input `question`—and forwards these unaltered to `prompt` for further processing.
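Conceptually, `RunnablePassthrough` is just an identity step. A stripped-down sketch (our own class, not LangChain's):

```python
# An identity step: whatever comes in goes out unchanged, so downstream
# components receive the raw input alongside any transformed values.
class Passthrough:
    def invoke(self, value):
        return value

step = Passthrough()
print(step.invoke("what is Julia's expertise?"))
```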

LangChain also offers the `Runnable.bind` method if you want to add conditions to the pipeline at runtime. An example is `model.bind(stop="SOLUTION")` below, which stops the model’s execution when the token or text "SOLUTION" is encountered:

runnable = (
    {"equation_statement": RunnablePassthrough()}
    | prompt
    | model.bind(stop="SOLUTION")
    | StrOutputParser()
)

print(runnable.invoke("x raised to the third plus seven equals 12"))

This means that before executing the `model`, `.bind` creates a new `Runnable` that has been pre-configured with a stop parameter set to "SOLUTION". This pre-configuration doesn’t execute the model; it only sets up how the model should behave once it’s invoked.

Using `.bind` in such a way is useful in scenarios where you want to add user interactivity to control a chat interaction, such as when users mark a response from a tech support bot as the solution.
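In spirit, `.bind` works like partial application: it fixes some arguments now and executes later. A rough analogy using `functools.partial` (the model function here is a stand-in we made up):

```python
from functools import partial

def run_model(prompt, stop=None):
    """Stand-in for a model call; truncates output at the stop string."""
    output = "Working through the steps... SOLUTION: x = 42"
    if stop is not None:
        output = output.split(stop)[0]
    return output

# Like model.bind(stop="SOLUTION"): pre-configured, but not yet executed.
bound_model = partial(run_model, stop="SOLUTION")
print(bound_model("solve for x"))
```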

How Prompting in Mirascope Works

As suggested previously, it gets harder to manage prompts and LLM calls at scale when these are separately located. That’s why Mirascope makes the LLM call the central organizing unit around which everything gets versioned, including the prompt. We refer to this as colocation—everything that affects the quality of the call, from the prompt to model parameters, should live with the call. 

We believe this provides an efficient approach to managing your codebase, and brings benefits such as simplicity, clarity, and maintainability.

With our library, all you need to know to accomplish effective prompt engineering is Python and Pydantic. We don't introduce new, complex structures, and you can code as you need to. For example, if you need an output parser, you just write it in Python without worrying whether it will later compose correctly with some other special class.

Mirascope’s `BasePrompt` Class

As a library premised on best developer practices, Mirascope offers its `BasePrompt` class (extended below via `OpenAICall` for the OpenAI API), which centralizes internal prompt logic such as the model configuration defined by `OpenAICallParams`:

# prompts/travel_recommendation.py
from mirascope.openai import OpenAICall, OpenAICallParams


class TravelRecommendationPrompt(OpenAICall):
    """
    I've recently visited the following places: {places_in_quotes}.

    Where should I travel to next?
    """

    visited_places: list[str]

    call_params: OpenAICallParams = OpenAICallParams(model="gpt-4-turbo")

    @property
    def places_in_quotes(self) -> str:
        """Returns a comma separated list of visited places each in quotes."""
        return ", ".join([f'"{place}"' for place in self.visited_places])

Another example of centralized prompt logic is the class property `places_in_quotes`, which dynamically constructs part of the prompt (the list of visited places, each in quotes) from the class's state (`visited_places`), illustrating the class's capacity to manage its own state and use it to systematically influence the prompt's content.

Mirascope’s prompt class also provides automatic data validation by extending Pydantic’s `BaseModel`, which means that:

  • Your prompt inputs are seamlessly type checked and constrained, without you having to write extra error validation or handling logic. Pydantic also reports when data fails validation (e.g., in your IDE and at runtime), allowing you to quickly identify which field failed validation and why.
  • You can export your prompt instances to dictionaries, JSON, or other formats, which facilitates serialization and interoperability with other systems or APIs.
  • Using Pydantic's `AfterValidator` class, you can do custom validation. For example, you can verify that a company's internal policies comply with existing regulations, something that would be difficult to verify in code without an LLM's assistance.
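The first two points are plain Pydantic behavior, which a small sketch can show (the prompt class below is our own illustration, not Mirascope's API; it assumes Pydantic v2):

```python
from pydantic import BaseModel, ValidationError

class RecipePrompt(BaseModel):
    dish: str
    flavor: str

    @property
    def prompt(self) -> str:
        return f"Write a delicious recipe for {self.dish} with a {self.flavor} twist."

p = RecipePrompt(dish="pasta", flavor="spicy")
print(p.model_dump())       # export to a dict
print(p.model_dump_json())  # export to JSON

try:
    RecipePrompt(dish="pasta", flavor=[1, 2])  # wrong type for `flavor`
except ValidationError as e:
    print(e)  # reports which field failed validation and why
```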

Below is an example of using `AfterValidator` to verify a company policy. `CAPolicyComplianceChecker` checks whether policy text is compliant, the `validate_compliance` function asserts compliance based on the LLM's output, and the `Policy` model applies this custom validation logic via `AfterValidator`:

from enum import Enum
from typing import Annotated, Type

from pydantic import AfterValidator, BaseModel, ValidationError

from mirascope.anthropic import AnthropicExtractor


class ComplianceStatus(Enum):
    COMPLIANT = "compliant"
    NON_COMPLIANT = "non_compliant"


class CAPolicyComplianceChecker(AnthropicExtractor[ComplianceStatus]):
    extract_schema: Type[ComplianceStatus] = ComplianceStatus

    prompt_template = """
    Is the following policy compliant with CA regulations?
    {policy_text}
    """

    policy_text: str


def validate_compliance(policy_text: str) -> str:
    """Check if the policy is compliant with local regulations."""
    compliance_checker = CAPolicyComplianceChecker(policy_text=policy_text)
    compliance_status = compliance_checker.extract()
    assert compliance_status == ComplianceStatus.COMPLIANT, "Policy is not compliant."
    return policy_text


class Policy(BaseModel):
    text: Annotated[str, AfterValidator(validate_compliance)]


class PolicyWriter(AnthropicExtractor[Policy]):
    extract_schema: Type[Policy] = Policy

    prompt_template = """
    Write an internal company policy document that is compliant with CA regulations.
    """


try:
    policy = PolicyWriter().extract()
    print(policy.text)
except ValidationError as e:
    print(e)
    # > 1 validation error for Policy
    # text
    #   Assertion failed, Policy is not compliant.
    #   [type=assertion_error, input_value="The policy text here...", input_type=str]
    #     For further information visit https://errors.pydantic.dev/2.6/v/assertion_error

One of the most important aspects of prompt validation, and the main reason we designed Mirascope this way, is preventing uncaught errors from entering your prompts and affecting LLM outputs. We want to ensure type safety and the general quality of your prompts.

Prompting with Mirascope vs. LangChain

As you may have noticed, Mirascope and LangChain handle prompting similarly in terms of using dedicated classes that contain strings with variable placeholders. However, we’d like to point out a few differences.

One difference is in error handling. Below is an example of a LangChain `ChatPromptTemplate`:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a fun fact about {topic}")
model = ChatOpenAI(model="gpt-4")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

chain.invoke({"topic": "pandas"})

LangChain doesn't offer editor support for `topic`, since the input to `invoke` is a dict with string keys. You could enter `topics` (plural) in the invocation dictionary and the editor would give you no warning or error message.

The error would only be shown at runtime:

KeyError: Input to ChatPromptTemplate is missing variables

In contrast, the following `OpenAICall` example from Mirascope (which extends `BasePrompt` to support interacting with the OpenAI API) defines `storyline` as a typed string attribute, so passing in, say, a misspelled or pluralized name would automatically generate an error, thanks to Pydantic:

import os

from mirascope.openai import OpenAICall, OpenAICallParams

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"


class Editor(OpenAICall):
    prompt_template = """
    SYSTEM:
    You are a top class manga editor.

    USER:
    I'm working on a new storyline. What do you think?
    {storyline}
    """

    storyline: str

    call_params = OpenAICallParams(model="gpt-4-turbo", temperature=0.4)


storyline = "..."
editor = Editor(storyline=storyline)

print(editor.messages())
# > [{'role': 'system', 'content': 'You are a top class manga editor.'}, {'role': 'user', 'content': "I'm working on a new storyline. What do you think?\n..."}]

critique = editor.call()
print(critique.content)
# > I think the beginning starts off great, but...

In our IDE, trying to pass in a wrong name like `storylines` generates an error:

Unexpected keyword argument "storylines" for "Editor"

And your IDE also provides autocompletion for the attribute.

(You get both Mirascope’s and Pydantic’s API documentation in your IDE.)

The catch is that with LangChain, if you haven't defined your own error handling logic, it might take a while to figure out where bugs originate. Mirascope warns you of such errors immediately.

Mirascope Colocates Prompts with LLM Calls

Another important difference is that Mirascope colocates everything contributing to the output of an LLM call together with the prompt to simplify oversight and modifications.

As we've seen previously, information regarding the API call, such as parameters defining model type and temperature, is passed into `call_params`, which is typically defined inside the prompt class.

`call_params` also ties tools (function calling) to LLM calls, increasing cohesion of the code even further and reducing or eliminating any boilerplate or convoluted callback mechanisms needed to extend LLM capabilities. 

For instance, the tool `get_current_weather` below is tied to the LLM call via `call_params`:

from typing import Literal

from mirascope.openai import OpenAICall, OpenAICallParams


def get_current_weather(
    location: str, unit: Literal["celsius", "fahrenheit"] = "fahrenheit"
):
    """Get the current weather in a given location."""
    if "tokyo" in location.lower():
        print(f"It is 10 degrees {unit} in Tokyo, Japan")
    elif "san francisco" in location.lower():
        print(f"It is 72 degrees {unit} in San Francisco, CA")
    elif "paris" in location.lower():
        print(f"It is 22 degrees {unit} in Paris, France")
    else:
        print(f"I'm not sure what the weather is like in {location}")


class Forecast(OpenAICall):
    prompt_template = "What's the weather in Tokyo?"

    call_params = OpenAICallParams(model="gpt-4-turbo", tools=[get_current_weather])


tool = Forecast().call().tool
if tool:
    tool.fn(**tool.args)
    # > It is 10 degrees fahrenheit in Tokyo, Japan

Encapsulating the call parameters within the prompt class in this way makes your code more organized, modular, and easier to maintain. It also promotes reusability, as the same set of parameters (such as the tool in the above example) can be reused across multiple API calls with minimal modification.

Additionally, colocating all the relevant information together means it all gets versioned as a single unit (via Mirascope’s CLI), allowing you to easily track all changes, so you’re effectively pushing as much information as feasible into the version.

By contrast, LangChain doesn't encourage you to colocate everything with the LLM call, which increases the risk that relevant code gets scattered around the codebase and requires you to track everything manually.

This means that if, in LangChain, you end up defining your model type (e.g., gpt-4-turbo) separately from your prompt, it takes more effort to find that code when you need to modify it.

To associate model information with a prompt in LangChain, you use `Runnable.bind`—as we previously discussed in the context of LangChain pipelines and chains. This is the equivalent of Mirascope’s `call_params` class attribute. The biggest difference here is that `call_params` is tied to the prompt class, whereas LangChain’s `.bind` isn’t, as shown below:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "Translate the given word problem into a mathematical equation and solve it.",
        ),
        ("human", "{equation_statement}"),
    ]
)

# `function` is assumed to be an OpenAI function schema (e.g., for an
# "equation_solver") defined elsewhere.
model = ChatOpenAI(model="gpt-4-turbo", temperature=0).bind(
    function_call={"name": "equation_solver"}, functions=[function]
)

runnable = {"equation_statement": RunnablePassthrough()} | prompt | model

runnable.invoke("the square root of a number plus five is equal to eight")

Chaining in Mirascope vs. LangChain

The key difference between how Mirascope and LangChain accomplish chaining is that Mirascope’s approach relies on existing structures already defined in Python, whereas LangChain’s approach requires an explicit definition of chains and their flows. For this, they offer their LangChain Expression Language (LCEL), which provides an interface for building complex chains. 

This approach involves using specific classes and methods provided by the LangChain framework, which introduces additional layers of abstraction. This requires developers to adapt to LangChain’s specific way of structuring applications.
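The pipe syntax itself is ordinary Python operator overloading: each component implements `__or__` and returns a composed object. A stripped-down sketch of the mechanics (our own classes, not LangChain's LCEL internals):

```python
class Runnable:
    """Minimal pipeable component: wraps a function and composes with `|`."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Return a new Runnable that feeds this one's output into the next.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda q: f"Answer briefly: {q}")
model = Runnable(lambda p: f"<llm output for: {p}>")
parser = Runnable(lambda s: s.strip("<>"))

chain = prompt | model | parser
print(chain.invoke("what is Julia's expertise?"))
```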

For example, simple chains in LangChain are clean and easy to use:

from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["Julia is an expert in machine learning"], embedding=OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""

question = "what is Julia's expertise?"
context = retriever.invoke(question)  # Retrieve context based on the question

prompt = ChatPromptTemplate.from_template(template.format(context=context, question=question))
model = ChatOpenAI()

retrieval_chain = prompt | model | StrOutputParser()

result = retrieval_chain.invoke({})

However, for passing arguments through the chain at runtime, or to achieve reusability, you might consider using LangChain’s `RunnablePassthrough` (as shown in previous examples). This is a LangChain-specific construct to master, however. And using such structures throughout your codebase eventually adds a layer of complexity to your code.

Mirascope’s approach, however, is more implicit, and leverages Python’s syntax and inheritance to chain together components.

An example of this is the `RecipeRecommender` class in the code below, which extends `ChefSelector` (inheriting its `food_type` attribute) and uses `@cached_property` for its `chef` property, ensuring that the API call determining the chef from the food type is made only once, even if the property is accessed multiple times:

import os
from functools import cached_property

from mirascope.openai import OpenAICall, OpenAICallParams

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"


class ChefSelector(OpenAICall):
    prompt_template = "Name a chef who is really good at cooking {food_type} food"

    food_type: str

    call_params = OpenAICallParams(model="gpt-4-turbo")


class RecipeRecommender(ChefSelector):
    prompt_template = """
    SYSTEM:
    Imagine that you are chef {chef}.
    Your task is to recommend recipes that you, {chef}, would be excited to serve.

    USER:
    Recommend a {food_type} recipe using {ingredient}.
    """

    ingredient: str

    call_params = OpenAICallParams(model="gpt-4")

    @cached_property  # !!! so multiple access doesn't make multiple calls
    def chef(self) -> str:
        """Uses `ChefSelector` to select the chef based on the food type."""
        return ChefSelector(food_type=self.food_type).call().content


response = RecipeRecommender(food_type="japanese", ingredient="apples").call()
print(response.content)
# > Certainly! Here's a recipe for a delicious and refreshing Japanese Apple Salad: ...

This style of chaining encapsulates related functionality in the class, making the code more readable and maintainable.
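The caching itself is standard library behavior: `functools.cached_property` runs the method on first access, stores the result on the instance, and reuses it afterward. A quick stdlib illustration (the class and counter are our own):

```python
from functools import cached_property

class ChefLookup:
    calls = 0  # counts how many times the "API call" actually runs

    @cached_property
    def chef(self) -> str:
        type(self).calls += 1  # stand-in for the ChefSelector API call
        return "a renowned sushi chef"

lookup = ChefLookup()
print(lookup.chef)    # first access runs the method
print(lookup.chef)    # second access reuses the cached value
print(ChefLookup.calls)
```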

Take a Modular Approach to Building Complex LLM Applications

Choosing the right library will depend on the complexity of your project. For straightforward tasks and prompt designs, the OpenAI SDK, or the respective API for your chosen LLM, is often sufficient.

For simple prompt chains, LangChain works fine because its chaining offers a clean structure for such use cases. But the moment you move into more complex scenarios, LangChain becomes complicated to use.

Mirascope’s philosophy is that a development library should let you build a complex LLM application or system, if that’s what you want. But it shouldn’t build that complex app or system for you, because that would mean dictating how you should build it.

Moreover, Mirascope eliminates the need for extensive boilerplate code and complex abstractions. As a viable LangChain alternative, Mirascope simplifies the development process to its core components: identifying your prompts, defining your variables, and specifying your calls.

This approach lets developers focus on their main task of prompting and building LLM applications, and less on tracking errors and dealing with unnecessary complexity.

Want to learn more? You can find more Mirascope code samples on both our documentation site and on GitHub.

Join our beta list!

Get updates and early access to try out new features as a beta tester.
