Getting Started with LangChain Prompt Templates

Learn LangChain prompt templates

Generative AI is revolutionizing the way we develop applications. Using LLM-based tools like ChatGPT or Gemini, we can create applications that perform multiple tasks simply by changing the instructions they receive. For example, we can use the same model to perform sentiment analysis on a text or to translate it into another language just by changing the instruction given to the model.

The LangChain framework provides the PromptTemplate class to help us create prompts dynamically and achieve flexibility in the instructions we give to LLMs.

Let’s discuss how we can use the PromptTemplate class to structure prompts and dynamically tailor them to specific tasks or applications.

Understanding Prompts in LangChain

A prompt is the text input that we pass to an LLM application. To understand prompts, let us create a generative AI-based application that generates restaurant names based on cuisine and location. For this, we can use the LangChain framework and Gemini API, as shown below:

from langchain_google_genai import ChatGoogleGenerativeAI
import os
# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"
# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")
# Generate results using the LLM application
result = llm.invoke("Suggest one name for a restaurant in England that serves Indian food.")
print(result.content)

We need a Google API key to run the code (or an OpenAI API key if we are using the OpenAI API instead). The above code gives the following output:

Masala Cottage

In the above example, the text “Suggest one name for a restaurant in England that serves Indian food.” is a prompt.
In a generative AI-based application, a prompt can have one or more elements from the following:

  • Context: We use contexts in prompts to provide external information to the LLM application. For instance, if we want the LLM application to analyze a text and give answers based on the information in the text, we can provide the text as context to the LLM application.
  • Query: The text input we pass to the LLM application to get an answer is a query. The query asks a specific question to the LLM application.
  • Instruction: Instructions tell the LLM application how to use the context, format the output, and process the query. For example, if the query cannot be answered using the information in the given context, we can instruct the LLM application to give a specific answer.

To generate answers from an LLM application, we can use the query, context, and instruction as shown in the following example:

from langchain_google_genai import ChatGoogleGenerativeAI
import os
# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"
# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")
# Create a prompt with query, context, and instruction
prompt = """
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know."
Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills. It provides a hands-on, project-based approach to learning, allowing users to write and execute code directly in the browser. The platform covers topics such as web development, data science, computer science, and machine learning. Codecademy features a mix of free and paid content, with the Pro membership granting access to advanced courses, quizzes, and real-world projects. The site also includes community forums, career advice, and a personalized learning path to help users achieve their specific goals.
Query: Is Codecademy an online learning platform?
"""
# Generate results using the LLM application
result = llm.invoke(prompt)
print(result.content)

In the above example, we passed a text containing information about Codecademy as the context, along with an instruction that tells the LLM application how to process the query. For this query, the code gives the following output:

Yes

If we ask a query that cannot be answered using the information in the given context, the LLM will give “I don’t know.” as the answer, as specified in the instruction. Only the query changes in the following example:

from langchain_google_genai import ChatGoogleGenerativeAI
import os
# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"
# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")
# Create a prompt with query, context, and instruction
prompt = """
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know."
Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills. It provides a hands-on, project-based approach to learning, allowing users to write and execute code directly in the browser. The platform covers topics such as web development, data science, computer science, and machine learning. Codecademy features a mix of free and paid content, with the Pro membership granting access to advanced courses, quizzes, and real-world projects. The site also includes community forums, career advice, and a personalized learning path to help users achieve their specific goals.
Query: How many users does Codecademy have?
"""
# Generate results using the LLM application
result = llm.invoke(prompt)
print(result.content)

Output:

I don't know. 

In these examples, the prompt is very large compared to the actual query. The context and the instruction won’t change from query to query; we only need to modify the query to get answers to different questions. Hence, writing out the entire prompt for every new query isn’t practical. We need a way to generate answers by giving only the query as input while reusing the context and the instruction. For this, we can use prompt templates.

What Is a Prompt Template?

A prompt template is a reproducible way to generate prompts. It allows us to create prompts based on existing statements that have placeholders for queries or field names.

For example, consider the following prompt:

prompt = """ 
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know."
Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills. It provides a hands-on, project-based approach to learning, allowing users to write and execute code directly in the browser. The platform covers topics such as web development, data science, computer science, and machine learning. Codecademy features a mix of free and paid content, with the Pro membership granting access to advanced courses, quizzes, and real-world projects. The site also includes community forums, career advice, and a personalized learning path to help users achieve their specific goals. 
Query: How many users does Codecademy have? 
""" 

To be able to give different queries as input using the same context and instruction, we can create a prompt template like so:

prompt_template = """ 
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know."
Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills. It provides a hands-on, project-based approach to learning, allowing users to write and execute code directly in the browser. The platform covers topics such as web development, data science, computer science, and machine learning. Codecademy features a mix of free and paid content, with the Pro membership granting access to advanced courses, quizzes, and real-world projects. The site also includes community forums, career advice, and a personalized learning path to help users achieve their specific goals. 
Query: {input_query} 
""" 

In the above template, we can replace the input_query placeholder with different queries to create complete prompts.
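
To see this reuse in practice, here is a minimal sketch in plain Python that fills the input_query placeholder with different queries using the built-in str.format() method (the context is shortened here for brevity):

# A shortened stand-in for the context-and-instruction template above
prompt_template = """
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know."
Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills.
Query: {input_query}
"""
# Reuse the same template for different queries
queries = [
    "Is Codecademy an online learning platform?",
    "How many users does Codecademy have?",
]
for query in queries:
    prompt = prompt_template.format(input_query=query)
    print(prompt)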

We can also create prompt templates to give variable values to fields in a query. For example, consider the following prompt:

prompt = "Suggest one name for a restaurant in England that serves Indian food."

For the above prompt, we can create a prompt template that takes the cuisine name and country as input and creates a prompt as shown below:

prompt_template = "Suggest one name for restaurant in {country} that serves {cuisine} food."

In the prompt template provided above, we can fill in any value for the country and cuisine fields and generate restaurant names. Thus, prompt templates allow us to create prompts dynamically.

Why Do We Use Prompt Templates?

Prompt templates are a powerful tool that helps us build efficient generative AI-based applications. We can use prompt templates to achieve the following:

  1. Prompt templates can be used to quickly generate multiple prompts for similar scenarios without writing the prompt from scratch every time, helping us create generative AI-based applications that use varying user inputs and contexts.

  2. We can use prompt templates to standardize the structure and content of prompts. This helps us give prompts a consistent structure, leading to reliable and predictable outputs. We can specify the context, instructions, etc., in the prompt templates to generate consistent outputs.

  3. Using prompt templates, we can experiment with different prompt structures to optimize the application’s performance. This helps us fine-tune the prompts for maximum efficiency using the minimum number of tokens.

  4. Prompt templates make maintaining and updating prompts easier if the application requirements change. Instead of modifying prompts scattered throughout the code base, we can update the template, ensuring consistent changes across all instances where the template is used. Thus, prompt templates also help us improve the application’s maintainability.

We have discussed the various theoretical aspects of prompts and prompt templates. Let’s now discuss how to generate prompt templates using LangChain and use them in our generative AI-based applications.

How to Create Prompt Templates in LangChain?

The PromptTemplate class in LangChain provides two ways to create prompt templates:

  1. The from_template() function
  2. The PromptTemplate() function

Let us discuss both approaches to creating prompt templates with LangChain.

Generate Prompt Templates Using the from_template() Function

The from_template() function takes a string template containing placeholders for different values as input to its template parameter and generates a prompt template.

After generating the prompt template, we can invoke the format() method to create prompts. The format() method takes values for all the placeholders in the prompt template and generates the prompt.

As shown below, we can use the prompt returned by the format() method to generate outputs from LLM applications:

from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import os
# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"
# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")
# Create a prompt template
prompt_template = PromptTemplate.from_template(template="Suggest one name for a restaurant in {country} that serves {cuisine} food.")
# Create a prompt using the prompt template
prompt = prompt_template.format(cuisine="Mexican", country="USA")
print("The prompt is:",prompt)
# Generate results using the LLM application
result = llm.invoke(prompt)
print("The output is:",result.content)

Output:

The prompt is: Suggest one name for a restaurant in USA that serves Mexican food.
The output is: Taco Time
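
Because the prompt template is a reusable object, we can call format() on it any number of times with different values. Here is a short sketch (the country/cuisine pairs are illustrative):

from langchain_core.prompts import PromptTemplate
# Reuse the same template for several country/cuisine pairs
prompt_template = PromptTemplate.from_template(template="Suggest one name for a restaurant in {country} that serves {cuisine} food.")
for country, cuisine in [("Italy", "Japanese"), ("France", "Thai")]:
    print(prompt_template.format(country=country, cuisine=cuisine))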

Generate Prompt Templates Using the PromptTemplate() Function

Instead of using the from_template() function, we can also use the PromptTemplate() function to generate prompt templates in LangChain.

The PromptTemplate() function also takes a string template with placeholders as input to its template parameter. Additionally, it takes a list of the placeholder field names from the template string as input to its input_variables parameter.

The PromptTemplate() function returns a prompt template that we can use to generate prompts with the format() method, as shown below:

from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import os
# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"
# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")
# Create a prompt template
prompt_template = PromptTemplate(
    input_variables=["country", "cuisine"],
    template="Suggest one name for a restaurant in {country} that serves {cuisine} food."
)
# Create a prompt using the prompt template
prompt = prompt_template.format(cuisine="Chinese", country="India")
print("The prompt is:", prompt)
# Generate results using the LLM application
result = llm.invoke(prompt)
print("The output is:", result.content)

Output:

The prompt is: Suggest one name for a restaurant in India that serves Chinese food. 
The output is: "Dragon Palace" 

In the above code examples, we can pass any country name and cuisine to the country and cuisine parameters, respectively. This will help us generate restaurant names for different cuisines and locations without changing the prompts. Similarly, we can create prompt templates to specify context and prompt instructions.
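
For example, here is a minimal sketch of a question-answering template that parameterizes both the context and the query (the variable name qa_template and the abbreviated context are illustrative):

from langchain_core.prompts import PromptTemplate
# A template with placeholders for both the context and the query
qa_template = PromptTemplate.from_template(template="""
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know."
Context: {context}
Query: {query}
""")
prompt = qa_template.format(
    context="Codecademy is an interactive online learning platform.",
    query="Is Codecademy an online learning platform?"
)
print(prompt)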

Both of the approaches we discussed produce semantically identical templates, and once defined, we use them in the same way. However, the PromptTemplate() function requires us to explicitly specify the input variables of the prompt, which helps us keep track of all the variables used in the prompt templates, especially in large templates.
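
As a quick check, both kinds of templates expose the variables they expect through their input_variables attribute; from_template() infers the variables from the placeholders, while PromptTemplate() takes them explicitly:

from langchain_core.prompts import PromptTemplate
template_string = "Suggest one name for a restaurant in {country} that serves {cuisine} food."
# from_template() infers the variables from the placeholders
inferred = PromptTemplate.from_template(template=template_string)
print(inferred.input_variables)  # ['country', 'cuisine']
# PromptTemplate() takes the variables explicitly
explicit = PromptTemplate(input_variables=["country", "cuisine"], template=template_string)
print(explicit.input_variables)  # ['country', 'cuisine']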

Conclusion

Prompt templates are a great way to make your prompts dynamic and effective. Here are some key takeaways from this article:

  1. A prompt is the text input that we pass to an LLM application, and it can have multiple elements such as context, instruction, and query.
  2. Prompt templates provide us with a reusable way to generate prompts using a base prompt structure. This helps standardize the structure and content of prompts.
  3. In LangChain, we can use the PromptTemplate() function and the from_template() function of the PromptTemplate class to generate prompt templates.

We recommend you experiment with the code and create prompt templates with different contexts, instructions, and input variables to understand how they can help you create generative AI applications with dynamic prompts.

We hope you enjoyed this article. You can read more articles on artificial intelligence topics.
Happy Learning!

Author

Codecademy Team

The Codecademy Team, composed of experienced educators and tech experts, is dedicated to making tech skills accessible to all. We empower learners worldwide with expert-reviewed content that develops and enhances the technical skills needed to advance and succeed in their careers.
