Getting Started with LangChain Prompt Templates
Generative AI is revolutionizing the way we develop applications. Using LLMs like ChatGPT or Gemini, we can create applications that perform multiple tasks simply by giving different instructions. For example, we can use the same LLM to perform sentiment analysis on a text or translate it into another language just by changing the instruction given to the model.
The LangChain framework provides the PromptTemplate module to help us create prompts dynamically and achieve flexibility in giving instructions to LLMs.
Let’s discuss how we can use the PromptTemplate module to structure prompts and dynamically create prompts tailored to specific tasks or applications.
Understanding Prompts in LangChain
A prompt is the text input that we pass to an LLM application. To understand prompts, let us create a generative AI-based application that generates restaurant names based on cuisine and location. For this, we can use the LangChain framework and Gemini API, as shown below:
from langchain_google_genai import ChatGoogleGenerativeAI
import os

# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"

# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Generate results using the LLM application
result = llm.invoke("Suggest one name for a restaurant in England that serves Indian food.")
print(result.content)
We need a Google API key to run this code (or an OpenAI API key if we are using the OpenAI API instead). The above code gives the following output:
Masala Cottage
In the above example, the text “Suggest one name for a restaurant in England that serves Indian food.” is a prompt.
In a generative AI-based application, a prompt can have one or more elements from the following:
- Context: We use contexts in prompts to provide external information to the LLM application. For instance, if we want the LLM application to analyze a text and give answers based on the information in the text, we can provide the text as context to the LLM application.
- Query: The text input we pass to the LLM application to get an answer is a query. The query asks a specific question to the LLM application.
- Instruction: Instructions tell the LLM application how to use the context, format the output, and process the query. For example, if we provide a context and the query cannot be answered using the information in it, we can instruct the LLM application to give a specific answer.
To generate answers from an LLM application, we can use the query, context, and instruction as shown in the following example:
from langchain_google_genai import ChatGoogleGenerativeAI
import os

# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"

# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Create a prompt with query, context, and instruction
prompt = """
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know.".

Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills. It provides a hands-on, project-based approach to learning, allowing users to write and execute code directly in the browser. The platform covers topics such as web development, data science, computer science, and machine learning. Codecademy features a mix of free and paid content, with the Pro membership granting access to advanced courses, quizzes, and real-world projects. The site also includes community forums, career advice, and a personalized learning path to help users achieve their specific goals.

Query: Is Codecademy an online learning platform?
"""

# Generate results using the LLM application
result = llm.invoke(prompt)
print(result.content)
In the above example, we passed a text containing information about Codecademy as context and an instruction that tells the LLM application how to process the query. For the query in the above example, the code gives the following output:
Yes
If we ask a query that cannot be answered using the information in the given context, the LLM will give “I don’t know.” as the answer, as specified in the instruction.
from langchain_google_genai import ChatGoogleGenerativeAI
import os

# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"

# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Create a prompt with query, context, and instruction
prompt = """
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know.".

Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills. It provides a hands-on, project-based approach to learning, allowing users to write and execute code directly in the browser. The platform covers topics such as web development, data science, computer science, and machine learning. Codecademy features a mix of free and paid content, with the Pro membership granting access to advanced courses, quizzes, and real-world projects. The site also includes community forums, career advice, and a personalized learning path to help users achieve their specific goals.

Query: How many users does Codecademy have?
"""

# Generate results using the LLM application
result = llm.invoke(prompt)
print(result.content)
Output:
I don't know.
The prompt is very large in these examples compared to the actual query. The context and instruction won’t change for any query; we only need to modify the query to get answers to different questions. Hence, writing the entire prompt whenever we want to execute a new query won’t be practical. We need to find a way to generate answers only by giving the query as input and reusing the context and the instruction. For this, we can use prompt templates.
What Is a Prompt Template?
A prompt template is a reproducible way to generate prompts. It allows us to create prompts based on existing statements that have placeholders for queries or field names.
For example, consider the following prompt:
prompt = """
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know.".
Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills. It provides a hands-on, project-based approach to learning, allowing users to write and execute code directly in the browser. The platform covers topics such as web development, data science, computer science, and machine learning. Codecademy features a mix of free and paid content, with the Pro membership granting access to advanced courses, quizzes, and real-world projects. The site also includes community forums, career advice, and a personalized learning path to help users achieve their specific goals.
Query: How many users does Codecademy have?
"""
To be able to give different queries as input using the same context and instruction, we can create a prompt template like so:
prompt_template = """
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know.".
Context: Codecademy is an interactive online learning platform offering courses in various programming languages and tech skills. It provides a hands-on, project-based approach to learning, allowing users to write and execute code directly in the browser. The platform covers topics such as web development, data science, computer science, and machine learning. Codecademy features a mix of free and paid content, with the Pro membership granting access to advanced courses, quizzes, and real-world projects. The site also includes community forums, career advice, and a personalized learning path to help users achieve their specific goals.
Query: {input_query}
"""
In the above template, we can replace the input_query variable to create the complete prompt for different queries.
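At its core, this substitution is ordinary string formatting. Here is a minimal sketch using Python's built-in str.format() to fill the placeholder (the context is abbreviated here purely for illustration):

```python
# A shortened stand-in for the full template above; the context is
# abbreviated for illustration only.
prompt_template = """
Instruction: Answer the question based on the context below. If you cannot answer the question with the given context, answer with "I don't know.".
Context: Codecademy is an interactive online learning platform.
Query: {input_query}
"""

# Fill the placeholder with a concrete query to build the complete prompt
prompt = prompt_template.format(input_query="Is Codecademy an online learning platform?")
print(prompt)
```

Only the query changes between calls; the instruction and context are reused verbatim.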
We can also create prompt templates to give variable values to fields in a query. For example, consider the following prompt:
prompt = "Suggest one name for a restaurant in England that serves Indian food."
For the above prompt, we can create a prompt template that takes the cuisine name and country as input and creates a prompt as shown below:
prompt_template = "Suggest one name for a restaurant in {country} that serves {cuisine} food."
In the prompt template provided above, we can fill in any value for the country and cuisine fields and generate restaurant names. Thus, prompt templates allow us to create prompts dynamically.
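Because the placeholders are standard Python format fields, the same template can produce many prompts in a loop; a small sketch (the country/cuisine pairs are arbitrary examples):

```python
prompt_template = "Suggest one name for a restaurant in {country} that serves {cuisine} food."

# Generate one prompt per (country, cuisine) pair from the same template
pairs = [("USA", "Mexican"), ("India", "Chinese"), ("England", "Indian")]
prompts = [prompt_template.format(country=country, cuisine=cuisine) for country, cuisine in pairs]

for p in prompts:
    print(p)
```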
Why Do We Use Prompt Templates?
Prompt templates are a powerful tool to help you build efficient generative AI-based applications. We can use prompt templates to achieve the following:
- Prompt templates let us quickly generate multiple prompts for similar scenarios without writing each prompt from scratch, helping us create generative AI-based applications that handle varying user inputs and contexts.
- We can use prompt templates to standardize the structure and content of prompts. This gives prompts a consistent structure, leading to reliable and predictable outputs. We can specify the context, instructions, etc., in the prompt templates to generate consistent outputs.
- Using prompt templates, we can experiment with different prompt structures to optimize the application’s performance. This helps us fine-tune the prompts for maximum efficiency using the minimum number of tokens.
- Prompt templates make maintaining and updating prompts easier if the application requirements change. Instead of modifying prompts scattered throughout the code base, we can update the template, ensuring consistent changes across all instances where the template is used. Thus, prompt templates also improve the application’s maintainability.
We have discussed the various theoretical aspects of prompts and prompt templates. Let’s now discuss how to generate prompt templates using LangChain and use them in our generative AI-based applications.
How to Create Prompt Templates in LangChain?
The PromptTemplate module in LangChain provides two ways to create prompt templates:

- The from_template() function
- The PromptTemplate() function
Let us discuss both approaches to creating prompt templates with LangChain.
Generate Prompt Templates Using the from_template() Function
The from_template() function takes a string template with placeholders for different values as the input argument to its template parameter and generates a prompt template.
After generating the prompt template, we can invoke the format() method to create prompts. The format() method takes values for all the placeholders in the prompt template and returns the complete prompt.
As shown below, we can use the prompt returned by the format() method to generate outputs from the LLM application:
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import os

# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"

# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Create a prompt template
prompt_template = PromptTemplate.from_template(template="Suggest one name for a restaurant in {country} that serves {cuisine} food.")

# Create a prompt using the prompt template
prompt = prompt_template.format(cuisine="Mexican", country="USA")
print("The prompt is:", prompt)

# Generate results using the LLM application
result = llm.invoke(prompt)
print("The output is:", result.content)
Output:
The prompt is: Suggest one name for a restaurant in USA that serves Mexican food.
The output is: Taco Time
Generate Prompt Templates Using the PromptTemplate() Function
Instead of using the from_template() function, we can also use the PromptTemplate() function to generate prompt templates in LangChain.
The PromptTemplate() function also takes a string template with placeholders as input to its template parameter. Additionally, it takes the list of placeholder field names from the template string as input to its input_variables parameter.
After execution, the PromptTemplate() function returns a prompt template that we can use to generate prompts with the format() method, as shown below:
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import os

# Specify Google API key
os.environ['GOOGLE_API_KEY'] = "your_API_key"

# Initialize an LLM application
llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Create a prompt template
prompt_template = PromptTemplate(
    input_variables=["country", "cuisine"],
    template="Suggest one name for a restaurant in {country} that serves {cuisine} food."
)

# Create a prompt using the prompt template
prompt = prompt_template.format(cuisine="Chinese", country="India")
print("The prompt is:", prompt)

# Generate results using the LLM application
result = llm.invoke(prompt)
print("The output is:", result.content)
Output:
The prompt is: Suggest one name for a restaurant in India that serves Chinese food.
The output is: "Dragon Palace"
In the above code examples, we can pass any country name and cuisine to the country and cuisine parameters, respectively. This helps us generate restaurant names for different cuisines and locations without changing the prompts. Similarly, we can create prompt templates that specify context and prompt instructions.
Both approaches to creating prompt templates are semantically the same, and we use the resulting templates in the same way afterward. However, the PromptTemplate() function requires us to specify the input variables explicitly, which helps us keep track of all the variables used in a prompt template, especially for large templates.
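With from_template(), the variables are inferred from the template string itself. Conceptually, this inference amounts to parsing the format fields, which we can sketch with the standard library's string.Formatter (this is an illustration of the idea, not LangChain's actual implementation):

```python
from string import Formatter

def infer_input_variables(template: str) -> list[str]:
    """Return the placeholder field names found in a format-style template."""
    # Formatter().parse yields (literal_text, field_name, format_spec, conversion)
    # tuples; field_name is None for trailing literal text.
    return [field for _, field, _, _ in Formatter().parse(template) if field is not None]

template = "Suggest one name for a restaurant in {country} that serves {cuisine} food."
print(infer_input_variables(template))  # → ['country', 'cuisine']
```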
Conclusion
Prompt templates are a great way to make your prompts dynamic and effective. Here are some key takeaways from this article-
- A prompt is the text input that we pass to an LLM application, and it can have multiple elements such as context, instruction, and query.
- Prompt templates provide us with a reusable way to generate prompts using a base prompt structure. This helps standardize the structure and content of prompts.
- In LangChain, we can use the PromptTemplate() function and the from_template() function defined in the PromptTemplate module to generate prompt templates.
We recommend you experiment with the code and create prompt templates with different contexts, instructions, and input variables to understand how they can help you create generative AI applications with dynamic prompts.
We hope you enjoyed this article. You can read more articles on artificial intelligence topics.
Happy Learning!