Prompt Engineering 101: Understanding Zero-Shot, One-Shot, and Few-Shot
What is prompt engineering?
Prompt engineering is the practice of designing input prompts for large language models (LLMs) to get the best possible outputs in a desired format. Using a prompt template, we provide the LLM with context, instructions, or examples along with the query we want it to answer. The LLM then uses the provided information to generate a response to the given query.
A prompt can include one or more of the following elements (a minimal template combining them is sketched after the list):
- Context: The context provides external information to the LLM, which it uses to generate accurate responses. It could be a text containing information or facts that the LLM might need to respond to the given query.
- Examples: Examples are used to help the LLM decide the structure of the result or learn how to solve a given query. In the prompt, we provide solved examples related to the given query, and the LLM understands how to solve the query and structure outputs by looking at the examples.
- Instructions: Instructions are direct commands that we provide to the LLM to help it understand the requirements and respond in a desired manner.
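To make this concrete, the following sketch shows one way these elements can be combined into a single prompt string. It is illustrative only: the build_prompt helper, its section labels, and the sample review data are assumptions for this example, not a required format.

```python
# A minimal sketch of a prompt template combining context, examples, and an instruction.
# The helper name, section labels, and sample content are illustrative only.
def build_prompt(context, examples, instruction, query):
    example_lines = "\n".join(f"{text} : {label}" for text, label in examples)
    return (
        f"Context:\n{context}\n\n"
        f"Examples:\n{example_lines}\n\n"
        f"Instruction: {instruction}\n"
        f"Query: {query}"
    )

prompt = build_prompt(
    context="Customer reviews may mention product quality, delivery, or price.",
    examples=[
        ("The delivery was fast", "Positive"),
        ("The box arrived damaged", "Negative"),
    ],
    instruction="Classify the sentiment of the query as Positive, Negative, or Neutral.",
    query="The product works as described.",
)
print(prompt)
```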
Prompting is used to instruct the model to perform a task. Based on how the prompt is structured, we classify prompting as zero-shot, one-shot, few-shot, and chain-of-thought prompting. We will discuss each prompting technique separately and look at the ways to improve the prompts using prompt engineering.
What is zero-shot prompting?
Zero-shot prompting is a technique we use to instruct a large language model to complete a task without providing any examples. LLMs are trained on a vast amount of data and are capable of performing many tasks that are already present in the training dataset. In zero-shot prompting, the LLM relies on its pre-trained knowledge to determine how to perform a given task.
Let’s perform sentiment analysis on a few sentences to understand how zero-shot prompting works. First, we will use the following prompt for sentiment classification.
Instruction: Classify the following sentences.
1. I am feeling good today.
2. I don't want to go out and play right now.
3. I will better read an article.
For this prompt, an LLM gives us the following output:
Here's a classification of your sentences, focusing on their intent or common usage:
1. I am feeling good today. : Declarative/Statement (expressing a state or feeling)
2. I don't want to go out and play right now. : Declarative/Statement (expressing a preference or disinclination)
3. I will better read an article. : Declarative/Statement (expressing an intention or decision)
This output shows that the LLM decides and classifies sentences based on their intent and common usage. It didn’t understand that we wanted to perform sentiment analysis. Let’s update the prompt with clear instructions and feed the prompt to the LLM.
Instruction: Classify the following sentences into positive, negative, and neutral sentiments.
1. I am feeling good today.
2. I don't want to go out and play right now.
3. I will better read an article.
For this prompt, the LLM correctly classifies the sentences by sentiment, as shown in the following output:
Here's a sentiment classification for your sentences:
1. I am feeling good today. : Positive
2. I don't want to go out and play right now. : Negative
3. I will better read an article. : Neutral
In both examples, we used zero-shot prompting to perform sentiment analysis. However, the first example didn’t produce the desired result as the instruction was ambiguous. Hence, it is important to phrase the prompt in a way that clearly communicates the task using only instructions and context, which is done using zero-shot prompt engineering.
Zero-shot prompt engineering
Zero-shot prompt engineering is the process of designing and refining instructions in a zero-shot prompt to get the best possible output from an LLM. It helps us experiment with and decide the structure, wording, and clarity of a prompt that gives the best output. Common techniques that you can use to tune prompts in zero-shot prompt engineering are as follows:
- Use precise instructions: You should make the task unambiguous by clearly defining what you want the model to do. Be specific about the task and the output format.
- Add output constraints: You should define the format and structure of the output to ensure consistency and relevance. Clearly tell the model how you want the output to look.
- Set a role or persona: You can give the model a persona or role to influence the tone, expertise, or style of the output. Shaping the model’s behavior by specifying the role improves the quality of the output content.
- Enumerate instructions: If you give multistep instructions, break them into clear and numbered steps. This improves the completeness and formatting of multi-part tasks.
- Provide context: Along with the instruction, you can provide background, setup, or information so that the model better understands the situation.
Using these prompt engineering techniques, we can rewrite the instruction for the sentiment analysis task as follows:
Instruction: You are a sentiment analysis model. Classify the following sentences into positive, negative, and neutral sentiments. Give one-word output as "Positive", "Negative", or "Neutral".
1. I am feeling good today.
2. I don't want to go out and play right now.
3. I will better read an article.
This prompt clearly defines the model’s persona, the task to complete, and the required output, which helps ensure accurate output from the LLM.
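If you want to send such a zero-shot prompt programmatically, a minimal sketch looks like the following. It assumes the OpenAI Python SDK (openai>=1.0), an OPENAI_API_KEY environment variable, and an illustrative model name; any chat-completion API would work the same way.

```python
# Minimal sketch: sending a zero-shot prompt to a chat-completion API.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY env variable;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    'You are a sentiment analysis model. Classify the following sentences into '
    'positive, negative, and neutral sentiments. Give one-word output as '
    '"Positive", "Negative", or "Neutral".\n'
    "1. I am feeling good today.\n"
    "2. I don't want to go out and play right now.\n"
    "3. I will better read an article."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```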
When do we use zero-shot prompting?
We use zero-shot prompting to answer simple queries that are well represented in the LLM's training data. It is sufficient for tasks like text summarization, sentiment analysis, arithmetic operations, or answering queries based on historical data.
Zero-shot prompting relies heavily on the phrasing of the instructions, and it is less effective for complex tasks. If we can provide an example to the LLM to help it understand how to perform a complex task, it gives better results. This is what we do in one-shot prompting.
What is one-shot prompting?
One-shot prompting is a prompting technique in which we provide a single example of a task along with the instruction to help the LLM understand what we want. The example helps the LLM understand the task better, even if the instruction given in the prompt is unclear.
To understand this, consider the following example.
Instruction: Classify the following sentences.
Example:
I am happy : Positive
Input sentences:
1. I am feeling good today.
2. I don't want to go out and play right now.
3. I will better read an article.
In this prompt, we haven’t clearly defined that the model has to perform sentiment analysis. However, the LLM looks at the example, identifies the task, and classifies the input sentences based on sentiment, as shown in the following output:
1. I am feeling good today. : Positive
2. I don't want to go out and play right now. : Negative
3. I will better read an article. : Neutral
Thus, one-shot prompting improves on zero-shot prompting by providing an example along with the instruction. Clear instructions and a well-chosen example can improve the LLM’s response to any given task. We use one-shot prompt engineering to design the instruction and provide examples in one-shot prompting.
One-shot prompt engineering
One-shot prompt engineering is the process of designing and optimizing prompts that provide an example paired with clear instructions to guide an LLM to perform a task. It involves all the techniques we discussed in zero-shot prompt engineering.
Additionally, we should choose a strong example that has a typical input and output so that the LLM can understand the task clearly.
When do we use one-shot prompting?
We use one-shot prompting when the instructions given to the LLM are ambiguous or the task is slightly difficult. In such cases, providing an example to the LLM clarifies the task and improves the output generated by the model.
A given task can have multiple types of inputs, and we cannot include all the variations in a single example. Therefore, we can provide multiple examples in the prompt to help the LLM understand different task variations. For this purpose, we use few-shot prompting.
What is few-shot prompting?
Few-shot prompting is a prompting method where we give an LLM multiple examples, typically three to five, in the prompt along with the instructions for performing a task. It helps the LLM solve complex problems accurately because we provide multiple task variations in the examples. The LLM learns the format, style, and pattern of the inputs and outputs from the examples and applies them to new tasks. For instance, we can use few-shot prompting for sentiment analysis as follows:
Instruction: Classify the following sentences.
Examples:
I am happy : Positive
This is a terrible situation : Negative
I am going to play Cricket : Neutral
Input sentences:
1. I am feeling good today.
2. I don't want to go out and play right now.
3. I will better read an article.
Zero-shot or one-shot prompting often isn't sufficient to solve complex or domain-specific tasks. Hence, we provide the LLM with multiple examples using few-shot prompting, which allows the model to generate outputs that adhere to a specific domain or pattern. To select good examples and get better results from few-shot prompting, we use few-shot prompt engineering.
Few-shot prompt engineering
Few-shot prompt engineering involves designing prompts and selecting appropriate examples to help an LLM produce accurate, relevant, and consistent responses for a given task.
- In few-shot prompt engineering, we design the instructions using all the prompt engineering techniques from zero-shot prompt engineering.
- Additionally, we need to select diverse and representative examples that cover different edge cases to help the LLM solve a given problem accurately.
- We also need to keep the structure of each example identical to avoid ambiguity; a small helper that formats examples consistently is sketched after this list.
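Here is a minimal sketch of such a helper. It renders every example with the same "input : label" structure before appending the new inputs; the helper name is an assumption for this example, and the examples themselves are taken from the few-shot prompt shown earlier.

```python
# Sketch: building a few-shot prompt with an identical structure for every example.
def build_few_shot_prompt(instruction, examples, inputs):
    # Every example is rendered as "input : label" so the format stays consistent.
    example_block = "\n".join(f"{text} : {label}" for text, label in examples)
    input_block = "\n".join(f"{i}. {text}" for i, text in enumerate(inputs, start=1))
    return (
        f"Instruction: {instruction}\n"
        f"Examples:\n{example_block}\n"
        f"Input sentences:\n{input_block}"
    )

prompt = build_few_shot_prompt(
    instruction="Classify the following sentences.",
    examples=[
        ("I am happy", "Positive"),
        ("This is a terrible situation", "Negative"),
        ("I am going to play Cricket", "Neutral"),
    ],
    inputs=[
        "I am feeling good today.",
        "I don't want to go out and play right now.",
        "I will better read an article.",
    ],
)
print(prompt)
```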
When do we use few-shot prompting?
We use few-shot prompting to solve complex domain-specific tasks with varied inputs that need accurate outputs. We also use few-shot prompting when we need precisely structured outputs in JSON or YAML formats. In such cases, the LLM needs multiple examples to capture the patterns and produce the desired results.
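For instance, a few-shot prompt that pins down a JSON output format might look like the following sketch. The field names ("text", "sentiment") and the sample reply are illustrative assumptions, and Python's json module is used only to check that the reply parses.

```python
# Sketch: a few-shot prompt that requests strictly structured JSON output.
# The field names and the example reply are illustrative only.
import json

prompt = """Classify the sentiment of the input sentence. Respond only with a JSON object
using exactly the keys "text" and "sentiment".

Examples:
Input: I am happy
Output: {"text": "I am happy", "sentiment": "Positive"}
Input: This is a terrible situation
Output: {"text": "This is a terrible situation", "sentiment": "Negative"}

Input: I will better read an article.
Output:"""

# After sending `prompt` to an LLM, the reply can be validated before use:
reply = '{"text": "I will better read an article.", "sentiment": "Neutral"}'  # illustrative reply
parsed = json.loads(reply)  # raises json.JSONDecodeError if the model strayed from the format
print(parsed["sentiment"])
```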
Chain-of-thought prompting
Chain-of-thought prompting is used to help an LLM learn how to solve complex tasks by breaking them into simple steps. You can go through the chain-of-thought prompting article to learn more about this.
Why do we need prompt engineering?
We need prompt engineering because LLMs like ChatGPT and Gemini do not automatically know what kind of output we want unless we clearly guide them. Prompt engineering helps us get better, useful, structured, and predictable outputs from large language models. To understand how prompt engineering works, consider the following example:
Imagine you want to summarize a text using ChatGPT. You can do it by using the following prompt:
Summarize this text.
--- Actual text to summarize ---
Now consider the following prompt:
Summarize the following text in 3-5 bullet points, focusing on the main arguments and key takeaways. Avoid filler and keep the tone neutral.
--- Actual text to summarize ---
Which prompt do you think will produce better results?
The second prompt will produce better results as it specifies the output’s format, length, and tone. It also guides the LLM in extracting only key arguments and avoiding fluff, resulting in a better summary.
Prompt engineering has the following benefits:
- Prompt engineering helps control the output: Without clear instructions on what kind of output to generate, LLMs may generate vague, verbose, or irrelevant outputs. Prompt engineering helps us set the output’s tone, structure, format, and focus.
- Prompt engineering improves the accuracy and relevance of the output: Depending on how we frame the prompt, the same model can give different answers for the same query. By clearly defining the prompts, we can reduce hallucinations, bias, and off-topic results generated by the LLM.
- Prompt engineering optimizes performance without fine-tuning: We use LLM fine-tuning to improve a model's performance for domain-specific or task-specific applications. However, not all companies and individuals can afford to retrain or fine-tune LLMs. In such cases, prompt engineering helps improve the LLM's performance.
- Prompt engineering helps an LLM adapt to different use cases: Different tasks like summarization, code generation, and sentiment analysis work on different principles. However, LLMs are not trained for a specific use case. Prompt engineering helps us turn a general-purpose model into a task-specific one without making any model changes.
Now that we know why prompt engineering is useful, let’s discuss how to write prompts for different use cases, such as image generation, image analysis, and coding.
Prompt engineering for image generation and analysis
Image generation is one of the most exciting use cases for generative AI. Models like Midjourney, DALL·E 2, and Stable Diffusion are capable of generating realistic images, while multimodal models like LLaVA can analyze existing images for object detection and pattern matching. Let's discuss some of the prompt engineering techniques you can use to improve outputs from these models while working with images.
Realistic image generation
While generating realistic images, you should write detailed prompts that clearly specify the desired image. To generate better results, try to include different elements like lighting, scenery, and objects in the image. You can also specify the color, mood, and quality of the image for consistent results.
For example, instead of saying “Give me a realistic image of mountains”, you should format the prompt as “Give me a realistic image of snow-capped mountains during sunrise with a clear blue sky above and a reflective alpine lake in the foreground.”
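If you are generating images programmatically, the same detailed prompt can be passed to an image-generation endpoint. The sketch below assumes the OpenAI Python SDK's images.generate endpoint, an OPENAI_API_KEY environment variable, and an illustrative model name; other image APIs follow the same pattern.

```python
# Sketch: sending a detailed image-generation prompt programmatically.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY env variable;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Give me a realistic image of snow-capped mountains during sunrise with a "
    "clear blue sky above and a reflective alpine lake in the foreground."
)

response = client.images.generate(
    model="dall-e-3",  # illustrative model name
    prompt=prompt,
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # URL of the generated image
```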
Artistic image generation
While generating artistic images, you should focus on the art style and techniques to include in the image. You can explain the content of the image in the prompt to help the LLM create pictures that evoke specific emotions.
For example, instead of saying “Create a sad painting”, you can format the prompt as “Create a painting in fantasy style showing a man sitting on a bench under heavy rain in a deserted city. The man sits with his head down, soaked in rain, creating a sad atmosphere.”
Editing images
While editing images using LLMs, make sure that you describe the contents of the image first. Then, clearly mention the desired changes. For example, suppose you have a friend’s photo with a red-eye effect and uneven skin tone due to poor lighting while capturing the photo.
To edit the photo, instead of saying “Fix this photo.”, you can structure the prompt as “The photo shows a person standing on the pavement, illuminated by the soft glow of a nearby streetlamp. Remove the red-eye effect from the person’s eyes, adjust the skin tone to appear more natural, and crop the image to center the face while keeping the background slightly blurred.”
Apart from image generation and editing, prompt engineering also finds its applications in programming. Let’s discuss how to use prompt engineering while coding.
Prompt engineering for coding and programming
LLM applications like ChatGPT and Claude Code are very helpful for software engineers. Using LLMs, we can write code in any programming language, optimize it, debug it, and translate it to another programming language. Let's discuss some prompt engineering techniques for coding and programming.
Writing code
To help an LLM write better code, you should specify the inputs, outputs, choice of programming language, and the desired functionality for the code. For example, suppose that you want to write the code for the maximum subarray sum problem. You could ask the LLM to implement it using the prompt "Implement the solution for the maximum subarray sum problem." Instead of this instruction, you can formulate the prompt as follows:
Implement a function in Python to calculate the maximum subarray sum using Kadane's algorithm. The function should take a 1D list as input and return the maximum subarray sum. Ensure that the code handles the case when all the inputs are negative.
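For reference, the kind of function this prompt is asking for might look like the following sketch of Kadane's algorithm, including the all-negative case. The function name is an assumption for this example.

```python
# Sketch: the kind of implementation the prompt above aims to elicit.
def max_subarray_sum(nums):
    """Return the maximum subarray sum of a 1D list using Kadane's algorithm."""
    if not nums:
        raise ValueError("Input list must not be empty")
    best = current = nums[0]  # starting from nums[0] also handles all-negative inputs
    for value in nums[1:]:
        # Either extend the current subarray or start a new one at `value`.
        current = max(value, current + value)
        best = max(best, current)
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
print(max_subarray_sum([-5, -2, -9]))                     # -2 (all-negative case)
```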
When writing prompts for code completion, you should also provide the LLM with the code snippet along with the context and requirements. This will help the LLM generate the required code.
Translating code
You should specify the source and target programming languages for translating code from one language to another. You should also specify that the LLM shouldn’t change the functionalities or variable naming conventions to avoid any unintended side effects or bugs.
For example, instead of saying “Convert this code from Python to C++.”, you can structure the prompt as “Convert this code from Python to C++. Keep the implementation logic, variable names, and comments the same as the original code.”
Debugging code
While debugging code, you should specify the inputs, outputs, and desired functionality. Then, you should provide the code with the error message and ask the LLM to analyze and debug it. Instead of saying “Fix this code to remove the error”, you can formulate the prompt as follows:
This function calculates the maximum subarray sum using Kadane's algorithm in Python. The function takes a 1D list as input and returns the maximum subarray sum.
# Insert the code that is running into an error
This code generates the following exception when executed:
# Insert the exception trace
Analyze this code and the exception, and implement the changes needed to fix the error.
Optimizing code
While writing prompts for optimizing code, you should clearly specify the required output. You should also specify if you want to refactor the code or optimize the code for time complexity, space complexity, or readability. For example, instead of saying “Optimize this code”, you should formulate the prompt as follows:
This function calculates the maximum subarray sum in Python. The function takes a 1D list as input and returns the maximum subarray sum. Optimize this code to reduce its time and space complexity. Add comments to improve readability.
# Add the code to optimize
Conclusion
As generative AI applications become more integrated into our workflows, the importance of prompt engineering will only increase as it enables us to guide LLMs to generate accurate outputs in a desired format. By learning how to craft good prompts, you can unlock the full potential of LLMs by transforming them from generic tools into powerful, task-specific assistants. In this article, we discussed the different prompt engineering techniques and their importance. We also discussed prompt engineering for tasks like image generation and coding that will help you in your day-to-day tasks.
To learn more about prompt engineering, you can go through this learn prompt engineering course that discusses effective prompting techniques to craft high-quality prompts for generative AI applications. You might also like this prompt engineering to build a Django app course that teaches you how to develop real-world applications using Django with generative AI tools.
Frequently asked questions
1. Does prompt engineering require coding?
No. Prompt engineering doesn’t require coding. It requires critical thinking and experimentation skills that help you design suitable prompts for a particular task by iteratively testing and improving the prompts.
2. What is the difference between a zero-shot prompt and a few-shot prompt?
A zero-shot prompt contains only instructions and context with no examples. A few-shot prompt contains instructions, context, and three to five examples detailing how to solve a particular task.
3. What are the limitations of few-shot prompting?
Few-shot prompts require manual effort to select good representative examples for a task. They also increase the length of the input prompt and are limited by the token length that can be processed by an LLM.
4. What is the persona pattern?
The persona pattern in prompt engineering refers to designing prompts that instruct the LLM to adopt a specific role, identity, or point of view, invoking specific behavior, tone, and domain expertise in the generated outputs.
5. How many shots in few-shot learning?
In few-shot learning, we typically provide three to five examples.