
The 6 Rules of Writing Effective AI Prompts 

08/24/2023
5 minutes

From learning new programming languages to creating marketing campaigns, there are tons of ways AI can make your job easier. But getting good results hinges on writing good prompts. 

A good prompt gives a large language model (LLM) a starting point to figure out what to create, says Ada Morse, a Senior Instructional Designer at Codecademy. Prompt engineering is the art of designing prompts that result in the most effective output from AI models. Ahead, Ada breaks down some of the basics of AI prompt writing. These are general guidelines that will help optimize your results no matter what you’re working on. For a more in-depth look at prompt engineering, check out our Prompt Engineering for Marketing course. 


DON’T be ambiguous 

One of the best things about generative AI tools like ChatGPT and DALL-E is their ability to process natural language. You can talk to them the same way you’d talk to a friend, and they’ll respond appropriately (more or less). But natural language can be ambiguous, and computers aren’t great with ambiguity.

“The more ambiguous your language, the more likely it is the model will go off and do something random that you didn’t want,” Ada says. For example, “please analyze this dataset of customer transactions and provide insights into the most frequently bought and returned items for a meeting about product performance” gives a model a lot more to work with than “please analyze this dataset and summarize the results.”

DO provide details 

LLMs don’t have any background knowledge about you or your project, so every time you start a new chat, you’re starting from a blank page. That’s why it’s crucial to provide as much relevant info as you can.

“If the starting point doesn’t provide all the needed information, the probability machine might generate nonsense that’s not very useful — or completely incorrect,” Ada says. 

In our free course Prompt Engineering for Marketing, you can experiment with different example prompts to see how providing detailed instructions, context, and examples can help ensure the program produces the desired output. 
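
To see what that looks like in practice, here’s a minimal sketch of a context-rich prompt sent through OpenAI’s Python SDK. The bakery, the task, and the example caption are all made up for illustration:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Spell out the role, task, tone, audience, and an example so the
# model isn't left guessing. Every detail below is illustrative.
prompt = """You are a marketing copywriter for a small bakery.

Task: Write a two-sentence Instagram caption announcing our new sourdough loaf.
Tone: Warm and playful.
Audience: Local customers who already follow our account.

Here's an example of the style we like:
"Fresh out of the oven and straight into your weekend plans."
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```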

DON’T use sensitive info 

AI tools can be great for time-consuming tasks like debugging or analyzing data, especially at large volumes — but be mindful of sharing private or sensitive information. Depending on the tool, your prompts and responses may be recorded and used as training data to help fine-tune the model. So if you’re using AI tools at work, make sure you know how your prompts are being used before uploading company data or code. 

“Make sure you read the privacy agreement, and if you’re giving it confidential data, know how it’s being handled,” Ada says. 

DO reference help guides 

Different tools operate by different rules, and many factors can influence a model’s output, including its training data, parameter settings, and the type of content it generates. For example, text-based models like GPT-4 tend to perform better with more information, but wordy prompts can confuse image-based tools like Midjourney. (Check out our free Intro to Generative AI course to learn more about the different types of content you can create with AI.) 

That’s why it pays to review user guides, manuals, and other helpful materials. For instance, ChatGPT breaks down how to provide custom instructions and Midjourney provides a user guide that explains how to use various commands and parameters. 

“It’s always a good idea to read through the frequently asked questions on the website,” Ada says. “They might have more or less detail depending on the tool, but they’ll provide the basics.” 

DON’T forget to check your responses 

LLMs are probability-based and don’t have a built-in fact-checking system, so there’s no guarantee of accuracy. In fact, AI programs often make up information (or “hallucinate”), and studies show that ChatGPT’s accuracy fluctuates over time. 

“Remember that models aren’t optimized to be correct or credible; they’re optimized to produce a plausible response,” Ada says. Even with the best-written prompts, AI can make mistakes. That’s why you always need to validate your results.  

But while you can’t take AI’s output at face value, you can get it closer to what you want by iterating on your prompts. “A big mistake is to write a prompt, get the response, and move on from there instead of going back and iterating on prior messages,” Ada says. In our Prompt Engineering for Marketing course, we go over how to use reflection — referring to past prompts — to increase the likelihood that the AI will follow instructions. 
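
The same idea applies if you’re calling a model through an API instead of a chat window: keep the conversation history and send it back with each refinement, rather than firing off one-shot requests. Here’s a rough sketch using OpenAI’s Python SDK, with placeholder prompts:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

client = OpenAI()
model = "gpt-3.5-turbo"

# Keep the full history so each follow-up can build on earlier
# messages instead of starting over in a fresh chat.
messages = [
    {"role": "user", "content": "Write a product description for a reusable water bottle."}
]
first = client.chat.completions.create(model=model, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Iterate on the prior response rather than moving on from it.
messages.append({
    "role": "user",
    "content": "Rewrite your description above in two sentences, and make it more playful.",
})
revised = client.chat.completions.create(model=model, messages=messages)
print(revised.choices[0].message.content)
```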

DO adjust your parameters 

Some AI tools allow you to adjust the configuration of the model’s parameters, and experimenting with them can help fine-tune your output. “The more control you have over the model, the more reliable the output’s going to be,” Ada says. 

For instance, in OpenAI’s Playground, you can experiment with different parameters and see how changes affect the output of GPT-3.5 (the model behind ChatGPT). Turning down the model’s temperature, the parameter that determines how random its output is, will make it more deterministic. And if you turn the temperature all the way down, it may reproduce text from its training set or provide the same output every time, Ada says. 
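
You can try the same experiment in code. This quick sketch, which assumes OpenAI’s Python SDK and an API key in your environment, sends one placeholder prompt at two temperature settings; at 0.0 the responses should be nearly identical from run to run, while at 1.2 they’ll vary:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

client = OpenAI()
prompt = "Suggest a name for a coffee shop run by robots."

# Lower temperatures make output more deterministic; higher ones add randomness.
for temperature in (0.0, 1.2):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```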

Along with your parameters, you can also experiment with different plugins. By default, LLMs are limited to their training data, which may be out of date — but add-ons like our ChatGPT plugin help update the model’s knowledge base and improve its functionality. 

Use the tips above to experiment with different prompts, and if you want to learn more about all the cool things you can do with AI, check out our AI courses. We’ll introduce you to some of the latest AI tools in our free courses Intro to ChatGPT and Intro to Generative AI, then break down their impact on our society in Learn the Role and Impact of Generative AI and ChatGPT.
