Providing examples of the desired output helps ChatGPT produce results that more closely match what is wanted.
Providing context to ChatGPT can also help in creating a more useful response. Context includes details such as the purpose, audience, and desired format of the output.
Defining a clear purpose for a prompt can assist in getting useful results. Think about the following (combined into a single prompt in the sketch after this list):
Tone: How do you want the output to sound? Funny? Professional?
Format: How do you want the output structured? A bullet list? A paragraph? An essay?
Audience: Who is this for? Do you want something for children? Beginners? Experts?
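To make this concrete, here is a minimal sketch (in Python) of folding tone, format, and audience into one explicit prompt string. The build_prompt helper and all of its argument values are hypothetical placeholders, not part of any particular tool.

```python
# A minimal sketch: fold purpose-defining details into one explicit prompt.
# The build_prompt helper and its example arguments are hypothetical.

def build_prompt(topic: str, tone: str, output_format: str, audience: str) -> str:
    """Combine tone, format, and audience into a single, explicit prompt."""
    return (
        f"Write about {topic}. "
        f"Use a {tone} tone, structure the output as {output_format}, "
        f"and write it for {audience}."
    )

print(build_prompt(
    topic="our new project management app",
    tone="professional",
    output_format="a bullet list",
    audience="beginners",
))
```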
When writing a generative AI prompt, it is essential to use clear, straightforward language. Confusing or unusual word choices may result in output that is incorrect or not what the user is looking for.
Instead of:
My team is interested in X, tell me about that
Consider:
Provide a summary of X, including its history, features, and configuration.
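For illustration, here is a minimal sketch of sending the clearer prompt to ChatGPT with the openai Python package (v1+). The model name is an assumption; substitute whichever model your account can access.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; swap in your own
    messages=[{
        "role": "user",
        # Clear, specific instructions instead of "tell me about that".
        "content": "Provide a summary of X, including its history, "
                   "features, and configuration.",
    }],
)
print(response.choices[0].message.content)
```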
A marketer can design effective prompts by being purposeful and by defining the tone, format, and audience up front.
Including marketing context in a prompt can help ChatGPT produce output that aligns with existing marketing strategies.
Without context, the language model may output irrelevant content or even hallucinate incorrect information about your marketing strategy.
Along with clear instructions and context, providing past examples of marketing content in a prompt can help guide ChatGPT to produce content that meets the prompt's requirements.
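As a sketch, a few-shot prompt might bundle brand context with past examples. Every string below (the brand, the voice description, and the example posts) is an invented placeholder, not real campaign data.

```python
# A hedged sketch of a few-shot marketing prompt: brand context plus past
# examples steer the model toward the existing voice. All details invented.
prompt = """You are writing social posts for Acme Coffee.
Brand voice: warm, playful, never salesy.

Past posts that match our style:
1. "Mornings are hard. Our cold brew isn't."
2. "Rainy Tuesday? Sounds like a two-cup kind of day."

Write a new post announcing our seasonal pumpkin spice latte."""

# Pass `prompt` as the user message content when calling the model.
```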
ChatGPT often produces misinformation, or information that isn’t true. Its tendency to make up false facts is part of a problem known as AI hallucination.
In general, large language models like ChatGPT cannot be relied on to know the laws or other regulations that govern marketing specific products. It is our job to check that any generated content is factual and complies with any applicable marketing laws.
There are also concerns about ChatGPT generating plagiarized or copyrighted material, especially the more niche a prompt is.
Sometimes, the data we feed into language models through prompts may be used in future training. This can result in data leaks, where our information becomes part of the publicly available language model.
Be sure to understand your company’s internal rules about what information can and cannot be shared in prompts to language models. If there is sensitive internal data or customer information in a prompt, it is probably best for the data to be anonymized or handled by humans.
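As one possible precaution, obvious identifiers can be masked before a prompt ever leaves your environment. The patterns below are illustrative assumptions only; real PII detection needs far more robust tooling.

```python
import re

def redact(text: str) -> str:
    """Mask email addresses and phone-like digit runs before prompting."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b(?:\d[ -]?){7,14}\d\b", "[PHONE]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> Contact Jane at [EMAIL] or [PHONE].
```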
ChatGPT will sometimes reproduce biases found in its training data. These biases have the potential to produce inappropriate or harmful content.
A prompt library is a collection of engineered prompts that we can re-use to produce consistent, high-quality results.
The benefits of having a prompt library include consistent output and less time spent re-engineering prompts from scratch.
A prompt library can be built with something as simple as a text document or a spreadsheet.
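For example, a spreadsheet exported to CSV can back a tiny prompt library like the sketch below. The prompts.csv file and its name/prompt columns are hypothetical.

```python
import csv

def load_prompt_library(path: str) -> dict[str, str]:
    """Read reusable, engineered prompts keyed by name from a CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["name"]: row["prompt"] for row in csv.DictReader(f)}

library = load_prompt_library("prompts.csv")  # hypothetical file
print(library["product_launch_email"])        # re-use a prompt verbatim
```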
Reflection is a prompting technique that asks ChatGPT to check its own work and make necessary changes to the output.
While this technique allows ChatGPT to reflect on its generated content, it is still important that the results be double-checked by a human!
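A minimal sketch of reflection with the openai Python package (v1+): the model's first draft is fed back to it with a request to check and correct its own work. The model name and the ad-writing task are assumptions.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model name
task = "Write a 50-word ad for a reusable water bottle."

# First pass: generate a draft.
draft = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": task}],
).choices[0].message.content

# Reflection pass: ask the model to review and revise its own output.
revised = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": task},
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Check your ad for length, tone, and any "
                                    "factual claims, then output a corrected version."},
    ],
).choices[0].message.content

print(revised)  # still have a human double-check the final result
```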
Large language models like ChatGPT learn from their extensive training data, which may contain human biases that are reproduced in their generated output.
Therefore, it is important to check that the language model’s output does not contain any harmful implicit or explicit biases about groups of people, especially when targeting specific demographics in marketing materials.