10 prompt engineering tips and best practices
Generative AI is finally coming into its own as a tool for research, learning, creativity and interaction. The key to a modern generative AI interface is the prompt: the request that users compose to query an AI system in an attempt to elicit a desirable response.
But AI has its limits, and getting the right answer means asking the right question. AI systems lack the insight and intuition to actually understand users' needs and wants. Prompts require careful wording, proper formatting and clear details. Often, prompts should avoid much of the slang, metaphors and social nuance that humans take for granted in everyday conversation.
Getting the most from a generative AI system requires expertise in creating and manipulating prompts. Here are some ideas that can help users construct better prompts for AI interactions.
What is a prompt and what is prompt engineering?
A prompt is a request made by a human to a generative AI system, often a large language model.
Many prompts are simply plain-language questions, such as "What was George Washington's middle name?" or "What time is sunset today in the city of Boston?" In other cases, a prompt can result in tangible outcomes, such as the command "Set the thermostat in the house to 72 degrees Fahrenheit" for a smart home system.
But prompts can also involve far more complicated and detailed requests. For example, a user might request a 2,000-word explanation of marketing strategies for new computer games, a report for a work project, or pieces of artwork and music on a specific topic.
Prompts can grow to dozens or even hundreds of words that detail the user's request, set formats and limits, and include guiding examples, among other parameters. The AI system's front-end interface parses the prompt into actionable tasks and parameters, then uses those extracted elements to access data and perform tasks that meet the user's request within the limits of the system's underlying models and data set.
What is the role of a prompt engineer?
At a glance, the idea of a prompt is easy to understand. For example, many users interact with online chatbots and other AI entities by asking and answering questions to resolve problems, place orders, request services or perform other simple business transactions.
However, complex prompts can easily become large, highly structured and exquisitely detailed requests that elicit very specific responses from the model. This high level of detail and precision often requires the extensive expertise provided by a type of AI professional called a prompt engineer.
A successful prompt engineer has the following characteristics:
- A working knowledge of the AI system's underlying model architecture.
- An understanding of the contents and limitations of the training data set.
- The ability to extend the data set and use new data to fine-tune the AI system.
- Detailed knowledge of the AI system's prompt interface mechanics, including formatting and parsing methodologies.
Ultimately, a prompt engineer is well versed in creating and refining effective prompts for complex and detailed AI requests. But regardless of your level of prompt engineering experience and knowledge, the following 10 tips can help you improve the quality and results of your AI prompts.
10 tips for better prompts
Any user can prompt an AI tool, but creating good prompts takes some knowledge and skill. The user must have a clear perspective of the answer or result they seek and possess a thorough understanding of the AI system, including the nuances of its interface and its limitations.
This is a broader challenge than it might initially seem. Keep the following guidelines in mind when creating prompts for generative AI tools.
1. Understand the desired outcome
Successful prompt engineering is largely a matter of knowing what questions to ask and how to ask them effectively. But this means nothing if the user doesn't know what they want in the first place.
Before a user ever interacts with an AI tool, it's important to define the goals for the interaction and develop a clear outline of the anticipated results. Plan it out: Decide what to achieve, what the audience should know and any associated actions that the system must perform.
2. Form matters
AI systems can work with simple, direct requests using casual, plain-language sentences. But complex requests will benefit from detailed, carefully structured queries that adhere to a form or format that is consistent with the system's internal design.
This kind of knowledge is essential for prompt engineers. Form and format can differ for each model, and some tools, such as art generators, might have a preferred structure that involves using keywords in predictable locations. For example, the company Kajabi recommends a prompt format similar to the following for its conversational AI assistant, Ama:

"Act like" + "write a" + "define an objective" + "define your ideal format"

A sample prompt for a text project, such as a story or report, might resemble the following: "Act like a history professor who is writing an essay for a college class to provide a detailed background on the Spanish-American War using the style of Mark Twain."
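A structured format like this can be assembled programmatically. The following is a minimal sketch of that idea; the `build_prompt` helper and its parameter names are illustrative, not part of any vendor's API.

```python
def build_prompt(act_like: str, write_a: str, objective: str, output_format: str) -> str:
    """Assemble a prompt from the 'Act like' + 'write a' + objective + format parts.

    This mirrors the keyword-in-predictable-locations structure described above.
    """
    return (
        f"Act like {act_like}. "
        f"Write a {write_a} that {objective}, "
        f"formatted as {output_format}."
    )

prompt = build_prompt(
    act_like="a history professor",
    write_a="detailed background on the Spanish-American War",
    objective="is suitable for a college class",
    output_format="an essay in the style of Mark Twain",
)
print(prompt)
```

Keeping each element in a named slot makes it easy to vary one part of the prompt, such as the persona or the output format, while holding the rest constant.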
3. Make clear, specific requests
AI is neither psychic nor telepathic. It's not your best buddy and it has not known you since elementary school; the system can only act based on what it can interpret from a given prompt. Form clear, explicit and actionable requests. Understand the desired outcome, then work to describe the task that needs to be performed or articulate the question that needs to be answered.
For example, a simple question such as "What time is high tide?" is an ineffective prompt because it lacks essential detail. Tides vary by day and location, so the model would not have nearly enough information to provide a correct answer. A much clearer and more specific query would be "What times are high tides in Gloucester Harbor, Massachusetts, on August 31, 2023?"
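One way to enforce this kind of specificity is to treat the missing details as required fields. The sketch below is a hypothetical helper, not an established pattern, that refuses to build a tide query unless both the location and the date are supplied.

```python
def tide_prompt(location: str, date: str) -> str:
    """Build a high-tide query; both fields are required to make the question answerable."""
    if not location or not date:
        raise ValueError("A tide query needs both a location and a date.")
    return f"What times are high tides in {location} on {date}?"

print(tide_prompt("Gloucester Harbor, Massachusetts", "August 31, 2023"))
```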
4. Watch for length
Prompts might be subject to minimum and maximum character counts. Many AI interfaces don't impose a hard limit, but extremely long prompts can be difficult for AI systems to handle.
AI tools often struggle to parse long prompts due to the complexity involved in organizing and prioritizing the essential elements of a lengthy request. Recognize any word count limitations for a given AI and make the prompt only as long as it needs to be to convey all the required parameters.
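A simple pre-flight check can catch overlong prompts before they reach the model. This is a minimal sketch; the 500-word default is an arbitrary assumption, since actual limits vary by tool and are usually measured in tokens rather than words.

```python
def check_prompt_length(prompt: str, max_words: int = 500) -> int:
    """Return the prompt's word count, raising if it exceeds an assumed limit.

    Real systems typically limit tokens, not words, so treat this as a rough guard.
    """
    word_count = len(prompt.split())
    if word_count > max_words:
        raise ValueError(f"Prompt is {word_count} words; limit is {max_words}.")
    return word_count

print(check_prompt_length("What times are high tides in Gloucester Harbor?"))
```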
5. Choose words with care
Like any computer system, AI tools interpret commands and language literally, and they might respond unpredictably -- or not at all -- to wording they don't recognize.
The most effective prompts use clear and direct wording. Avoid ambiguity, colorful language, metaphors and slang, all of which can produce unexpected and undesirable results.
However, ambiguity and other discouraged language can sometimes be employed with the deliberate goal of provoking unexpected or unpredictable results from a model. This can produce some interesting outputs, as the complexity of many AI systems renders their decision-making processes opaque to the user.
6. Pose open-ended questions or requests
Generative AI is designed to create -- that's its purpose. Simple yes-or-no questions are limiting and will likely yield short and uninteresting output.
Posing open questions, in contrast, gives room for much more flexibility in output. For example, a simple prompt such as "Was the American Civil War about states' rights?" will likely lead to a similarly simple, brief response. However, a more open-ended prompt, such as "Describe the social, economic and political factors that led to the outbreak of the American Civil War," is far more likely to provoke a comprehensive and detailed answer.
7. Include context
A generative AI tool can frame its output to meet a wide array of goals and expectations, from short, generalized summaries to long, detailed explorations. To make use of this versatility, well-crafted prompts often include context that helps the AI system tailor its output to the user's intended audience.
For example, if a user simply asks an LLM to explain the three laws of thermodynamics, it's impossible to predict the length and detail of the output. But adding context can help ensure the output is suitable for the target reader. A prompt such as "Explain the three laws of thermodynamics for third-grade students" will produce a dramatically different level of length and detail compared with "Explain the three laws of thermodynamics for Ph.D.-level physicists."
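The audience can be treated as an explicit parameter of the prompt rather than an afterthought. The following small sketch is illustrative only; the `explain_for` helper is a hypothetical name.

```python
from typing import Optional

def explain_for(topic: str, audience: str, length: Optional[str] = None) -> str:
    """Build an explanation prompt that names the intended audience explicitly."""
    prompt = f"Explain {topic} for {audience}"
    if length:
        prompt += f" in {length}"  # optional length hint, e.g. "two or three sentences"
    return prompt + "."

print(explain_for("the three laws of thermodynamics", "third-grade students"))
print(explain_for("the three laws of thermodynamics", "Ph.D.-level physicists"))
```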
8. Set output length goals or limits
Although generative AI is intended to be creative, it's often wise to include guardrails on factors such as output length. Context elements in prompts might include requesting a simplified and concise versus lengthy and detailed response, for example.
Keep in mind, however, that generative AI tools generally can't adhere to precise word or character limits. This is because natural language processing models such as GPT-3 are trained to predict words based on language patterns, not to count them. Thus, LLMs can usually follow approximate guidance such as "Provide a two- or three-sentence response," but they struggle to precisely quantify characters or words.
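Because a model only approximates length guidance, one option is to check the response after the fact. The sketch below uses a rough terminal-punctuation heuristic, which is an assumption of this example, not a robust sentence segmenter.

```python
import re

def sentence_count(text: str) -> int:
    """Rough sentence count based on terminal punctuation; a heuristic, not exact."""
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

def within_length_goal(response: str, max_sentences: int) -> bool:
    """Check whether a model response stayed within an approximate sentence budget."""
    return sentence_count(response) <= max_sentences

print(within_length_goal("First point. Second point. Third point.", max_sentences=3))
```

If the response overshoots the budget, the prompt can be re-issued with a firmer length instruction.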
9. Avoid conflicting terms
Long and complex prompts sometimes include ambiguous or contradictory terms. For example, a prompt that includes both the words detailed and summary might give the model conflicting information about expected level of detail and length of output. Prompt engineers take care to review the prompt formation and ensure all terms are consistent.
The most effective prompts use positive language and avoid negative language. A good rule of thumb is "Do say 'do,' and don't say 'don't.'" The logic here is simple: AI models are trained to perform specific tasks, so asking an AI system not to do something is meaningless unless there is a compelling reason to include an exception to a parameter.
10. Use punctuation to clarify complex prompts
Just as humans rely on punctuation to help parse text, AI prompts can also benefit from the judicious use of commas, quotation marks and line breaks to help the system parse and operate on a complex prompt.
Consider the simple elementary school grammar example of "Let's eat Grandma" versus "Let's eat, Grandma." Prompt engineers are thoroughly familiar with the formation and formatting of the AI systems they use, which often includes specific recommendations for punctuation.
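One common use of punctuation and line breaks is to separate an instruction from the text it operates on, so the model doesn't conflate the two. This is a minimal sketch of that idea; the `quote_passage` helper is a hypothetical name.

```python
def quote_passage(instruction: str, passage: str) -> str:
    """Separate an instruction from its target text with quotation marks and line breaks."""
    return f'{instruction}:\n\n"{passage}"'

print(quote_passage("Summarize the following passage in one sentence",
                    "Let's eat, Grandma."))
```

The quotation marks signal that the enclosed text is material to act on, not further instructions.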
Additional prompt engineering tips for image generators
The 10 tips covered above are primarily associated with LLMs, such as ChatGPT. However, a growing assortment of generative AI image platforms can employ additional prompt elements or parameters in their requests.
When working specifically with image generators such as Midjourney and Stable Diffusion, keep the following seven tips in mind:
- Describe the image. Offer some details about the scene -- perhaps a cityscape, field or forest -- as well as specific information about the subject. When describing people as subjects, be explicit about any relevant physical features, such as race, age and gender.
- Describe the mood. Include descriptions of actions, expressions and environments -- for example, "An old woman stands in the rain and cries by a wooded graveside."
- Describe the aesthetic. Define the overall style desired for the resulting image, such as watercolor, sculpture, digital art or oil painting. You can even describe techniques or artistic styles, such as impressionist, or an example artist, such as Monet.
- Describe the framing. Define how the scene and subject should be framed: dramatic, wide-angle, close-up and so on.
- Describe the lighting. Describe how the scene should be lit using terms such as morning, daylight, evening, darkness, firelight and flashlight. All these factors can affect light and shadow.
- Describe the coloring. Denote how the scene should use color with descriptors such as saturated or muted.
- Describe the level of realism. AI art renderings can range from abstract to cartoonish to photorealistic. Be sure to denote the desired level of realism for the resulting image.
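The seven elements above can be composed into a single image prompt. The sketch below is illustrative only: the `image_prompt` helper and its comma-separated output are assumptions of this example, and real tools such as Midjourney have their own preferred syntax.

```python
def image_prompt(subject, mood=None, aesthetic=None, framing=None,
                 lighting=None, coloring=None, realism=None):
    """Join the supplied image-prompt elements, skipping any left unset."""
    parts = [subject, mood, aesthetic, framing, lighting, coloring, realism]
    return ", ".join(p for p in parts if p)

print(image_prompt(
    subject="an old woman crying by a wooded graveside in the rain",
    mood="somber",
    aesthetic="oil painting in the style of Monet",
    framing="wide-angle",
    lighting="evening",
    coloring="muted",
    realism="impressionist",
))
```

Keeping each element optional makes it easy to iterate: change one descriptor at a time and compare the resulting images.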
Avoiding common prompt engineering mistakes
Prompt formation and prompt engineering can be more of an art than a science. Subtle differences in prompt format, structure and content can have profound effects on AI responses. Even nuances in how AI models are trained can result in different outputs.
Along with tips to improve prompts, there are several common prompt engineering mistakes to avoid:
- Don't be afraid to test and revise. Prompts are never one-and-done efforts. AI systems such as art generators can require enormous attention to detail. Be prepared to make adjustments and take multiple attempts to build the ideal prompt.
- Don't look for short answers. Generative AI is designed to be creative, so form prompts that make the best use of the AI system's capabilities. Avoid prompts that look for short or one-word answers; generative AI is far more useful when prompts are open-ended.
- Don't stick to the default temperature. Many generative AI tools incorporate a "temperature" setting that, in simple terms, controls the AI's creativity. Based on your specific query, try adjusting the temperature parameters: higher to be more random and diverse, or lower to be narrower and more focused.
- Don't use the same sequence in each prompt. Prompts can be complex queries with many different elements. The order in which instructions and information are assembled into a prompt can affect the output by changing how the AI parses and interprets the prompt. Try switching up the structure of your prompts to elicit different responses.
- Don't take the same approach for every AI system. Because different models have different purposes and areas of expertise, posing the same prompt to different AI tools can produce significantly different results. Tailor your prompts to the unique strengths of the system you're working with. In some cases, additional training, such as introducing new data or more focused feedback, might be needed to refine the AI's responses.
- Don't forget that AI can be wrong. The classic IT axiom "garbage in, garbage out" applies perfectly to AI outputs. Model responses can be wrong, incomplete or simply made-up, a phenomenon often termed hallucination. Always fact-check AI output for inaccurate, misleading, biased, or plausible yet incorrect information.
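The temperature setting mentioned above is typically passed as a parameter alongside the prompt. The sketch below shows a hypothetical request payload for a chat-completion-style API; the field names and the 0.0-2.0 range are assumptions, as they vary by provider.

```python
def make_request(prompt: str, temperature: float = 1.0) -> dict:
    """Build a hypothetical chat-completion request payload.

    Lower temperature tends toward focused, repeatable output; higher
    temperature tends toward more random and diverse output.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is typically restricted to a range such as 0.0-2.0")
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

print(make_request("Describe the causes of the American Civil War.", temperature=0.2))
```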
Credit: computerweekly.com