
Prompt engineering guide: Techniques, examples, and use cases

Want to be a prompt engineer? We cover the fundamental techniques: Few-Shot, Gameplay, Cognitive Verifier, Persona, Forecasting, and more.

May 28, 2024 • 10 Minute Read


These days, most people have dabbled with tools like ChatGPT, eager to use AI to save some time in their day. However, the opposite often happens: they struggle to get the answer they want on the first try and end up losing precious time instead. This can lead them to throw up their hands and proclaim:

“AI doesn’t save me any time at all! This technology is useless.”

What they’re lacking are prompt engineering techniques. Prompt engineering is the art of asking the right questions of a generative AI solution to get the best possible answers. If you know these secret formulas, you can make sure you spend less time playing 20 questions with a chatbot, and actually get on with your day.

In this article, I’m going to share seven fundamental tricks — known as “patterns” — you can use to consistently get the best results out of a solution like ChatGPT, so you can become a true power user and work more efficiently. Note that these techniques also work with other large language models, like Microsoft Copilot, Anthropic’s Claude, or Google Gemini; we’re just using ChatGPT to demonstrate.

Want to learn why prompt engineering is going to be an important work skill for the foreseeable future? Read this article by Adam Ipsen: “Is prompt engineering dead? No, and it won’t be for a long time.”

The 7 key prompt engineering patterns


1. Persona pattern

Users don’t always know which outcomes they need. However, they do know the type of person they would ask for that specific information. For example, if you wanted a house built, you’d go ask an architect, right?

If you ask a large language model (LLM) like ChatGPT to act like a certain person — such as “Act as an interior designer” — it limits the responses it can give, and gives you a more consistent result. After all, ChatGPT knows that you wouldn’t be asking it to act like an interior designer if you were after medical advice, so when you ask about how to set up your new office space, it’s going to focus on suggesting how you should make the place look stylish rather than the benefits of ergonomic work chairs.

How to use the Persona pattern

Write the following prompts:

  • Act as Persona X

  • Tell me the outcomes that Persona X would provide
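If you’re working with an LLM through code rather than the chat window, the persona typically goes in the system message. Below is a minimal sketch, assuming the OpenAI Python SDK (v1+) with an API key set in your environment; the model name and prompt text are purely illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model will do
    messages=[
        # The persona instruction constrains the style and scope of every answer.
        {"role": "system", "content": "Act as an interior designer."},
        {"role": "user", "content": "How should I set up my new office space?"},
    ],
)
print(response.choices[0].message.content)
```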

Example

In the examples below, you can see the different outputs for the same question, depending on whether the Persona pattern is used.


Without the pattern


With the pattern

As you can see, you can get extremely different results just by asking the AI in question to put itself in someone else’s shoes.


2. Forecasting pattern

One of the things that is easy for AI to do — and not so much for humans — is to digest a ton of raw data and predict future trends. Instead of poring over it for hours and risking falling asleep, we can ask ChatGPT to do it in seconds if we use the right prompt.

The forecasting pattern is a way you can get an AI to make these predictions. 

You’ll need some data to give it, and how you provide it depends on which model you have access to. For instance:

  • With ChatGPT 3.5 (free), you’ll need to paste the raw data into the prompt along with your question. 

  • With ChatGPT 4 and 4o, you can simply attach the data to your prompt as a document (like an Excel spreadsheet or PDF) and ask the model to scan it. 

How to use the forecasting pattern

For solutions that don’t allow attachments, like ChatGPT 3.5, write this prompt:

Assume that X:

[Data provided by the user]

Make a prediction based on X.

If the solution does allow attachments, like ChatGPT 4, write this prompt:

Scan the attached data.

Assume that X:

Make a prediction based on X.
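If you’re assembling the no-attachments version of this prompt in code, it comes down to string formatting around your raw data. Here’s a minimal sketch, again assuming the OpenAI Python SDK; the sales figures are entirely made up for illustration.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical raw data, pasted inline because the model takes no attachments.
monthly_sales = "Jan: 120, Feb: 135, Mar: 150, Apr: 162, May: 178"

prompt = (
    "Assume that these are my monthly unit sales:\n"
    f"{monthly_sales}\n\n"
    "Make a prediction based on this data for the next three months."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```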

Example 


Without the pattern


With the pattern

As you can see, the result is completely different depending on whether you give the LLM data. Just like a regular person, it can’t make predictions about the future if it doesn’t have the right information to work with.

So, why is it important to provide an LLM with this data? AI crawls the internet and knows everything, right? Wrong. An LLM’s training data has a specific cut-off date, so we need to supply details about anything that happened after it. Additionally, each LLM is trained on a limited subset of the data sources available, which means the subject of your prompt might not be in the training corpus even if it falls within the cut-off date. That’s why it’s so important to use the forecasting pattern for questions like this.


3. Question Refinement pattern

Sometimes, you just don’t know enough to even ask the right question. You’ve got a vague idea of what you’re after, and that’s it. 

Think about the first time you went to buy a car. Did you know what questions to ask? Normally, someone would turn to a parent or friend — someone who’s bought a car before — and ask “what should I look for?” They’re more knowledgeable, after all, so they know what questions to ask.

The question refinement pattern works just like that. You leverage the LLM’s considerable knowledge by asking it what questions you should ask, since its training data likely includes every FAQ under the sun. This lets you refine a vague query into a precise one, which you can then use to get a more accurate response.

Let’s see how this works in practice.

How to use the question refinement pattern

Write the following prompt:

When I ask a question, suggest a better question and ask me if I would like to use it instead.

[Insert your general question here]

Then, take the more accurate question suggested by the LLM, and use it as your next prompt.

PRO TIP: You can actually do this over and over again, using the questions the LLM provides you with, and asking it to improve on its own work.
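Programmatically, the pattern is a two-step exchange: ask for a better question, then ask the better question. A minimal sketch assuming the OpenAI Python SDK follows; the wording is tweaked slightly so the first reply contains only the refined question, and the vague question is just an example.

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

vague_question = "What laptop should I buy?"  # illustrative
better_question = ask(
    "When I ask a question, reply with a better, more specific version of it "
    f"and nothing else.\n\n{vague_question}"
)

# Feed the refined question back in as the next prompt.
print(ask(better_question))
```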

Example



4. Cognitive Verifier Pattern

Contrary to popular belief, solutions like ChatGPT don’t “think”. They’re eager to please, so they rush to give you a response without properly taking all the different factors into account. This means it’s normally on you to think of all the different angles. 

But what if you don’t know enough to do that? 

That’s where the cognitive verifier pattern comes in. This pattern forces the LLM not to rush to an answer, but to first list all the factors you should consider and the information it needs from you to make a truly informed decision. 

How to use the cognitive verifier pattern

Write the following prompt:

Whenever I ask a question, only ask me for additional information to clarify what I’m asking before providing a final answer. Please combine all my responses.

Let your LLM answer. Then, ask the question. 
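If you’re doing this over an API, remember that the model is stateless: you have to carry the clarifying questions and your answers forward in the messages list yourself. A minimal sketch, assuming the OpenAI Python SDK; the question and the clarifications are illustrative.

```python
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": (
        "Whenever I ask a question, only ask me for additional information "
        "to clarify what I'm asking before providing a final answer. "
        "Please combine all my responses."
    )},
    {"role": "user", "content": "How should I invest my savings?"},  # illustrative
]

# First turn: the model should reply with clarifying questions, not an answer.
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)

# Append its questions and our clarifications, then ask for the final answer.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "I'm 30, risk-averse, with a 10-year horizon."})

final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```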

Example


Without the pattern


With the pattern

Look how different the LLM’s outputs are when the same question is asked with and without introductory questions! 

  • In the first example, ChatGPT gives you an answer by shooting from the hip.

  • In the second, it asks for a lot more background information to make an informed call. 

You can then provide it with all of these details to get a far better response than in the first case.



5. Flipped Interaction pattern

Normally, we are in the driver’s seat when it comes to working with LLMs. We ask the question, and it gives the response. However, sometimes it’s beneficial to switch roles and have it ask us questions instead.

Think of a doctor. You sit down, and you tell them why you’re there today. Then, the doctor asks a bunch of questions until they have enough information to come up with a final diagnosis. This process wouldn’t work anywhere near as well if the doctor couldn’t ask questions. After all, your doctor might ask if you’ve had a sore arm to check for signs of a heart attack, but you might not think to offer that up when you walk in.

The flipped interaction pattern works exactly the same way, turning your LLM from interviewee into interviewer until it has the information it needs to solve your problem.

How to use the flipped interaction pattern

Write the following prompt:

Ask me questions on X until you have enough information to suggest Y

When you have enough information to meet that condition, present solution Y to me. 

Ask me the first question.

The LLM will then start asking you questions. Keep on answering until it has enough information to solve the problem.

PRO TIP: You can add the phrase “Ask me the questions one at a time” (or two at a time, and so on) to your initial prompt to set the pace of the discussion.
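In code, the flipped interaction becomes a simple loop: the model asks, you type an answer, and the growing history goes back with every turn. A minimal sketch assuming the OpenAI Python SDK; the “PLAN:” marker is a naive stopping condition invented purely for illustration.

```python
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": (
        "Ask me questions about my fitness goals until you have enough "
        "information to suggest a weekly workout plan. When you do, present "
        "the plan prefixed with 'PLAN:'. Ask me the first question."
    ),
}]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content
    print(text)
    if "PLAN:" in text:  # naive, illustrative stopping condition
        break
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": input("> ")})  # your answer
```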

Example



6. Gameplay pattern

Shall we play a game?

Gamification — adding game-like elements to non-game contexts — is a surefire way to make something rather boring a lot more interesting. But it has benefits beyond just being fun, such as encouraging creativity and helping us enhance our ideas and strategies.

The gameplay pattern can help us achieve all of that. This pattern encourages users’ curiosity and motivation by turning a task into a game-like experience. You get better results, and that dull task you’re working on suddenly becomes a lot more engaging: that’s a win-win.

How to use the gameplay pattern

Write the following prompt:

Create a game for me around X.

Give fundamental rules of the game.

The LLM will explain the rules of the game. Then, start playing it!

Example



7. Few-shot pattern

If you’re looking to interact programmatically with an LLM via its API, the Few-Shot pattern (also known as “in-context learning”) is incredibly useful.

Here’s how it works. We provide the LLM with examples, known as “shots”, of inputs paired with the outputs we want back — basically showing it the type of inputs we’re going to give and the type of responses we want in return. The LLM then infers the task, and when you give it a similar input, it fills in the blanks as desired.

This pattern only works if the task aligns with what the LLM has been trained to do. LLMs like ChatGPT have been trained on a wide range of tasks, such as translation, question answering, sentiment analysis, and summarisation, so it’s fairly likely that what you’re asking will fit into one of these categories.

How to use the few-shot pattern

Write the following prompt, with the blanks filled in.

Input1: [Your example input]

Output1: [Your desired output]

Input2: [The input you want a response for, but don’t currently have.]
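Since this pattern is especially handy over an API, here’s what the template looks like as an actual call: a minimal sketch assuming the OpenAI Python SDK, using sentiment analysis with two made-up shots.

```python
from openai import OpenAI

client = OpenAI()

# Two shots establish the task; the final line is the input we want completed.
few_shot_prompt = (
    "Input1: I loved this movie, the acting was superb.\n"
    "Output1: positive\n"
    "Input2: The plot was slow and the ending made no sense.\n"
    "Output2: negative\n"
    "Input3: A charming, well-paced story from start to finish.\n"
    "Output3:"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # expected: "positive"
```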

Example

In the following example, we give ChatGPT a summarisation task. On its own, that’s a difficult task for the model to detect; after only one example, however, it identifies the task successfully (summarisation) and provides the summary:



Learning more about prompt engineering

As you’ve seen in this article, there are plenty of ways to get the most out of LLMs like ChatGPT and save yourself a lot of time in the process. These patterns are just a small sample of how prompt engineering can enhance, automate, and accelerate your productivity. With just a little bit of creativity, you can even use these techniques to empower your apps and features!

To take your prompt engineering journey further, below are some expert-led courses you can use to dive deeper. There’s also a link to one of the foundational papers on prompt engineering patterns, where you can learn more techniques like those listed above. I encourage you to check it out!

Axel Sirota


Axel Sirota is a Microsoft Certified Trainer with a deep interest in Deep Learning and Machine Learning Operations. He has a Master’s degree in Mathematics, and after researching Probability, Statistics, and Machine Learning optimization, he now works as an AI and Cloud Consultant, as well as an Author and Instructor at Pluralsight, Develop Intelligence, and O'Reilly Media.
