AI Friday: Prompt Engineering Basics



Hello friends, it's been a while since I wrote about an AI topic. Today, based on a conversation with Rodrigo Peplau, I decided to write this blog post. Prompt engineering, and how to use Generative AI to create useful outputs, is a powerful topic, and many newcomers to AI don't realize all the amazing things it can do. Rodrigo and I are working on some auto-personalization functionality using Generative AI (sorry Rodrigo if you didn't want me to share that 😛). In our case, I wanted to take some information about a website built with Sitecore and use Generative AI to automatically generate persona and profile key information for the site.

I had done this before for my SUGCON EU presentation on behavioral personalization, but I had lost the prompts, so this is an exercise in recreating that outcome. I believe I used Microsoft Copilot, provided information about the SUGCON North America website, and asked it to create personas for this type of site and output them in a table. In today's use case, I wanted the output in JSON format so that the data could be used for other purposes.

Initial Prompts

My initial attempts to create the Persona and Profile Keys (or Attributes) shared across Personas began with a straightforward description of the site. These attempts resulted in multiple prompts to achieve the desired JSON output, which looked something like this:

```json
[
  {
    "name": "Some Persona",
    "description": "Information about the persona",
    "attributes": [
      {
        "name": "Tech Enthusiast",
        "score": "10"
      }
    ]
  }
]
```

This JSON output would allow us to take further action with what the Generative AI produced. The follow-up prompts also proved useful later, when we combined their instructions into a single request in our code to generate the outcome we were expecting. These were the issues with the initial prompt that I had to address with follow-up prompts:

  • The attributes were inconsistent across personas, whereas in Sitecore, profile keys must be uniform across a Profile and Pattern Card.
  • It needed more context about the schema of the attributes.
  • It didn't know how to provide the score, producing vague values like "High" or "Low". I clarified that the scores needed to be numerical, specifically within a range of 1 to 100, which resolved the issue.
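The issues above can also be caught programmatically before the output is used downstream. Here is a minimal sketch of a validator (the `validate_personas` helper and its messages are my own, assuming the final `attributes: Record<string, number>` shape rather than the early list-based one):

```python
def validate_personas(personas):
    """Check generated personas against the rules from the prompt:
    every persona must share the same attribute names, and every
    score must be a number between 1 and 100 (not "High"/"Low")."""
    errors = []
    if not personas:
        return ["no personas generated"]
    # The first persona defines the attribute names everyone must share.
    expected_keys = set(personas[0]["attributes"])
    for persona in personas:
        if set(persona["attributes"]) != expected_keys:
            errors.append(f"{persona['name']}: attribute names differ across personas")
        for attr, score in persona["attributes"].items():
            if not isinstance(score, (int, float)) or not 1 <= score <= 100:
                errors.append(
                    f"{persona['name']}.{attr}: score must be a number from 1 to 100, got {score!r}"
                )
    return errors
```

A failing run returns human-readable errors that could be fed back to the model as a correction prompt.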

After these additional prompts, I finally got the desired output. However, I now wanted a way to run this from code. Let's discuss some strategies I used to incorporate those additional instructions and context into a single request to guide the model effectively. Note that the approach might differ slightly when using the Chat Completions API.

Instruct the Model to Pretend to be a Specific Role

I've seen this technique used elsewhere on the web when you have a specific purpose in mind. In our case, I wanted Generative AI to assume a Strategist or Data Science role: profile the content, come up with a general set of Profile Keys/Attributes, and then use those attributes to score the personas of a web page. So I added an instruction such as: You are a website marketing strategist and you need to create profile pages on a website...

This is considered "Role Prompting" (also called "Assigning Roles"), which improves the accuracy of the output from our request. In our case, because Generative AI assumes it's a strategist, it analyzes and profiles the content that we provide it.

Now, for a lot of the prompting, I was simply using Microsoft Copilot, which doesn't really allow you to set system prompts. That is where many of the instructions I talk about in this post should really go, so that when I describe the site or the pages of the site, the model uses the system context to automatically return the expected outcome.
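When you do control the API call yourself, the role instruction belongs in a system message so every user turn is interpreted in that context. A sketch, assuming an OpenAI-style Chat Completions payload (the helper name and prompt wording are illustrative, not from the original code):

```python
ROLE_INSTRUCTION = (
    "You are a website marketing strategist. Analyze the site content "
    "you are given, produce a shared set of profile keys, and score "
    "each persona against those keys."
)

def build_request(site_description, model="gpt-4o"):
    """Assemble a single Chat Completions-style request body that carries
    the role instruction as a system message, so the user turn (the site
    description) is always interpreted as a strategist task."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": ROLE_INSTRUCTION},
            {"role": "user", "content": site_description},
        ],
    }
```

This dictionary could then be sent with whatever client library you use; the point is that the role assignment lives once in the system slot instead of being repeated in every user prompt.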

Provide an Example

In my specific use case, I did not provide an example, but it’s an important element that can greatly improve the accuracy of the LLM in returning the desired output. When I used Microsoft Copilot, even with specific instructions to not use unique attributes per persona, it often made errors. I found that the Microsoft model wasn’t advanced enough to handle this scenario, so I switched to a more sophisticated model for better results. Providing an example of the desired output would likely have generated the correct response. Including a JSON schema in my final prompt yielded better results, and if I had provided more scenarios to generate a common set of profile keys, it might have resolved the issue.

Since I'm not providing examples, this approach qualifies as zero-shot prompting (see the "Zero-Shot Prompting" entry in the Prompt Engineering Guide). While no examples are given to guide the AI, the additional context I provide should help achieve a more accurate response.

Final Outcome

Let's review what my final prompt looks like. You will notice that many of the elements discussed above are included in this initial prompt:

1. You are a content marketer/strategist working for a website that is about events in the Sitecore community
2. Your response should be only JSON in the following format: [{ name: string; description: string; attributes: Record<string, number>; }]
3. The attributes will have a string for the name of the attribute and a number between 1 and 100 to score that persona.
4. The attributes that you define should repeat for each persona, so how you score each attribute is what differentiates the personas from each other.
5. The site has various pages about the event such as the agenda, sponsorship information, and the speakers, as well as ways to sign up.

As you can see, I used a numbered list of instructions to outline the key points, including an example of the desired JSON output and guidelines on what was acceptable. I also assigned the role of content marketer/strategist to refine the results further. While this prompt could be enhanced by providing an actual example, I was already getting the expected output.
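On the code side, the model's JSON reply can be parsed straight into typed objects matching the schema from instruction 2. A sketch (the `Persona` dataclass and `parse_personas` helper are my own naming, not from the original project):

```python
import json
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    description: str
    attributes: dict  # attribute name -> score from 1 to 100

def parse_personas(raw: str) -> list[Persona]:
    """Parse the model's JSON response into typed objects, matching the
    [{ name; description; attributes: Record<string, number> }] shape
    the final prompt asks for."""
    data = json.loads(raw)
    return [Persona(p["name"], p["description"], dict(p["attributes"])) for p in data]
```

With typed objects in hand, the personas are ready for whatever Sitecore automation comes next.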

I will cover more details about this code and its usage in a future blog post. I will also explain how to build upon these prompts to generate JSON, which can be used with function calls to interact with an external API. Stay tuned for that topic 🙂.

Extra Credit

Many forget that AI output may not always meet expectations. Human oversight is essential to verify and refine the information suggested by AI. It's crucial in any AI application to have a human validate the output. While this feature isn't built yet, the next step could involve allowing users to select specific generated personas and create them, rather than discarding others. Another option is to enable users to regenerate or refine certain personas with additional context. This process supports user creativity, leading to impressive outcomes.
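The select-and-discard review step described above is easy to sketch as a function (this feature isn't built yet, so the helper below is purely illustrative of the flow):

```python
def review_personas(personas, approved_names):
    """Human-in-the-loop filter: keep only the generated personas a
    reviewer explicitly approved. Rejected names are returned too, so
    they could be regenerated with additional context instead of lost."""
    approved = set(approved_names)
    kept = [p for p in personas if p["name"] in approved]
    rejected = [p["name"] for p in personas if p["name"] not in approved]
    return kept, rejected
```

A UI would feed the reviewer's selections into `approved_names`; the `rejected` list becomes the input for a regenerate-with-feedback prompt.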


