Are you ready to be a Prompt Engineer? Soon, you’ll have to be!

Zvonimir Petkovic

What if anyone could become an engineer? Even if it only meant coming up with clever prompts for data-driven AI beasts.

AI has crash-landed in the engineering space, seemingly threatening to replace the very engineers who created it. But what if we could all become engineers, in a way, by learning to prompt these huge data-driven beasts cleverly?

Arthur C. Clarke was famous for saying:

Any sufficiently advanced technology is indistinguishable from magic.

Although this particular brand of magic is certainly causing some job anxiety in the market, people won’t likely be replaced by AI anytime soon. Human qualities are difficult to replicate with a statistics-fueled large data model.

Yet something else is on the horizon: some people could well be replaced by other people who use AI effectively and, as a result, are more productive.

Rather than being horrified, we should look at the positive prospects for human work and see it as a powerful tool aimed to make our lives easier – and learn how to use it to our advantage. It seems like we’re all becoming prompt engineers to a degree.

Engineering the prompts, or Prompt Engineering

Considering that not even OpenAI can fully explain how their models work, and that it’s impossible to predict their output, the input is the place where we can make an impact!

The question you’re asking ChatGPT or Bard is your powerful new tool – the prompt. Even if you never thought about it that way, by asking it a question, you are basically querying the model to do some mental work and to reward you with the answer.

It makes sense, then, to put some emphasis on the prompts themselves because they define the output quality of your generative AI assistants. The skill of doing this has come to be called Prompt Engineering, and it’s shaping up to be a crucial skill for the entire workforce.

Some may dismiss the whole idea, but the fact is that by the end of 2023, we’ll see deep integration of generative AI into a range of widely used products – like Office suites, communication applications, and others. This will, in turn, mean we’ll be even more exposed to the technology than we’ve been before.

Ready or not, it’s coming to an application near you.

Prompt Engineering is shaping up to be a distinct job of its own and is slowly creeping into the market. There will be some granularity to this role since it currently encompasses pretty much everything that involves interaction between an LLM and a human. Today, it can refer to a developer who writes an app and makes API calls to a language model or a vector database storing embeddings, just as well as to a marketing specialist using the same technology for content creation.

There’s a huge opportunity for specialists who follow the market trends and development of this technology because the strongest steel is forged in the hottest fires. And boy, are the fires burning.

Challenge the model with few-shot prompting

What most ChatGPT users are doing at the moment is zero-shot prompting: asking direct questions without much explanation or context and simply expecting a response. This works… sometimes. But if you have a specific, complex problem for ChatGPT to solve, it doesn’t, and the response you get is either very generic or downright wrong.

As models grow in scale, you can extract better results by providing better context. A technique called few-shot prompting requires more user input but also produces better results.

The input can, for example, include suggestions, examples, or other data that better describe the task you’re pursuing – a pattern, if you will, for the model to follow.
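As a minimal sketch of the idea (the classification task and labels here are made up for illustration), a few-shot prompt simply embeds a handful of worked examples before the real question, so the model can pick up the pattern:

```python
# Build a few-shot prompt: show the model the pattern before asking the
# real question. The examples teach both the task and the answer format
# (here: classifying support tickets by urgency).
EXAMPLES = [
    ("The app crashes every time I open it and I lose my work.", "high"),
    ("Could you add a dark mode in a future release?", "low"),
    ("Checkout fails intermittently for some customers.", "high"),
]

def build_few_shot_prompt(query: str) -> str:
    lines = ["Classify each support ticket as 'high' or 'low' urgency.", ""]
    for ticket, label in EXAMPLES:
        lines.append(f"Ticket: {ticket}")
        lines.append(f"Urgency: {label}")
        lines.append("")
    # The unanswered final entry invites the model to complete the pattern.
    lines.append(f"Ticket: {query}")
    lines.append("Urgency:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Login page returns a 500 error for all users.")
print(prompt)
```

The resulting string can be sent to any chat model as-is; the point is that the examples, not extra instructions, do most of the explaining.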

Under the hood, the model makes chains of different reasoning steps and only then outputs the result you can see.
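This is the idea behind chain-of-thought prompting: instead of a bare question–answer example, the example includes the intermediate reasoning, nudging the model to reason out loud before answering. A hedged sketch, using the well-known tennis-ball word problem as the worked example:

```python
# Chain-of-thought prompt: the single example shows its reasoning steps,
# not just the final answer, so the model imitates that style.
COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    # The trailing cue invites the model to produce its own step-by-step
    # reasoning before committing to an answer.
    return COT_EXAMPLE + f"Q: {question}\nA: Let's think step by step."

print(build_cot_prompt(
    "A cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples does it have now?"
))
```

Compared to the few-shot pattern above it, only the example changed – it now contains the reasoning – yet that small change is what drives the benchmark gains described next.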

The benefit of chain-of-thought prompting has also been demonstrated empirically on the standard GSM8K benchmark (word math problems), shown in the picture below. As models grow (ChatGPT already uses one of the biggest), chain-of-thought prompting gives far better results than standard prompting without it.

Picture from: Characterizing Emergent Phenomena in Large Language Models – Google AI Blog

Word math problems are used as an example here simply because LLMs are inherently bad at math. Recent updates have mitigated some of these issues, but you wouldn’t ask Shakespeare to solve a differential equation, would you?

Don’t be left out of the revolution

Regardless of the uncertainty present, there’s a quiet revolution happening, and companies need to find a way to help their employees make the most of it.

I’m not just talking about integrating AI into the products. I’m referring to every employee who could be at least a tiny bit more productive in their daily work.

Being able to use this new tool effectively will mean embracing a new mindset and adjusting your approach to get the best out of it. Not everyone is proficient at expressing themselves in writing, especially when you need to keep the specifics of prompting in mind.

There are, of course, multiple ways to tackle this, from courses and workshops to communities of practice and (prompt) knowledge bases. The goal is to enable everyone to apply a bit of prompt engineering themselves.

Finally, adopting prompt engineering principles across the company will, if done thoroughly, help address security and legal concerns thanks to a deeper understanding of what to input and how.

Workshops and educational activities are great ways to ensure that the prompts people create are free of sensitive data and generally safe to use in day-to-day work, and that private model deployments are used when tapping into highly sensitive data.

The net results of these new skills will be visible both as a company benefit and, more crucially, as long-term job security. Because AI doesn’t stand a chance against a human with AI as a sidekick.
