Prompt Engineering
Prompt Engineering is an important concept when it comes to working with AI models. With prompt engineering, you can get a model to perform a task by 'front-loading' the prompt with examples. This means that you do not need to spend the time building a resource-intensive dataset. So what does prompt engineering look like in real life?
You may have a prompt for a Product Description model.
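The exact wording will vary, but a basic version might look something like the following sketch, where the bracketed placeholders stand in for whatever the user types:

    Write a product description for the following product.

    Product name: [the user's product name]
    Features: [the user's list of features]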
This basic prompt can work: the user inputs their fields and you get something out the other end. The AI, however, does not know how long the text should be, what tone to strike or what your style is, and that is a problem. The easiest way to fix it is with prompt engineering. To do this, you'd extend the prompt with a handful of worked examples.
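As a sketch, with the products and descriptions invented purely for illustration, a few-shot version of the prompt might look like this:

    Write a product description based on the product name and features.

    Product name: Aurora Desk Lamp
    Features: adjustable arm, three brightness levels, USB charging port
    Description: The Aurora Desk Lamp keeps your desk bright and tidy, with a fully adjustable arm, three brightness levels and a built-in USB charging port for your phone.

    Product name: Trailblazer Water Bottle
    Features: 750ml, insulated, leak-proof lid
    Description: The Trailblazer Water Bottle carries 750ml and keeps drinks cold for hours, thanks to its insulated body and a leak-proof lid made for life outdoors.

    Product name: Nimbus Wireless Earbuds
    Features: noise cancelling, 24-hour battery, touch controls
    Description: The Nimbus Wireless Earbuds shut out distractions with active noise cancelling, last 24 hours on a single charge and respond to a light tap with touch controls.

    Product name: [the user's product name]
    Features: [the user's list of features]
    Description: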
You can see that, with prompt engineering, we give the AI three example inputs paired with their desired outputs as context. The AI now understands that this is the format it is expected to follow. When it is called into action, it sees the user's product name and features at the bottom of the prompt, notices that the final Description field has been left blank, and writes its output there, mirroring the three completed descriptions above it.
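In practice you would then hand this completed prompt to your NLP provider. A minimal sketch, assuming the OpenAI Python client purely as one example provider (the file name and model choice below are hypothetical):

    from openai import OpenAI

    # Hypothetical file holding the few-shot prompt above, ending at the blank Description field.
    few_shot_prompt = open("product_prompt.txt").read()

    client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice; any completion-capable model works
        messages=[{"role": "user", "content": few_shot_prompt}],
        max_tokens=120,       # leave just enough room for a single product description
    )

    print(response.choices[0].message.content)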
Advantages of Prompt Engineering
One of the main advantages is speed. You can quickly test different contexts and teach the AI the pattern you want it to follow, and with just a few examples the output quality can be surprisingly good. If the pattern is a relatively simple one, prompt engineering may be all that is needed to produce a usable model.
Using prompt engineering also means that you don't have to spend time creating a larger dataset. A fine-tuning dataset can consist of thousands of examples, and while a well-built dataset combined with fine-tuning will usually give better output, prompt engineering is a much quicker process.
Disadvantages of Prompt Engineering
One of the main disadvantages of prompt engineering isn't immediately obvious. Many NLP providers charge on a per-token basis, and they charge for both the input and the output. You are therefore going to pay more per request the more prompt engineering you include. Because the prompt is often far larger than the text you actually want out, it becomes a balancing act between getting quality outputs and keeping the model affordable to run; this is where fine-tuning could be a better option.
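To make that concrete, here is a rough back-of-the-envelope calculation using invented per-token prices and token counts; real prices vary by provider and model:

    # Hypothetical prices and token counts, purely to show how the prompt dominates the bill.
    PRICE_PER_1K_PROMPT_TOKENS = 0.0015      # invented price
    PRICE_PER_1K_COMPLETION_TOKENS = 0.0020  # invented price

    prompt_tokens = 800      # a few-shot prompt carrying several worked examples
    completion_tokens = 100  # the single product description you actually wanted

    cost_per_request = (
        prompt_tokens / 1000 * PRICE_PER_1K_PROMPT_TOKENS
        + completion_tokens / 1000 * PRICE_PER_1K_COMPLETION_TOKENS
    )
    print(f"Cost per request: ${cost_per_request:.5f}")
    # With these numbers, roughly 85% of every request's cost is the prompt, not the output.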
Another disadvantage of prompt engineering is giving enough examples for the pattern to be followed correctly, which becomes difficult as things get more complex. One of the main limitations of current AI technology is the number of tokens you can use across the prompt and the completion combined. The exact limit varies from NLP provider to NLP provider, but tends to be anything from around 1,500 tokens up to a maximum of about 3,000. There is only so much information you can fit into that many tokens, so for more complex or longer tasks, prompt engineering on its own is not really suitable.
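If you want to check how much of that budget a prompt is using before you send it, you can count its tokens locally. A minimal sketch, assuming an OpenAI-style model and the tiktoken library (the file name and limit below are hypothetical):

    import tiktoken

    # cl100k_base is the tokenizer used by several recent OpenAI models;
    # other providers tokenise differently, so treat the count as an estimate.
    encoding = tiktoken.get_encoding("cl100k_base")

    def count_tokens(text: str) -> int:
        """Return the number of tokens the model would see for this text."""
        return len(encoding.encode(text))

    few_shot_prompt = open("product_prompt.txt").read()  # hypothetical prompt file
    used = count_tokens(few_shot_prompt)
    limit = 2048  # hypothetical combined prompt-plus-completion limit

    print(f"{used} of {limit} tokens used, leaving {limit - used} for the completion")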