The playground is a place where you can experiment with the AI and start to understand the basics of putting a model together. There is a bit of a learning curve, but we'll do our best to walk you through creating a model successfully and understanding all the toggles and settings you can use.

Let's go over some of the settings and explain them in plain English. If you need a reminder while in the playground, just hover over a setting's title and a tooltip will appear with a bit of additional information.

Temperature - This controls how wild or how factual you want the output to be. A higher temperature produces more creative output; a lower temperature produces more factual output. The closer the temperature is to 0, the more likely the model is to stick to facts and/or repeat ideas already seen in the prompt. Set closer to 1, it is more likely to be creative and build on your prompt with new ideas.
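Under the hood, temperature rescales the model's raw token scores before they are turned into probabilities. A rough sketch (the logit values here are made up for illustration):

```python
import math

def apply_temperature(logits, temperature):
    """Divide raw scores (logits) by temperature, then softmax.

    Lower temperature sharpens the distribution, so the top token
    dominates; higher temperature flattens it, giving less likely
    tokens a real chance of being picked.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate tokens
logits = [2.0, 1.0, 0.5]

cold = apply_temperature(logits, 0.2)  # near-greedy, very predictable
hot = apply_temperature(logits, 1.5)   # flatter, more varied output
```

With the low temperature, the first token's probability is close to 1; with the high temperature, the three probabilities are much closer together.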

Top P - Top P controls the diversity of the output. In nerdy terms, it determines how many tokens are considered when generating new content. Setting Top P to 1 means all tokens are considered, whereas setting it lower rejects some candidate tokens and narrows in on what is created. Generally, in most text-generation settings, you won't need to change Top P.
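The idea behind Top P (also called nucleus sampling) can be sketched like this: keep only the smallest set of most likely tokens whose probabilities add up to the Top P value, and sample from those. The probability values below are invented for illustration:

```python
def top_p_filter(probs, top_p):
    """Return the indices of the tokens kept under nucleus sampling.

    Tokens are ranked by probability, then accumulated until their
    combined probability reaches top_p; the rest are discarded.
    """
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for index, p in ranked:
        kept.append(index)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# Hypothetical probabilities for five candidate tokens
probs = [0.5, 0.3, 0.1, 0.06, 0.04]

top_p_filter(probs, 1.0)  # all five tokens stay in play
top_p_filter(probs, 0.8)  # only the two most likely tokens survive
```

Lowering Top P shrinks the pool of candidate tokens, which is why the output becomes less diverse.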

Output Length - The output length is how many tokens are generated. Tokens are measured slightly differently to characters or words: a token equates to roughly 4 characters, or about three-quarters of an English word, so for a rough comparison, for 100 tokens expect to receive approximately 75 words.
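That rule of thumb is easy to turn into a quick estimator. This is only a ballpark heuristic, not a real tokenizer, and the 0.75 words-per-token ratio is the rough figure used above:

```python
def estimate_tokens_from_words(word_count):
    """Rough estimate: 1 token is about 0.75 English words,
    so tokens is approximately words / 0.75. Real tokenizers vary."""
    return round(word_count / 0.75)

estimate_tokens_from_words(75)   # roughly 100 tokens
estimate_tokens_from_words(300)  # roughly 400 tokens
```

If you need exact counts (for example, to stay under a model's limit), use the model provider's own tokenizer rather than a heuristic like this.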

Stop Sequence - Building a model is all about patterns, as we explain when discussing Prompt Engineering. Create the pattern and the model will follow it. In the example above, we have the 2 inputs followed by '###' and then the output followed by '###'. The AI knows to stop producing text when it next generates '###', and that is how we get a complete output each and every time.
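The effect of a stop sequence can be sketched as a simple cut: everything from the first occurrence of the marker onward is dropped. The sample text here is invented for illustration:

```python
def truncate_at_stop(generated, stop_sequence="###"):
    """Cut generated text at the first occurrence of the stop
    sequence, mimicking what happens when the model emits it."""
    index = generated.find(stop_sequence)
    return generated if index == -1 else generated[:index]

truncate_at_stop("A complete answer.### rambling the model would have added")
# keeps only "A complete answer."
```

In practice the playground does this for you; you just list '###' (or whatever marker your prompt pattern uses) as the stop sequence.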
