
Prompting Tips and Tricks

Authors
  Teddy Xinyuan Chen

Examples and experiences based on my own experiments with the gpt-4-turbo model from OpenAI.


Injecting Alternative Thinking

I found this to be very effective when the task is to implement some complex features in code.

I had a project with complicated requirements, and I just couldn't seem to get the code ChatGPT provided to work.

Even though I always provide the full context, a (close to working) example, clarify my expectations/requirements, and sometimes point out common mistakes/bugs it might produce, it still makes errors here and there, or ignores one of my requirements.

The solution that worked for me is to suggest some possible ways to implement the requirement it failed to attend to, for example:

```
...

Use $LIBRARY to do $TASK (and provide relevant example usages of $LIBRARY if you don't think it's well represented in the training data)

# and you could juice it up a bit, see the `Motivate Your Model` section below
```
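If you do this often, it's easy to script the pattern. Here's a minimal sketch of a helper that appends the `$LIBRARY` / `$TASK` hint (plus optional example usage) to a base prompt; the function name and the sample values are my own illustration, not from any particular library.

```python
def suggest_alternative(base_prompt: str, library: str, task: str,
                        example_usage: str = "") -> str:
    """Append an alternative-implementation hint to a prompt,
    following the 'Use $LIBRARY to do $TASK' template."""
    hint = f"\n\nUse {library} to do {task}."
    if example_usage:
        # Include example usage when the library may be
        # under-represented in the training data.
        hint += f"\nExample usage of {library}:\n{example_usage}"
    return base_prompt + hint

prompt = suggest_alternative(
    "Implement incremental parsing for my config format.",
    library="lark",
    task="the parsing",
    example_usage='parser = lark.Lark(grammar, parser="lalr")',
)
```

The helper just builds a string; you'd pass the result to whatever chat client you use.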

Motivate Your Model

Yes, literally appending do this well and I'll give you a $200 tip to your prompt boosts the quality of its response by a huge margin.
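As a throwaway sketch of how mechanical this is, here's the whole "technique" as a one-liner wrapper; the function name and default amount are my own, and the wording is free to vary.

```python
def motivate(prompt: str, tip: int = 200) -> str:
    # Literally append a tip offer to the end of the prompt.
    return f"{prompt}\n\nDo this well and I'll give you a ${tip} tip."

p = motivate("Summarize this RFC in five bullet points.")
```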

I learned this trick on Twitter, and it has been discussed on HN.

You can get creative and experiment on this one.

paper: Large Language Models Understand and Can be Enhanced by Emotional Stimuli

First-person Instructions

As Riley Goodside has mentioned on Twitter, I believe that one of the limitations of the chat models compared to the completion models is that you cannot conveniently inject first-person instructions into the prompt.

One of the most famous examples of doing that is to use let's: Let's (take a deep breath, then) think step by step before answering

In completion models, you can directly hook into the model's own mind: you can override its thinking by making it believe your text was its own.
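A minimal sketch of the difference, using plain prompt strings (no API call): with a completion-style prompt you write text the model will continue as if it had said it itself; with chat APIs, the closest approximation is prefilling the start of the assistant turn, where the API supports that. The exact wording and message shapes here are illustrative.

```python
question = "What is 17 * 24?"

# Completion-style: inject first-person 'thinking' directly into
# the text the model will continue, as if it were its own words.
completion_prompt = (
    f"Q: {question}\n"
    "A: Let's take a deep breath, then think step by step.\n"
)

# Chat-style approximation: some chat APIs let you prefill the
# beginning of the assistant message, which the model continues.
chat_messages = [
    {"role": "user", "content": question},
    {"role": "assistant", "content": "Let's think step by step."},
]
```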

By thinking, I only mean the generated tokens, not actual thinking, because they cannot actually think. As you already know, LLMs generate text by predicting the next token, in an auto-regressive manner.

One of the advantages of Chain of Thought (think step by step) comes from the model's ability to read its own generated context (the steps) before actually answering your question.
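Since the steps land in context before the answer, it helps to ask for them explicitly and to give the final answer a fixed marker you can parse afterwards. A small sketch, with a made-up output format of my own choosing:

```python
cot_prompt = (
    "Solve the problem below.\n"
    "First list your reasoning steps, one per line.\n"
    "Then, on the last line, write: ANSWER: <your answer>\n\n"
    "Problem: A train travels 60 km in 45 minutes. "
    "What is its average speed in km/h?"
)

def final_answer(model_output: str) -> str:
    # Everything after the last 'ANSWER:' marker is the answer;
    # everything before it was the chain of thought.
    return model_output.rsplit("ANSWER:", 1)[-1].strip()
```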

Other Prompt Engineering Resources

Some other resources I found useful.

Twitter

hn search - goodside

Some other people

https://news.ycombinator.com/item?id=38243335

Andrej Karpathy

Simon Willison (love his blog!)

Matthew Berman (for succinct howto's on YT)

Guides

Latest Development and Discoveries

Every day someone invents a new trick and posts it on Twitter, HN, or arXiv.

So you could keep an eye on these sources if you want to keep up with the latest findings.