DeepMind’s Promptbreeder automates prompt engineering



Summary

Google DeepMind researchers demonstrate Promptbreeder, an AI system that improves itself by recursively generating better prompts.

The development of natural language instructions (prompts), known as prompt engineering, is a key technique for improving the capabilities of large language models such as GPT-4. However, the process is time-consuming, often unintuitive, and its results are not always reproducible. Promptbreeder automates prompt engineering by using an evolutionary algorithm to generate better prompts over successive generations.

The system starts with an initial population of prompts, each scored by the model’s performance on reasoning tasks when using that prompt. High-scoring prompts are mutated according to rules specified in “mutation prompts” to generate new variations, while low-scoring prompts are discarded. This iterative process allows Promptbreeder to explore a large prompt space and discover highly effective prompts for different domains.
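The loop described above can be sketched as follows. This is a minimal illustration, not DeepMind’s implementation: the real system queries an LLM both to score prompts on benchmark tasks and to perform mutations, whereas here both are stubbed, and all function names are assumptions.

```python
import random

def score(task_prompt):
    # Stub fitness function. In Promptbreeder, this would be the fraction
    # of benchmark problems the LLM solves when given this prompt.
    return sum(ch in "aeiou" for ch in task_prompt) / max(len(task_prompt), 1)

def mutate(task_prompt, mutation_prompt):
    # Stub mutation. The real system asks the LLM to rewrite task_prompt
    # according to mutation_prompt; here we just append a visible marker.
    return task_prompt + " | " + mutation_prompt

def evolve(population, mutation_prompts, generations=5, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        ranked = sorted(population, key=score, reverse=True)
        survivors = ranked[: len(ranked) // 2]      # keep high scorers
        children = [mutate(p, rng.choice(mutation_prompts))
                    for p in survivors]             # vary the winners
        population = survivors + children           # low scorers are discarded
    return max(population, key=score)

best = evolve(
    ["Solve the problem step by step.", "SOLUTION"],
    ["Rephrase more concisely.", "Add a worked example."],
)
print(best)
```

The key design point is that selection pressure comes entirely from task performance: no human judges the prompts between generations.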

Crucially, Promptbreeder is also self-referencing – it not only develops better task prompts but also better mutation prompts that control how the task prompts are varied. In this way, the system also improves its own search process and becomes more efficient at creating prompts over time.
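This self-referential step can be sketched in the same stubbed style: the mutation prompts that drive variation are themselves occasionally rewritten (“hyper-mutation”). Again, the function names and the fixed mutation rate are illustrative assumptions, not DeepMind’s code; the real system would ask the LLM to propose the improved mutation prompt.

```python
import random

def hyper_mutate(mutation_prompt):
    # Stub: the real system would query the LLM for a better mutation
    # prompt; here we just tag the string so the change is visible.
    return "Improved: " + mutation_prompt

def maybe_hyper_mutate(mutation_prompts, rate=0.3, seed=0):
    # With some probability, rewrite each mutation prompt in place.
    rng = random.Random(seed)
    return [hyper_mutate(m) if rng.random() < rate else m
            for m in mutation_prompts]

mutation_prompts = ["Rephrase more concisely.", "Add a worked example."]
print(maybe_hyper_mutate(mutation_prompts))
```

Because better mutation prompts produce better task-prompt variations, improvements at this level compound across generations.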


Promptbreeder finds better prompts – and will become much more important in the future

The researchers tested Promptbreeder on arithmetic, reasoning, and hate speech classification tasks, where it consistently outperformed traditional manual prompting techniques such as “chain of thought” prompting. The evolved prompts were often counterintuitive and domain-specific; for example, a terse “SOLUTION” prompt before a math task outperformed even the recently discovered “take a deep breath” prompt.

Image: Fernando et al.

The self-referential architecture is critical to this success, the team says: ablation tests showed that removing the system’s ability to improve itself significantly degrades performance.

Future iterations could allow Promptbreeder to develop more complex and general reasoning strategies that go beyond task-specific prompts. As language models become larger and more powerful, the potential benefits of such prompt strategies will also increase, the researchers say.

“I believe the future is going to be wild as we will see increasingly open-ended self-referential self-improvement systems that continue to scale with ever larger and more capable LLMs. Promptbreeder is an important stepping stone in this direction,” says contributing researcher Tim Rocktäschel. The development of such a system is a “holy grail of AI research,” he adds.


