Prompt engineering in LLMs means finding the right vector program


Large language models (LLMs) such as OpenAI's GPT-4 act as repositories of millions of vector programs mined from human-generated data, learned as a by-product of language compression, says AI researcher François Chollet.

Prompt engineering, then, is the search for the right "program key" and "program argument(s)" that get the model to perform a given task as accurately as possible.
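
To make the framing concrete, here is a minimal, hypothetical Python sketch (not from Chollet or the article): the instruction plays the role of the "program key" that selects a behavior, and the input text is the "argument" it is applied to. `query_llm` is a placeholder name for whatever LLM API call you actually use.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat-completion endpoint)."""
    raise NotImplementedError


def run_program(program_key: str, argument: str) -> str:
    # The instruction ("program key") selects the behavior;
    # the argument is the data that behavior operates on.
    prompt = f"{program_key}\n\nInput: {argument}\nOutput:"
    return query_llm(prompt)


# Different keys retrieve different "programs" from the same model:
# run_program("Summarize the following text in one sentence.", article_text)
# run_program("Translate the following text into French.", article_text)
```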

Chollet expects that as LLMs evolve, prompt engineering will remain critical, but it will increasingly be automated to give users a seamless experience.

This aligns with recent work from labs such as DeepMind, which is exploring automated prompt engineering.
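
As a rough illustration of what automated prompt engineering can look like (a generic sketch, not DeepMind's actual method), the search can be as simple as scoring candidate instructions on a few labeled examples and keeping the best one:

```python
from typing import Callable


def best_prompt(
    candidates: list[str],
    examples: list[tuple[str, str]],
    query_llm: Callable[[str], str],
) -> str:
    """Return the candidate instruction with the highest exact-match accuracy."""

    def accuracy(instruction: str) -> float:
        # Count how often the model's output matches the expected answer.
        hits = sum(
            query_llm(f"{instruction}\n\nInput: {x}\nOutput:").strip() == y
            for x, y in examples
        )
        return hits / len(examples)

    # Keep the phrasing that best "keys into" the program we want.
    return max(candidates, key=accuracy)
```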
