If you're building a customer service AI agent for your online store, that agent needs to handle all sorts of inquiries: billing problems, refund requests, login issues, and so on. To give it all the information it needs to accomplish these tasks, your prompts grow larger and more complex. This is very different from prompting when you're having a casual conversation with ChatGPT.

And this is the origin of “context engineering.”

It isn't some new technique that appeared out of the blue. It's the natural progression of prompt engineering for the specific case of building AI applications.

“It isn't just the single prompt you send to an LLM. Think of it as everything the model sees before it generates a response.” - Philipp Schmid, Senior Engineer at Google DeepMind

A commonly cited definition:

Context Engineering is the discipline of designing and building dynamic systems that provide a large language model (LLM) with the right information and tools, in the right format, and at the right time to accomplish a given task reliably and accurately.
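To make that definition concrete, here is a minimal sketch of what such a "dynamic system" might look like for the customer service agent above: a function that gathers the instructions, retrieved knowledge, tool definitions, and conversation history the model should see, then flattens them into a chat request. The names (`Context`, `retrieve_docs`, `build_context`, `to_messages`) and the stubbed retrieval step are illustrative assumptions, not a specific library's API.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    system_prompt: str
    documents: list[str] = field(default_factory=list)
    tools: list[dict] = field(default_factory=list)
    history: list[dict] = field(default_factory=list)

def retrieve_docs(query: str) -> list[str]:
    # Placeholder: a real system would query a vector store, FAQ index, or order database.
    return ["Refund policy: items can be returned within 30 days of delivery."]

def build_context(query: str, history: list[dict]) -> Context:
    # Assemble everything the model needs for this particular request.
    return Context(
        system_prompt="You are a customer-service agent for an online store.",
        documents=retrieve_docs(query),
        tools=[{"name": "lookup_order", "description": "Fetch an order by its ID."}],
        history=history,
    )

def to_messages(ctx: Context, query: str) -> list[dict]:
    # Flatten the assembled context into the chat format most LLM APIs accept.
    knowledge = "\n\n".join(ctx.documents)
    return [
        {"role": "system", "content": f"{ctx.system_prompt}\n\nRelevant knowledge:\n{knowledge}"},
        *ctx.history,
        {"role": "user", "content": query},
    ]

messages = to_messages(build_context("I was charged twice for my order.", history=[]),
                       "I was charged twice for my order.")
```

The point is not the specific helpers, but that the context is assembled fresh for each request rather than hard-coded into a single static prompt.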

[Figures: Role, Tasks, Input, Output (two examples), Constraints]
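Judging by the image names, the figures break a prompt down into a role, tasks, input, output examples, and constraints. A minimal sketch of such a template for the customer service agent, with purely illustrative wording in each section:

```python
# Each section's wording is an assumption for illustration, not taken from the figures.
PROMPT_TEMPLATE = """\
# Role
You are a customer-service agent for an online store.

# Tasks
- Answer billing, refund, and login questions.
- Escalate anything you cannot resolve to a human agent.

# Input
Customer message: {customer_message}

# Output
Reply in 2-3 friendly sentences, then list any follow-up actions.

# Constraints
- Never promise a refund before checking the order status.
- Never ask for passwords or full card numbers.
"""

prompt = PROMPT_TEMPLATE.format(customer_message="I was charged twice for order #1234.")
```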