Prompting becomes a system: retrieval + transformation + reuse
Once you hit the limits of “stuff it in the prompt,” the question shifts from writing to selecting. Which context matters? What’s authoritative? How do you keep it current? That’s retrieval. And once you retrieve, you almost always need transformation: clean it, normalize it, compress it, and present it in a schema the model can reliably use.
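The transformation step can be sketched in a few lines. This is a minimal illustration, not any particular library's API: `chunks` is assumed to be a list of `{"source", "text"}` dicts coming back from retrieval, and the output schema (tagged `<doc>` blocks) is just one reasonable choice.

```python
import re
import textwrap

def transform(chunks, max_chars=1200):
    """Clean, dedupe, and compress retrieved chunks, then present them
    in a fixed schema the model can reliably parse."""
    seen, cleaned = set(), []
    for c in chunks:
        text = re.sub(r"\s+", " ", c["text"]).strip()  # normalize whitespace
        if text in seen:                                # drop exact duplicates
            continue
        seen.add(text)
        cleaned.append({
            "source": c["source"],
            # compress: hard cap on length so one chunk can't eat the budget
            "text": textwrap.shorten(text, max_chars, placeholder=" …"),
        })
    # present: one tagged block per source, so citations are traceable
    return "\n".join(
        f'<doc source="{c["source"]}">\n{c["text"]}\n</doc>' for c in cleaned
    )
```

The specifics (regex cleanup, dedup by exact match, a character cap) are deliberately crude; the point is that each sub-step — clean, dedupe, compress, schematize — is explicit and testable on its own.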
This is the point where “prompt engineering” quietly turns into information engineering. The prompt is only one layer. The system is: fetch → format → answer → log → improve.
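That pipeline fits in one function. A minimal sketch, assuming hypothetical callables for the retriever, formatter, and model call — the names `retrieve`, `format_context`, and `ask_model` are placeholders, not a real API:

```python
import json
import time

def run(query, retrieve, format_context, ask_model, log_path="runs.jsonl"):
    """fetch -> format -> answer -> log. The 'improve' step happens
    offline, by reviewing the log and tightening retrieval/formatting."""
    docs = retrieve(query)                    # fetch: select authoritative context
    prompt = format_context(query, docs)      # format: schema the model can use
    answer = ask_model(prompt)                # answer
    with open(log_path, "a") as f:            # log: inputs + outputs for review
        f.write(json.dumps({
            "ts": time.time(),
            "query": query,
            "docs": [d["source"] for d in docs],
            "answer": answer,
        }) + "\n")
    return answer
```

Note that the log records *which* documents were used, not just the answer — that's what makes the improve loop possible, because most failures trace back to fetch or format, not to the model.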
If you do this well, the model looks smarter. But what actually happened is that you gave it better inputs and a tighter job.