Prompt Repetition and Rephrasing: A Reliability Tactic That Lasts

Repeat or rephrase the prompt by placing it at the top and/or bottom to keep the model anchored and improve consistency on long or complex inputs.

While models have gotten much better at following instructions and prompts, there are still a few “older” techniques from the early GPT-3 era that can be surprisingly helpful—especially when you’re dealing with a challenging task or a long, messy input.

One that still works more often than you’d expect: repeating the prompt.

A recent paper pointed out that sometimes just repeating the prompt can dramatically improve model performance on certain tasks. That doesn’t really surprise me, because it’s something I used to do all the time with earlier models—particularly when I had really long prompts.

For example, if I needed to edit a piece of text (or do anything substantial with a long body of content), I’d structure the input like this:

  1. Put the prompt at the top
  2. Include the body text
  3. Repeat the prompt at the bottom
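The three steps above can be sketched as a small helper. This is a hypothetical function, not part of any library API; the delimiter strings and the "Reminder" phrasing are my own assumptions about one reasonable way to lay it out:

```python
def sandwich_prompt(instructions: str, body: str) -> str:
    """Place the same instructions before and after the body text,
    so the model sees the task again right before it responds."""
    return (
        f"{instructions}\n\n"
        f"--- BEGIN TEXT ---\n{body}\n--- END TEXT ---\n\n"
        f"Reminder of the task:\n{instructions}"
    )

# Example: an editing task over a body of text.
prompt = sandwich_prompt(
    "Edit the text below for grammar and clarity. Preserve the author's voice.",
    "Their going to the store tomorow, weather or not it rains.",
)
```

The exact delimiters matter less than the shape: instructions, body, instructions again, so the last thing the model reads is the task itself.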

The reason is simple: models can get kind of “lost” in the middle.

If you put the prompt at the top, the model starts off with the intent of what it needs to do. But by the time it has processed a long body of text, with all the different directions that text pulls it in, it can drift a bit. Then, if it reaches the end and sees the prompt again, that repetition serves as a reminder, almost like re-centering the model on the actual task.

In practice, that small change can make outputs noticeably more consistent.

There’s also an interesting historical note here. For a period of time, OpenAI models tended to perform best when the prompt was at the top, and Anthropic models often performed better when the prompt was at the bottom. That became more widely discussed later, but I’d already gotten into the habit of putting the prompt at both the top and the bottom just to cover both cases and keep the model anchored.

If you want a variation on this idea, another technique is to rephrase the prompt rather than repeat it verbatim.

You can give the prompt once, then expand on it with a clearer restatement—something like:

  • “We’re going to do this task…”
  • followed by a more explicit description of what “this task” actually is
  • maybe including any key constraints or definitions the model might otherwise miss

These are early prompt techniques, but they still help in situations where the model might otherwise lose the thread—especially with long inputs, editing tasks, or anything where it’s easy for the model to wander away from the point.

If you’re stuck on a prompt that “should” be working but isn’t, try the simplest fix first: say it again. Or say it again, better.