GPT-3 Grammar and Style Editing in Practice

GPT-3 enables advanced grammar and style edits, tone adjustment, coherence improvements, and format transformations across text without explicit training as a dedicated grammar tool.

One of the things that became apparent early on with GPT-3 was that there were a lot of smaller tasks—tedious, and not exactly mind-blowing on their own—that suddenly looked different when you considered the scale. Individually, they're "nice to have." In aggregate, they add up to an enormous amount of potential.

A simple example is grammar checking. We’ve had grammar checking inside word processors for almost as long as we’ve had word processors. At the most basic level, you can keep a library of commonly used words and flag anything that doesn’t match. But once you move beyond spelling and into more complicated grammar correction, you’re forced to build systems that understand a huge number of rules and nuances—proper structure, edge cases, style conventions, and all the messy exceptions in real language.
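The word-list approach described above can be sketched in a few lines. A minimal sketch, assuming a tiny sample vocabulary (a real checker would load a full dictionary):

```python
# Minimal word-list spell checker: keep a set of known words and flag
# anything that doesn't match. The vocabulary here is a toy example.
VOCABULARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def flag_unknown_words(text):
    """Return the words in `text` not found in the vocabulary (case-insensitive)."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return [w for w in words if w and w not in VOCABULARY]
```

This catches misspellings like "quik", but it has no notion of structure, so it is exactly the kind of system that stalls once you move past spelling and into real grammar.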

Over the decades, those systems evolved a lot. Modern tools can even adjust to how much help you want: light correction, stronger rewrites, or suggestions for rephrasing. GPT-3 was remarkably good at this kind of task: you could give it a paragraph, and it would correct it and make it sound better.

But what really stood out was that it could do things traditional grammar checkers struggled with or couldn’t do at all—especially at the “whole passage” level:

  • Change the tone easily (make something friendlier, more formal, more direct)
  • Take something rambling and tighten it into something focused
  • Clean up writing that’s barely coherent, not just a little messy
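All three of these tasks reduce to the same mechanism: a plain-language instruction placed alongside the text. A minimal sketch of how such prompts can be assembled; the instruction wording and prompt layout here are illustrative assumptions, not the exact prompts from these demos:

```python
# Sketch of prompt-driven editing in the GPT-3 style: tone changes,
# tightening, and cleanup all use the same mechanism, with only the
# plain-language instruction changing. Instructions below are examples.

def build_edit_prompt(instruction, text):
    """Combine a plain-language editing instruction with the passage to edit."""
    return f"{instruction}\n\nOriginal text:\n{text}\n\nRewritten text:\n"

friendly = build_edit_prompt(
    "Rewrite the following text in a friendlier, more informal tone.",
    "Your request has been denied. Do not contact this office again.",
)
tightened = build_edit_prompt(
    "Condense the following rambling text into two focused sentences.",
    "So basically what I was trying to say, more or less, is that...",
)
```

The completed prompt would then be sent to the model as an ordinary completion request; nothing about the model itself changes across the three tasks above.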

One of the demos I kept coming back to was taking text that was barely coherent—full of misspellings and misused words, the kind of errors that aren't just bad spelling but wrong word choices that break the meaning—and having GPT-3 clean it up into something readable. It wasn't just correcting typos; it was recovering intent and producing a clear version, something no grammar system I'd used could do.

And the funny part is: GPT-3 was never explicitly trained to be a "grammar improvement system." It was simply one of the many abilities that emerged naturally from learning so much about language.

Another personal favorite example was what I think of as format shifting: taking something written as a script and turning it into something you might find in a novel, or taking a passage from a novel and turning it into something that looks like a script. I liked playing around with my own writing—seeing what changed when you converted from one form to the other. Among writers, one of my most popular prompts was asking the model to turn a piece back and forth between those two formats.

To be clear, doing a proper novelization—or turning a novel into a script—is more than reformatting a paragraph. There’s an entire art to adaptation: pacing, scene selection, interiority, what you show versus what you say, and so on. But it was still fascinating to see how naturally you could shift text between these forms, and how easily GPT-3 could do that “first pass” transformation.

That broader category—aside from summarization and translation—is what I ended up thinking of as “mutation” (for lack of a better word): transforming text while preserving the underlying meaning. It’s the kind of thing that would be practically impossible to reproduce by hand-coding all the language rules you’d need to make it work across topics, styles, and contexts.

One neat test I found was simply having the model break a large block of text into paragraphs. This sounds trivial until you try to define it. Paragraphs are subjective. In theory, they’re where one complete thought ends and another begins—but people think differently, and paragraph flow can be just as important as word choice. As a novelist, I like shorter, punchier paragraphs. Other people like paragraphs that run for hundreds of words and still maintain a coherent flow.

So if you ask a model to break something into paragraphs, it doesn’t mean it’s going to do it exactly the way you would. But it was still an important realization: these models could read a large passage, pick up on independent thoughts, and decide, “Okay—this is a paragraph. Let’s break here. This is another paragraph. Let’s stop here.”
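Because paragraphing is subjective, the prompt itself can carry a style preference. A minimal sketch under the same prompt-as-instruction assumption as before; the wording is illustrative, not an exact prompt from these experiments:

```python
# Paragraph breaking is subjective, so the request can state a preference:
# shorter, punchier paragraphs versus long, flowing ones.

def paragraph_prompt(text, style="short, punchy paragraphs"):
    """Ask the model to insert paragraph breaks, with a stated style preference."""
    return (
        f"Break the following text into paragraphs, favoring {style}. "
        f"Insert a blank line wherever one complete thought ends "
        f"and another begins.\n\n{text}"
    )
```

Changing the `style` argument is the whole adjustment: the same request serves a novelist who wants punchy paragraphs and a reader who prefers ones that run for hundreds of words.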