ChatGPT told Charlie Brooker exactly how not to write a ‘Black Mirror’ episode

“Black Mirror” has been one of the most consistently on-target (and consistently frightening) satires of humanity’s relationship with technology. So consistent, in fact, that ChatGPT was able to write an episode of it that series creator Charlie Brooker hated.

In an interview with Empire, the writer and producer of numerous satirical and incisive programs explained that, though he was curious, his experience with ChatGPT left much to be desired.

He asked it to write an episode of “Black Mirror,” which itself sounds like the beginning of a “Black Mirror” episode, until you see the result:

“It comes up with something that, at first glance, reads plausibly, but on second glance, is shit,” he said. “Because all it’s done is look up all the synopses of ‘Black Mirror’ episodes, and sort of mush them together. Then if you dig a bit more deeply you go, ‘Oh, there’s not actually any real original thought here.'”

Indeed, as Vannevar Bush once wrote, “For mature thought there is no mechanical substitute.” Even Ada Lovelace said much the same as she worked on the first computational systems ever made. The same holds true today, though of course the computers are rather more convincing in their attempts to counterfeit intelligence.

But Brooker has a talent for wringing insight out of the commonplace, and the way ChatGPT failed gave him a brain wave. Essentially, if this thing was generating the most predictable pablum possible based on the episodes he’d actually shipped, it was a great way to tell him what not to write.

“I thought, ‘I’m just going to chuck out any sense of what I think a ‘Black Mirror’ episode is.’ There’s no point in having an anthology show if you can’t break your own rules. Just a sort of nice, cold glass of water in the face,” he said.

It was never going to be a success, asking a pattern generator to create an original episode of anything, let alone a show cherished for its unexpected and frequently unsettling takes. But that doesn’t mean the language model has no value: As a sort of Goofus figure, or George in “Seinfeld,” if its every instinct regarding original ideas is wrong, then the opposite would have to be right.

Of course, it’s not quite that simple, but it is an interesting example of how this impressive yet ultimately sterile intellectual pretender (ChatGPT, not Brooker) can find a role even when it is completely unfit for the purpose.