Robot Writers Need Something To Say

Robot writers are hot. No, that statement is not a call to fix their air conditioners, much less to date them. It’s to acknowledge that pioneers in artificial intelligence (AI) and automated writing have gotten a lot of media attention (partly because the topic hits close to home: the deployments are public and noticeable, and they invoke the perennial concern about job loss due to automation).

So what does the future hold for robo-writers? Will technology leaders in natural language generation write up everything, relegating humans to mere readers? Some boosters within leading companies seem to imply that possibility, but of course that’s what boosters do; they boost and boast.

I predict that robot writers will emerge, but slowly. The main reason is that writers need to find something to say that is worth reading, whether that something is routine or consists of novel insights of lesser or greater generality. Computers can indeed cover the range from routine event summaries all the way up to novel discovery, but each competency requires work, and there’s no magic bullet, nor magic touchscreen.

To better understand the promise and limitations of robot writing, it helps to remove the specious double standards that people apply to AI-based software and to humans. One double standard consists of asserting that software can’t possibly perform X in circumstance Y, ignoring that people do it all the time. A different double standard is to imagine that if software does X, then it also can do Y, ignoring that people can’t because the skills don’t transfer.

Will robot writers write up everything? Let’s follow a single standard: Does a great and prolific science-fiction author threaten the job of a judge who writes judicial opinions? Of course not. Why not? Everybody learns to write more-or-less well; the writing itself is not what sets the judge apart.

The key ability is knowing what to say. Writing enables, but doesn’t carry the day. It’s the content that constitutes the substance of the job. Content is king, as was asserted some years ago in a different context.

Let’s apply the same standard to today’s leading automated-writing companies that base their technologies on natural language generation. What tasks do these applications carry out? They excel at summarizing and narrating recurrent events that vary in their numeric details, players, tone, and so on. Recurrent events include quarterly earnings releases, sports games, the weather, perhaps even minor local elections.
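To make the recurrent-event case concrete, here is a minimal sketch of what template-style generation over structured data looks like. The field names and phrasing rules are invented for illustration only; they are not drawn from any particular company’s system.

```python
# A minimal sketch of template-style generation over structured data.
# Field names and phrasing rules are invented for illustration only.

def earnings_recap(company: str, quarter: str,
                   revenue: float, prior_revenue: float) -> str:
    """Turn a few numeric facts into a one-sentence recap."""
    pct = 100.0 * (revenue - prior_revenue) / prior_revenue
    # A tiny content-selection rule: phrase the change by direction and size.
    if pct >= 1.0:
        change_phrase = f"up {pct:.1f}% from the prior quarter"
    elif pct <= -1.0:
        change_phrase = f"down {abs(pct):.1f}% from the prior quarter"
    else:
        change_phrase = "roughly flat versus the prior quarter"
    return (f"{company} reported {quarter} revenue of "
            f"${revenue:,.1f} million, {change_phrase}.")

if __name__ == "__main__":
    print(earnings_recap("Acme Corp", "Q2", revenue=412.0, prior_revenue=389.5))
```

Even this toy example shows where the real work lies: not in gluing words together, but in deciding which facts are worth reporting and how to characterize them.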

Doing this well isn’t easy; it takes appreciable skill. Doing it with AI software is of course even harder. But it’s not the same task as the many other tasks that involve writing: writing this opinion piece, responding to it with a rejection (not this time!), writing a mildly threatening letter to a contractor who loused up your kitchen, and so on.

Can the even-hotter AI technology of machine learning be the impetus needed to accelerate new applications of automated writing? I’m skeptical, because machine learning’s starting point is typically many historical examples, which are then analyzed to discern the key attributes that distinguish one outcome from another. Supply lots of examples of judicial opinions that are overruled by the Supreme Court, plus lots that are upheld, and get your learning algorithm to figure out how to write a good opinion? Not likely.
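For contrast, the machine-learning recipe just described amounts, in code, to something like the following: a classifier fitted to labeled historical examples, sketched here with scikit-learn on fabricated placeholder data. It can learn to predict an opinion’s likely fate; nothing in it writes a new opinion.

```python
# Sketch of the supervised-learning recipe: historical examples in,
# a discriminative model out. Nothing here generates new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: texts of past opinions and their outcomes.
opinions = [
    "The lower court correctly applied the statute ...",
    "The trial court abused its discretion in excluding ...",
    # ... many more historical examples would be needed in practice
]
outcomes = ["upheld", "overruled"]  # one label per example

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(opinions, outcomes)

# The model can score a new opinion's likely fate ...
print(model.predict(["The appellate panel erred in ..."]))
# ... but asking it to *write* a good opinion is a different task entirely.
```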

Robot writing will tend to draw on classical AI to develop the knowledge-based systems needed to automate the task of finding something to say. That means studying the task logic, mapping the space of possible outputs, adapting or devising heuristics that rank one output over another, programming it all up, and so on. This takes time and skill.
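As a rough illustration of that workflow, here is a toy knowledge-based sketch for a weather recap: hand-built domain rules propose candidate messages, simple heuristics score their newsworthiness, and the top-ranked ones are realized as text. The rules and weights are invented for illustration, not taken from any deployed system.

```python
# Sketch of a knowledge-based approach: enumerate candidate messages,
# score them with hand-built heuristics, then realize the winners as text.
# The domain rules below (a toy weather recap) are invented for illustration.

def candidate_messages(obs: dict) -> list:
    """Return (message, newsworthiness) pairs derived from domain rules."""
    candidates = []
    if obs["rain_mm"] > 20:
        candidates.append((f"Heavy rain totaled {obs['rain_mm']} mm.", 3.0))
    if abs(obs["temp_high"] - obs["normal_high"]) >= 5:
        direction = "above" if obs["temp_high"] > obs["normal_high"] else "below"
        candidates.append(
            (f"The high of {obs['temp_high']}°C was well {direction} normal.", 2.0))
    candidates.append((f"The high reached {obs['temp_high']}°C.", 1.0))  # fallback
    return candidates

def realize(obs: dict, max_sentences: int = 2) -> str:
    """Pick the most newsworthy messages and join them into a short recap."""
    ranked = sorted(candidate_messages(obs), key=lambda c: c[1], reverse=True)
    return " ".join(msg for msg, _ in ranked[:max_sentences])

if __name__ == "__main__":
    print(realize({"rain_mm": 27.4, "temp_high": 31, "normal_high": 24}))
```

The hard part is plainly the rules and the rankings, not the sentence assembly, and each new domain demands its own.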

Robot writers really need something worth saying. Just like people, they’ll have to pay their dues.