The adversarial persuasion machine: a conversation with James Williams

The attention economy, Twitter, reverse censorship, and fighting AI for our human future

James Williams may not be a household name yet in most tech circles, but he will be.

For this second in what will be a regular series of conversations exploring the ethics of the technology industry, I was delighted to be able to turn to one of our current generation’s most important young philosophers of tech.

Around a decade ago, Williams won the Founder’s Award, Google’s highest honor for its employees. Then in 2017, he won an even rarer award, this time for his scorching criticism of the entire digital technology industry in which he had worked so successfully. The inaugural winner of Cambridge University’s $100,000 “Nine Dots Prize” for original thinking, Williams was recognized for the fruits of his doctoral research at Oxford University, on how “digital technologies are making all forms of politics worth having impossible, as they privilege our impulses over our intentions and are designed to exploit our psychological vulnerabilities in order to direct us toward goals that may or may not align with our own.” In 2018, he published his brilliantly written book Stand Out of Our Light, an instant classic in the field of tech ethics.

In an in-depth conversation by phone and email, edited below for length and clarity, Williams told me about how and why our attention is under profound assault. At one point, he points out that the artificial intelligence which beat the world champion at the game Go is now aimed squarely — and rather successfully — at beating us, or at least convincing us to watch more YouTube videos and stay on our phones a lot longer than we otherwise would. And while most of us have sort of observed and lamented this phenomenon, Williams believes the consequences of things like smartphone compulsion could be much more dire and widespread than we realize, ultimately putting billions of people in profound danger while testing our ability to even have a human will.

It’s a chilling prospect, and yet somehow, if you read to the end of the interview, you’ll see Williams manages to end on an inspiring and hopeful note. Enjoy!

Introduction and background

Greg Epstein: I want to know more about your personal story. You grew up in West Texas. Then you found yourself at Google, where you won the Founder’s Award, Google’s highest honor. Then at some point you realized, “I’ve got to get out of here.” What was that journey like?

James Williams: This is going to sound neater and more intentional than it actually was, as is the case with most stories. In a lot of ways my life has been a ping-ponging back and forth between tech and the humanities, trying to bring them into some kind of conversation.

It’s the feeling that, you know, the car’s already been built, the dashboard’s been calibrated, and now to move humanity forward you just kind of have to hold the wheel straight

I spent my formative years in a town called Abilene, Texas, where my father was a university professor. It’s the kind of place where you get the day off school when the rodeo comes to town. Lots of good people there. But it’s not exactly a tech hub. Most of my tech education consisted of spending late nights, and full days in the summer, up in the university computer lab with my younger brother just messing around on the fast connection there. Later when I went to college, I started studying computer engineering, but I found that I had this itch about the broader “why” questions that on some deeper level I needed to scratch. So I changed my focus to literature.

After college, I started working at Google in their Seattle office, helping to grow their search ads business. I never, ever imagined I’d work in advertising, and there was some serious whiplash from going straight into that world after spending several hours a day reading James Joyce. Though I guess Leopold Bloom in Ulysses also works in advertising, so there’s at least some thread of a connection there. But I think what I found most compelling about the work at the time, and I guess this would have been in 2005, was the idea that we were fundamentally changing what advertising could be. If historically advertising had to be an annoying, distracting barrage on people’s attention, it didn’t have to anymore because we finally had the means to orient it around people’s actual intentions. And search, that “database of intentions,” was right at the vanguard of that change.

The adversarial persuasion machine

Greg: So how did you end up at Oxford, studying tech ethics? What did you go there to learn about?

James: What led me to go to Oxford to study the ethics of persuasion and attention was that I didn’t see this reorientation of advertising around people’s true goals and intentions ultimately winning out across the industry. In fact, I saw something really concerning happening in the opposite direction. The old attention-grabby forms of advertising were being uncritically reimposed in the new digital environment, only now in a much more sophisticated and unrestrained manner. These attention-grabby goals, which are goals that no user anywhere has ever had for themselves, seemed to be cannibalizing the design goals of the medium itself.

In the past advertising had been described as a kind of “underwriting” of the medium, but now it seemed to be “overwriting” it. Everything was becoming an ad. My whole digital environment seemed to be transmogrifying into some weird new kind of adversarial persuasion machine. But persuasion isn’t even the right word for it. It’s something stronger than that, something more in the direction of coercion or manipulation that I still don’t think we have a good word for. When I looked around and didn’t see anybody talking about the ethics of that stuff, in particular the implications it has for human freedom, I decided to go study it myself.

Greg: How stressful a time was that for you, when you were realizing that you needed to make, or might be making, such a big change?

James: The big change being shifting to do doctoral work?

Greg: Well that, but really I’m trying to understand what it was like to go from a very high place in the tech world to becoming essentially a philosopher critic of your former work.

James: A lot of people I talked to didn’t understand why I was doing it. Friends, coworkers, I think they didn’t quite understand why it was worthy of such a big step, such a big change in my personal life to try to interrogate this question. There was a bit of, not loneliness, but a certain kind of motivational isolation, I guess. But since then, it’s certainly been heartening to see many of them come to realize why I felt it was so important. Part of that is because these questions are so much more in the foreground of societal awareness now than they were then.

Liberation in the age of attention

Greg: You write about how when you were younger you thought “there were no great political struggles left.” Now you’ve said, “The liberation of human attention may be the defining moral and political struggle of our time.” Tell me about that transition, intellectually or emotionally or both. How good did you think the world was back then, and how concerned are you now?

What you see a lot in tech design is essentially the equivalent of a circular argument about this, where someone clicks on something and then the designer will say, “Well, see, they must’ve wanted that because they clicked on it.”

James: I think a lot of people in my generation grew up with this feeling that there weren’t really any more existential threats to the liberal project left for us to fight against. It’s the feeling that, you know, the car’s already been built, the dashboard’s been calibrated, and now to move humanity forward you just kind of have to hold the wheel straight and get a good job and keep recycling and try not to crash the car as we cruise off into this ultra-stable sunset at the end of history.

What I’ve realized, though, is that this crisis of attention brought upon by adversarial persuasive design is like a bucket of mud that’s been thrown across the windshield of the car. It’s a first-order problem. Yes, we still have big problems to solve like climate change and extremism and so on. But we can’t solve them unless we can give the right kind of attention to them. In the same way that, if you have a muddy windshield, yeah, you risk veering off the road and hitting a tree or flying into a ravine. But the first thing is that you really need to clean your windshield. We can’t really do anything that matters unless we can pay attention to the stuff that matters. And our media is our windshield, and right now there’s mud all over it.

Greg: One of the terms you either coin or use for the situation we find ourselves in now is the “Age of Attention.”

James: I use this phrase “Age of Attention” not so much to advance it as a serious candidate for what we should call our time, but more as a rhetorical counterpoint to the phrase “Information Age.” It’s a reference to the famous observation of Herbert Simon, which I discuss in the book, that when information becomes abundant it makes attention the scarce resource.

Much of the ethical work on digital technology so far has addressed questions of information management, but far less has addressed questions of attention management. If attention is now the scarce resource so many technologies are competing for, we need to give more ethical attention to attention.

Greg: Right. I just want to make sure people understand how severe this may be, how severe you think it is. I went into your book already feeling totally distracted and surrounded by totally distracted people. But when I finished the book, and it’s one of the most marked-up books I’ve ever owned, by the way, I came away with a sense of acute crisis. What is being done to our attention is affecting us profoundly as human beings. How would you characterize it?

James: Thanks for giving so much attention to the book. Yeah, these ideas have very deep roots. In the Dhammapada the Buddha says, “All that we are is a result of what we have thought.” The book of Proverbs says, “As a man thinketh in his heart, so is he.” Simone Weil wrote that “It is not we who move, but images pass before our eyes and we live them.” It seems to me that attention should really be seen as one of our most precious and fundamental capacities, that cultivating it in the right way should be seen as one of the greatest goods, and that injuring it should be seen as one of the greatest harms.

In the book, I was interested to explore whether the language of attention can be used to talk usefully about the human will. At the end of the day I think that’s a major part of what’s at stake in the design of these persuasive systems, the success of the human will.

“Want what we want?”

Greg: To translate those concerns about “the success of the human will” into simpler terms, I think the big concern here is, what happens to us as human beings if we find ourselves waking up in the morning and going to bed at night wanting things that we really only want because AI and algorithms have helped convince us we want them? For example, we want to be on our phone chiefly because it serves Samsung or Google or Facebook or whomever. Do we lose something of our humanity when we lose the ability to “want what we want?”

James: Absolutely. I mean, philosophers call these second-order volitions, as opposed to just first-order volitions. A first-order volition is, “I want to eat the piece of chocolate that’s in front of me.” But the second-order volition is, “I don’t want to want to eat that piece of chocolate that’s in front of me.” Creating those second-order volitions, being able to define what we want to want, requires that we have a certain capacity for reflection.

What you see a lot in tech design is essentially the equivalent of a circular argument about this, where someone clicks on something and then the designer will say, “Well, see, they must’ve wanted that because they clicked on it.” But that’s basically taking evidence of effective persuasion as evidence of intention, which is very convenient for serving design metrics and business models, but not necessarily a user’s interests.

AI and attention

Greg: Let’s talk about AI and its role in the persuasion you’ve been describing. You talk a number of times about the AI behind the system that beat the world champion at the board game Go. I think that’s a great example: that same kind of AI has been deployed to keep us watching YouTube longer, and billions of dollars are literally being spent to figure out how to get us to look at one thing over another.

James: When sophisticated systems of machine learning are deployed in the service of a business model that is adversarial against people’s interests, it’s going to result in a much more sophisticated adversary. It’s like combining a Sherlock Holmes, who can understand a million things about you at a glance, with the most persuasive person in the world, whoever that might be.

Greg: Well, if you’re looking for somebody to represent the id to Sherlock Holmes’ ego, Donald Trump has the charisma and sales ability to persuade people to allow themselves to be distracted. He’s our Distractor in Chief, you could say. You cite a study that suggests he may be worth about $2 billion to Twitter alone, or about a fifth of its overall value. How much do you think that means a site like Twitter is actually incentivized to keep us distracted?

If a platform or service is operating under the rules of the attention economy, then whatever gets attention will be valuable.

James: If a platform or service is operating under the rules of the attention economy, then whatever gets attention will be valuable. If it’s the person who gets more attention than anyone else in the world, that’s going to be extraordinarily valuable. It also means other people competing for attention in that environment will learn from him and emulate him. This is particularly disconcerting in view of how completely Twitter has now become the essential infrastructure, and increasingly the cognitive style, of journalism.

Greg: Really of our national conversation among intellectuals these days.

James: I’m not even sure public intellectuals can exist in the age of Twitter. They certainly can’t exist on Twitter. It’s probably one of the worst platforms you could design if you wanted to foster useful conversation between people. Every day I see smart people I know becoming more like Twitter.

Greg: Next week I’ll publish my conversation with Taylor Lorenz, a reporter for The Atlantic on internet culture. She’s heavily critical of the structure of Twitter and Jack Dorsey’s work on it.

James: It’s almost a parodic end-state of the process of tabloidization. Tabloid newspapers were initially designed to give you smaller and smaller articles so you could read more and more of them. But then of course the form inevitably shaped the content and effect, as it always does.

Reverse censorship and the real dangers of collective distraction

Greg: I want to keep pushing on the idea there are real dangers our collective distraction can produce. Because you are a very, let’s say, mild-mannered kind of person. You’re a-

James: Depends on how much Twitter I use.

Greg: Touché. But to me, you come across as such a peaceful, philosophical guy, that I wonder if people sometimes miss how dire your message is. For example, Donald Trump goes to North Korea to distract, essentially, from the Michael Cohen proceedings in Congress. That’s one thing. You can debate that and how seriously we should take it. But something you can’t debate: you pointed out the Chinese government creates 448 million posts on social media per year as part of strategically trying to distract its own people. That sounds to me like a major intervention to keep people, frankly, from rebelling.

James: Yeah. I mean, censorship is commonly talked about in terms of the removal of information that would otherwise be accessible. I think in an information-scarce environment, that’s what censorship has usually looked like.

But when information is abundant, it’s much easier and even more politically palatable to achieve the same effect not by removing information but by redirecting people’s attention. I think in that particular study, if I’m not mistaken, they called it “strategic distraction” or “reverse censorship.” One way of viewing censorship is as the management of information. Another way is as the management of attention.

Greg: Then we get so overwhelmed that we want an authority, even if the authority is not necessarily acting in our best interest. As I got towards the end of your book, there really seemed to be a strong connection between attention manipulation and militarism. You talk about how in the last two decades, the share of Americans who approve of military rule has doubled, to one-sixth of the population, and, ominously, 35% of rich young Americans now approve of military rule in this country. To what extent do you think these sorts of technologies we’re discussing have fueled that rise in approval of military rule?

Advertising isn’t bounded in the way it used to be. It’s now the entire environment.

James: It’s difficult to comment on that particular metric, in part because it wasn’t my research and in part because drawing causal lines between this sort of stuff is inherently tricky.

The broader point I was making in the book was that we’ve been seeing a rise in populism and authoritarianism, as well as a decrease in commitment to democracy as an ideal, across Western liberal democracies with diverse cultures and economic situations, whose common media landscape is one of the few things they do share.

We seem to be seeing in the West an increased preference for a power-based order over an ideals-based order. Part of combatting this has to do with education, and helping people understand why we value certain ideals. Not to mention certain institutions, especially those that exist to prevent world wars.

But another part of it has to be, and I think this is where we don’t yet have a good drawing of the lines in our understanding, about asking which specific media dynamics are threatening which specific preconditions of democracy. Where and how are certain forms of design undermining our capacity to value an ideals-based order over a power-based order, or to value charitability and the messiness of the freedom of speech in an open society over the regressive, tribalistic style of Twitter mob justice and public shaming?

Minimum viable trust

Greg: Also, if it just so happens we are in a crisis where in the next 10 or 20 years we need to get our act together on emissions and carbon in the atmosphere and all of that, or we’re going to face a severe degradation of our way of life… well, it’s certainly inconvenient if we currently lack the ability to have a coherent conversation with one another, isn’t it?

James: Absolutely. I mean, I think living in a society together requires some amount of minimum viable trust, and a sense that we’re somehow part of the same story. I just think that if you wanted to design a set of media platforms to undermine that kind of trust, that kind of common story, you could do a lot worse than what we’ve got now.

I just don’t see any media dynamics right now that are working in the direction of mutual understanding, in the direction of reflection, restraint, charitability, autonomy, or any of the other qualities the maintenance of a free society requires. Where exactly are our forms of media shoring up civilization? They all seem to be working in the other direction. I would love to be proven wrong about this.

What’s next?

Greg: Your book proposes some fascinating potential solutions to all of this. One little detail I loved is the idea of a PBS for digital media. Another is the idea of a “Designer’s Oath,” a kind of alternative, for designers, to the Hippocratic Oath for doctors. We won’t have time in this interview to get into all of them; I want people to read the book. But for one thing, you talk about the huge need for reform of the advertising industry.

Just like every outrage cascade and shame-fest on Twitter, leaning into status-oriented projects of blame only feeds the attention economy further and sets us all back.

James: Well, I don’t think there’s any good definition of what advertising is anymore. This is a key question governments and policymakers ought to be clarifying. One way of putting the question is, [which] forms of psychological and behavioral manipulation should we consider to be acceptable as business models?

As a media dynamic, advertising [in the past] was always an exception to some rule in the medium. It was the commercial break in the program. Or the billboard on the side of the road. Or the little bounded box in the corner of the newspaper page.

What’s happened now is that whole forms of media are designed according to the incentive structures and logic of advertising, so that we have entire platforms, entire attentional environments that are intrinsically persuasive towards goals that serve those incentives.

Advertising isn’t bounded in the way it used to be. It’s now the entire environment. I think South Park got this right, when one or two seasons ago they ran an episode about how everything is an ad. It’s absolutely true. But then if everything’s an ad, what counts as an ad? I think we just need a whole new language for talking about this new persuasive environment.

But it’s just really interesting to me, and also disconcerting, that we can’t even get our language about the basic things right. For instance, I’ve always disdained the term “social media” as a way of categorizing Facebook and Twitter because it suggests that those platforms are actually designed to serve some ultimate social goal, as opposed to it being just a kind of side effect. I really think we’ve missed a linguistic opportunity by not just calling advertising companies “advertising companies.”

The “tech industry” today

Greg: Facebook is an advertising company, essentially?

James: Of course.

Greg: And we should just call it that.

James: I mean, that’s what [many “tech companies”] are. Oil companies sell oil, ketchup companies sell ketchup. Advertising companies sell advertising.

Greg: That is why I decided to study and write about the ethics of technology: because so many tech companies are really something beyond just “tech companies,” and really the tech industry as a whole is more than just an “industry.” I mean, if it were just an industry…I’m a chaplain. I’m not going to stop being a chaplain so I can cover the coal industry or the steel industry or the poultry industry. Tech may have started out as an industry but it has become the fabric of our lives.

James: Yeah, but with other industries, how we name them is based on what they produce, mine, sell, or whatever. With Facebook or Twitter, I suppose we could also call them attention merchants, in Tim Wu’s phrase. I mean, I actually think precision with this stuff is really important. It’s like that line in the Tao Te Ching: “Naming is the origin of all particular things.”

The phrase “tech industry” seems to me mostly to be shorthand for newly formed companies that have somehow internalized the logic of digital technology on a fundamental level in their culture and business and organization, but it really has nothing to do with … I mean, there are companies that do sell technology, like servers or routers or software. But with Facebook, Twitter, et cetera, even if they might hack around with hardware and use software to build drills for our attention, it’s attention they ultimately sell to advertisers. It’s not even our data. I think that’s another kind of rabbit hole, too. The data capture issue, and I think “data creation” would actually be a more useful and accurate term, is merely incidental to the attentional capture and resale.

Greg: Yeah. It’s almost like they’re selling our existence.

James: Yes. Literally, yes.

Who’s to blame?

Greg: Who’s to blame? I have a painting in my office that’s sort of … It looks like shamanistic religious art, but one of the characters is asking, “Where do I go to complain?” I love it because in my days as the Executive Director of a nonprofit building nonreligious community, we didn’t believe in praying to God, or blaming him for that matter. But everybody did seem to want to find someone to blame for our various struggles. You also have this piece of the book where you say, frustratingly but wisely, that there’s no one person or one thing that we can turn to and blame all these problems on.

James: I think blame goes further than complaint. It usually has an added component of status down-ranking the person in some way as a kind of retribution. That sort of thing often, even when well-intentioned, throws us off course.

For example, I was not a fan of having Mark Zuckerberg testify in front of Congress because the entire thing was just so gleefully and transparently engineered to be a theater of apology, humiliation, groveling, and other varieties of ritualistic lowering that I knew would clarify nothing and deliver a symbolic victory at the expense of a real one. And that’s exactly what happened.

Just like every outrage cascade and shame-fest on Twitter, leaning into status-oriented projects of blame only feeds the attention economy further and sets us all back.

Tech as religion

Greg: Once again I find myself thinking religion is somehow tied to all of this, especially to the question of what technology is today, if not simply an industry selling technology. I studied religion for almost 25 years and then began looking closely at the ethics of tech, and it seems almost everyone thinking seriously about technology and ethics has, at some point, compared technology to a kind of new secular religion.

James: Yeah, to the extent that a religion is a bundle of beliefs, values, habits, activities and so on aimed at linking our lives with some perceived higher meaning, I think it’s a really useful frame.

If you were to look at our planet knowing nothing about humanity, and you were to look at what shapes people’s behavior day to day, you’d see that a smartphone is the first thing we look at when we wake up and the last thing we look at before we sleep, and that we gaze upon it and caress it all day, protecting it and worrying about it and even feeling for it in our pockets when it’s not there. I don’t think idolatry would be the right term, but there’s something approaching a kind of worship there, even if only in a procedural sense.

There’s also the way in which digital technology has moved from being a tool to being our environment, which is in a way the opposite move that religion and even culture have made, both of which were once environmental in nature but now, in a pluralistic world, can be understood as mere aspects of a person’s life.

Neil Postman, of course, wrote in Technopoly about the ways our culture has surrendered to technology. And he was writing in the early ’90s, when people were going around talking about the “culture wars,” meaning a series of left-vs-right political battles over hot-button social issues.

But if we take this view, if technology is now the new common totalizing environment, then I kind of feel like we have to say technology won the culture war. Neither side won. Technology did an end-run around both of them. You can have something that does what religion used to do in your life without consciously calling it a religion.

Greg: That’s interesting. You’re saying neither liberals nor conservatives won the culture wars, that technology beat them both, and became like a religion in the process?

James: It seems that way. I mean, whether a person is liberal or conservative, to the extent those categories are even comprehensible anymore, if you overwrite everything with a totally new environment like Twitter, both of them will still give in to outrage cascades. Both of them will still impulsively fire off something they later wish they hadn’t said. The medium may even be the only shared environment they have.

Greg: Right, so neither side really is winning by going onto that platform. They’re both serving the end of the technology, which is to have them engage.

James: Yeah. And again, it’s not to say that the ideas and beliefs are unimportant. It’s just that if religion used to be the environment in which everything happened, and not, say, one spoke off the hub of the wheel of my life, then now technology is that environment and de facto religion in which we can all agree that everything lives and moves and has its being. I think we would be hard pressed to find any other environment that’s a better candidate for this kind of de facto common religion. Again, this isn’t even to say that it’s a bad thing.

Greg: Oh, it’s bad.

James: Well, I don’t think it’s all bad actually. It seems like the question is whether we can relate to and shape it in the way we want. Right now I don’t think we can, but I bet we can get to a point where we do.

Optimism and the human future

Greg: That’s great. Actually, that’s why this column I’m going to do for TechCrunch probably won’t end up having a name, but if it did, it would share the title of the book I’m working on alongside it: “Tech Agnostic.” The idea is that if tech really is a big religion, then I’m not either a devotee or a full-on atheist in that religion. I’m more of an agnostic, because I do agree that it’s not all bad.

James: Incidentally, there’s a nice little parallel between the concept of agnosticism and this idea of technology as religion. T. H. Huxley, who coined the term agnostic, said at one point that it’s not a creed, it’s a method. That seems like a good way to understand the way we’ve been talking about technology. It’s not so much ideology as process. Or a “habit of being,” to crib a phrase from Flannery O’Connor.

Greg: Right, so what we need regarding technology is not necessarily a creed, but a method of evaluating it and living with it.

James: Exactly, right. Just like democracy is a method of evaluating and living with collective decision-making.

Greg: I love it. So you’re encouraging me to reclaim the classic, most inspiring definition of the term agnostic?

James: I think so, yeah. I mean, if you wish.

Greg: And so if technology is parallel to a religion, it’s not necessarily parallel to the way we’ve done religion in the modern world. It’s really parallel to ancient times, when religion was the water in which we as fish simply swam.

James: Yeah. It was an environment and not compartmentalized, not merely a belief system, not merely a set of habits or practices. But part of that environment was the story, and one last thing I’ll mention is that right now we have startlingly few dominant narratives about the possible human future. It’s always either the dystopian Skynet narrative, where we destroy the planet and everything goes to shit, or the utopian “let’s upload ourselves into some computer and hope no one unplugs it” nonsense. But we’ve never had utopias or dystopias, and we never will. We do need much better stories, though.

Greg: How optimistic are you about the prospects for our human future?

James: Taking the long view, I’m very bullish on the human species. I think the next several decades will probably get bumpy, maybe the next century. A great deal depends on whether or not we can find the right language to even ask the right questions. But yeah, long-term, I fully agree with Faulkner that man “will not merely endure: he will prevail.”

Greg: Very nice. I’m surprised to hear a fairly optimistic answer. Well, thank you so much!