‘Capitalism generates a lot of wealth depending on the situation’

In an extended interview, writer Ben Tarnoff weighs the ethics of technology

Ben Tarnoff is a columnist at The Guardian, a co-founder of tech ethics magazine Logic and arguably one of the world’s top experts on the intersection of tech and socialism.

But what I think you really need to know by way of introduction to the interview below is that reading Tarnoff and his wife Moira Weigel might be the closest you can get today to following the young Jean-Paul Sartre and Simone de Beauvoir in real time.

In September, Tarnoff published a Guardian piece, “To decarbonize we must decomputerize,” in which he argued for a modern Luddism. I’ve casually called myself a Luddite online for many years now.

But I wouldn’t previously have considered writing much about it online, because who in this orbit could possibly identify? Turns out Tarnoff, a leading tech world advocate for Bernie Sanders, does. Which made me wonder: Could Luddism ever become the next trend in Silicon Valley culture?

Of course, I then reviewed exactly who the Luddites actually were and thought, “aha.” Maybe I’ve finally found the topic and the interview that really truly will get me fired from my role as TechCrunch’s ethicist-in-residence; talking to a contemporary tech socialist about the people who famously destroyed machinery because they didn’t feel that it was ethical, humane or in service of their well-being doesn’t necessarily scream “TechCrunch,” does it?

So I began my interview by praising not only his piece on Luddism but several other related pieces he’s written and by asking (with tongue only semi-in-cheek) to please confirm that at least it’s a peaceful Luddism for which he is calling.

Ben Tarnoff (Photo by Richard McBlane/Getty Images for SXSW)

Tarnoff: Thanks for reading the pieces. I really appreciate it.

The historical Luddites were workers in early 19th century England who smashed machinery because the machinery being introduced was threatening their livelihoods. [Machinery] was threatening to deskill their labor or displace it entirely. So, [Luddism] was a very rational response to management coming in and trying to cut costs by pushing out workers, which is a dynamic we’re certainly very familiar with today, [and] we see throughout the history of capitalism. In that sense, Luddism is not really the rejection of technology as such because technology as such doesn’t actually exist. It’s not all that useful of an abstraction to talk about technology.

TechCrunch: Or it’s so ubiquitous as to not really exist, right? I mean, cultivating fire to cook food is technology.

Precisely. Almost everything we’re looking at in this room could be characterized as a technology. That pen, this computer, the table, the chair and so on. Such a broad classification might be useful depending on the kind of theoretical work we’re doing. But when you think of it in terms of the political stakes, to say someone is anti-technology or pro-technology, it’s essentially meaningless.

And anti-technology is what most people think of the Luddite as being, right? That’s what’s become the popular understanding of the word; most people really don’t even think about it beyond that sense. I certainly didn’t before reading your piece.

Not coincidentally, the popular perception and definition of Luddism is derived from the impression its opponents tried to produce in the press. We know about Luddism primarily through the enemies of Luddism, who had every incentive to characterize the Luddites as a bunch of lunatics who were against all forms of progress, another fairly capacious and somewhat meaningless abstraction.

To review: The Luddites were a movement in the second decade of the 19th century in England at a time when, as you’ve written, capitalism was first achieving liftoff. These early milling technologies were enabling certain people to accumulate more than had really been possible before in terms of resources, wealth, comfort, etc. And the Luddites, in a sense, were saying, “not on our backs.” Is that consistent with how you see it?

It is — that’s a good way to characterize it. As you’re describing, capitalism is first emerging probably around the 15th to 16th centuries, but finds its industrialized form in the 18th, 19th centuries. So, you can think of the Luddites as being present at the creation of industrial capitalism, which is happening primarily in England. And as you mentioned, that period is characterized by hockey-stick growth. An explosion of wealth creation. An explosion of labor productivity.

And to clarify: There had always been a royal elite — people who were literally kings, queens, princes, princesses, etc., who had vastly more resources than everyone else around them. But what percentage of the people would you say you’re talking about here? How many additional people beyond that early royal court class had accumulated hockey-stick levels of wealth by that point?

Well, one of the crucial distinctions between the king and queen era — what we might call the feudal era — and the capitalist one, is that the capitalist era is capable of producing self-sustaining growth. In other words, it’s capable of generating growth that continues to grow. And that’s why when you look at, say, the historical data on GDP growth, you see the hockey stick happen around the time of industrial capitalism. In the feudal era, as you mentioned, there are certainly segments of the population that enjoy a great degree of relative wealth and privilege. But society as a whole is not capable of producing wealth at the rate, the scale, that capitalism can. So, capitalism is the best wealth-generating machine in human history. It’s the most dynamic form of social organization in human history.

The question we always have to append to that statement is, how is that wealth distributed? And one could easily make the case that the status of, let’s say, a manual worker in industrial England under industrial capitalism is better than that of a subsistence farmer in pre-capitalist England. But that’s not a very satisfying observation, right? This is often where the apologists of capitalism put the emphasis: “Well, isn’t the manual worker in a Shenzhen factory better off than their parents who maybe were subsistence farmers in the countryside?” And the answer is not always yes, because there are different types of domination, different types of immiseration. But even if the answer is yes, does that mean we stop there?

That’s a critical point. And I got the sense from a different article of yours, for Logic, the magazine you and a few others co-founded, which by the way, for anybody reading this, is a must-subscribe.

Thank you. I appreciate that.

Yeah. Absolutely. If people really want to explore tech ethics in depth, they should definitely read my columns. I’d like that. But they need to subscribe to your publication; preferably get themselves a hard copy.

Thank you.

But anyway, as you write in that piece, which you called “From Manchester to Barcelona,” capitalism, and even more so the highly technological capitalism of recent decades, produces so many of these personal experiences: “I saw where my family started out and I see where we are now, and there’s so much advancement that of course inequality has to be justified, because look at all these families like mine.”

This is my family’s story, too. My mother was a refugee. She’s literally listed for me all of the possessions that she came here with, and it’s not a very long list.

And here I am sitting in a private conference room of the Collaborative Commons at Harvard’s Smith Campus Center, talking to you. And so many others, like Nitin Nohria, the current Dean of Harvard Business School, experience and then tell these stories. And so they and their listeners assume everything must be relatively okay — okay enough. But those stories can draw so much of our attention that we ignore the suffering and marginalization of an extraordinary number of people.

Capitalism generates a lot of wealth depending on the situation. Obviously it varies quite a bit. There are certainly moments in history where a greater share of the population can gain a degree of access to that wealth. And there are countries — we mentioned China — that have experienced a rapid growth over the past few decades in which it’s very common to meet people who have this story, which was, “I grew up starving in the countryside. Now I’m a member of China’s new middle class.” That’s absolutely right. I might add that capitalism does not have some kind of inherent tendency towards equalizing the distribution of wealth. Quite the opposite. It tends to concentrate wealth.

So, bourgeois economists typically see rising standards of living as associated with some type of internal mechanism of capitalism itself. In fact, those better distributions of wealth are almost always the result of class struggle. They’re the result of people agitating for greater access to the social product that they create in common. You see this from the very beginning of capitalism, and it takes many forms. At the height of the classical labor movement, the classical socialist movement in the late 19th century, it typically took the form of demanding a shorter work day. The eight-hour day is kind of a unifying theme of many struggles throughout the industrialized world in that period.

It’s important we keep that in the picture when we think about [the extent to which] capitalism created better distribution of wealth. The mechanism is not capitalism so much as it is intensified class struggle from those who are being excluded from that wealth, but whose labor is nonetheless producing it.

What I’m doing badly as an interviewer, and I’ll try to get us straightened out here, is conflating two big topics you write about. In addition to your piece on Luddism, which I wanted to discuss here, you’re an expert in contemporary socialism and its history.

I’m not sure I would claim the status of an expert, but it’s certainly something that I’ve written about.

You’re a columnist for The Guardian, where you have written about socialism fairly extensively. You’re a commentator to whom people look for insight into people like Jeremy Corbyn and Bernie Sanders, and what their philosophies mean. And in addition to this piece on Luddism, you co-founded Logic, a publication about technology. So, what I want to do is ask you more about how you’re envisioning this idea of Luddism, and then I’d like to talk about how you think technology and socialism come together, in your mind.

Now, about Luddism. In addition to the historical and philosophical background we’ve just covered, in your Guardian piece you discuss the research that emerged just this past year on carbon emissions caused by AI. It’s off the charts, and you start with that as your premise, noting that AI and machine learning are now emitting about as much carbon as the entire nation of South Africa. Within perhaps another decade it will be at about the level of Japan. And those efforts will keep us obsessed with producing more and more data, which will mean more and more computing. It’s to the point where within three years or so, we’ll have 28 and a half billion networked devices?

I think that was the Cisco estimate.

That’s a mainstream estimate, in other words, that we’re well on target for over 28 billion networked devices on the face of the earth within the next few years. So, just from a climate change perspective, you’re saying we’ve got to really seriously look at decomputerizing a lot of our society. And then you get into the human well-being reasons for doing so, as well.

Can you first say a bit more about why we need to decomputerize? And then I really want to know what you think doing so is going to look like.

You laid it out very well: This phenomenon (of mass digitization) has accelerated in recent years, which, as I discuss in the article, has different sources. But certainly one prominent source is very fast development in machine learning. And because machine learning has become so much better at finding patterns in data, whether that means facial recognition [or] predicting consumer preferences, that has stimulated this hunger for data from the corporate sector, from governments, and they now have a powerful incentive to acquire as much data as possible and to retain it indefinitely. It yields patterns that serve the profit motive, or in the case of state security agencies, social control.

One of the significant costs of this mass digitization boom, which has, again, largely been driven by the ML boom, is that it requires greater and greater energy inputs. And those inputs go into a variety of things. One, as you mentioned, is powering the data centers where these machine-learning models are being trained. Those data centers obviously also store the data these models are being trained on. There are also, of course, energy inputs associated with producing those billions of connected devices, in addition to other kinds of inputs, like the mining of certain materials, which often involves pretty horrific human rights abuses in places like the Congo.

So, given that whole picture, the piece asks the question, is it really reasonable to continue trying to make absolutely everything in our lives into data and to put computers absolutely everywhere? And to the extent that those are goals that are seen as desirable or inevitable, whose interest do they serve?

Whose interest do they serve, indeed? What you are talking about, again, is not, “let’s get rid of all the technology.” I think you’re saying we’re going to need to look very carefully as a society at which of these technologies we really need and which ones we don’t. In your writing on Luddism, you use abolition as a kind of metaphor for what we should do with certain technologies. You’re saying people need to look at which forms of tech need to be abolished.

Exactly. There are a number of cases that I point to where organizations and different communities are taking a stand against certain types of technologies. In Los Angeles, a group called the Stop LAPD Spying Coalition has successfully pushed the LAPD to abandon certain so-called predictive policing programs. We also have gathering momentum around banning the use of facial recognition software by public agencies, which has caught on in San Francisco and in Somerville, Mass., nearby.

Have you read about Atlantic Towers in Brooklyn?

Yes. Yeah. That’s another case where, again, organizers on the ground are coming up with ways to push back against these invasive forms of algorithmic control. I think those organizers are primarily motivated by the social harms that these technologies inflict, rightly so, but there’s also an ecological component. So when organizers in Los Angeles are fighting something like predictive policing, they are also advancing certain ecological goals, because they’re trying to constrain the use of machine learning, constrain the spread of ubiquitous smartness, of mass digitization, in ways that have positive ecological effects.

So there’s an intersection of the goals and concerns of the environmental movement and the social justice movement and the tech ethics movement, if that can be described as such?

Yeah.

Now, let me ask you to speculate a little bit.

Please.

I just got back from the Disrupt conference put on by my publication TechCrunch. Every year for quite a number of years now, this has been a gathering of people to think about the startup scene in Silicon Valley and around it, and what’s the hottest new thing in tech, what should people invest in, etc., etc.

You’re saying “slow your roll, pump the brakes, some of that stuff really needs to not exist. Other parts of it really need to be rolled back.” So, help me envision a world where Luddism takes hold in a positive and healthy sense, from your perspective: What kinds of technologies are we using? What kinds of technologies do we want more of? What are some of the ways in which people who have these gifts and skills for creating, designing and implementing technologies that we have in the world today could positively invest their energy in things we would actually need?

We might begin by asking who gets to make the decisions about how technologies are built and implemented. Currently, that’s happening within a VC-dominated framework. Most of the founders presenting at Disrupt, I imagine, are people who have or are seeking VC funding. And the structure of VC, which is really just an intensified form of other kinds of capitalist investment, is to demand very, very high levels of return on a fairly short timetable. Companies are expected to reach a certain scale very quickly in order to pay out the investors, most of whom tend to be very rich: major family investors. So, if we were just to step back and look at how our technology is currently created and implemented, I think we could conclude — and I suspect folks at Disrupt would agree with this — that the profit motive is the primary force determining how technology is developed. For a long time, many people running the tech industry saw no daylight between pursuing the profit motive and fulfilling the social good.

Improving the lives of users, as they say. Josh Constine, an editor-at-large at TechCrunch, often talks about fighting for the user.

Right. And I think one of the core elements of the ideology of the folks who run the industry, but also of the people who broadly compose the ruling class in capitalist societies in general, is to propose that profit-making is indistinguishable from, and indispensable to, maximizing social good. And what we’ve seen from the tech industry in particular over the past few years, although of course we see this in other industries as well, is that, in fact, those things are further apart than many people, at least in the upper rungs of these industries, thought. So, in fact, there is a contradiction between what’s good for the user and what’s good for Mark Zuckerberg. Certain things are good for the user, in terms of the usability of the site, preserving their privacy, preserving the integrity of their communications from state security agencies, that are not particularly good for the investors and executives at Facebook.

A lot of folks who don’t have particularly radical politics can see that there is, in fact, some distance between these two. This is a long way of answering your question, but it’s an important framing because it helps us to the next question, which is: If we simply try to maximize social good, what’s good for the user, what’s good for society as a whole, then what kinds of environments would we want to develop where a more democratic approach to creating and implementing technology could take place?

If it’s not Sand Hill Road, what’s our alternative? Any time we’re talking about transformative social change, the answers are going to feel a little underwhelming because we’re kind of creating this for the first time. But there are building blocks of what this new world could look like. One example, again, would be the very important work of algorithmic resistance these different community organizations are building.

Another example, to speak more concretely to your original question, would probably be the tradition of participatory design, or what’s sometimes known as co-design, which is a way of thinking about how to build and implement technologies that originally arose from the Scandinavian Labor Movement in the 1970s. At that point, you had very, very powerful unions like the Norwegian Iron and Metalworkers Union that were demanding a say in how new technologies were being introduced into their workplace, and even a say in how those technologies were being designed. So, this is, again, a unique moment in capitalist history —

What technology, specifically, are we talking about? What products were they building?

There are a lot of different examples. One, which I don’t think involved the Norwegian Iron and Metalworkers Union, concerned how new data systems, essentially early forms of computer databases, were being introduced into a tire factory in Norway in the ’60s and ’70s. One of the key figures in the development of participatory design is a guy named Kristen Nygaard, a famous figure in the history of computer science. He helped create object-oriented programming, a very important development in the history of software. Nygaard becomes increasingly concerned about the role technology is playing in producing inequality. He becomes an advisor to the Scandinavian Labor Movement and helps them come up with so-called technology agreements, under which unions can have veto power over the introduction of certain technologies and, in some cases, can actually have a seat at the table in designing these new technologies.

When you think about how a technology is designed, whether at Facebook or, let’s say, with industrial automation robots, engineers are not talking to the workers, for obvious reasons. They’re talking to management, right? So, what those unions in the ’70s were trying to reimagine is, what if we could develop a dialogue with the engineers who were building the technologies? And instead of management having machines designed to its specifications in terms of what will deskill labor, what will displace labor, what will cut costs — what if we had engineers building machines that could actually enhance the quality of our work, enhance the satisfaction that we took from that work?

That’s one tradition we could point to. Another is the history of disability-rights activism. Think about the extraordinary success disability activists have had in this country in improving accessibility through fights like the one over the ADA. Now, that’s a fight that extends to improving accessibility online. And that’s a model of how we might democratize our approach to technology, [with] people saying, “Look. This is not about profit-making. It’s not profitable to serve these people. These are people who are being excluded, and they’re insisting [on] being accommodated in the design process.”


In Part 2 of my conversation with Tarnoff, we examine the notion of tech socialism in greater depth, starting with the Americans with Disabilities Act as a model for what you could call modern Luddite activism. We’ll also discuss the impact a company like Salesforce, with its giant computerizing influence and its self-proclaimed passion for ethics, is having on Silicon Valley not only culturally, but physically. And finally, we envision the values — and even the spirituality — of a tech world guided by socialist principles.