Will the future of work be ethical? Perspectives from MIT Technology Review

In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT’s famous Media Lab, examined how AI and robotics are changing the future of work.

Greg’s essay, Will the Future of Work Be Ethical? reflects on his experiences at the conference, which produced what he calls “a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well.” In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy.

Accompanying the story for Extra Crunch is a series of in-depth interviews Greg conducted around the conference with scholars, journalists, founders and attendees.

Below he speaks to two key organizers: Gideon Lichfield, the editor in chief of the MIT Technology Review, and Karen Hao, its artificial intelligence reporter. Lichfield led the creative process of choosing speakers and framing panels and discussions at the EmTech Next conference, and both Lichfield and Hao spoke and moderated key discussions.

Gideon Lichfield is the editor in chief at MIT Technology Review. Image via MIT Technology Review

Greg Epstein: I want to first understand how you see your job — what impact are you really looking to have?

Gideon Lichfield: I frame this as an aspiration. Most of the tech journalism, most of the tech media industry that exists, was born in some way of the era just before the dot-com boom, when there was a lot of optimism about technology. And so I saw its role as being to talk about everything that technology makes possible. Sometimes in a very negative sense. More often in a positive sense. You know, all the wonderful ways in which tech will change our lives. So there was a lot of cheerleading in those days.

In more recent years, there has been a lot of backlash, a lot of fear, a lot of dystopia, a lot of all of the ways in which tech is threatening us. The way I’ve formulated the mission for Tech Review would be to say, technology is a human activity. It’s not good or bad inherently. It’s what we make of it.

The way that we get technology that has fewer toxic effects and more beneficial ones is for the people who build it, use it, and regulate it to make well-informed decisions about it, and for them to understand each other better. And I’d say that a tech publication like Tech Review, one that sits under a university like MIT, is positioned, probably uniquely among tech publications, to make that our job. To try to influence those people by informing them better and instigating conversations among them. And that’s part of the reason we do events like this. So that ultimately better decisions get taken and technology has more beneficial effects. So that’s the high-level aspiration. How do we measure that day to day? That’s an ongoing question. But that’s the goal.

Yeah, I mean, I would imagine you measure it qualitatively. In the sense that… What I see when I look at a conference like this is, I see an editorial vision, right? I mean that I’m imagining that you and your staff have a lot of sort of editorial meetings where you set, you know, what are the key themes that we really need to explore. What do we need to inform people about, right?

Yes.

What do you want people to take away from this conference then?

A lot of the people in the audience work at medium and large companies. And they’re thinking about… what effect are automation and AI going to have on their companies? How should it affect their workplace culture? How should it affect their high-level decisions? How should it affect their technology investments? And I think the goal for me, or for us, is that they come away from this conference with a rounded picture of the different factors that can play a role.

There are no clear answers. But they ought to be able to think in an informed and nuanced way. If we’re talking about automating some processes, or contracting out more of what we do to a gig-work-style platform, or different ways we might train people in our workforce or help them adapt to new job opportunities, or if we’re thinking about laying people off versus retraining them. All of the different implications that has, and all the decisions you can take around it, we want them to think about in a useful way so that they can take those decisions well.

You’re already speaking, as you said, to a lot of the people who are winning, and who are here getting themselves more educated and therefore more likely to just continue to win. How do you weigh where to push them to fundamentally change the way they do things, versus getting them to incrementally change?

That’s an interesting question. I don’t know that we can push people to fundamentally change. We’re not a labor movement. What we can do is put people from labor movements in front of them and have those people speak to them and say, “Hey, these are the consequences that the decisions you’re taking are having on the people we represent.” Part of the difficulty with this conversation has been that it has been taking place, up till now, mainly among the people who understand the technology and its consequences. Which was the people building it and then a small group of scholars studying it. Over the last two or three years I’ve gone to conferences like ours and other people’s, where issues of technology ethics are being discussed. Initially it really was only the tech people and the business people who were there. And now you’re starting to see more representation. From labor, from community organizations, from minority groups. But it’s taken a while, I think, for the understanding of those issues to percolate and then for people in those organizations to take on the cause and say, yeah, this is something we have to care about.

In some ways this is a tech ethics conference. If you labeled it as such, would that dramatically affect the attendance? Would you get fewer of the actual business people to come to a tech ethics conference than to a conference that’s about tech but that happens to take on ethical issues?

Yeah, because I think they would say it’s not for them.

Right.

Business people want to know, what are the risks to me? What are the opportunities for me? What are the things I need to think about to stay ahead of the game? The case we can make is [that] ethical considerations are part of that calculus. You have to think about what the risks are going to be to you of, you know, getting rid of all your workforce and relying on contract workers. What does that do to those workers and how does that play back in terms of a risk to you?

Yes, you’ve got Mary Gray, Charles Isbell, and others here with serious ethical messages.

What about the idea of giving back versus taking less? There was an L.A. Times op-ed recently, by Joseph Menn, about how it’s time for tech to give back. It talked about how 20% of Harvard Law grads go into public service after graduation, but if you look at engineering graduates, the percentage is smaller than that. But even going beyond that perspective, Anand Giridharadas, popular author and critic of contemporary capitalism, might say that while we like to talk about “giving back,” what is really important is for big tech to take less. In other words: pay more taxes. Break up their companies so they’re not monopolies. Maybe pay taxes on robots, that sort of thing. What’s your perspective?

I don’t have a view on either of those things. I think the interesting question is really, what can motivate tech companies, what can motivate anybody who’s winning a lot in this economy, to either give back or take less? It’s about what causes people who are benefiting from the current situation to feel they need to also ensure other people are benefiting.

Maybe one way to talk about this is to raise a question I’ve seen you raise: what the hell is tech ethics anyway?

I would say there isn’t a tech ethics. Not in the philosophical sense your background comes from. There is a movement. There is a set of questions around it, around what technology companies’ responsibility should be. And there’s a movement to try to answer those questions.

A bunch of the technologies that have emerged in the last couple of decades were thought of as being good, as being beneficial. Mainly because they were thought of as being democratizing. And there was this very naïve Western viewpoint that said if we put technology and power in the hands of the people they will necessarily do wise and good things with it. And that will benefit everybody.

And these technologies, including the web, social media, smartphones, you could include digital cameras, you could include consumer genetic testing, all things that put a lot more power in the hands of the people, have turned out to be capable of having toxic effects as well.

That took everybody by surprise. And the reason that has raised a conversation around tech ethics is that it also happens that a lot of those technologies are ones in which the nature of the technology favors the emergence of a dominant player. Because of network effects or because they require lots of data. And so the conversation has been, what is the responsibility of that dominant player to design the technology in such a way that it has fewer of these harmful effects? And that again is partly because the forces that in the past might have constrained those effects, or imposed rules, are not moving fast enough. It’s the tech makers who understand this stuff. Policymakers and civil society have been slower to catch up to what the effects are. They’re starting to now.

This is what you are seeing now in the election campaign: a lot of the leading candidates have platforms that are about the use of technology and about breaking up big tech. That would have been unthinkable a year or two ago.

So the discussion about tech ethics is essentially saying these companies grew too fast, too quickly. What is their responsibility to slow themselves down before everybody else catches up?

Another piece that interests me is how sometimes the “giving back,” the generosity of big tech companies or tech billionaires, or whatever it is, can end up being a smokescreen. A way to ultimately persuade people not to regulate. Not to take their own power back as a people. Is there a level of tech generosity that is actually harmful in that sense?

I suppose. It depends on the context. If all that’s happening is corporate social responsibility drives that involve dropping money into different places, but there isn’t any consideration of the consequences of the technology those companies are building, or of their other actions, then sure, it’s a problem. But it’s also hard to say giving billions of dollars to a particular cause is bad, unless what is happening is that the government is then shirking its responsibility to fund those causes because the money is coming out of the private sector. I can certainly see the U.S. being particularly susceptible to this dynamic, where government sheds responsibility. But I don’t think we’re necessarily there yet.

In terms of the MIT Technology Review itself, I’m interested in your plans for coverage of ethics. What should people watch out for from Technology Review and/or what’s your advice for other people in the field?

I want us to continue to cover debates around what the role of these companies should be and what the role of government and civil society should be. I talk about the makers, the users, and the framers. That is, the makers of technology, the users, and the people who legislate for it. And you could also add a fourth group, which is the funders, who create the financial frameworks for it. The conversation about tech ethics right now is focused on what the responsibility of the makers should be. And I think we need to evolve that conversation to what are the responsibilities of government and of people? And how does all that play out together?

We should also just be telling more stories about individual people, companies, and organizations who are facing the quandaries. The best way for other people to learn how to understand these issues is to see [them] playing out in an individual case and on a personal level.

The human side of the story gets people thinking about the issue itself. That’s been my experience as well.

I think that’s always true. We had a story that Erin Winick wrote last summer about how she had briefly worked at a company and was doing some work helping them figure out automation. Part of that involved talking to one of the workers on the factory floor about the processes the company ran, to better understand how automation would affect those. And she realized that in fact she was helping automate him out of a job. And so it’s a story about him, and about her experience of dealing with this. And it’s told from the first person. I think that was a very effective way of getting across some of those issues.

Last question for you: a two-part question. The second part I ask everybody but I want to add something onto it for you. You’re as knowledgeable about these technological issues as anybody I can imagine. And you’re thinking about them within a global context and within a social context. So, what does a better shared human future look like?

Wow. This is tricky. I feel like this is a really controversial question.

It is. I’m asking you the most controversial question I can ask you.

Right. Because, you know, you have people like Bill Gates and Steven Pinker who say, oh, we’re already experiencing a much better human present than we have-

Right. Steve is a friend of mine. I officiated at his wedding. I love him. I also love Rebecca Goldstein, who happens to be his wife and is an absolutely brilliant philosopher and novelist. My organization gave Steve a big award when he came out with his book Enlightenment Now. But I’m worried that some of the ideas in that book are where I might part ways with him, intellectually.

Alright. Let’s see. I’m not sure how coherent this will come out, but, I want to say something like, [a better shared human future is] one in which society has decision-making processes that allow…Anything that happens, some people are not going to be satisfied with. But if society has decision-making processes that reduce the level of dissatisfaction some people have over the outcomes, then that would be a better future. In other words, it’s one in which people feel more bought into the society that they are part of. And the way that it takes decisions. Whether they’re economic, political, or whatever.

The way that our politics is going, more people are feeling like the political process doesn’t work, and they don’t have any ability to influence it in the direction that they want. And that it’s harder and harder for them to live with the decisions it arrives at. And so in that sense it feels like it’s worsening. A reversal of that would be a better human future.

Inequality really militates against that, doesn’t it? Isn’t that the biggest challenge for what you’re describing?

Yes. Inequality militates against it. But being bought in doesn’t necessarily mean we all agree on the decisions that have been taken. Maybe you’re less under the thumb of the government or of the corporations that are taking decisions that affect how you will live or what you’ll consume. The ability to feel like, I don’t have to opt into the same things that everybody else is opting in to. That might be a way that you achieve a better future. So it doesn’t actually deal with the problem, necessarily, of disparity. You could end up with disparities, but you will also end up with choice.

Last question, and this is sort of a tagline I put at the end of many of my interviews: how optimistic are you about that shared human future?

Not especially optimistic. I tend to be infinitely optimistic in the very long term, because in the very long term, you know, none of it matters. The species disappears. And I’m pretty pessimistic in the short term. In other words, I think we’re probably going through another decade or two of a lot of global upheaval and a lot of misery and a lot of strife and conflict. And I may be relatively optimistic in, sort of, end-of-the-century terms.

Karen Hao is the artificial intelligence reporter at MIT Technology Review. Image via MIT Technology Review

It seems like you’re covering ethics in many ways, but how would you describe the subject matter they’ve assigned you to cover?

Karen Hao: I am the artificial intelligence reporter. But as the AI reporter, to my knowledge I’m one of the first people to come to the organization with an assignment for a beat that already existed. For all the other beats we have, we basically have one sole owner of the beat. But when they wanted to bring in another person to cover AI, the field was just exploding. For Will Knight, the Senior AI Editor, there was just a lot of stuff to cover and they thought it would be helpful to bring a second person in.

The understanding between Will and me was, there are certain areas he loves covering and certain areas he felt he needed more help on, and ethics was one of them. Fortunately, that was one of the areas I really enjoy. It’s not my sole focus; I do also spend a lot of time covering research and companies. Will and I have a lot of overlap. But ethics interests me so much because it really has become more of a mainstream conversation in the tech industry, yet a lot of that mainstream conversation isn’t very substantive. I wanted to try to bring more substance to that conversation, given that it is so important.

There is one person whom I really trust in terms of their ethical commitment, and who has unusual access to people at places like MIT. I recently said to them, “Getting into this field, I tend to naively assume people are doing great work and they’re really trying to help other people.” To which they replied, “No, I see a lot of mailing it in.”

In your experience, when people say they’re going to talk about ethics in technology, how much seriousness is being brought to that?

For me, ethics is a process. It’s not a goal, it’s not a destination. There’s no point at which you reach some state where you’re finally ethical and your job is done. One of the easiest ways to tell who actually thinks about this deeply is whether they even acknowledge that fact and whether they’re actually trying to put processes in place at their organization to grapple with these ethical questions.

A lot of times when I speak with organizations, they have principles that they establish, but then there’s nothing behind the principles. Like they didn’t actually translate those principles into systems, processes, teams, or roles that are actually making sure that every project goes through some kind of vetting process and is looked at against the principles. And that’s when it becomes more like, “Okay, you’re paying lip service. You’re not actually trying to do this genuinely.”

Is there pressure on organizations to at least pay lip service these days?

Oh for sure, it’s very trendy. Part of it is because they want to be perceived positively by the public. But it’s also a nice way to try to avoid regulation if you say, “Oh, we are really ethical. We are able to self-regulate without hard policies containing our actions.”

You also want your talented engineers to be able to sleep at night and not quit, right? Is that another factor?

Yeah, definitely. There’s definitely a lot more employee unrest and activism that has come out of people realizing that the companies they work at are sort of too big and out of their control and that the technology they’re contributing to no longer … They no longer have quite the ability to really contain potential consequences. The only mechanism a lot of engineers find they have now is to really push back on the bigger decisions and use activism as a way to keep company executives in check.

I’ve been talking to a lot of people about that. Because you’re known as a reporter, I’m imagining people DMing you their stories. Is that happening?

No.

No?

No. I mean… I don’t know how typical this is, but I used to work in tech, I graduated from MIT, I was an engineer by training. I kind of came into the journalism world from the tech world. So my relationship with sources is oftentimes a little bit different, in that a lot of them are my friends. There’s a different kind of dialog that happens when we’re just sitting in a living room and chatting and having dinner or whatever. That helps me gauge what the ground-truth conversation is that’s actually happening. So part of the reporting process I go through is just listening to what my friends are saying and looking at my Facebook newsfeed, because all of my friends work in tech and they’re constantly sharing their thoughts and opinions on what’s happening.

How do you feel about the transition to reporting on these issues? What were the main motivators for you to switch over from working in tech to covering it?

I was only in tech for a very short period of time, but during that short period of time, I had some key experiences that made me realize that I was just not cut out to work in tech. And what I mean by that is the first startup that I was working for was very mission-driven, and it was my dream job in many ways. Very quickly, in the course of months, the CEO and founder of the company was fired by the board because the company was too mission-driven and wasn’t making any profit. That was a shakeup for me, in realizing I don’t think I’m cut out to work in the private sector, because I am a very mission-driven person. It is not palatable for me to be working at a place that has to scale down its ambition or pivot its mission for financial reasons.

I didn’t really see an alternative in Silicon Valley. Most of the companies I saw were already very, very trapped by capitalistic incentive structures. So I became interested in journalism.

Because a lot of investment is very capitalistic in nature, the way to affect what is valued and what is not is by creating public conversation around issues. And hopefully if you are successful, investments will flow to the things that align with societal values.

Is it hard being an ethics journalist? Are there sacrifices you feel you’ve had to make?

I don’t know if I’ve made sacrifices, but it’s certainly hard in that the stakes are very high. I am only one person and I am also limited by my own education in ethics. I sometimes feel like I don’t have enough of a foundation to really be talking about these issues on such a public platform. And I’m sure that’s basically what [the entire] tech industry feels too. They’re realizing they don’t really even know how to talk about these things sometimes.

Karen, you have a conscience, so you should be talking. It’s the people who never worry about whether or not they should be talking that I worry about.

I want to ask you about a recent piece you wrote on the climate impact of AI. Were you surprised by the information that you came across?

This is a story I wanted to write for a long time, but I was basically waiting for a paper to come out to confirm my suspicions. I used to be a data scientist and I did machine learning. I knew that it took energy. I was doing it in an industry setting where you’re applying things. In a research setting, it’s even worse because you’re trying to develop these technologies, so you’re training models over and over again. So I wasn’t that surprised when this paper came out saying that training a model can have exorbitant carbon emissions.

What’s interesting to me about this particular story is the incentives in AI research, and how they’re kind of misaligned with some of the values that we hopefully would want the system to have. The researcher wrote the paper out of frustration; she found that in order to do noteworthy research, or paper-worthy research, in the industry, you basically only have to show that your model’s accuracy is better than existing accuracies. As long as you can prove your model has a marginally higher accuracy, you can publish a paper. But before that happens, it’s kind of difficult to get attention within the research community. While [higher accuracy] is a good thing to strive for, it also creates perverse incentives when it’s the only thing you’re optimizing for. Because now you can throw energy resources at these models, just to try to get that accuracy.

So a lot of people who read my article criticized the headline I chose [as] a little sensationalist…

Including her?

Right. Because it was picking the most extreme scenario, but outliers do show you perverse incentives. And outliers, if we’re not careful, become mainstream. The fact that an outlier like this can even exist shows something needs to be fixed.

When we design AI systems, we ask them to optimize for certain things. Sometimes they end up producing unethical results. Not because it was intentional, but because they were optimizing for one thing and, as a result, ended up creating all these other perverse incentives that the creators of the technology didn’t really anticipate.

I thought you were very measured. Thank you!