Expert Panel: What even IS ‘tech ethics’?

It’s been a pleasure, this past month, to launch a weekly series investigating issues in tech ethics, here at TechCrunch. As discussions around my first few pieces have taken off, I’ve noticed one question recurring in a number of different ways: what even IS “tech ethics”? I believe there’s lots of room for debate about what this growing field entails, and I hope that remains the case because we’re going to need multiple ethical perspectives on technologies that are changing billions of lives. That said, we need to at least attempt to define what we’re talking about, in order to have clearer public conversations about the ethics of technology.

Fortunately, I was recently able to gather a group of three whip-smart thinkers who are each emerging as leaders in the tech ethics field, and who each do "big-picture" work, looking at the (enormous) field as a whole rather than being limited to knowledge of a single narrow technology or sector. As you'll see below, none of the three offers a one-size-fits-all definition of tech ethics, which is a good indicator of why their perspectives are particularly trustworthy. If you want to understand a field this big and this new, always look to the kind of thoughtful, introspective leaders you'll find below, rather than settling for quick and easy answers.

Kathy Pham is a computer scientist, product leader and serial founder whose work has spanned Google, IBM, Harris Healthcare Solutions, and the federal government at the United States Digital Service at the White House, where she was a founding product and engineering member. As a Fellow at the Harvard Berkman Klein Center, Kathy co-leads the Ethical Tech Working Group and focuses on ethics and social responsibility with an emphasis on engineering culture, artificial intelligence, and computer science curricula. Kathy is also a Senior Fellow and Adjunct Faculty at the Harvard Kennedy School of Government.


Hilary Cohen, a former Program Strategist at the Obama Foundation and analyst at McKinsey & Company, is currently leading a new initiative on Ethics and Technology at Stanford University's Center for Ethics in Society. She recently managed the process of creating a popular new team-taught Stanford course, "Ethics, Public Policy, and Technological Change."


Jessica Baron holds a Ph.D. in History and Philosophy of Science and is a prolific and widely read freelance writer and educator on the ethics of technology, among other issues. I am a big fan of her regular tech ethics writing for Forbes.


When ethics and tech collide

Greg Epstein: Thank you all so much for joining me. I have been really looking forward to this conversation because, after a year of somewhat immersing myself in the subject, I find myself still trying to figure out exactly what tech ethics actually is.

It seems to me that each of us, including myself, has had a bit of a transition to get into this field. Could you each introduce yourselves, and maybe talk about what you were transitioning from and how you transitioned into what you're doing now? Who would like to start?

Jessica Baron: Not it.

Kathy Pham: I can. I started my career in software engineering and computer science, and I don't actually think mine is a story of transition. [Tech ethics is] something I've been thinking about for as long as I've been in computer science; it just happens to be a big topic that the rest of my peers are now talking about. I studied CS, but was always really curious about where community, social impact, and the public interest played into technology. That had to do with my upbringing – my parents came to this country as refugees.

Growing up with very little, seeing the opportunities I had, [I tried] to think of how to bring other voices into the room. Then I studied computer science, where we were [primarily] taught the technical concepts of how to build technology, to stay in our lane pretty much. Then I went to work at places like Google, IBM, and then later on at the White House at the United States Digital Service, building bridges between the private and public sector. But there was always this vein of society, and humans, in my work in some way. In the last few years, because of the [increasing popularity of work on] ethics and tech, I found a lot of the work that I care about at the forefront. So I was able to jump into the Berkman Klein Center and work with a really interdisciplinary group of folks to think about how we build technology differently, fusing ethics and social responsibility into what we build.

Now I'm at the Berkman Center; I teach a class called Product Management in Society at the Harvard Kennedy School, and I also help run and co-lead the Responsible Computer Science Challenge, with the Omidyar Network and Mozilla, to integrate ethics into computer science curricula.

Hilary Cohen: I totally agree; even though by resume it may seem like a transition, my story also doesn't feel like one of transition. I think it's because this thing called tech ethics has swallowed so many of the core foundational questions that we as a society are grappling with. So to be working on it doesn't feel like a stark transition from other things. It actually feels like continued work on the very questions that were likely animating you in a different space. So whether it was studying philosophy as an undergraduate, where you're asking fundamental questions about what constitutes a life well lived, or most recently at the Obama Foundation, where we were thinking about how to ensure and support a healthy, vibrant civic life, both of those aspirations are now either threatened by or in need of support from a healthy relationship with technology. Part of what makes me excited, but also feel an obligation to reckon with some of these questions, is that they are so fundamental to either a good individual life or a good collective life.

And then I think it wouldn't be intellectually honest not to say that part of it is probably just timing. I formed my political and professional identity when tech was at its apex, whether it was about connecting all the world's people, or making all the world's information accessible. And then right when that was so exciting and aspirational, it also became pretty clear that some of the very companies and tools at the center of that could cause a lot of harm.

Kathy: I’m so with you there, Hilary. That last bit is exactly how I feel, too. It’s the timing.

Greg: Jessica, you went as far as to get a PhD in what seems to me to be a different field. I mean, history and philosophy of science. Was this a transition for you?

Jessica: My transition most recently was out of an academic space, and into writing – I write for Forbes, and a couple other tech spaces, and also for corporate clients, military contractors, and things like that. But my interest in tech ethics, that’s been pretty solid since high school. I can pinpoint it to 1995: 10th grade, when there was a Scientific American issue about the five-year report after the Human Genome Project. [I remember] reading about the ethical, legal, and social impacts of the project; it was the first scientific project to ever incorporate that from the very beginning. For whatever reason, I was obsessed. Instead of posters of musicians and movie stars on my wall, I would cut out pictures from the Scientific American articles on the Human Genome Project and post them. I was actually cool in high school, but I was a closeted nerd, with those on my wall.

[In] college, [I found] biology really difficult, but I worked in a molecular biology lab for a long time and became obsessed with computer science. That was a time when there was a cultural and language divide between the people teaching [CS] at my state school and the students wanting to learn it, and I couldn't get people to call on me in class. That's when I found the humanities aspect of science, so I majored in medical anthropology and dabbled in classics. My Ancient Greek professor told me about the history and philosophy of science. He said it was a field, and I explored it. I took a lot of ethics classes in HPS. I'm a historian of medicine by training, and feminist philosophy of science and philosophy of science were a huge part of my training. The questions that you're not really allowed to ask in science classes are fair game in a field like that.

My dissertation was not on ethics, but I always knew ethics was something I was going to pursue – since 10th-grade biology. I had a lot of good science teachers. That's the one thing I do miss about the academic space: having that impact on students as a teacher.

Greg: Wonderful. Perhaps when I said that everybody had made a transition, I was projecting my own experience onto all of you, which is one of the number one things we tell chaplains not to do. So, great, I’m learning what not to do here.

Kathy: It’s okay, we’ll push back.

Defining “tech ethics”


Greg: Next question, the big one that we can work around in a number of ways. What even is tech ethics?

Hilary: I think at various times, depending on the conversation, tech ethics refers to at least three distinct things. The first, probably the most narrow, is an actual branch of analytic philosophy that looks at the impact of technology on society. That’s philosophers working to ask new questions on these topics, then systemizing and giving us vocabulary to understand them. That’s how I see the work of Nick Bostrom, or I know you recently were in conversation, Greg, with James Williams. That’s how I see what [Williams] did, during his work at Oxford.

The second category of tech ethics is a set of dedicated attempts to get technologists to engage with the social and ethical dimensions of their work, as they’re building products, or writing code, or studying computer science in school. You see efforts on this front both at universities and within companies. I think at least Kathy and I have been involved with this in some respect. Kathy’s helping to build, promote, and scale up nascent efforts that are underway.

The third [category] is probably the least clear, but also the reason for the question. [Tech ethics has] now become a placeholder for all kinds of questions about a society that’s in flux economically, culturally, and politically, in which many of the most profound shifts have either been driven by or exacerbated by the emergence of new technologies.

Hilary: I think those questions, when asked well, then extend to the most important question arguably, and hopefully one that’s at the center of the tech ethics field, if there is one moving forward. Which is how, in light of those big shifts and the plurality of visions for what a good society looks like in this country and around the world, we actually make choices at the political, societal, and individual levels, about the role technology should play in communities and in our lives.

Jessica: Greg, I loved the way you phrased the question. I showed it to my husband and he's like, "Did you name that? What even is tech ethics?" He's like, "That's how you talk." And despite the fact that I call myself a tech ethicist, I actually really struggled with thinking about what [the term "tech ethics"] actually means. To echo some of what Hilary said, I see tension in wanting to just define it as principles used to govern technology – that very static professional definition that I think makes some people nervous because it's a little academic.

Jessica: While I do think it's important for [tech ethics] to be a field, have a methodology, and have a vocabulary so there are things that we can agree on and not constantly talk past each other, the more I thought about it, the more I thought it was more helpfully defined as a conversation about how we want to manage risks and rights, specifically when it comes to the new tools that we use. And of course, there's a long list of things, like equality of access, and accountability, and privacy, and environmental impact, and freedom, and safety, and automation, and transparency. So the tension is between defining it philosophically and making it more accessible by defining it as a conversation.

Kathy: I think we all struggle with how we define ethics. Putting on my product, engineering, and computer science hat, what I think about is: how do we talk about ethics in a language that resonates with designers, technologists, and engineers? Jessica touched a little bit on this, talking about privacy, safety and security, environmental impact, etc. One of my colleagues here at the Berkman Klein Center, Salome Viljoen, talks about how it's so important for us to speak the same language as we're trying to impact each other's fields.

When I think of tech ethics [in terms of] what resonates with product managers and engineers, people who really hold the key to what gets into a product and out the door, I think about building security into a product, about accessibility in our products. [Not just] red and green because someone's color blind, or text to speech, but [for example] if you're designing for a social service, what are all the ways your tool itself could exacerbate the problems you're trying to solve, or make inequalities even worse, because you didn't think about different ways of including the community?

We have lots of classes in our engineering curricula to teach us how to build data systems and to question our data. But we don't really ask: where is the data coming from? If we're building a new tool, for example, if you are a company thinking about determining safe neighborhoods, are you just going to go and look at crime data, without understanding the deep roots of racism [behind it]? When I think about ethics, I think about the words and concepts that currently exist in the tech space, whether it's algorithms, or data systems, or red teaming or quality assurance testing, and about how to think about those processes differently, to [consider] society and impact. [It means] thinking about how what we build impacts people, versus just thinking about how efficient something is, or how to optimize, or how to write really clean code.

Helping people speak the same language

Hilary: On this last point, about translating the language of other fields into language that technologists building new products or services are likely to use and understand: on the one hand, I'm sympathetic. Part of the collective task is making the methodology of being a technologist, or engineer, or product manager more responsive to the needs of other people outside of the company. But one of the things that makes me a little bit nervous is if we cede all the territory, so that technologists have final say, or are ultimately in charge of, what the ethical principles are, or how ethics gets substantiated in the things that we all use.

I wonder how we make more room for citizens, or regulators, or policymakers in broad definitions of tech ethics.

Kathy: I’m so glad you brought that up. There’s definitely a hierarchy of people, whether we like to admit it or not. Companies will even label people as engineering or non-engineering; suite or non-suite, etc. It’s very problematic; it creates this hierarchy where engineering decisions are the best and everything should live in the tech bubble. I think that we need to completely change that, whether it starts … I don’t actually know where it starts, at the academic level or what, but [we need] multiple voices in the room and different perspectives on the same level playing field.

I’m so curious to see in this class, Hilary, the new [tech ethics] class that you, and Rob Reich, and the team are teaching, if those three different disciplines come together, and they’re on an even playing field, and what that’s like. That doesn’t exist now and it has to exist. I’m in this environment at Berkman where you’ll have people like Mary Gray, who’s a social scientist and technologist, and like Salome Viljoen, who’s a lawyer, and Luke Stark, who’s a historian, in the same room. And we’ll talk about these topics as peers and equals. We learn so much from each other.

Jessica: Something at the end of what Hilary said, which Kathy picked up on too, was this idea of not just confining [tech ethics] to an academic space. We have to formulate a vocabulary that citizens, for lack of a better word, can understand. And we have to make room for them to participate in it, too, because nobody wants a bunch … I mean, I don't even want a bunch of academics deciding what's best or what's good or bad. And so [tech ethics requires] opening up and really making it a conversation we can all have.

Greg: I'm hearing a lot of zooming in and zooming out. There is, as Kathy raised, the question of how to speak a language that's understandable to technologists themselves. There are questions Hilary raised in response: how do we bring in regulators, policymakers, philosophers? And then you have to zoom out even more, as Jessica just pointed out, to think about all citizens – all citizens of the world, even. How should tech ethicists toggle back and forth between those different perspectives, systematically?

Jessica: Systematically – that’s the issue, right? I mean, we see news all the time: that survey says the public thinks this about driverless cars. And it’s very unclear where they’re getting their data. Are they standing in a parking lot of a grocery store or are they using a pop-up ad on a website? What site is that, who reads it, who are we polling? Getting a clear picture of what people want would be ideal, but I think impossible. At some point, you have to do something. You can’t wait for every single person in the world to weigh in on it.

We need more academics to talk about what people want. And that’s tough because it seems like for me, at least, the most feedback we get from the public is on social media. And social media feels like such a wasteland lately, of uninformed opinions. So I actually don’t have an answer, but that’s just to complicate your question even more.

Hilary: I wouldn't say it's the work of tech ethicists to solicit broad public input on how the field should evolve, or how technology products should be shaped. And putting the burden on every individual consumer or citizen to think through questions like what protections of privacy they should have, or who bears the responsibility for ensuring the dignity of work, isn't the best idea. But when you look at the task of building a new app or service, it doesn't naturally lend itself to thinking, "Okay, wait, when this is on the ground in some community that I've never visited, how might it be weaponized, or just used in ways that I never intended, but also wouldn't be proud of [or want to be] morally responsible for in some way?" It's not that natural a logic bridge to build, but we should expect it of people.

And again, I wouldn't say it's squarely on the technologist's shoulders to think about the protection of citizens in various ways. That's why we have government, and civil society, and other spheres of a healthy, functioning society. Right now it's easy to point your finger at politicians or policymakers and say, "They're not equipped to govern a society that's being transformed by technology," and therefore, let's let the "experts" at the technology companies make decisions. But actually, the right answer in the face of a government that doesn't seem fully equipped to govern these challenges is to have different people running for office, to staff up legislative aides with some technical fluency, and to build support structures around that, rather than just [saying], "politics is broken."

Tech, government and the bigger picture


Greg: Hilary, of the four of us, you are the one who is currently in Silicon Valley. And it strikes me that you, more than almost anybody else I'm speaking to, are likely to hear the sentiment you just referenced – that we should turn things over to the technologists, because government is broken. How often do you actually hear that kind of sentiment? And when you do hear it, how do you think about it from a tech ethics perspective?

Hilary: I hear it frequently. Whether from students or people who currently work in industry, there's a real skepticism about the technological savvy or competence of our policymakers. A lot of that is warranted. We've seen Senators and Congresspeople showing they don't have even a modicum of understanding of how some of these products work. Yet I grow frustrated by the sentiment, because it assumes that will always be the case, rather than saying, "Wait, government is a reflection of me and my community. I have some responsibility for improving it rather than just taking advantage of the fact that it's currently [dysfunctional]."

Our team is trying to prepare the next generation of [leaders] to have technological skills, coding skills, or basic understanding of computer science. It’s really important for some of them to take those skills and use them not just in service of building new companies, but also in figuring out [how to] create policy and legislation that preserves the innovation at the heart of technological aspiration, but also protects our communities and society from its negative consequences.

Kathy: I completely agree our technologists and engineers should think about running for office, going to work in government, etc. That’s what I did. I left working at a big tech company to go help the USDS, which is inside the federal government. It’s only about 200 people trying to help bring technologies into government…and I love all the models that are springing up to think of different ways to make our government better when we’re making tech decisions.

Something I thought was really interesting, and I'm glad Greg pointed out, was that when I was living in the Bay Area, there was this whole sentiment: our government doesn't know anything, they can't regulate the way we will self-regulate. But then I left, and being in D.C. and now in Boston, if anything, Boston has this sentiment of "we'll just regulate everything."

Hilary: Even if the answer ends up being some self-regulatory agency or consortium, something that looks like a motion picture association, where you have actors from the private sector doing some of the regulatory work in a fast-changing space: that would maybe be an okay outcome. The more important thing is that that outcome is democratically discussed and chosen, rather than just asserted as the case because there's been no action from the civic sphere. There's a difference when it's something the public has chosen to allow to be self-regulated, or the government has said, "Okay, we're gonna work in partnership with the private sector," rather than a set of companies who are unaccountable and obviously unelected.

Kathy: Hilary mentioned that oftentimes we're building technologies and we don't lift our gaze up and think about our impact on society as well. Whatever people's intentions are, sometimes teams are just so hyper-focused on building a thing that they don't think about everything else in the ecosystem. How do we solve for that?

Hilary: There are two different things at work. One is, you're laser-focused on trying to ship something or get a product out the door. And the second is that the language and mechanisms for improvement that you use internally at a company sometimes obfuscate the impact on the ground. Talking to friends or colleagues who work at tech companies, [people sometimes] say, "I don't know why the public thinks we're so focused on profit maximization. We're just trying to build a product that people love."

But actually, when you think about the metric that people are using to determine what is a product that people love, it is some metric around engagement, which is basically a proxy for revenue. You’re just not using language that would feel more nakedly capitalistic, like, ‘we’re trying to maximize profit’. So you’re layers removed from what you’re actually talking about, in ways that make it even harder to understand, or keep your attention on how this is playing out when abstracted.

Kathy: Can we teach a class together? I would love to just collaborate with you.

Hilary: I’m down. That would be fun.

Kathy: That would be so fun.

New approaches to an evolving field

Greg: One of the reasons I thought this particular panel would be so good is that each of you, while you may have your own particular interests within the tech field, is doing work that looks very broadly at tech ethics. I'd love to hear more about your individual approaches to that work.

Greg: Kathy, I'm fascinated by the Harvard Berkman Klein Center working group on tech ethics that you're co-chairing, because you're looking at a very broad range of topics, right? I mean, there's a wonderful diversity of scholars, and thinkers, and people at the Berkman Center. As issues come up in your group, how do you decide what you're going to work on and what you're not going to work on? How do you decide what falls in your purview? Is there an example of something that was perfectly worthwhile that was brought to you as a group, where you said, "No, we're not going to work on this because it's not what we define as tech ethics"?

Kathy: Part of that is reframing what the Ethical Tech Working Group actually is. We're a cohort of people that are Berkman folks and beyond, so it's anyone who might be interested in this field. And it's a pretty open group. Anyone can essentially join the mailing list and then come in person, or not; we meet weekly and also collaborate outside. We didn't know what this would be like when we started a few years ago. We were like, "We're just going to pilot this idea of getting together, bringing people from really different backgrounds, lived experiences, places around the world, academic studies. And we'll just share what's top of mind every time, and just keep doing that."

Over time we've built a strong community of trust with people from different disciplines, with backgrounds ranging from practitioners to industry people, academics, philosophers, historians, et cetera. We get together and talk, and it could be [that someone is] writing a book, or someone is working inside one of the big tech companies and is faced with a decision about testing something with users, or implementing a new feature that incorporates crime data. So it's less a matter of what the group decides to take on or doesn't take on, and more about providing a high-trust platform with people who have really deep experiences from really different backgrounds, well beyond just [different] demographics or academic studies.

Some of our industry folks, I won't say who they are, but they're Berkman affiliates, and they'll just come with a question under the Chatham House Rule, or safe space, and just get advice from people who've been touring the country talking about the future of work, or people who do ethnographic studies around the world. We share ideas in a less formal way than, say, a big company hiring a consulting firm. Then we go back into our own spheres [with a] much richer understanding of our particular areas of interest. So it's less about how we decide what we're gonna work on; we could essentially work on everything that comes to the table each week. The group is a pretty strong connector to different areas of expertise I might not have on my own, sitting in my office trying to build a piece of technology by myself.

Jessica: You’re living the dream.

Kathy: It is the dream. It is by far the richest community I’ve ever been a part of. I didn’t know that coming in and it just turned out to be the case.

Kathy: And it actually makes me think a lot about the safe space that's required. We, for example, recently had an engineer come to the table with a new feature [they were] trying to put in. If that same person said it out loud to a newspaper or on Twitter, they probably would get demolished, yelled at. It was a crazy feature, but we had historians who brought up reasons why that particular feature would be problematic. We had a couple of social scientists who have been around the US thinking about how people have been impacted by technologies like that. So it's a safe space where the tech person feels comfortable, but also the non … I don't like to use the term non-tech because I don't think that's right, but [people who are], at least, in other fields can share and exchange. I think it takes that kind of a safe space to be really effective.

Jessica: Well, if I ever come into a billion dollars, that’s the kind of thing I want to build.

Greg: Jessica, one thing I’ve enjoyed about your tech ethics writing is your very broad set of interests. You have a consistent and compelling style no matter what you’re writing on, and you write pieces all across the range of what can be considered technology ethics. How do you decide which subjects to take on?

Jessica: I wish it was a more sophisticated process, but since I’m a contributor rather than a staff writer for Forbes, I don’t get assignments. Rather, I get to write about whatever I find interesting and potentially useful to people within my ‘swimlane’ of the social implications of new technology. And I tell students I teach at workshops: you can’t be a writer unless you’re an avid reader. And so any time I’m not writing, I’m reading press releases, news, business reports, financial reports, government documents, and things like that to try to find new stories.

Sometimes I just come across a press release about an interesting piece of research that, because I do have a scientific background, I feel like I can write up in a responsible way, one that isn't too "peanuts cure cancer!" but can really talk about the way the researchers performed [the work]. I'm interested in how scientists and technologists pull together information that turns into a press release.

A lot of it just comes from my shower thoughts. All your best ideas come in the shower. If you read enough, you'll think of a question.

Jessica: For now, I’m just throwing things at the wall and seeing what works. And not just based on clicks, but on what I really like or what I get feedback about from people I really respect. Or what I get the least amount of hate mail about.

A philosophical approach to product


Greg: Hilary, you've just had this experience of managing the process of assembling a new team-taught course called "Ethics, Public Policy, and Technological Change," that was just taught at Stanford. You just finished the course's first semester, with hundreds of students and a few distinguished professors; the syllabus was wonderfully broad-ranging. In looking it over, I found myself reading everything from Facebook manuals about how to moderate content involving dying or wounded people, to the neoliberal economist Milton Friedman, to different kinds of literature, like Ursula Le Guin's short story, "The Ones Who Walk Away from Omelas," or an essay on agriculture by Wendell Berry, the poet and farmer.

Hilary: “Solving for pattern,” yeah.

Greg: And of course there were philosophical readings, like Foucault on "Panopticism." Could you describe how you constructed the course?

Hilary: There was a team of four of us working on it: myself, and then computer scientist Mehran Sahami, who has played a really significant role in growing Stanford's own computer science department into the department that confers the most degrees and is most popular among students; he also worked in industry for a little bit of time. Political philosopher Rob Reich, who has spent time looking at the normative dimensions of different things, from philanthropy, to education, to now technology. And then Jeremy Weinstein, who is a political scientist by training and also spent a bunch of time in the Obama administration, so he had the lens of the pragmatic policymaker in addition to the empirical social scientist. The four of us together spent about a year trying to think, from the ground up: [what] if we were to try to holistically examine the impact of technology on society from three different perspectives: the technologist's perspective, the philosopher's perspective, and the social scientist or policymaker's perspective? What would that look like?

We set out [to achieve] an ambitious, meaningful integration of disciplines that we hadn’t seen done before. Historically when you have an ethics class in a computer science department, maybe you have a philosopher come in for a day or two and lecture on Kant, or something like that, but it’s an attachment onto what is mostly a computer science discussion. We were trying to say, how do you bring each of these three distinct lenses to bear on a set of topics equally and in ways that complement each other? That was our ambition going in to structure the syllabus. And like you described there, we pulled on a range of different types of materials. We also pulled on a range of different types of assignments. Students did coding exercises where they had to build and audit risk prediction algorithms; but then also philosophy papers where they had to defend and identify a normative conception of the value of privacy; a policy memo assignment where they had to survey different stakeholders in the community about a transition to autonomous vehicles; and then some assignments that integrated all of the disciplines.

We wanted to take seriously the technical breakthroughs on things like privacy, or algorithmic accountability, or the structure of platforms, but also ask real, important questions that technology itself can't ask, including social scientific questions of how to think systematically about the impact of a new technology on behavior or institutions. [We wanted students] to think about how these technologies should be governed. And then [there is] the philosopher's perspective, which is not just personal ethics, or how your own moral compass comes into play when you're building a technology, but also professional ethics and political ethics about the distribution of various goods.

Greg: How did the serious tech folks feel about reading Foucault on Panopticism, and how did the people coming in from a purely humanities or philosophy perspective feel about having to design a new form of social network?

Hilary: The amazing thing about undergraduates is that maybe their identity is in part set because they identify with the major they’ve declared, but their identity is also flexible enough to accommodate different threads that are equally important in their learning. So there was some resistance among the technically minded students about the reading load, and there was some question from the social scientists or humanists in the room about why we were even looking to technology to solve problems that were problems of humanity, or culture, or politics. But by and large, we had students who were omnivorous.

Greg: A lot of the reading was very challenging. Not just technically challenging, but morally challenging. Could you give an example of a reading that provoked some of the most introspection among students – some of the most painful or difficult feelings?

Hilary: There were a bunch. And honestly, the level of introspection and reflection among the students was inspiring. But one of the readings you mentioned was the short story by Ursula Le Guin, the premise of which is a utopian society that can only exist because there is one child being tortured underground, whom the citizens have to confront at an annual festival. Some people, upon seeing the child, decide to walk away from the utopia altogether and leave Omelas. The question we put to students, which they had to sit with for a bit, was, "Are the people who walk away from Omelas morally heroic, or moral cowards?"

It's not a far stretch to apply that to your own life: is it better to wipe my hands clean and not work on something that is compromised, tainted, or imperfect? Or is that actually the most cowardly thing I can do, because then the system remains intact and, [while] I can feel good about my own non-contribution to it, I still haven't done anything to change it?

[That was the reading for the last session of the course], and ultimately students left the room with that question in their mind as they [thought] about what they're going to do once they leave Stanford. What obligations do I have to work on a problem once I've seen it and internalized it? And how can I, in a world where a lot of things are compromised or imperfect, feel good about the contribution that I as a single individual am able to make?

Greg: Kathy, Jessica, have you encountered ‘Leaving Omelas’ moments of your own? What are some of the more difficult decisions you’ve seen people have to make, or some of the most difficult struggles that you’ve seen people grapple with in your time in this work on tech ethics?

Tackling the ethical struggles of tech


Kathy: Hilary, I don't know if Jeremy Weinstein had a whole lot of input on that part of the class, but for folks who stayed on through administration changes, it was a real struggle. I started a tech organization under the Obama administration, and we spanned the transition into the Trump administration. Oftentimes the technologies you build can be used for different purposes; technology is not agnostic. And people had to really grapple with staying on or leaving, what it meant to stay on at different agencies, how different tools were being used. Was it better to stay and have a say in the direction of where things were going, or to leave and say I'm done? But you see this in the tech companies, too, where there's a big fiasco, something happened. Do you leave and say, "Forget this, this company's irresponsible," or do you stay and say, "Well, even if it's irresponsible, it affects billions of people. So if I, a smart engineer and product person, or any other person in any other role, leave, is that better or is it worse?" I myself and many of my peers in industry and government have dealt with exactly [those questions].

Jessica: I wish I had something that profound, but when I was teaching tech ethics at Notre Dame, which poses its own unique set of challenges as a Catholic school, the students had always been told that technology is neutral and it's the people that use it that aren't. I think that's wrong. That was a hurdle we had to get over, but in one of the projects we did in the last tech ethics class I taught, the students helped me put together a top ten list of ethical dilemmas and policy issues in science and technology, which I've been putting out for seven or eight years now.

There are thousands of people around the world who use it now in classrooms, which is really rewarding. But having the students help me put it together, I think, illuminated for them the kinds of issues that people should care about [but] didn't.

[There was] a conversation we had maybe a year and a half ago now about predictive policing – something students thought was an objective good. If we can prevent crime from happening, why shouldn't we? They would get angry at first when you tried to discuss that this is not necessarily a common good, that there's a lot wrong with it. I think, and I hope, that they all came out of it better people. Not better people in that they agreed with me, but in that they could thoroughly think through [the issues].

Kathy: To add on to what I just shared, administration aside, our government technologies are broken. And when our social services are broken, it is a matter of human rights and ethical concern. And so [technologists debate]: do you say, "Forget this. This is a burning fire and I'm gonna run away because it's too hard. I'm just gonna go make a whole lot of money and work in tech instead"? Or do you stay, make the government a lot better, and have a much bigger impact there?

Jessica: My experience in D.C. was with a consortium of social science associations where we did some lobbying. Not any registered lobbying, but I spent a couple of years doing fly-in days on Capitol Hill, after Obama left office. I'm from Indiana, and so as a constituent, those are the offices that you go into. And I tend not to agree with a lot of Indiana politicians, but their staffs, these young people, they're so smart and thoughtful. We have done a good job, I think, with this most recent generation. These millennials that we keep ripping on, they're the most educated generation we've ever had. There's some hope in the government, I think.

Greg: Last question: how optimistic are you about the prospects for the human future?

Jessica: Ouch.

Kathy: I’m an optimist. I’m an optimist when it comes to technology, as well. I feel like I spend my time talking about the harms of technology. But generally, I’m an optimist. I remember walking through the museum in Washington D.C. and seeing how we’ve had problems for as long as humans have existed. Now we just have to figure out the [solutions to the] current problems.

Jessica: I laughed, just based on the conversation that we've been having, but I agree completely with Kathy. I write about the harms that technology can do, but I'm an optimist about the future. I wouldn't write for people if I didn't think something could be done to make things better.

Hilary: My optimism is existent, but fragile, and contingent upon us being morally awake and civically engaged.

Greg: I really am so grateful for the conversation.