
Why AI needs more social workers, with Columbia University’s Desmond Patton


Image Credits: LuckyStep48 / Getty Images

Sometimes it does seem the entire tech industry could use someone to talk to, like a good therapist or social worker. That might sound like an insult, but I mean it mostly earnestly: I am a chaplain who has spent 15 years talking with students, faculty, and other leaders at Harvard (and more recently MIT as well), mostly nonreligious and skeptical people like me, about their struggles to figure out what it means to build a meaningful career and a satisfying life, in a world full of insecurity, instability, and divisiveness of every kind.

In related news, I recently took a year-long paid sabbatical from my work at Harvard and MIT, to spend 2019-20 investigating the ethics of technology and business (including by writing this column at TechCrunch). I doubt it will shock you to hear I’ve encountered a lot of amoral behavior in tech, thus far.

A less expected and perhaps more profound finding, however, has been what the introspective founder Prayag Narula of LeadGenius tweeted at me recently: that behind the hubris and Machiavellianism one can find in tech companies is a constant struggle with anxiety and an abiding feeling of inadequacy among tech leaders.

In tech, just like at places like Harvard and MIT, people are stressed. They’re hurting, whether or not they even realize it.

So when Harvard’s Berkman Klein Center for Internet and Society recently posted an article whose headline began, “Why AI Needs Social Workers…,” it caught my eye.

The article, it turns out, was written by Columbia University Professor Desmond Patton. Patton is a public interest technologist and a pioneer in the use of social media and artificial intelligence in the study of gun violence. He is the founding director of Columbia’s SAFElab and an Associate Professor of Social Work, Sociology and Data Science at Columbia.

Desmond Patton. Image via Desmond Patton / Stern Strategy Group

A trained social worker and decorated social work scholar, Patton has also become a big name in AI circles in recent years. If Big Tech ever decided to hire a Chief Social Work Officer, he’d be a sought-after candidate.

It further turns out that Patton’s expertise — in online violence and its relationship to violent acts in the real world — has been all too “hot” a topic this past week, with the mass murderers in both El Paso, Texas, and Dayton, Ohio, having been deeply immersed in online worlds of hatred that seemingly helped lead to their violent acts.

Fortunately, we have Patton to help us understand all of these issues. Here is my conversation with him: on violence and trauma in tech, online and off, and how social workers could help; on deadly hip-hop beefs and “Internet Banging” (a term Patton coined); on hiring formerly gang-involved youth as “domain experts” to improve AI; on how to think about the likely growing phenomenon of white supremacists live-streaming barbaric acts; and on the economics of inclusion across tech.

Greg Epstein: How did you end up working in both social work and tech?

Desmond Patton: At the heart of my work is an interest in root causes of community-based violence, so I’ve always identified as a social worker that does violence-based research. [At the University of Chicago] my dissertation focused on how young African American men navigated violence in their community on the west side of the city while remaining active in their school environment.

[From that work] I learned more about the role of social media in their lives. This was around 2011, 2012, and one of the things that kept coming through in interviews with these young men was how social media was an important tool for navigating both safe and unsafe locations, but also an environment that allowed them to project a multitude of selves. To be a school self, to be a community self, to be who they really wanted to be, to try out new identities.

I went on to my first academic job at Michigan, where a news story came across my desk around two rappers in Chicago, beefing on Twitter: Chief Keef and Lil’ JoJo. Chief Keef was very well known at the time. Lil’ JoJo posted his address on Twitter in a public taunt, to say, ‘if you have a problem with me, meet me here.’

Three hours later he was killed at that location. So I was really interested in how and why that happened and to what extent the technology could help me understand triggering moments and elucidate any new reasons why violence is such a big problem in Chicago.

Epstein: Study of and thinking about online violence and its implications in the real world has, for lack of a less ironic word, blown up recently.

Patton: Right.

Epstein: And of course, not exactly in the way your work anticipated. Maybe once again here, in the coverage of violence, we have a situation where white people do something (shoot people in public) and it gets lots of attention, even though it has been studied in black communities for longer.

Patton: Right.

Epstein: Well, you’ve now spent longer than almost anybody studying the ways in which violence is emerging online. What’s your initial reaction to this dramatic series of incidents of white supremacist violence spilling over from online into the real world, and all the coverage it has received?

Patton: Probably the biggest finding I have is, it’s not really about violence. Violence is the last resort. [Violence] has drawn the most attention and gets us all very concerned and worried, but in the work I do in Chicago and even in the ways in which these mass murderers have communicated in their own lives, they often leave trace expressions of trauma, or hatred, or they are worried about economics.

There is typically some other issue or concern that precedes the more aggressive, more threatening comment, that may lead to a violent one-off action.

My social work education pushed me to focus on this idea that people aren’t inherently violent, across the board. There is a lot of help-seeking and digital narratives [of the deep] pain and trauma of humanity, yet we only care and only react when it is violent. We continuously miss an opportunity to execute real prevention work and to support people where and when they are expressing themselves this way online.

Image via Getty Images / Feodora Chiosea

Epstein: Let’s get into your argument that AI needs more social workers and other non-tech folks. There has been a lot of recent study of online content moderators and how to address the stress their work causes them.

Part of what I think you’re saying is, people with social work training and similar skills are trained to listen, to pay attention, to watch for signs of violence and trouble and trauma — and tech companies should flat out hire a lot more of them.

Patton: For the past four years I have worked with Kathy McKeown and Shih-Fu Chang, who are both computer scientists at Columbia, and we have been leveraging artificial intelligence, including machine learning and computer vision, to understand pathways in trauma and aggression and violence. What has been so important about the collaboration between social work and computer science is that we [social workers] are the blindspotters in this space.

We anticipate the challenges of interpreting language, culture, context and nuance. We think about the integration and the deployment of these AI systems in diverse communities and what might go wrong, what could go wrong. We hire domain experts, community members, some youth, some formerly gang-involved, some formerly incarcerated, who really bring in their experience to help us think about how we should make meaning of language and images and how we can create the best ethical tools.
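Patton doesn’t spell out the technical pipeline here, but a minimal, hypothetical sketch can make the collaboration concrete: posts labeled by domain experts feeding a simple text classifier. The category names, example posts and scikit-learn pipeline below are illustrative assumptions, not SAFElab’s actual system.

```python
# Hypothetical sketch only: expert-labeled posts feeding a simple text
# classifier. Categories, examples and pipeline are invented for illustration;
# this is not SAFElab's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for social media posts labeled by domain experts.
posts = [
    "miss you bro, can't believe you're gone",  # labeled as grief/loss
    "pull up if you got a problem",             # labeled as an aggressive taunt
    "studying for finals all week",             # labeled as everyday life
]
labels = ["loss", "aggression", "other"]

# Bag-of-words features plus a linear model; a real system would need far
# more data and the contextual labeling the interview describes.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)
print(model.predict(["we gon see about that tonight"]))
```

The point of the sketch is only that the model is entirely downstream of the labels: whatever context the annotators miss, the classifier misses too.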

What I’ve learned from this collaboration is how important that blindspotting social work approach is [to] creating more humane and ethical tech. When I come into AI spaces to talk about my work, I am almost always the only social worker in the space.

While I’m grateful for the attention, the things I say are what I learned as a masters student at the University of Michigan. These are things that my colleagues, my classmates, do in their everyday social work practice or research.

That is: treating people like human beings. Working with communities. Not making decisions for them. Letting ethics drive your engagement. Building rapport and trust with anyone who is going to touch or be affected by these algorithmic systems. These are all things you learn as a Master’s student. So [there is an] immense utility in leveraging a social work approach and social workers in every part of the creation and deployment of algorithmic systems.

A lot of people think you need to be a data scientist or a coder in order to be involved in AI. That is simply not the case. The reality is, AI can’t do anything without human effort or human intelligence.

AI is really bad at emotions. It cannot detect context. So we need people who have training and understanding of how emotions work and the role of context to ensure that we are creating the most ethical tools possible. So people trying to hire the best data scientists should also be trying to hire the best social worker.

[For example] with content moderation, we need to treat the way we label data like we would want [to approach a social work] group session. We have to be able to talk about the impact of what we see on our lives, to be able to take breaks, to be able to process these things.

[Like online moderators for Facebook or YouTube,] the people in my lab are also looking at very challenging and deeply emotional posts via text or image or an emoji. There’s a lot of pain in the world that’s expressed online.

The social work approach enables us to get through what we’re seeing. So there’s an important role for social workers in this space that’s just completely underutilized.

Image via Getty Images / PeterSnow

Epstein: I love how you described in your article that you were intimidated initially by working in a tech environment, but that you doubled down on your social work training and found an important and confident place there. I think a lot of people out there who are not traditional tech folks – like myself – can feel intimidated by computer experts who have been doing this their whole lives, and who literally speak a different language. But I believe they and their work are in dire need of people like yourself.

So you’re in a room, trying to design AI, and you’re a social worker. What is that like?

Patton: It starts from inception, with defining the problem and defining the process by which to identify the solution to the problem. It begins with, ‘who should be in the room making that decision?’

The social worker would understand the voices that are omitted from this conversation, particularly in the ‘AI for good’ space. And they would help to consider, and even uplift or empower, those voices to be heard in that space.

Then, it’s important to think about and anticipate how a technological solution operates and is activated in diverse communities. This is where the idea of blindspotting comes in. Issues of race, power, privilege and oppression need to be itemized and kept in laser focus in the creation and deployment of these algorithmic tools.

Some points to consider once you have a tool: the user experience around the tool. Social workers are always about meeting people where they are. So how do you create a tool that works for everyone?

These are some of the challenges with facial recognition. I guarantee you no social worker was ever part of the conversation when developing facial recognition software, because it did not consider race and how these tools materialize, particularly within the criminal justice system.

There are multiple challenges there: with not identifying people, and then with over-identifying people. And then, before the integration and deployment of these tools, how should we work with communities to think about the utility of these tools in their communities? I think there will be a chain of operation where social worker input, advice, consultation, and expertise can be leveraged in every part of the process of creating an algorithm.

Epstein: There’s so much one could say about facial recognition’s failings and how perhaps it should never have been invented at all, at least not in its present forms. But that’s for another time.

I’d love to hear a bit about the process of your hiring people who had formerly participated in gangs or who have been in some way involved with gang violence, to help you improve algorithms, and to help address biases and understand “context cues.”

Patton: We’ve worked with people, who we call domain experts, for years now. We partner with a local nonprofit organization that [serves] to identify young people interested in giving back to their communities, who want research experience, who are really trying to move forward in life.

In Chicago we partner with the YMCA and with a social worker there who was also previously gang-involved, to identify young people that want to get some research experience. In the early days essentially we would hire these young people as vendors, and then my lab coordinator at the time would go back and forth to Chicago to work with them on translating and interpreting language, culture and context in social media data. They would also have a mentor that lived in Chicago.

We would give them an iPad and they would be given a spreadsheet. Each week we’d ask them to do some labeling and annotation of the training data.

What became really clear to us with AI is that the data you’re training on can be inherently biased. But if you are one of the people who have an immense, hyper-contextual understanding of the language, that helps to reduce some of the bias by unpacking backstories and different ways of interpreting the text that you perhaps wouldn’t have if you didn’t have that domain expertise available to you. So that’s one way in which we work with domain experts; on that particular project we worked together for a year.
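Patton doesn’t say how disagreements between domain experts and outside annotators get handled, but one hedged way to picture the bias reduction he describes is to measure where two sets of labels diverge and route every disagreement back for discussion. The posts and label names below are invented purely for illustration.

```python
# Hypothetical sketch: compare an outside annotator's labels with a local
# domain expert's labels and flag disagreements for review. All data here
# is invented for illustration.
from sklearn.metrics import cohen_kappa_score

posts = [
    "free my brother, he coming home soon",
    "on my block we all we got",
    "you know what it is when I see you",
]
outside_labels = ["aggression", "aggression", "aggression"]
expert_labels = ["support", "aggression", "ambiguous"]

# Low agreement is a signal that context is being misread, not that the
# domain expert is "wrong."
print("Cohen's kappa:", cohen_kappa_score(outside_labels, expert_labels))

# Send every disagreement back to the annotation team for discussion.
for post, outside, expert in zip(posts, outside_labels, expert_labels):
    if outside != expert:
        print(f"REVIEW: {post!r} -> outside={outside}, expert={expert}")
```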

Most recently, we’ve been working to develop a set of immersive simulations with two labs at MIT, where we’re trying to create web-based practice spaces to help young people think about their digital footprint and, on the other end, force key stakeholders like educators, attorneys and judges to think critically about how they’re interpreting social media.

There we partner with an organization in Brooklyn where they already had groups [and] were paying people to do internships. So they were able to pay the young people, and then every Thursday my social work interns would go to Brownsville and do what were essentially co-design workshops with them, so that they could be a part of designing the immersive simulations. By the end of the six-month program, they had co-developed two of our new simulations that we’re about to start testing in the next few months.

So those are two examples where we’re incorporating young people and, again, they bring an understanding of their world that we would just never have. They help us think about what would happen if this AI system were to identify trauma: how would we think about that experience in a way that’s actually helpful for the community?

Image via Getty Images / Rawpixel

Epstein: Thank you for helping me to think about this in terms of domain experts. It’s a wonderful point to recognize that there are a lot of different domains in this world, each with its own biases and its own culture. You’re working with people who have expertise in domains that are important to understand.

Patton: That’s the key: to work with young people about whom there’s already a kind of negative narrative, about who they are and what their contributions can be, because they’re young and because they’re black or Latino. Privileging their life experience as the expertise that we have to have in order to get this right really flips that narrative on its head.

Epstein: It seems almost like a more tech-savvy and maybe a more scholarly version of some of the work done by somebody I think I can even call a friend: the Reverend Jeffrey Brown, here in Boston, if you’ve ever heard of him; Chadwick Boseman (Black Panther) is slated to play Rev. Brown in a movie for Paramount.

Patton: Oh yeah. Absolutely.

Epstein: I’ve worked with Brown a number of times and really admire what he and several colleagues in Boston’s Ten-Point Coalition did: walking the streets in the most violent areas of Boston and just talking to the youth who were involved in gangs and violence and getting to know them better, which then enabled them to much more effectively broker peace treaties that dramatically reduced violence and homicide. They were able to get a lot of the youth involved in much more positive activities by offering them good jobs, as well.

I truly love what he does and he ended up reaching out to me because, although he’s a minister and a lot of his colleagues are Christian ministers as well, their experience on the streets was that young people did not necessarily want to hear from a religious perspective all the time. He wanted my perspective on non-religious values to kind of help round out his work, which was super fascinating to me.

Patton: Yeah. That’s great.

Epstein: Anyway, I digress. I also want to ask you a little about your recent paper on parenting in a digital age. You and other researchers looked at online spaces around cyber-bullying and adolescent risk-taking, and you found that parental warmth is consistently associated with lower involvement in cyber-bullying, as both victims and perpetrators.

Parental emotional warmth, you found, was also a best practice for encouraging teens to disclose online activity: basically get them talking, as an alternative to them getting more immersed in the violence. Any parent would want to read about that. What do you think the readers of TechCrunch should know about it?

Patton: Sure. I wasn’t the lead author on that paper, so I can’t speak to everything, but what it boils down to is, having caring parental figures in your life does wonders for a host of socioeconomic factors, including the ways in which young people engage with technology. I think when parents are involved, caring, and ask questions about your life, they become an avenue to process things being discussed on social media.

Many parents have no idea who their kids are online and if you tell them the identity their kid has assumed online, they typically are shocked. They think their kid is doing one thing online and they’re doing a completely different thing.

Having a strong relationship with your kid that includes conversations around their digital life [as well as] their physical life is a strength. That’s what we learned from that paper.

Image via Getty Images / Lisitsa

Epstein: What it also brings up for me is, if close family bonds are a huge factor in decreasing violence and risk-taking, and in encouraging a whole host of prosocial traits, and if you have communities in this country, for example, whose family structures have been systematically attacked for hundreds of years…

Patton: Right.

Epstein: Where family bonds have been systematically destroyed or abused by those in power for hundreds of years, well, that’s just another window into understanding some of the inequality in this country today.

Patton: Exactly.

Epstein: You were also interviewed for a piece in the Atlantic on the desire to livestream violence, where you talk about how young people have forever made dumb decisions and mistakes, but when this is embedded in a violent culture, and when that violence is just tolerated and not addressed, that’s where the real problem lies.

Given what’s going on in our culture right now, do you fear that live-streaming violence is going to be a growing phenomenon?

Patton: We’re going to continuously advance our technology and live-streaming and virtual reality are going to become very normative in our society. It’s interesting to wrestle with why young people post and engage in social media and live-streaming the way they do.

Being known and being connected to peers is one of the most important aspects of youth development. Having that peer-to-peer support, but also peer-to-peer affirmation, is extremely important and to some extent may override concern about the consequences of your behavior online or offline.

So while I don’t empirically know why young people do the things they do, I think we have enough historical and sociological theory and evidence to suggest that there’s a pull to being viral, to being celebrated, to having celebrity, that is interesting and exciting for everyone. Yet particularly for young people, the ability to consider consequences is not fully formed yet, and so it becomes a real issue.

Our education system hasn’t caught up to digital practices and behaviors. So everyone is trying to learn to code and learn about AI and take computer science classes, and that’s all fine and dandy, but we really haven’t figured out: what does it mean to be a digital citizen? What does it mean to be a citizen of a digital world? We need to integrate that into our early education experiences.

Epstein: Let me bring this point back to what we discussed earlier from the paper you co-authored about parental warmth. For kids who don’t have access to as much warmth and emotional support as they should or could, there’s a universal human drive to go out and seek love and attention and belonging.

Patton: Yep.

Epstein: And as you say, that drive is going to override safety and a number of other concerns if it’s not met in healthy ways.

Patton: Right.

Epstein: And even the best families can’t offer warmth all the time.

Patton: Right.

Epstein: And so these online worlds we’re creating essentially put the distribution of esteem and belonging, which are even more important than basic safety to an adolescent brain, in the hands of a technological tool that’s really almost addictive. That’s very, very dangerous, isn’t it?

Patton: I agree.

Image via Getty Images / vasabii

Epstein: Men who are leaders in the tech industry will often explain to me, well you understand that when people like us say or do dumb things, when we come across as privileged or inconsiderate, when we’re relying on philosophies we shouldn’t necessarily rely on, it’s because we ourselves are suffering. We’re overworked, we’re insecure, we’re competitive, et cetera.

From your perspective as a social worker and a professor, how do you think people are doing, overall, in the tech field? Do they need social workers just to talk to about their own issues? What do you think they’re suffering from?

Patton: One thing we have been talking a lot about at Columbia [School of] Social Work is Power, Race, Oppression and Privilege (PROP). We have a set of courses to help social workers think about these issues and how they work in communities. Tech could clearly benefit from what we call PROP, or a “decolonizing lens.” The scary part for them is, it might mean they lose money, because the things that they [would do differently from a “PROP” perspective] may not lend themselves to more money.

But they have to really contend with whether ethics are important, whether morality is important, or whether capitalism is more important than ethics and morality. I’m in no place to help them make the decision, but if you’re going to talk about AI for all, AI for good, and your practices do not humanize everyone, then you really have to stop and think about what you’re doing.

I’ve been lucky. I do some consulting at Facebook and I get to work with colleagues at Facebook that are really thinking about these issues. I have really good conversations with them about how to do things better. They’re listening. They want to do something different. The challenge is how do you move those conversations up the ladder?

What I’m really saying is, the top-level really needs to engage and wrestle with some of these tensions.

Epstein: Thank you. I believe it was Frederick Douglass who said, “Power concedes nothing without a demand. It never did and it never will.” What I’m hearing from you is that tech billions and VC billions will not concede without a demand to do better, either. That basically a lot of the social problems we see in tech today would be fixable, but not in a way that would be good for the short-term bottom line.

Last question: how optimistic are you about our shared human future?

Patton: I’m very hopeful, and the cliche is right: young people are our future. The young folks I work with are thoughtful, caring, engaged, and way more open-minded than earlier generations. I’m excited to see how they will move us forward.
