
Women in AI: Rashida Richardson, senior counsel at Mastercard focusing on AI and privacy


Illustration of Rashida Richardson. Image Credits: Bryce Durbin / TechCrunch

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Rashida Richardson is senior counsel at Mastercard, where she focuses on legal issues relating to privacy and data protection in addition to AI.

Formerly the director of policy research at the AI Now Institute, a research institute studying the social implications of AI, and a senior policy advisor for data and democracy at the White House Office of Science and Technology Policy, Richardson has been an assistant professor of law and political science at Northeastern University since 2021. There, she specializes in race and emerging technologies.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

My background is as a civil rights attorney, where I worked on a range of issues including privacy, surveillance, school desegregation, fair housing and criminal justice reform. While working on these issues, I witnessed the early stages of government adoption and experimentation with AI-based technologies. In some cases, the risks and concerns were apparent, and I helped lead a number of technology policy efforts in New York State and City to create greater oversight, evaluation or other safeguards. In other cases, I was inherently skeptical of the benefits or efficacy claims of AI-related solutions, especially those marketed to solve or mitigate structural issues like school desegregation or fair housing.

My prior experience also made me hyper-aware of existing policy and regulatory gaps. I quickly noticed that there were few people in the AI space with my background and experience, or offering the analysis and potential interventions I was developing in my policy advocacy and academic work. So I realized this was a field and space where I could make meaningful contributions and also build on my prior experience in unique ways.

I decided to focus both my legal practice and academic work on AI, specifically the policy and legal issues concerning its development and use.

What work are you most proud of in the AI field?

I’m happy that the issue is finally receiving more attention from all stakeholders, but especially policymakers. There’s a long history in the United States of the law playing catch-up with, or never adequately addressing, technology policy issues, and five or six years ago it felt like that might be the fate of AI. I remember engaging with policymakers in formal settings like U.S. Senate hearings and educational forums, and most of them treated the issue as arcane or as something that didn’t require urgency despite the rapid adoption of AI across sectors. Yet in the past year or so, there’s been a significant shift: AI is a constant feature of public discourse, and policymakers better appreciate the stakes and the need for informed action. I also think stakeholders across all sectors, including industry, recognize that AI poses unique benefits and risks that may not be resolved through conventional practices, so there’s more acknowledgement of — or at least appreciation for — the need for policy interventions.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a Black woman, I’m used to being a minority in many spaces, and while the AI and tech industries are extremely homogeneous fields, they’re not novel or that different from other fields of immense power and wealth, like finance and the legal profession. So I think my prior work and lived experience helped prepare me for this industry, because I’m hyper-aware of preconceptions I may have to overcome and challenging dynamics I’ll likely encounter. I rely on my experience to navigate, because I have a unique background and perspective, having worked on AI across all sectors — academia, industry, government and civil society.

What are some issues AI users should be aware of?

Two key issues AI users should be aware of are: (1) the capabilities and limitations of different AI applications and models, and (2) the great uncertainty regarding the ability of current and prospective laws to resolve conflicts or address certain concerns regarding AI use.

On the first point, there’s an imbalance in public discourse and understanding regarding the benefits and potential of AI applications and their actual capabilities and limitations. This issue is compounded by the fact that AI users may not appreciate the difference between AI applications and models. Public awareness of AI grew with the release of ChatGPT and other commercially available generative AI systems, but those AI models are distinct from other types of AI models that consumers have engaged with for years, like recommendation systems. When the conversation about AI is muddled — where the technology is treated as monolithic — it tends to distort public understanding of what each type of application or model can actually do, and the risks associated with their limitations or shortcomings.

On the second point, law and policy regarding AI development and use is evolving. While there are a variety of laws (e.g. civil rights, consumer protection, competition, fair lending) that already apply to AI use, we’re in the early stages of seeing how these laws will be enforced and interpreted. We’re also in the early stages of policy development that’s specifically tailored for AI — but what I’ve noticed both from legal practice and my research is that there are areas that remain unresolved by this legal patchwork and will only be resolved when there’s more litigation involving AI development and use. Generally, I don’t think there’s great understanding of the current status of the law and AI, and how legal uncertainty regarding key issues like liability can mean that certain risks, harms and disputes may remain unsettled until years of litigation between businesses or between regulators and companies produce legal precedent that may provide some clarity.


What is the best way to responsibly build AI?

The challenge with building AI responsibly is that many of the underlying pillars of responsible AI, such as fairness and safety, are based on normative values, and there are no shared definitions or understandings of these concepts. So one could presumably act responsibly and still cause harm, or one could act maliciously and rely on the absence of shared norms to claim good-faith action. Until there are global standards or some shared framework for what it means to responsibly build AI, the best way to pursue this goal is to have clear principles, policies, guidance and standards for responsible AI development and use that are enforced through internal oversight, benchmarking and other governance practices.

How can investors better push for responsible AI?

Investors can do a better job of defining, or at least clarifying, what constitutes responsible AI development or use, and of taking action when AI actors’ practices do not align. Currently, “responsible” and “trustworthy” AI are effectively marketing terms because there are no clear standards for evaluating AI actors’ practices. While some nascent regulations, like the EU AI Act, will establish governance and oversight requirements, there are still areas where investors can incentivize AI actors to develop better practices that center human values or societal good. However, if investors are unwilling to act when there is misalignment or evidence of bad actors, there will be little incentive to adjust behavior or practices.
