NYC’s anti-bias law for hiring algorithms goes into effect


After months of delays, New York City today began enforcing a law that requires employers using algorithms to recruit, hire or promote employees to submit those algorithms for an independent audit — and make the results public. The first of its kind in the country, the legislation — New York City Local Law 144 — also mandates that companies using these types of algorithms make disclosures to employees or job candidates.

At a minimum, the reports companies must make public have to list the algorithms they’re using as well as an “average score” candidates of different races, ethnicities and genders are likely to receive from those algorithms — in the form of a score, classification or recommendation. They must also list the algorithms’ “impact ratios,” which the law defines as the average algorithm-given score of all people in a specific category (e.g. Black male candidates) divided by the average score of people in the highest-scoring category.
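The impact-ratio arithmetic the law describes is simple enough to sketch in a few lines. The snippet below is an illustration only — the category names and scores are hypothetical, not drawn from any real audit:

```python
def impact_ratios(scores_by_category):
    """Compute each category's impact ratio: its average score
    divided by the average score of the highest-scoring category."""
    averages = {
        category: sum(scores) / len(scores)
        for category, scores in scores_by_category.items()
    }
    top = max(averages.values())  # highest-scoring category's average
    return {category: avg / top for category, avg in averages.items()}

# Hypothetical scores on a 0-100 scale, grouped by demographic category
scores = {
    "group_a": [80, 90, 85],  # average 85
    "group_b": [60, 70, 65],  # average 65
}
print(impact_ratios(scores))  # group_a → 1.0, group_b ≈ 0.76
```

The highest-scoring category always lands at 1.0, so a low ratio for another group is a quick flag that the algorithm scores that group substantially worse on average.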

Companies found not to be in compliance will face penalties of $375 for a first violation, $1,350 for a second violation and $1,500 for a third and any subsequent violations. Each day a company uses an algorithm in noncompliance with the law, it’ll constitute a separate violation — as will failure to provide sufficient disclosure.

Importantly, the scope of Local Law 144, which was approved by the City Council and will be enforced by the NYC Department of Consumer and Worker Protection, extends beyond NYC-based workers. As long as a person’s performing or applying for a job in the city, they’re eligible for protections under the new law.

Many see it as overdue. Khyati Sundaram, the CEO of Applied, a recruitment tech vendor, pointed out that recruitment AI in particular has the potential to amplify existing biases — worsening both employment and pay gaps in the process.

“Employers should avoid the use of AI to independently score or rank candidates,” Sundaram told TechCrunch via email. “We’re not yet at a place where algorithms can or should be trusted to make these decisions on their own without mirroring and perpetuating biases that already exist in the world of work.”

One needn’t look far for evidence of bias seeping into hiring algorithms. Amazon scrapped a recruiting engine in 2018 after it was found to discriminate against women candidates. And a 2019 academic study showed AI-enabled anti-Black bias in recruiting.

Elsewhere, algorithms have been found to assign job candidates different scores based on criteria like whether they wear glasses or a headscarf; penalize applicants for having a Black-sounding name, mentioning a women’s college, or submitting their résumé using certain file types; and disadvantage people who have a physical disability that limits their ability to interact with a keyboard.

The biases can run deep. An October 2022 study from the University of Cambridge argues that AI companies’ claims to offer objective, meritocratic assessments are false, positing that anti-bias measures that strip out gender and race are ineffective because the notion of an ideal employee has historically been shaped by gender and race.

But the risks aren’t slowing adoption. Nearly one in four organizations already leverage AI to support their hiring processes, according to a February 2022 survey from the Society for Human Resource Management. The percentage is even higher — 42% — among employers with 5,000 or more employees.

So what forms of algorithms are employers using, exactly? It varies. Some of the more common are text analyzers that sort résumés and cover letters based on keywords. But there are also chatbots that conduct online interviews to screen out applicants with certain traits, and interviewing software designed to predict a candidate’s problem solving skills, aptitudes and “cultural fit” from their speech patterns and facial expressions.

The range of hiring and recruitment algorithms is so vast, in fact, that some organizations don’t believe Local Law 144 goes far enough.

The NYCLU, the New York branch of the American Civil Liberties Union, asserts that the law falls “far short” of providing protections for candidates and workers. Daniel Schwarz, senior privacy and technology strategist at the NYCLU, notes in a policy memo that Local Law 144 could, as written, be understood to only cover a subset of hiring algorithms — for example excluding tools that transcribe text from video and audio interviews. (Given that speech recognition tools have a well-known bias problem, that’s obviously problematic.)

“The … proposed rules [must be strengthened to] ensure broad coverage of [hiring algorithms], expand the bias audit requirements and provide transparency and meaningful notice to affected people in order to ensure that [algorithms] don’t operate to digitally circumvent New York City’s laws against discrimination,” Schwarz wrote. “Candidates and workers should not need to worry about being screened by a discriminatory algorithm.”

Parallel to this, the industry is embarking on preliminary efforts to self-regulate.

December 2021 saw the launch of the Data & Trust Alliance, which aims to develop an evaluation and scoring system for AI to detect and combat algorithmic bias, particularly bias in hiring. The group at one point counted CVS Health, Deloitte, General Motors, Humana, IBM, Mastercard, Meta, Nike and Walmart among its members, and garnered significant press coverage.

Unsurprisingly, Sundaram is in favor of this approach.

“Rather than hoping regulators catch up and curb the worst excesses of recruitment AI, it’s down to employers to be vigilant and exercise caution when using AI in hiring processes,” she said. “AI is evolving more rapidly than laws can be passed to regulate its use. Laws that are eventually passed — New York City’s included — are likely to be hugely complicated for this reason. This will leave companies at risk of misinterpreting or overlooking various legal intricacies and, in turn, see marginalized candidates continue to be overlooked for roles.”

Of course, many would argue that having companies develop a certification system for the very AI products they’re using or developing is problematic from the outset.

While imperfect in certain areas, according to critics, Local Law 144 does require that audits be conducted by independent entities that haven’t been involved in using, developing or distributing the algorithm they’re testing and that don’t have a relationship with the company submitting the algorithm for testing.

Will Local Law 144 effect change, ultimately? It’s too early to tell. But certainly, the success — or failure — of its implementation will affect laws to come elsewhere. As noted in a recent piece for NerdWallet, Washington, D.C., is considering a rule that would hold employers accountable for preventing bias in automated decision-making algorithms. Two bills in California that aim to regulate AI in hiring were introduced within the last few years. And in late December, a bill was introduced in New Jersey that would regulate the use of AI in hiring decisions to minimize discrimination.
