Fear and liability in algorithmic hiring 

It would be a foolish U.S. business that tried to sell chlorine-washed chicken in Europe — a region where very different food standards apply. But in the high-tech world of algorithmically assisted hiring, it’s a different story.

A number of startups are selling data-driven tech tools designed to comply with U.S. equality laws into the European Union, where their specific flavor of anti-discrimination compliance may be as legally meaningless as the marketing glitter they’re sprinkling — with eye-catching (but unquantifiable) claims of “fairness metrics” and “bias beating” AIs.

First up, if your business is trying to crystal-ball-gaze something as difficult to quantify (let alone predict) as “job fit” and workplace performance, where each individual hire will almost certainly be folded into (and have their performance shaped by) a dynamic mix of other individuals commonly referred to as “a team” — and you’re going about this job matchmaking “astrology” by working off of data sets that are absolutely not representative of our colorful, complex, messy human reality — then the most pressing question is probably, “what are you actually selling?”

Snake oil in software form? Automation of something math won’t ever be able to “fix?” An impossibly reductionist dream of friction-free recruitment?

Deep down in the small print, does your USP sum to claiming to do the least possible damage? And doesn’t that sound, well, kind of awkward?

An automated hiring system may mean the difference between someone getting a job or not getting one, so there are clearly substantial individual impacts flowing from the use of such tools. An application triage system may even determine if a person’s CV ever passes in front of human eyeballs. And if an AI decides to exclude someone from the pool of possible hires, it’s automating zero chance of employment. The computer already said “no.”

A recent Vice article reported on the absurd tricks many jobseekers are using to beat the bots and get their resumé considered, such as keyword-stuffing or paying a third-party service to optimize their resumé for the screening software.

Meanwhile, it’s becoming ever clearer how good AI is at reflecting and repeating problematic patterns it finds in data. So the stakes are high indeed. And algorithms that pick up ingrained societal prejudices from data are a liability to the businesses using them, as well as to the individuals they’re biased against.

Of course, no one would claim manual hiring processes are flawless or bias free. But automation of imperfection risks systematizing unfairness and copy-pasting bias. And damage scaled looks quantifiably worse than locally containable harm — where there may at least be a better chance of fixing the problem versus trying to crack open someone else’s proprietary algorithmic black box.

It’s instructive that under critical scrutiny, startups in the automated hiring assistance game can be very quick to apply a thick layer of caveats about what their systems actually do. Under the gloss, there can be an awful lot of floss.

“The problem with [a ‘fairness’ metric that’s known as ‘equalized odds at training’] is that it assumes that you have equal representation in your training data set — which frankly we do not have,” said Lewis Baker, chief data scientist at algo hiring startup Pymetrics, during an AI ethics workshop at a recent industry conference.
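
For the unfamiliar, “equalized odds” asks whether a model’s error rates (its true positive and false positive rates) are the same across demographic groups, and Baker’s objection is that those per-group rates can’t be estimated reliably when one group barely appears in the training data. A minimal sketch of the measurement, using invented data and group labels rather than anything from Pymetrics:

```python
import numpy as np

def equalized_odds_rates(y_true, y_pred, groups):
    """Per-group true/false positive rates; equalized odds wants them equal.

    With unequal representation, the smaller group's rates are estimated
    from very few examples, so any comparison is noisy or undefined.
    """
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        pos = np.sum((y_true == 1) & m)
        neg = np.sum((y_true == 0) & m)
        tp = np.sum((y_pred == 1) & (y_true == 1) & m)
        fp = np.sum((y_pred == 1) & (y_true == 0) & m)
        rates[g] = {
            "tpr": tp / pos if pos else float("nan"),
            "fpr": fp / neg if neg else float("nan"),
            "n": int(m.sum()),
        }
    return rates

# Toy example: group "b" has only two examples, so its rates (and any
# "equalized odds" check built on them) mean very little.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["a"] * 10 + ["b"] * 2)
print(equalized_odds_rates(y_true, y_pred, groups))
```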

At the same time, traditional hiring processes are expensive and roundly hated (“the status quo sucks,” as Baker put it; aka: “hiring is awful”), while automation and data-fed modeling is in vogue and increasingly cheap to do, and regulation around equality and employment is a complex and at times fuzzy patchwork that does not set universal rules for how algorithmic hiring systems should be used.

So, yes, startups have spied a business opportunity — to automate pieces of the recruitment puzzle, including, in some cases, applying AI to grease a stubbornly sticky pipe.

Pymetrics’ Baker said he left academia “because I was sufficiently convinced there was a need in the market to do something good,” saying at the top of his talk that the company is “dedicated to making fair and precise hiring recommendations.” But as soon as he got into the details, the caveats came thick and fast.

In the workshop setting he had this to say of “de-biasing”: “I hate the term, it’s marketing — but we have to reduce bias in our models in some way.” Just not by “demographic norming,” which he noted is explicitly illegal in the U.S. Instead he said the bias reduction path Pymetrics is currently using is “a combination of feature selection and optimization in order to find… ‘the most predictive, least discriminatory alternative’ ” — with Baker directly quoting the letter of “the legal language” and ending with the final qualification, “that is what we’re aiming for.”

In reality, hiring decisions — as amply illustrated by all that verbal hedging — are legally complicated. This is because there are many relevant national laws intersecting and wrapping this area, including equality law, employment law, data protection and sometimes restrictions on automated processing itself.

It’s certainly a lot more legally involved than selling soap online or serving digital status updates to a global “community” of eyeballs (which has its own knotty liability issues too).

All of which raises the question of what it means for automated hiring systems (AHSs) to be making claims about “solving” discrimination — as a bunch of these startups imply in their marketing.

Academics behind a research paper presented at the ACM FAT* conference in Barcelona last month, where Baker was speaking, set out to address exactly that question — conducting an analysis of three AHSs that are selling into the U.K. market: Applied, HireVue and Pymetrics (the latter two being developed in the U.S.) — to highlight assumptions and limitations related to the design and implementation of the tools in a U.K. socio-legal context.

“Identify the traits of your top-performing employees and hire people like them, but without the discriminatory bias of traditional recruiting,” was how we reported Pymetrics’ pitch for a games-based job capacity testing platform back in 2017, when it announced $8 million in new funding for what it touted as AI-enabled “fairer” hiring.

In a nutshell, employers get their best staff to play Pymetrics’ games and the platform generates a custom model to compare against and assess potential future hires using a proprietary “fit-to-role” score, which it tests for its version of fair treatment of protected groups in a bid to mitigate bias. Specifically, it aims to comply with U.S. employment equality regulations, which cap how much lower the selection rate for the lowest-passing group can be relative to the highest-passing group, a threshold colloquially referred to as the four-fifths rule.
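
For anyone unfamiliar with it, the four-fifths rule is a simple ratio test: the selection rate of the lowest-passing group should be at least 80% of the rate of the highest-passing group. A rough illustration of the arithmetic (hypothetical numbers, not Pymetrics’ implementation):

```python
def four_fifths_check(selected, applicants):
    """U.S. four-fifths (80%) rule of thumb for adverse impact.

    `selected` and `applicants` map group label -> counts. Passes if the
    lowest selection rate is at least 80% of the highest.
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= 0.8

# Example: 48 of 120 group A applicants selected (40%) vs. 27 of 90
# group B applicants (30%). Ratio = 0.30 / 0.40 = 0.75, so this fails.
print(four_fifths_check({"A": 48, "B": 27}, {"A": 120, "B": 90}))
```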

HireVue, meanwhile, a relative veteran of the digital recruitment space — which talks at length on its website about “beating bias with AI” — offers pre-screening tools for employers to whittle down the list of applicants for a job, via video interviews and (also) games. Its system extracts a range of performance indicators that have (somehow) been mapped to professional roles. It uses demographic parity to measure bias and tries to reduce unfairness by removing indicators with a known impact on protected groups, retraining and retesting models until the outcome passes the aforementioned U.S. equality bar.
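
Demographic parity, by contrast, looks only at whether positive screening outcomes land at roughly the same rate in each group, irrespective of any measure of job performance. A minimal, purely illustrative sketch of that measurement (not HireVue’s code):

```python
def demographic_parity_gap(decisions, groups):
    """Gap between the highest and lowest group pass rates (0 = parity).

    `decisions` are 1/0 screening outcomes; `groups` holds each
    candidate's demographic label.
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

# Toy example: group "x" passes 3 of 5 candidates, group "y" 1 of 5,
# giving a parity gap of 0.4.
print(demographic_parity_gap([1, 1, 1, 0, 0, 1, 0, 0, 0, 0],
                             ["x"] * 5 + ["y"] * 5))
```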

Fresher-faced U.K. startup Applied, whose seed funding we covered back in 2018, also touts “fairer” hiring. Its approach involves deconstructing the CV and replacing it with an online testing platform focused on assessing “job-relevant” skills, one that relies on candidate scoring rather than AI-based candidate-to-job matching — the claim being that it eschews bias by steering employers away from processes that skew toward candidates who fit the historic profile of past hires. Applied tells us it doesn’t measure for fit (“for reasons that it often looks like affinity bias”) — but says it does “help teams identify what skills matter to the role and how to measure for mission fit and culture add.”

It’s semi-automating discrimination monitoring, rather than fully taking over candidate assessment — which also sets it apart from the other two AHSs the paper considered.

One key takeaway from the researchers’ analysis is that the two U.S. products are applying U.S. regulatory standards of bias mitigation in an employment equality context, even though the Guidelines on Employee Selection Procedures set by the U.S. Equal Employment Opportunity Commission (EEOC) have no standing in EU (or U.K.) law.

At the same time, there’s a plethora of U.K.-specific employment and equality rules such tools need to comply with when selling into the market, as well as EU data protection law, which has been transposed into U.K. law. The latter provides citizens with certain rights around automated processing where there are legal or similarly significant effects (and hiring would seem to be a pretty strong case for significant impacts).

Lilian Edwards, a professor of internet law at Newcastle University and a co-author of the aforementioned paper, went so far as to suggest during a Q&A at the conference that under the pan-EU General Data Protection Regulation (GDPR) there’s a strong case that using automated hiring tools on their own, without any possibility of a human decision/review, is illegal, period. Edwards asserted that employers would need to rely on the consent of the user to legally process their data in this scenario, yet, in the context of hiring, such consent likely could not meet the GDPR standard of being freely given, given the obvious power imbalance between a job applicant and a potential employer.

Her observation earned the spontaneous applause of a cross-disciplinary research community dedicated to interrogating issues of algorithmic fairness, accountability and transparency.

“If these are actually being used as solely automated hiring systems, if we just imagine that — even for triage — that’s still a decision then there ought to be a right to say ‘no I won’t consent to this, I want there to be a human that makes this decision’,” argued Edwards in a follow-up interview. “And that takes you to two more thoughts: One is, if you ask for a human, do you just never get the job, right? Which would dissuade people from asking. But the other, which is more legalistic… is: Theoretically the only grounds for making a solely automated decision involving personal data in the EU are either explicit consent or that there’s a public sector interest… [which in a commercial hiring tools scenario there isn’t. So] then it has to be consent.”

“The general line that’s been taken by the A29 Working Party [which set guidance for EU data protection law] is that consent in an employment scenario cannot be effective,” she also noted. “The employers have got to use something else — like legitimate interests. Which they can. But you can’t use legitimate interests in this case because you’ve only got consent… So if it’s only consent and consent isn’t valid in an employment context, then these systems are illegal.”

So that seems to be one key constraint on algorithmic hiring systems operating in the EU: automation can’t be the only option for accessing job vacancies. In a European setting, such systems have to be semi-automated (sAHSs) in the way they’re applied.

The other issue is the lack of a handily regulated statistical measure in the U.K. that’s attached to compliance with equality standards. Because while national law against discrimination of workers with protected characteristics might intend a similar impact as U.S. law, there’s no set mathematical “rule of thumb” equivalent to the U.S.’ four-fifths rule for proving compliance — it’s “much vaguer,” per Edwards.

“In one of the big [U.K.] cases it does say that you can do this in a mathematical way — you can ‘express it as a proportion’, is what they say… But it does not say four-fifths,” she said. “It just says that the difference has to be significant and that it depends on context — including things like the size of the pool and the numbers behind the proportions. So this might be very different if you had three workers and two of them were black, as opposed to having 3,000 workers. Basically, ours is much vaguer. And my guess is that there is almost no way to be shown to implement that within software. Whether it’s rule-based or machine learning.

“To get big enough data from case law — which is what you’d need — to show what’s a proportionate number, what’s a significant difference — would require you to have surely a few hundred or thousand cases. And you haven’t got them.”
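
Her point about pool size is easy to see with a standard significance test: the same proportional gap that proves nothing among a handful of workers becomes overwhelming evidence across thousands. A quick illustration using Fisher’s exact test on invented numbers:

```python
from scipy.stats import fisher_exact

# The same selection-rate gap (30% vs. 45%) at two very different
# pool sizes. Rows: selected / not selected; columns: group A / group B.
small = [[3, 9], [7, 11]]          # 3 of 10 vs. 9 of 20 selected
large = [[300, 900], [700, 1100]]  # 300 of 1,000 vs. 900 of 2,000

for label, table in [("small pool", small), ("large pool", large)]:
    _, p = fisher_exact(table)
    print(label, "p =", p)
# The small pool shows no statistically significant difference; the
# large pool shows an extremely significant one, for identical rates.
```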

The inspiration for the paper was an earlier piece of research by another group of academics, which also sought to understand how a number of U.S.-based AHSs function and mitigate bias — examining the claims being made for proprietary systems via their marketing/public materials but doing so from a U.S. socio-legal perspective.

Those researchers found various products had built the U.S. four-fifths rule into their systems. “Obviously that’s an easy thing to do because it’s a mathematical calculation,” noted Edwards. “It’s a lot easier than saying ‘did you act reasonably, or did you act with due diligence?’ How the hell do you code any of these things?”

When she and her co-authors applied a similar analysis to AHSs selling into the U.K. market — now looking at them from a European socio-legal perspective — they found two of the three companies were still using the exact same rule of thumb, despite operating under different legal and regulatory regimes.

“What we found of course — because of the two that came from America — was it was the same stuff that was used in America! In other words they were using the four-fifths rule to demonstrate compliance with the law. Except that that demonstrated compliance with U.S. law! Not U.K. law,” said Edwards.

“If you were selling chlorinated chicken into this country because it met American health and safety rules and it didn’t meet ours then there would be people checking at the airport and if it was reported you would be fined a lot of money and your shops would be closed down and there would be certification and kite-marking,” she went on. “But this — because it’s buried in software that no one can see the innards of unless they go and plough through this impenetrable PR blurb — no one knows or cares! I was just appalled, really. That they thought it was okay to comply with U.S. law to comply with U.K. law.”

Asked about this discrepancy, Pymetrics sent us this statement:

pymetrics proactively tests every model we build for fairness, which is only possible because we hold ourselves to a quantitative definition of discrimination and never deploy technology that violates this rule. While other jurisdictions have not necessarily settled on a firm threshold, like the 4-5ths rule mentioned by the authors, the underlying concept of proving discrimination statistically is not unique to the U.S.

There is nothing inherent in the design of pymetrics’ technology that requires the U.S. standard to be used in all contexts, but we have found that the rationale has translated easily across diverse clients.  In working with many international employers, pymetrics regularly explain our default to the 4-5ths rule to companies around the world, because we believe transparency is critical for building trustworthy automated HR solutions. If employers or governments were to develop their own statistical definitions of discrimination, we could easily incorporate these into our systems. In fact, pymetrics’ clients often ask for a more rigorous standard of fairness than what U.S. law requires.

The company also objected to the researchers’ use of what it describes as “outdated material” about its methodology — “such as a YouTube video of a presentation from several years ago” — for conducting their analysis of publicly available information about the product, saying it meant the paper makes “several incorrect claims about our process and methodology.”

That said, the challenge of studying AHSs externally — in order to properly understand and hold to account proprietary systems that impact individuals and make loud claims around fairness and bias — was another salient point the researchers were making.

“As we take issues related to data ethics and privacy very seriously, pymetrics is in full compliance with all regulations in the EU, as well as in every region where we operate,” the startup also told us, adding: “We welcome frank conversations about our operating principles to those who have further questions.”

We also asked HireVue for comment on its use of the four-fifths rule. It too lamented the lack of a “quantitative fairness metric” coded into U.K./EU law, telling us:

HireVue uses the 4/5ths rule as well as other fairness metrics as a basis to measure group differences in terms of average scores, score distributions, and error rates. Though we believe they absolutely should, the UK/EU does not have quantitative fairness metrics like this coded into law, so we incorporate the US guidelines and other fairness metrics into our view of algorithmic bias as a best practice.

It also claimed its system provides customers with “the necessary tools and flexibility” to enable them to comply with EU law — such as by giving candidates notice that they will be evaluated by an algorithm and offering “an alternative process involving human scoring,” as well as providing them with “the ability to offer explanations of outcomes to candidates upon request.”

“HireVue is a data processor under GDPR and its processes are in fact compliant with GDPR requirements regarding data processing,” the company’s statement added.

The sole U.K. startup the paper looked at, Applied, confirmed to us that it has clients in the U.S., as well as in its home market of Europe.

It also told us it applies “a more in-depth measure” than the four-fifths rule in all markets where it operates — with CEO and co-founder Kate Glazebrook dubbing the latter a “rather blunt instrument for analysing discrimination.”

“For all our clients (whether in the U.S. or elsewhere) we actually analyze and present a more in-depth measure of adverse impact than the four-fifths rule,” she told us. “Instead, we measure statistically significant differences in performance by all demographic groups — we can do this not only at the level of who gets hired, but also down to the level of who passes each stage of assessment (i.e. who passes a sift stage, who passes interviews, etc.) and even down to the level of every question that candidates are asked. This then informs corrective actions that might need to take place.

“Since our process is already designed to mitigate bias in the hiring process (i.e. we anonymize candidate applications, randomize them, chunk them up and allow multiple people to score them), we already avoid the vast majority of bias that could result in differential success rates for different candidates; but we continue to measure so that we can identify any remaining areas that require remedial actions.”
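
The stage-by-stage monitoring Glazebrook describes boils down to running a group-difference test on pass rates at every step of the funnel. A rough sketch of that kind of check, on hypothetical counts (Applied hasn’t published its exact method):

```python
from scipy.stats import chi2_contingency

# (passed, failed) counts per demographic group at each made-up stage.
stages = {
    "sift":      {"group_1": (80, 120), "group_2": (50, 150)},
    "interview": {"group_1": (40, 40),  "group_2": (24, 26)},
}

for stage, counts in stages.items():
    table = [[passed, failed] for passed, failed in counts.values()]
    _, p, _, _ = chi2_contingency(table)
    flag = "flag for review" if p < 0.05 else "ok"
    print(f"{stage}: p = {p:.3f} ({flag})")
```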

“In general, we’re hugely positive of the rate at which the debate around bias in hiring has evolved, and welcome the interest of academics in evaluating what works to reduce discrimination (including by evaluating organizations like ours),” Glazebrook added. “Applied’s mission is to help organizations find the best person for the job, regardless of their background. Unlike most other tools in this space, we don’t use AI to make selection decisions, instead we guardrail human decisions from the biases that can infiltrate hiring — we do this using behavioral science and have worked with the likes of Professor Iris Bohnet at Harvard and Professor Adam Grant at Wharton, to design the platform.”

In additional remarks about the research she pointed to a gap, saying the paper did not consider the discrimination risk of not using any technology at all.

“In general, we felt the paper rightly focuses on some of the risks to poor technological design (which we share!), but probably didn’t distinguish the differences in the platforms enough nor did they describe the risk of not using technologies to help at all. The rates of discrimination in typical CV-style hiring processes are abysmal, so it seemed a little unnecessarily focused on what they perceive as risks without at least presenting the counter-factual,” she said.

Glazebrook also picked up on another limitation of AHSs discussed in the paper. Specifically, how tools aren’t measuring the cumulative disadvantages that may be experienced by protected groups — saying that going beyond measuring “single-dimensional diversity” to considering intersectionality presents specific privacy challenges.

“This is an area we should all be focusing more on, though it raises some important product and privacy questions — namely that the smaller units you use for data, the more likely it is that you inadvertently identify someone,” she said, adding, “since we make commitments to candidates that their demographic data will not be identifiable or traceable, we’re very sensitive to that risk, and are considering product changes carefully.”

Such sensitivity may also be informed by the local context since EU data protection law also includes a requirement that systems processing people’s information apply a privacy by design and default approach.

The researchers behind the paper limited their analysis to AHSs that provided enough public material to carry out an analysis of bias-mitigation methods, made explicit claims of de-biasing and were selling into the U.K. — hence the small sample size. The paper also notes that some of the systems on the U.K. market made no de-biasing claims at all — which they suggest as a fertile area for future research.

Edwards suggested there are likely greater rights risks associated with the use of algorithmic technologies for “datafied micromanagement” across a pool of non-contracted labor — such as precarious workers in the gig economy — versus the use of AHSs higher up the funnel to assist in filling vacancies. But she’s still convinced there are problematic risks being embedded in hiring software.

“To some extent, perhaps, these up market hiring systems are probably not a huge problem — but they are a problem,” she added.

Pymetrics’ Baker’s top-line caveat during the AI ethics workshop was: “We’re definitely working on ways to make this the best possible thing.”

“We try to incorporate ethics at every stage in what we’re doing to try to make the best hiring decisions possible,” he said, before giving an overview of the processes the company uses to try to “make this good”: having an “IO-psychology backed team” perform job analysis and come up with “the best possible criteria” for models and success metrics; creating “custom models for every role” which are “optimized for variance and fluidity”; using measures “selected to be predictive of job performance while also being the least related to protected demographics as possible”; evaluating models for fairness (with the caveat that this isn’t at all easy); and asking applicants to opt in to share data, which it uses to audit the system and improve models for other clients.

He ended with an exhortation to workshop participants to engage with discussion around “how we can make this better.”

Asked what she thinks the responsibilities are of employers using AHSs and startups developing such tools, Edwards said, “both legally and morally I think they should be doing impact assessments. If they’re building a high stakes machine learning system — even if it’s partially machine learning — and employment is definitely high stakes, then they should definitely be doing data protection impact assessment. And they should be looking into privacy by design.

“So then you get onto all the clever stuff. How much can you require? Are there ways in which they could minimize the data they could collect? Are there ways in which they could anonymize it? Are there ways in which they could throw away the data after a certain time? What if somebody’s performance improves? What if somebody’s performance is affected by a period of ill health? Do these datafications stay in the system?

“I can just think of so many issues, actually. And I’m not an employment lawyer,” she added.

Discussing the regulatory response to AI applications targeting employment, Edwards suggested the category hasn’t been high on the list of oversight bodies thus far — in part because other areas where decisions are also being automated have been prioritized, such as the public sector or criminal justice.

“Those areas have perhaps been top of the list as perhaps they should be — where it’s being used for sentencing, bail, child welfare assessment, fraud detection, tax — all these kind of areas where obviously [considering impacts] is very crucial. But employment is very, very crucial,” she said.

“The other point people keep bringing up — which I don’t know the answer to — is would wholly human hiring be any better? Would they not be just as biased and erroneous and not caring?” she continued. “Part of the argument is trying to get the argument out of inherent bias. Inherent bias is a huge part of it but it’s also simply about getting things right — not being erroneous, not basing it on the wrong data, taking the right kind of care, using these qualitative assessments, either wrongly or rightly. Things like ‘do you fit into the workplace?’ Is that ever something that should be automated? I think I could easily argue not.”

Edwards said the key takeaway from the U.S. study was the need for more transparency into how AHSs function, in order to be able to hold the claims they make to account.

“What we were both trying to do was establish if it was possible to discover anything from just looking at external evidence — and I suppose the answer was ‘yes, we did.’ But it’s a bare start,” she said.

With so much uncertainty, both around how AHSs function and how to apply relevant laws in the context of learning algorithms and automation, there’s no doubt employers that choose to use such tools are opening up a new liability front for themselves — with the risks depending on where and how they’re being used.

On this, Nadine Simpson-Ataha, an employment lawyer at Taylor Wessing in London, said U.K. employers making use of AHSs need to think about equality legislation in the same way they already do in the physical recruitment process.

“There is a duty not to unlawfully discriminate in relation to the arrangements made for deciding who to offer a job to. That duty now needs to be thought about when using a specific AHT [automated hiring tool] beyond the way that it has more commonly been considered (in relation to things like the location and timing of interviews or the content of an application form),” she told us.

Simpson-Ataha gave an example of a law firm using an AHT to source potential candidates for an employment law role from publicly available information, such as that found on professional networks — which fails to identify her as a potential hire.

“I, as a woman of dual heritage (Black Caribbean and White British), am a potential candidate for any law firm that needs an employment lawyer qualified to practice in this jurisdiction. Let’s say a firm uses an AHT that sources potential candidates from publicly available information (professional networks, images, publications etc.). However, this particular AHT, at best, doesn’t recognize my beige face to fit neatly into one of its pre-programmed data sets. At worst, it skips over me completely because it can’t harvest all of the information it needs. Either way, I’m not picked up as a potential candidate. An un-nuanced but clear example of unlawful discrimination in the arrangements made by this imaginary law firm for deciding who to offer employment to.”

“Employers using AHTs need to be alive to this risk because, in reality, it will emerge from activity that is far more discreet,” she added, flagging the risk for an employer of liability being embedded in proprietary software.

“Irrespective of how blatant unlawful discrimination is, the legal recourse for the person affected by it is still with the would-be employer,” she told us. “Users of AHTs could seek indemnities against discrimination-based risks when entering into service agreements. This is especially the case in the absence of any uniform measurable rule, like the four-fifths rule in the U.S., that provides clarity as to when a recruitment practice will be legally held to have an adverse effect against someone with a protected characteristic.”

Neil Brown, an internet, telecoms and tech lawyer at U.K.-based firm Decoded Legal, agrees that employers using AHSs need to be “very careful” not to fall foul of national equality law.

“If an employer configured their automated hiring system to automatically reject, or otherwise treat less favorably, anyone who was, say, female, or Muslim, then that would be direct discrimination.”

He also flags the risk of “unintentional algorithmic bias” leading to indirect discrimination around protected characteristics such as age, disability, religion or belief and sex — which is, of course, the major uncertainty and concern with the use of such systems.

“From the point of view of English equality law, would-be employers must not discriminate on the basis of protected characteristics, either directly or indirectly,” he added.

On GDPR, Brown takes the view that EU law makes the use of AHSs “tricky in many situations — but not outright unlawful or impossible.”

“The GDPR does not expressly prohibit the use of automated hiring systems. That’s not a particular surprise, since the GDPR does not expressly prohibit any particular system. However, an employer who was subject to the GDPR would need to make sure that their use of an automated hiring system complied with it — and that’s not straightforward,” he said, adding, “there are specific rules about the use of automated decision-taking, if that processing has a legal, or similarly significant, effect.”

“Deciding that you don’t want to employ someone probably doesn’t have a ‘legal’ effect, but I expect a regulator or court would consider the outcome to be ‘similarly significant,’ ” he continued. “Because of this, if the system is indeed fully automated, so that there is no human element in the decision-making process, the would-be employer cannot use the system unless they can satisfy one of the three specific tests outlined in the GDPR [Article 22].”

“I am doubtful that, in most cases, an employer could prove that wholly automated hiring was ‘necessary’ to enter into a contract with the would-be employee, as ‘necessary’ is a high standard,” Brown added. “In other words, if there is an effective but less privacy-intrusive means available, the employer would have to use that instead.”

On consent as a legal ground, he agreed with Edwards that it would be difficult for employers to rely on this, as “there is likely to be a power imbalance in most situations,” making it hard to see how consent could be “freely given.”

Another quirk of U.K. law which Brown nods to is that not all direct discrimination is unlawful. “For example, it is lawful for a would-be employer to treat someone less favorably because of their age, if they can show that doing so is a proportionate means of achieving a legitimate aim.”

So that’s a further consideration for designers of AHSs hoping to copy/paste their U.S. socio-legal approach to anti-discrimination in a U.K. context.

Edwards also makes the point that U.S. case law around employment discrimination tends to be focused on hiring, whereas in the U.K. it’s mostly about firing (or else unfair working conditions). In other words, “the opposite of America.” Which presents further challenges for U.S.-to-U.K. AHSs, unless they substantially reprogram their approach in each market.

On the flip side, U.K. AHSs face a challenge in trying to synthesize a neat mathematical formula to prove their product isn’t discriminating unfairly — because national law is creatively ambiguous and there aren’t enough tested examples out there.

“The whole field is extremely problematic,” Edwards added. “Often what these products are looking for is a kind of ‘fit’ to the existing culture of the workplace and — to me — that’s a license to discriminate in various ways that may not be illegal but are certainly perhaps unhelpful for inclusion and diversity.

“These systems look like you just plug in the profiles, the CVs, the hiring pattern and your last 100 employees — strike a button and away it goes sort of thing. But actually it’s mostly just a kind of adjunct to more conventional hiring methods. No one really thinks this stuff is being used as solely automated systems.”

Which brings us back to the original question — what are these systems actually selling?