AI desperately needs regulation and public accountability, experts say


Artificial intelligence systems and their creators are in dire need of direct intervention by governments and human rights watchdogs, according to a new report from researchers at Google, Microsoft and others at AI Now. Surprisingly, it looks like the tech industry just isn’t that good at regulating itself.

In the 40-page report (PDF) published this week, the New York University-based organization (whose members include researchers from Microsoft Research and Google) shows that AI-based tools have been deployed with little regard for their potential ill effects, or even documentation of their good ones. This would be one thing if it were happening in controlled trials here and there, but instead these untested, undocumented AI systems are being put to work in places where they can deeply affect thousands or millions of people.

I won’t go into the examples here, but think border patrol, entire school districts and police departments, and so on. These systems are causing real harm, and not only are there no systems in place to stop them, there are few even to track and quantify that harm.

“The frameworks presently governing AI are not capable of ensuring accountability,” the researchers write in the paper. “As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.”

Right now companies are creating AI-based solutions to everything from grading students to assessing immigrants for criminality. And the companies creating these programs are bound by little more than a few ethical statements they decided on themselves.

Google, for instance, recently made a big deal of setting out some “AI principles” after the uproar over its work for the Defense Department. It said its AI tools would be socially beneficial, accountable and would not contravene widely accepted principles of human rights.

Naturally, it turned out that the company had been working the whole time on a prototype censored search engine for China. Great job!


So now we know exactly how far that company can be trusted to set its own boundaries. We may as well assume that’s the case for the likes of Facebook, which is using AI-based tools to moderate its platform; Amazon, which is openly pursuing AI for surveillance purposes; and Microsoft, which yesterday published a good piece on AI ethics — but as good as its intentions seem to be, a “code of ethics” is nothing but promises a company is free to break at any time.

The AI Now report makes a number of recommendations, which are worth reading in their entirety; the report itself is quite readable and offers a good overview as well as smart analysis.

They’re good recommendations, but not the kind that can be implemented on short notice, so expect 2019 to be another morass of missteps and misrepresentations. And as usual, never trust what a company says, only what it does — and even then, don’t trust it to say what it does.
