Can predictive analytics be made safe for humans?

Plus, the moral relativism of Huawei vs. America

Massive-scale predictive analytics is a relatively new phenomenon, one that challenges both decades of law and consumer thinking about privacy.

As a technology, it may well save thousands of lives in applications like predictive medicine, but used carelessly, it may also prevent thousands from getting loans if, for instance, an underwriting algorithm is biased against certain users.

I chatted with Dennis Hirsch a few weeks ago about the challenges posed by this new data economy. Hirsch is a professor of law at Ohio State and head of its Program on Data and Governance. He’s also affiliated with the university’s Risk Institute.

“Data ethics is the new form of risk mitigation for the algorithmic economy,” he said. In a post-Cambridge Analytica world, every company has to assess what data it has on its customers and mitigate the risk of harm. How to do that, though, is at the cutting edge of the new field of data governance, which investigates the processes and policies through which organizations manage their data.


“Traditional privacy regulation asks whether you gave someone notice and gave them a choice,” he explains. That principle is the bedrock of Europe’s GDPR and of the patchwork of laws in the U.S. that protect privacy. It’s based on the simplistic idea that a datum — such as a customer’s address — shouldn’t be shared with, say, a marketer without that user’s knowledge. Privacy is about protecting the address book, so to speak.

The rise of “predictive analytics,” though, has thoroughly undermined such privacy legislation. Predictive analytics is a fuzzy term, but it essentially means interpreting raw data and drawing new conclusions through inference. This is the story of the famous Target case, in which the retailer marketed pregnancy-related goods to women based on certain patterns in their purchases. As Charles Duhigg explained at the time:

Many shoppers purchase soap and cotton balls, but when someone suddenly starts buying lots of scent-free soap and extra-big bags of cotton balls, in addition to hand sanitizers and washcloths, it signals they could be getting close to their delivery date.

Predictive analytics is difficult to predict. Hirsch says, “I don’t think any of us are going to be intelligent enough to understand predictive analytics.” Of customers, he said, “They give up their surface items — like cotton balls and unscented body lotion — they know they are sharing that, but they don’t know they are giving up their pregnancy status. … People are not going to know how to protect themselves because they can’t know what can be inferred from their surface data.”

In other words, the scale of those predictions completely undermines notice and consent.
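
To make the mechanics concrete, here is a minimal sketch in Python of how innocuous “surface” purchases can be turned into a sensitive inference. The features, data and model here are all invented for illustration; Target has never published its actual approach.

```python
# Toy illustration: inferring a sensitive attribute (pregnancy) from
# innocuous "surface" purchase data. Features, data and model are
# invented for illustration; this is not Target's actual system.
from sklearn.linear_model import LogisticRegression

# Each row: purchase counts of [unscented lotion, cotton balls, hand sanitizer]
X = [
    [0, 1, 0],  # occasional shopper
    [1, 0, 1],
    [4, 6, 3],  # sudden clustered purchases
    [5, 7, 4],
    [0, 2, 1],
    [6, 5, 5],
]
y = [0, 0, 1, 1, 0, 1]  # hypothetical labels: 1 = later-confirmed pregnancy

model = LogisticRegression().fit(X, y)

# The customer consented to sharing purchase history, not pregnancy status,
# yet the model infers the latter from the former.
new_customer = [[5, 6, 4]]
print(model.predict_proba(new_customer)[0][1])  # estimated P(pregnant)
```

The notice-and-consent failure is visible in the last two lines: the shopper agreed to share only the inputs, while the sensitive output was never part of any disclosure.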

Even though the law hasn’t caught up to this exponentially more challenging problem, companies themselves seem to be responding in the wake of Target and Facebook’s very public scandals. “What we are hearing is that we don’t want to put our customers at risk,” Hirsch explained. “They understand that this predictive technology gives them really awesome power and they can do a lot of good with it, but they can also hurt people with it.” The key actors here are corporate chief privacy officers, a role that has cropped up in recent years to mitigate some of these challenges.

Hirsch is spending significant time trying to build new governance strategies that let companies use predictive analytics ethically, so that “we can achieve and enjoy its benefits without having to bear these costs from it.” He’s focused on four areas: privacy, manipulation, bias and procedural unfairness. “We are going to set out principles on what is ethical and what is not,” he said.

Much of that focus has been on helping regulators build policies that can manage predictive analytics. Because people can’t understand the extent to which inferences can be made from their data, “I think a much better regulatory approach is to have someone who does understand, ideally some sort of regulator, who can draw some lines.” Hirsch has been researching how the FTC’s unfairness authority may be a path forward for getting such policies into practice.

He analogized this to the Food and Drug Administration. “We have no ability to assess the risks of a given drug [so] we give it to an expert agency and allow them to assess it,” he said. “That’s the kind of regulation that we need.”

Hirsch overall has a balanced perspective on the risks and rewards here. He wants analytics to be “more socially acceptable,” but at the same time, he sees the need for careful scrutiny and oversight to ensure that consumers are protected. Ultimately, he sees that as incredibly beneficial to companies, which can capture the value of this tech without provoking consumer ire.

Who will steal your data more: China or America?

The Huawei logo in the center of Warsaw, Poland. Photo: Jaap Arriens/NurPhoto via Getty Images

Speaking of data ethics: Europe is caught in a superpower pincer. China’s telecom giant Huawei has made expansion on the continent a major priority, while the United States has been sending delegation after delegation to convince its Western allies to reject Chinese equipment. The dilemma was on full display last week at MWC Barcelona, where the two sides each pressed their case.

It’s been years since the Snowden revelations showed that the United States was operating an enormous eavesdropping infrastructure targeting countries throughout the world, including across Europe. Huawei, for its part, has reiterated that it does not steal information through its equipment, and has repeated its demand that the Trump administration provide public proof of the security flaws it alleges.

There is an abundance of moral relativism here, but I increasingly see this as a litmus test for the West on China. China has not hidden its ambition to take a prime role in East Asia, nor its intention to build a massive surveillance network over its own people and to influence media overseas.

Those tactics, though, are straight out of the American playbook, and America has lost its moral legitimacy over the past two decades through some combination of the Iraq War, Snowden, WikiLeaks and other public scandals that undermined trust in the country overseas.

Security and privacy might once have been a competitive advantage for American products over their Chinese counterparts, but for many countries that advantage has eroded to near zero. We are increasingly going to see countries choose a mix of Chinese and American equipment in sensitive applications, if only to ensure that if one country is going to steal their data, it might as well be balanced.


Obsessions

  • Perhaps some more challenges around data usage and algorithmic accountability
  • We have a bit of a theme around emerging markets, macroeconomics and the next set of users to join the internet
  • More discussion of megaprojects, infrastructure and “why can’t we build things?”

Thanks

To every member of Extra Crunch: thank you. You allow us to get off the ad-laden media churn conveyor belt and spend quality time on amazing ideas, people and companies. If I can ever be of assistance, hit reply, or send an email to danny@techcrunch.com.

This newsletter is written with the assistance of Arman Tabatabai from New York.

You’re reading the Extra Crunch Daily. Like this newsletter? Subscribe for free to follow all of our discussions and debates.