How Informed Consent Has Failed

“That is like saying a ride on horseback is materially indistinguishable from a flight to the moon.”

— Chief Justice Roberts in Riley v. California, dismissing the comparison of smartphones to physical items

The quote above from Chief Justice Roberts in Riley v. California has implications far beyond the holding of that case. In rejecting the government’s strained analogies to wallets and address books, the Chief Justice recognized that technology has fundamentally changed things: smartphones are different, and the law can no longer ignore that difference.

Informed consent is a subject long overdue for such a wake-up call. Two recent stories demonstrate how utterly the traditional informed consent model for consumer data has failed to adapt to changes in the way technology uses data.

In late June, researchers published the results of a study that manipulated the News Feeds of some Facebook users. Their purpose was to determine whether changes in the tone (positive or negative) of a user’s News Feed would affect that user’s emotions, as evidenced by subsequent posts. The research caused an uproar that was surprising only to Facebook, which argues that its use of the data was appropriate because it was “part of ongoing research [that] companies do to test different products.” Facebook’s Data Use Policy at the time contained no specific reference to testing or research, although such language was added a few months after the study.

Meanwhile, at its recent developer conference, Google announced a new application programming interface, or API, that will make it easier for developers to use Gmail data in their apps. Apps could access email before, but the existing email interface, IMAP, is so cumbersome that few developers have used it. By making access easier with the new API, Google hopes to cement Gmail’s popularity with users by expanding the range of apps that improve their experience.
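To see why the new interface lowers the bar, compare the two access paths. The sketch below is illustrative only, not Google’s documented sample code; it assumes Python’s standard imaplib module and the google-api-python-client library, and the account name, password, and the “creds” OAuth credential are placeholders.

```python
# Illustrative sketch: contrasting raw IMAP access with the Gmail API.
# The account name, password, and 'creds' credential are placeholders.

# --- The old route: IMAP, a low-level wire protocol ---
import imaplib

imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login("user@example.com", "password")   # credential grants full mailbox access
imap.select("INBOX")
status, data = imap.search(None, "ALL")      # raw, untyped protocol responses
# ...the developer must then fetch and parse RFC 822 messages by hand

# --- The new route: the Gmail API returns structured JSON ---
from googleapiclient.discovery import build

service = build("gmail", "v1", credentials=creds)  # creds: an OAuth 2.0 token
resp = service.users().messages().list(userId="me", maxResults=10).execute()
for msg in resp.get("messages", []):
    print(msg["id"])                         # message IDs, ready to fetch
```

The point is not the syntax but the friction: with IMAP, a developer inherits a full-mailbox credential and a parsing burden; with the API, a few structured calls suffice. That is precisely why far more apps can be expected to come calling for the data.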

Informed consent is a fundamental protection for consumer privacy. The underlying notion is that there are many uses of private information to which consumers will willingly agree, particularly if agreeing means improved service or greater convenience. But consumers differ, so each needs sufficient information to make an informed decision. The traditional model for obtaining consent is to provide that information in writing and seek agreement. With digital uses of data, the information usually comes in “terms of service” that are long and dense. Consumers rarely make their way through them, and those who do often find the terms complex and vague.

The Facebook story demonstrates the fundamental breakdown of this informed consent model. Facebook argues its Data Use Policy covered the emotion research. Others disagree. But even if Facebook is right, and even if the newer language that mentions research had been in effect, this is an informed consent failure. Would anyone seriously argue that Facebook users expected this kind of manipulation of their News Feed or examination of their data for this purpose? Some consumers would knowingly consent to research like this, but it is unlikely that a single one actually did.

The Google story raises a different problem: volume. Google, which already scans Gmail content, is now opening its users’ data to access by countless app developers. App developers are mostly small companies, often unsophisticated about privacy and legal issues. As a result, apps are notoriously risky, lagging far behind other technologies when it comes to protection of privacy and data security.

So the result will be a torrent of new uses of sensitive personal data – on Gmail now, but presumably other email services will follow. Some apps will have privacy policies, some will not. Users may be asked to “opt in” to the app’s use of their data, but they are likely to know little about what that means. And they will be inundated with these opportunities. The current informed consent model is incapable of keeping up.

If the traditional mechanism for ensuring informed consent is hopelessly antiquated, what should replace it? First, companies must finally step up. In a technology industry acclaimed for its innovation, we have seen almost no creative thinking about how to acquire meaningful consent. The data industry might see little benefit in such thinking: users’ confusion means more flexibility for companies. But that view is short-sighted. The cycle of revelation, outrage, and apology that has dogged data-dependent companies will only intensify as technology accesses increasingly sensitive data and privacy concerns grow.

Companies should look for ways to minimize the use of private data. (Google’s new API has a useful feature that allows developers to restrict data access to only the information needed to send an email.) They should also seek simple, clear, technology-relevant ways to inform users of specific data uses that depart from their expectations. Government, too, must adjust. The FTC has been active in this area, taking action against the more extreme privacy violations. But it is not yet demanding an innovative approach or recognizing the fundamental failure of the current model.
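That scope-restriction feature is worth a concrete look. The sketch below is illustrative rather than Google’s own sample code; it assumes the (newer) google-auth-oauthlib package, and “client_secret.json” is a placeholder file. The two scope URLs, however, are real Gmail API scopes: one grants full mailbox access, the other lets an app only send mail.

```python
# Illustrative sketch: requesting the minimum Gmail API scope an app needs.
# Assumes the google-auth-oauthlib package; 'client_secret.json' is a placeholder.
from google_auth_oauthlib.flow import InstalledAppFlow

# Broad scope: read, modify, and delete everything in the user's mailbox.
FULL_ACCESS = ["https://mail.google.com/"]

# Narrow scope: the app can compose and send mail, and nothing more.
SEND_ONLY = ["https://www.googleapis.com/auth/gmail.send"]

# A privacy-minded developer asks only for what the app actually needs.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json",
                                                 scopes=SEND_ONLY)
creds = flow.run_local_server(port=0)  # user sees and approves only the narrow grant
```

The consent screen then tells the user exactly what is being granted, which is the kind of simple, technology-relevant disclosure the current model lacks.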

The first step – and the hardest – is the one the Supreme Court took in Riley. We must recognize that the way companies deal with consumer data is different now. Informed consent policy can no longer ignore that difference.

Editor’s note: Mary DeRosa serves as a Distinguished Visitor from Practice at Georgetown Law School, where she focuses on national security law and teaches courses on national security and cybersecurity. Previously, Ms. DeRosa served as Deputy Assistant and Deputy Counsel to the President and National Security Council Legal Adviser in the Obama Administration. She is also a senior adviser to The Chertoff Group, a global security advisory firm that advises clients on cybersecurity.