To Prevent Another Flint, Make All Open Data Machine Readable

The lead poisoning of the entire city of Flint, Michigan, was preventable and should never have happened.

Numerous pundits and industry experts have said this. Most of them, however, argue that if government had functioned properly, the environmental agencies would have escalated their findings to higher-ups and the problem would have been spotted much sooner.

Those with a more cynical view intone that the state government in Lansing was not terribly interested in the plight of the largely poor residents of long-beleaguered Flint, a casualty of the Rust Belt collapse.

I think these experts have it wrong. Open data can help us, the people of the United States, prevent the next Flint. More specifically, real-time, machine-readable, regularly reported open data that is transparent from collection all the way through analysis.

We cannot and should not rely on the government to always keep us safe. This is not an indictment. Governments are fallible, just as any other large organization is fallible. But 100 years ago, there was no way to easily access, analyze and monitor government activities. Today, there is no excuse not to do so.

In the case of Flint, if state and federal environmental authorities had promptly published the raw water-testing results on an open data platform like Socrata, Junar or CKAN, any citizen could have run a quick analysis on the results and, with a modicum of education, judged for themselves whether something was amiss.

Developers, too, could have accessed this data via an API and piped it into any number of data analytics platforms to spot aberrant samples, anomalies and other red flags.
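
To make that concrete, here is a minimal sketch of such a check in Python against a SODA-style JSON endpoint of the kind Socrata platforms expose. The domain, dataset ID and field names are hypothetical; 15 ppb is the EPA’s lead action level under the Lead and Copper Rule, which is triggered when the 90th percentile of samples exceeds it.

```python
# Minimal sketch: pull water-testing results from a hypothetical
# SODA-style endpoint and check them against the EPA lead action
# level (15 ppb, triggered at the 90th percentile of samples).
# The domain, dataset ID and "lead_ppb" field are invented.
import requests

ACTION_LEVEL_PPB = 15.0
URL = "https://data.example.gov/resource/abcd-1234.json"  # hypothetical

resp = requests.get(URL, params={"$limit": 50000})
resp.raise_for_status()
samples = resp.json()  # list of dicts, one per water sample

levels = sorted(float(s.get("lead_ppb", 0) or 0) for s in samples)
p90 = levels[int(0.9 * (len(levels) - 1))]  # simple 90th percentile

print(f"{sum(v > ACTION_LEVEL_PPB for v in levels)} of {len(levels)} "
      f"samples exceed {ACTION_LEVEL_PPB} ppb")
print(f"90th percentile: {p90} ppb "
      f"({'ABOVE' if p90 > ACTION_LEVEL_PPB else 'below'} action level)")
```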

Equally important, such systems could detect changes in testing procedures that might indicate manipulation. In the case of Flint, a number of positive test results for lead were not reported. Had those samples been included in the reports, the results would have crossed critical thresholds, forcing the government to investigate more deeply and possibly even shut down the supply.

A smart analytics system, pointed at the reporting data, might have noted the dramatic drop in sample sizes or spotted anomalies.
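
As a sketch of what such a system might watch for, the check below flags any month whose reported sample count falls well below the trailing average, one possible sign of dropped or excluded results. The date field name is hypothetical.

```python
# Minimal sketch: flag months where the number of reported water
# samples drops sharply against the trailing average -- one possible
# signal of dropped or excluded results. The "sample_date" field
# name is hypothetical and assumed to hold ISO dates (YYYY-MM-DD).
from collections import Counter
from datetime import date

def flag_count_drops(samples, window=6, threshold=0.5):
    """Return (month, count, baseline) for months whose sample count
    falls below `threshold` times the mean of the prior `window` months."""
    counts = Counter(
        date.fromisoformat(s["sample_date"]).strftime("%Y-%m")
        for s in samples
    )
    months = sorted(counts)  # note: months with zero samples are absent
    flags = []
    for i in range(window, len(months)):
        baseline = sum(counts[m] for m in months[i - window:i]) / window
        if counts[months[i]] < threshold * baseline:
            flags.append((months[i], counts[months[i]], baseline))
    return flags
```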

Unfortunately, much government data today is provided to the people in dirty, machine-unreadable formats. Often, it comes to us one or two years late. Imagine if Google Analytics gave you data with a one- or two-year lag. It’s not a perfect analogy, but it’s not that far off.

We have shown that we can digitize and report in a very timely fashion many types of critical analog government data. Check the real-time crime maps of your city or ZIP code for a taste of this. Crime data is messy data, collected by humans. But when the reporting structure is normalized and enforced, machine readability is automatic and painless.

To be clear, much of the government data that is relevant to our lives is not even big data. Only a comparatively small number of water samples are taken in Flint each year. If you put every water sample taken for every water system in the United States over the past 50 years into a database, it would be dwarfed by the monthly clickstream database of Netflix, Amazon or any other major web property. In a single afternoon, a smart blogger analyzed all of New York City’s parking ticket data and found the “most expensive” fire hydrants in the city. His tool of choice? Microsoft Excel.
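
For a sense of scale: the pandas equivalent of that afternoon’s Excel session is a handful of lines. The file and column names below are hypothetical.

```python
# Minimal sketch of a "most expensive fire hydrants" analysis in
# pandas; the file name and column names are hypothetical.
import pandas as pd

tickets = pd.read_csv("nyc_parking_tickets.csv")
hydrants = tickets[tickets["violation"] == "FIRE HYDRANT"]
top10 = (hydrants.groupby("location")["fine_amount"]
                 .agg(["count", "sum"])
                 .sort_values("sum", ascending=False)
                 .head(10))
print(top10)  # the ten locations generating the most hydrant-fine revenue
```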

We already see people taking matters into their own hands to collect and correct government data on critical matters. Brian Burghart runs Fatal Encounters, a nonprofit, crowdsourced effort by a distributed team of volunteers to catalog every instance in the United States in which a citizen is killed in an encounter with law enforcement. An accurate count of how often people die in these incidents would seem to be a critical function of an accountable society.

But the Federal Bureau of Investigation has long struggled to collect this data from local authorities, who are, in turn, reluctant to self-report and self-police. Burghart’s team takes raw media reports and turns them into structured data, published in an online spreadsheet that anyone can use for their own analysis.
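
Because a published Google Sheet can be exported as CSV over HTTP, pulling the data into your own tools is nearly a one-liner. A sketch, with a placeholder sheet ID rather than the real one:

```python
# Minimal sketch: load a published Google Sheet as CSV. The sheet ID
# below is a placeholder, not the real Fatal Encounters sheet.
import pandas as pd

SHEET_ID = "YOUR_SHEET_ID"
url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"
df = pd.read_csv(url)
print(f"{len(df)} rows loaded")
print(df.head())
```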

At present, that Google Sheet is the most comprehensive historical record of deaths at the hands of police officers. It also served as the basis for critically important media projects on this topic by The Washington Post and The Guardian.

Between active citizens and an active media, open data created by the people, in the absence of a good government reporting structure, is already having an impact. There is a growing dialogue, and many people are now asking why thousands of citizens die at the hands of law enforcement in the United States when the toll in comparable developed societies in Europe or Asia is a tiny fraction of that.

To tie this back to Flint, parents and community activists across the country are keenly interested in the state of their environment. Even the cynical can make the obvious economic connection that homes in a neighborhood marked by lead poisoning or other environmental woes will lose significant value.

The parents of Flint face an unspeakable horror. Lead exposure at early ages causes permanent loss of IQ points; their children will be less likely to graduate from college, more likely to go to jail and less likely to hold down a good job and live the American dream.

One can’t help but think that if you replaced Flint with, say, Palo Alto or Noe Valley, the technosphere would be up in arms, not only demanding fixes to the water supply but also demanding the test data as soon as it is collected, in a format they can easily analyze. In other words, not test data trapped in a PDF or in a badly formatted spreadsheet full of merged rows and columns that takes significant time to flatten and clean.

And that’s the point. What’s good for Flint is good for Palo Alto. Open data can equally help rich and poor, young and old, by allowing us to hold our government more accountable and judge for ourselves. For governments, a mandate to move to machine-readable, near real-time open data would be a massive blessing in disguise because finally we could apply to the analog world the powerful tools we use to analyze online data.

Let’s be clear. Open data is not a panacea. It will not heal the sick. It won’t automatically repair the pipes in Flint. And open data does not mean all data should be open. Personnel files in many cases must remain closed, as should private records.

But open data for crime, the environment, spending, lobbying and all other data collected about our cities, counties and states is a very good start to letting both governments and citizens see their world with eyes wide open, through the lens and analytical power of data. Maybe the alarms in Flint would have gone off at the first signs of trouble.

Maybe not. But at least they would have had a chance. If the next president of the United States wants a moonshot, here’s my suggestion: Put us on a 10-year mandate for machine-readable open data at every level of government. Let Flint be a rallying cry. Make all government data machine readable and open the curtains to let in the light and the prying eyes.