“I think a lot of journal articles should really be blogs,” says The New York Times‘ election prediction expert, Nate Silver. “You kind of throw different information at people that way and entrust the reader to come to their own conclusions.” Now that Silver has managed to puncture the once pundit-dominated news cycle with statistical prudence, he’s on a mission to rekindle our collective faith in statistics by making nuance and uncertainty sexy with his new book, The Signal and the Noise. Silver tells TechCrunch that intelligent prediction is messy, biased, and iterative — all the characteristics that don’t lend themselves to grand pronouncements in 30-second soundbites. Blogs, instead, lend themselves to an honest back-and-forth about how the sausage of statistical conclusions gets made, which can, hopefully, create a more respected class of experts and a more informed public.
“The thing that people associate with expertise,” he says, “authoritativeness, kind of with a capital ‘A,’ don’t correlate very well with who’s actually good at making predictions.” Silver was referencing a seminal study by Berkeley political psychologist Philip Tetlock that rocked the academic world: political scientists weren’t much better at prediction than the lay public, and the experts most confident in their conclusions tended to fare the worst.
For instance, Silver writes in his book, Fox News polling expert Dick Morris predicted that Donald Trump had a “damn good” chance of winning the Republican nomination and that Hurricane Katrina would help Bush politically, yet the lively Morris is still in the regular TV lineup.
Academic publishing has its own version of this fault, known as “publication bias”: researchers and journals only bother publishing research when something interesting is found (even though such a finding may be an outlier).
Compared with TV punditry or academic journals, blogging is a more honest approach to science:
1. Statistics Are History Foretold
“I prefer more to kind of show people different things than tell them ‘oh, here’s what you should believe’ and, over time, you can build up a rapport with your audience,” says Silver, who argues that good forecasters are also good historians.
Many of the variables that will influence an election are unknown, such as whether someone like Republican candidate Todd Akin shockingly proclaims that a woman’s omnipotent vagina automatically knows to shut down during rape to prevent pregnancy. A responsible model can take this into account and predict, based on past experience, how often politicians misstep (and how badly).
Silver writes that an honest discussion of the economic models behind the Obama administration’s infamous claim that the stimulus would keep unemployment below 8 percent (it actually topped 10 percent) could have helped the administration save face. The recession was unprecedented, so any economic models should have been taken with a (big) grain of salt. Rather than divine a single number from a frail statistical prediction, an honest back-and-forth could have helped everyone prepare for much worse and saved statisticians from being wrongly labeled quacks.
2. Be Honest About Your Bias
In an extended rant, Silver explains his philosophy of honesty and the trouble with traditional journalism:
That’s not how a lot of journalism works, where it’s supposed to be the disembodied voice of The Washington Post, or The New York Times; it’s supposed to be so authoritative that people are terrified when they’re admitting that ‘Hey, there are things I don’t know; there are things that I bring from my point of view that not everyone might agree with.’ And, that makes it harder to be informal and I think really develop a trust with your audience in the long run.
Statistical models are, ultimately, based on assumptions about human behavior. If we expect homeowners to be intelligent consumers and bankers not to behave like sheep, we might radically underestimate how bad the housing market could get (and many did). For example, former Federal Reserve Chairman Alan Greenspan once admitted that he “made a mistake in presuming that the self-interest of organizations, specifically banks and others, were such that they were best capable of protecting their own shareholders and their equity in the firms.” All the fancy models crumbled once that simple assumption fell through.
As a result, Silver prefers to state the psychological assumptions in his models. In a blog post earlier this month, Silver explained why his forecasts didn’t put as much stock into polls taken right after the Republican National Convention: the patriotic fervor of a convention quickly dissipates once the grand speeches are forgotten. Additionally, his expectation of a less-than-stellar bounce for Obama could have been colored by his impression that Obama’s speech wasn’t the strongest of the convention.
Informality and acknowledged bias are hallmarks of the blogging style, a style that could lead to a more sensible discussion of science.
3. Bayesian Nuance
Before his blog merged with The New York Times, Silver gained renown as one of the only analysts to predict that Barack Obama would beat Hillary Clinton in North Carolina’s primary. His secret: Bayesian methods, a statistical technique that improves prediction by incorporating outside information. In Silver’s case, he famously began weighting polls by their historical track record and how well they sampled voters likely to swing the vote. In North Carolina, Silver tells us, polls underestimated the impact of African American voters: “It’s fairly basic in some sense, right, but, you know, it did defy the polls.”
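Silver’s actual model is far more involved, but the core idea of weighting polls by pollster quality can be sketched in a few lines. The poll margins and weights below are invented for illustration; the real model also adjusts for things like sample size and recency.

```python
# Minimal sketch of a quality-weighted poll average. The margins and
# weights here are made up, not from Silver's actual model.

def weighted_poll_average(polls):
    """Average poll margins, trusting historically accurate pollsters more."""
    total_weight = sum(weight for _, weight in polls)
    return sum(margin * weight for margin, weight in polls) / total_weight

# (candidate's margin in points, weight from the pollster's track record)
polls = [
    (+3.0, 0.9),  # pollster with a strong historical record
    (-1.0, 0.4),  # pollster with a spotty record
    (+2.0, 0.7),
]

print(round(weighted_poll_average(polls), 2))  # 1.85
```

A plain average of these three polls would give +1.33; down-weighting the pollster with the weak record pulls the estimate up to +1.85.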
Because Bayesian statistics take into account more than just the number of people sampled, they invite the kind of explanation that is best suited to a meaty blog post. For instance, in his book, Silver writes about a well-known counterintuitive finding in public health: mammograms detect about 75 percent of breast cancers, yet if a woman gets news that a mammogram screening detected cancer, she only has about a 10 percent probability of actually having it. Confused? When surveyed with this question, most people are.
See, most women don’t have breast cancer, and though any individual screening only occasionally produces a false positive (roughly 10 percent of the time), the sheer number of healthy women receiving mammograms means there will be far more false positives than true positives (here is a blog post with the numbers worked out). In a population of 1,000 screenings, maybe 11 women will have cancer and be correctly diagnosed, while about 100 healthy women will get false positives.
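The arithmetic behind that result is a direct application of Bayes’ rule. A quick sketch, using the approximate rates implied by the text (the prevalence, detection rate, and false-positive rate below are rough illustrative figures, not exact clinical values):

```python
# Bayes' rule on the mammogram example. These rates are rough figures
# implied by the text, not precise clinical statistics.

prevalence = 0.014          # ~1.4% of screened women actually have cancer
sensitivity = 0.75          # mammograms catch ~75% of real cancers
false_positive_rate = 0.10  # ~10% of healthy women get a false alarm

true_positive = prevalence * sensitivity                 # ~11 per 1,000
false_positive = (1 - prevalence) * false_positive_rate  # ~99 per 1,000

# P(cancer | positive mammogram)
posterior = true_positive / (true_positive + false_positive)
print(f"{posterior:.0%}")  # about 10%
```

The counterintuitive answer falls straight out: a positive result lands a woman in a pool of roughly 110 positives per 1,000 screenings, of whom only about 11 actually have cancer.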
Ardent followers of the 538 blog know that Silver spends most of his time explaining his models and the intricacies of each data source. He doesn’t get to make grand pronouncements, such as that President Obama will win because “people want change,” but the format does give him room to explain his unusual, and often accurate, findings.
4. No Shame In Changing
“Another misconception is that a good prediction shouldn’t change,” writes Silver. “Some people don’t like this type of course-correcting analysis and mistake it for a sign of weakness.”
Academic publications, especially, are confined to the kind of static pronouncement that haunts good science. Reasonable conclusions should change constantly, with every new study, exception, and piece of news. Just below the probability of winning, Silver places the trendline of how his prediction has shifted over time.
“If you have reason to think that yesterday’s forecast went wrong, there is no glory in sticking to it.”
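In Bayesian terms, course-correcting just means re-running the update every time new evidence arrives. A toy sketch of that idea (the starting prior and the likelihoods assigned to a favorable poll are invented for illustration, not taken from Silver’s model):

```python
# Toy illustration of revising a forecast as new evidence arrives.
# The prior and likelihood values are invented for illustration.

def bayes_update(prior, p_evidence_if_win, p_evidence_if_lose):
    """Revise P(win) after observing one new piece of evidence."""
    numerator = prior * p_evidence_if_win
    return numerator / (numerator + (1 - prior) * p_evidence_if_lose)

p_win = 0.50  # start from a 50/50 prior

# Suppose a favorable poll appears 70% of the time when the candidate
# will go on to win, but 40% of the time when they will lose.
# Three such polls in a row:
for _ in range(3):
    p_win = bayes_update(p_win, 0.70, 0.40)
    print(f"updated P(win) = {p_win:.2f}")  # climbs 0.64, 0.75, 0.84
```

Each new poll nudges the forecast rather than overturning it, which is exactly the gradual trendline movement Silver publishes alongside his headline probability.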
Many traditional academic publications, such as The Harvard Business Review, have started to augment their publication cycles with blogs. Perhaps in the near future, more will heed Silver’s advice and toss out the old periodic model for a 21st-century upgrade.
[Image Credit: Wikimedia user randy stewart]