Editor’s note: Michael Housman is chief analytics officer at Evolv, a startup that uses predictive analytics and academic research to improve hiring and managing people.
Academia is governed by a process called “peer review,” which ensures that research is validated and its reasoning is sound. More simply, it separates the real insights from the fluff. Meanwhile, most corporate data (think surveys, white papers) is made public after only an internal, inherently biased review. With the recent backlash against Facebook’s emotions experiment, transparency and care with data are more important than ever.
As an academic, my expectation when I joined the private sector was that the white papers and collateral released by companies in this space would be based on large sample sizes, best-in-class empirical methods and rigorous interpretation. Suffice it to say, that was not the case.
Relatively few tech companies support analytics-minded academics and are willing to put in the time to hold their work accountable to a much higher standard. Peer review is an incredibly time-intensive process, but it is worth it for stronger, more accurate content. And though recent news of a “peer review ring” shows the system isn’t without its faults, the methods can scale with technology’s help. Here are my top three reasons companies should invest in academic peer review.
Content Is More Credible
In the peer-review process, once a team of researchers has completed a paper, they submit it to a journal, which circulates it to two or three peer experts for critique. Academic journals use a double-blind process so the authors don’t know the identity of the reviewers and vice versa, which keeps the review as rigorous and objective as possible.
Although the finished product is inevitably better than the initial submission, the process requires a tremendous amount of work and can last as long as two years from start to finish. Applying a process even close to this intensive ensures that the research actually holds up. Outside, expert analysis confirms that the facts and framing are sound and unbiased. This lends far more credibility to content than, say, a survey revealing the importance of secure storage, vetted only by a security software vendor.
Thought Leadership Is More Authentic
Clearly, most companies will choose to conduct research and develop content in areas relevant to their business – it makes sense to grow voice and market share in your space. But there is a fine line between self-serving content and authentically helpful content that benefits current and potential customers. Peer review keeps research on the right side of that line: a company isn’t just claiming expertise in a space; it is offering concrete evidence validated by others.
The Bar for Ethics and Accuracy Rises
In my role as chief analytics officer, I often play internal referee – I refuse to release any research from our big dataset that couldn’t be submitted to a top-tier academic journal. That means I’m responsible both for suppressing good content that isn’t sufficiently backed by the science and for releasing unpopular results when that’s what the data says. This process substantiates our data.
Even if the readers of our research don’t perceive the difference between including fixed effects and clustering standard errors, they appreciate the quality of the analytical methods used and the assurance that what they are reading is truly fact-based. The spurious conclusions companies share in the public domain can sometimes be appalling. Because individuals and companies often base real purchase decisions on this research, the improper actions and decisions that can result from fluffed-up facts are a genuine concern.
The level of rigor that goes into academic papers versus industry papers represents two extremes. At one end is a highly regimented process that can require months or years of painstaking work to button up the analysis to an incredibly exacting level of detail. At the other, there is no quality-control process at all: analysts can release whatever they want into the world without being asked to explain their methods, present raw results or share their data so an outside party can replicate their findings.
My hope is that the tech industry can find a middle ground. We should educate the public on how to be more discerning consumers of research – teach people to ask the right questions and look for answers that have substance. It’s not only the higher-integrity path but also the logical one, as it plays to our strengths as technologists and innovators.
We’ve already seen efforts in the technology industry with a similar ethos, such as open-source projects, which have given us better infrastructure for the web, databases and more. We can take that model, apply academic rigor and detail, and keep those peer-review circles open, transparent and diverse. Close and careful scrutiny on the back end will only benefit the industry.