PayPal co-founder and venture capitalist Peter Thiel commonly harps on the tech community for overusing buzzwords like “cloud” and “big data.” He’s not the only one who’s been saying this, but the message still doesn’t appear to be sinking in with most enterprises.
Companies often tout their terabytes and petabytes of data, and the massive teams of data scientists running huge Hadoop clusters and Apache Kafka streams that supposedly give them a competitive edge.
The truth is, most of them suffer from one of the oldest adages in computing: garbage in, garbage out. Most of them don't actually have Big Data in terms of data complexity or volume; what they have is Crappy Data, and it's probably hurting their business. According to Experian Data Quality, inaccurate data affects the bottom line of 88 percent of organizations and can impact up to 12 percent of revenue.
Good Big Data
Some companies actually have good data and know how to use it. From mature, web-native companies like Google to engineering-based companies like Boeing, the companies listed below have successfully managed enormous amounts of data and used it to make true data-driven decisions.
Netflix: Giving Its Users What They Want. Accounting for a third of peak-time Internet traffic in the U.S., Netflix collects massive amounts of data about its users' viewing habits, and can break it down by region, time of day, hours watched and a plethora of other dimensions. This puts the company in a unique position to accurately predict what viewers want.
Case in point: Netflix has expanded well beyond a DVD and streaming service to become its own production company, with hit shows like House of Cards and Orange Is the New Black. It has also eschewed the traditional pilot-episode model, confidently producing full seasons of its original series.
IBM And The Weather Company: Understanding How Weather Affects Business. IBM has teamed up with The Weather Company to combine two very large sets of data and analyze how the weather impacts business. Spanning everything from retail to insurance, the partnership aims to provide accurate, real-time insights into how temperature changes affect sales, or how insurance companies can save money by advising their clients to move their cars.
Icahn School Of Medicine At Mount Sinai: Predicting Patients' Health. The New York City-based school has tasked Jeff Hammerbacher, famously Facebook's first data scientist, with leading the development of a computer that analyzes the medical information collected from the half a million patients it treats per year.
Working with the head of Mount Sinai's Institute for Genomics and Multiscale Biology, the team aims to make predictions that could cut the cost of healthcare — from using a patient's medical history and risk factors to predict how often they'll need care, to allowing doctors to prescribe treatments based on risk models drawn from genomics and lab data.
Amazon: Setting A New Bar For Customer Service. Amazon has access to unprecedented insights about its users — from what books they're reading to how often they're restocking cotton balls. While other companies have backburnered customer support, Amazon has made it a key to its business by emphasizing communication and direct relationships with its customers. Amazon uses its wealth of user data to give representatives relevant information about a customer the moment they need support, streamlining the process and solidifying customer loyalty.
Xerox: Improving Employee Retention. Whereas past work experience has often been the model for hiring new employees, Xerox found that success in its call centers had an entirely different basis. Using big data, the organization found that a potential employee's personality was the real predictor of whether they would stay — creative people tended to stick it out, inquisitive people did not. Armed with this information, and with a candidate survey rather than a hiring manager's judgment, Xerox cut employee turnover across its call centers by 20 percent in six months.
However, most companies don’t use data well.
Bad Big Data
Enterprises have historically spent far too little time thinking about what data they should be collecting and how they should be collecting it. Instead of spear fishing, they’ve taken to trawling the data ocean, collecting untold amounts of junk without any forethought or structure. Deferring these hard decisions has resulted in data science teams in large enterprises spending the majority of their time cleaning, processing and structuring data with manual and semi-automated methods.
DJ Patil, the recently appointed Chief Data Scientist of the White House, summarizes the data problem well, noting that “you have to start with a very basic idea: Data is super messy, and data cleanup will always be literally 80 percent of the work. In other words, data is the problem.”
But it’s not all bad news. According to the industry research firm Wikibon, 52 percent of data tool investments are being spent on technologies for ingesting and organizing data so that it can be more readily accessible and prepared for analysis. However, the key to tackling this properly isn’t just spending on more or better tools.
Applying Big Data To Your Business
To truly turn an enterprise into a data company, here are some guidelines and methods used by some of the best data companies in the world.
Know Thyself. Start by understanding the type of data you need to analyze — is it event data, financial data, graph data or something else? This is the most important factor in determining whether you need to capture data at the most atomic level or in some other format.
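To make the "atomic level" distinction concrete, here is a minimal sketch (field names are hypothetical, not from any particular product) contrasting atomic event records with a pre-aggregated summary. Aggregates can always be derived later from atomic events; the reverse is impossible, which is why the choice matters up front.

```python
from collections import Counter

# Atomic event data: one timestamped record per user action.
# Field names here are hypothetical illustrations.
events = [
    {"user": "u1", "action": "view", "ts": "2015-06-01T09:14:02"},
    {"user": "u1", "action": "purchase", "ts": "2015-06-01T09:20:45"},
    {"user": "u2", "action": "view", "ts": "2015-06-01T11:03:17"},
]

# Any aggregate can be computed later from the atomic events...
views_per_user = Counter(e["user"] for e in events if e["action"] == "view")

# ...but a pre-aggregated table like this can never be drilled back down
# to answer a question you didn't anticipate when you built it.
daily_summary = {"2015-06-01": {"views": 2, "purchases": 1}}

print(views_per_user)  # Counter({'u1': 1, 'u2': 1})
```

If storage cost forces aggregation, the use cases defined with business users (see below) should dictate which dimensions survive.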
Don’t Over-Delegate. Many businesses hand off setting up analysis to developers or IT without involving the actual business users — it’s critical that those who are actually going to be using the data are involved with understanding exactly how it is being collected and aggregated to avoid critical problems down the road.
Define The Use Cases. As a corollary to don’t over-delegate, don’t let business users either give generic use cases (e.g. “we want to track lead sources”) or spec out irrelevant use cases. Every piece of data needs to fit into an analytical framework and be part of solving a problem. Appoint either a highly technical business user or business-savvy tech lead to own the final signoff here.
Stop At The Source. Garbage in, garbage out; make sure you understand the source and types of data. Where does your data originate? Is it accurate? If you don’t know the answers to these questions, start looking into it now.
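As one sketch of what "stopping at the source" can look like in practice, the snippet below validates incoming records before they ever reach an analytics store, rather than leaving data scientists to clean garbage downstream. The field names and rules are hypothetical examples, not a prescribed schema.

```python
from datetime import datetime

def validate_record(record):
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    # Hypothetical rule: email must be present and look like an address.
    if not record.get("email") or "@" not in record["email"]:
        problems.append("missing or malformed email")
    # Hypothetical rule: dates must arrive in a single agreed format.
    try:
        datetime.strptime(record.get("signup_date", ""), "%Y-%m-%d")
    except ValueError:
        problems.append("bad signup_date")
    # Hypothetical rule: revenue can never be negative.
    if record.get("revenue", 0) < 0:
        problems.append("negative revenue")
    return problems

clean, rejected = [], []
for rec in [
    {"email": "a@example.com", "signup_date": "2015-03-01", "revenue": 120.0},
    {"email": "not-an-email", "signup_date": "03/01/2015", "revenue": -5},
]:
    (clean if not validate_record(rec) else rejected).append(rec)

print(len(clean), len(rejected))  # 1 1
```

Rejected records should be logged with their reasons so the owning team can fix the upstream system, which is the only place garbage is ever truly removed.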
Use The Right Tool For The Job. There are many great analytical tools out there. Undertake a formal “bake-off” process once you’ve defined your key use cases for your business and end users, and evaluate against your needs versus potential cool features you may never end up using.
Big data alone is silly. Building an enterprise on smart, usable data is what every company should strive for.