
Big data can’t bring objectivity to a subjective world


Simon Chandler

Simon Chandler is a writer and journalist, contributing articles on culture, politics and technology.

It seems everyone is interested in big data these days. From social scientists to advertisers, professionals from all walks of life are singing the praises of 21st-century data science.

In the social sciences, many scholars apparently believe it will lend their subject a previously elusive objectivity and clarity. Sociology books like An End to the Crisis of Empirical Sociology? and work by bestselling authors now tout the superiority of “Dataism” over other ways of understanding humanity. Professionals are lining up to proclaim that big data analytics will enable people to finally see themselves clearly through their own fog.

However, when it comes to the social sciences, big data is a false idol. In contrast to its use in the hard sciences, the application of big data to the social, political and economic realms won’t make these areas much clearer or more certain.

Yes, it might allow for the processing of a greater volume of raw information, but it will do little or nothing to alter the inherent subjectivity of the concepts used to divide this information into objects and relations. That’s because these concepts — be they the idea of a “war” or even that of an “adult” — are essentially constructs, contrivances liable to change their definitions with every change to the societies and groups who propagate them.

This might not be news to those already familiar with the social sciences, yet there are nonetheless some people who seem to believe that the simple injection of big data into these “sciences” should somehow make them less subjective, if not objective. This was made plain by a recent article published in the September 30 issue of Science.

Authored by researchers from the likes of Virginia Tech and Harvard, “Growing pains for global monitoring of societal events” showed just how off the mark the assumption is that big data will bring exactitude to the large-scale study of civilization.

More precisely, it reported on the workings of four systems used to build supposedly comprehensive databases of significant events: Lockheed Martin’s International Crisis Early Warning System (ICEWS), Georgetown University’s Global Data on Events Language and Tone (GDELT), the University of Illinois’ Social, Political, and Economic Event Database (SPEED) and the Gold Standard Report (GSR) maintained by the not-for-profit MITRE Corporation.

Its authors tested the “reliability” of these systems by measuring the extent to which they registered the same protests in Latin America. If they or anyone else were hoping for a high degree of duplication, they were sorely disappointed, because they found that the records of ICEWS and SPEED, for example, overlapped on only 10.3 percent of these protests. Similarly, GDELT and ICEWS hardly ever agreed on the same events, suggesting that, far from offering a complete and authoritative representation of the world, these systems are as partial and fallible as the humans who designed them.
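The reliability comparison the paper describes can be pictured with a toy calculation. This is an illustrative sketch only, not the paper’s actual methodology: the event records and the matching key (date and city) are invented for the example.

```python
# Illustrative sketch (not the paper's actual method): two hypothetical
# event databases record protests as (date, city) tuples, and "reliability"
# here is simply the share of one system's events also found in the other.

def overlap_share(events_a, events_b):
    """Fraction of events in events_a that also appear in events_b."""
    if not events_a:
        return 0.0
    matched = set(events_a) & set(events_b)
    return len(matched) / len(events_a)

# Invented records from two automated systems scanning the same news.
system_1 = {("2013-06-17", "Sao Paulo"), ("2013-06-20", "Rio de Janeiro"),
            ("2013-07-01", "Bogota")}
system_2 = {("2013-06-17", "Sao Paulo"), ("2013-06-21", "Rio de Janeiro"),
            ("2013-08-02", "Caracas")}

# Only one of three events matches exactly, even though both systems
# read the same news: small definitional differences sink the overlap.
print(overlap_share(system_1, system_2))
```

Even in this toy case, two systems fed identical sources agree on only a third of events, because a one-day difference in how a multi-day protest is dated is enough to break a match.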

Even more discouraging was the paper’s examination of the “validity” of the four systems. For this test, its authors simply checked whether the reported protests actually occurred. Here, they discovered that 79 percent of GDELT’s recorded events had never happened, and that ICEWS had gone so far as entering the same protests more than once. In both cases, the respective systems had essentially identified occurrences that had never, in fact, occurred.
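The validity check amounts to asking what share of a system’s records corresponds to a real, hand-verified event. Again a hedged sketch, with invented data; note how a duplicated entry (the ICEWS failure mode described above) inflates the record count without adding a real event.

```python
# Sketch of a "validity" check: compare a system's recorded events
# against a hand-verified ground truth and compute the share of records
# that corresponds to a real event. All data here is invented.

def valid_share(recorded, verified):
    """Fraction of recorded events confirmed by the ground truth."""
    confirmed = [event for event in recorded if event in verified]
    return len(confirmed) / len(recorded)

recorded = [
    ("2013-06-17", "Sao Paulo"),
    ("2013-06-17", "Sao Paulo"),   # same protest entered twice
    ("2013-07-04", "Lima"),        # never actually happened
    ("2013-08-02", "Caracas"),     # never actually happened
]
verified = {("2013-06-17", "Sao Paulo")}

# Half the records check out; the rest are duplicates or phantom events.
print(valid_share(recorded, verified))
```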

They had mined troves and troves of news articles with the aim of creating a definitive record of what had happened in Latin America protest-wise, but in the process they’d attributed the concept “protest” to things that — as far as the researchers could tell — weren’t protests.

For the most part, the researchers in question put this unreliability and inaccuracy down to how “Automated systems can misclassify words.” They concluded that the systems they examined were unable to notice when a word they associated with protests was being used in a secondary sense unrelated to political demonstrations. As such, the systems classified as protests events in which someone “protested” to her neighbor about an overgrown hedge, or in which someone “demonstrated” the latest gadget. They operated according to a set of rules that were much too rigid, and as a result they failed to make the kinds of distinctions we take for granted.
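The failure mode is easy to reproduce in miniature. The sketch below, with an invented keyword list and invented sentences, shows how a rule that merely looks for protest-related words fires just as readily on the neighborly complaint and the gadget demo as on an actual demonstration.

```python
# Minimal sketch of the misclassification described above: a rule-based
# detector that flags any sentence containing a "protest" keyword, with
# no disambiguation of word sense. Keywords and sentences are invented.

PROTEST_KEYWORDS = {"protest", "protested", "demonstrated", "demonstration"}

def looks_like_protest(sentence):
    """Naive keyword match: True if any protest keyword appears."""
    words = {word.strip(".,!?").lower() for word in sentence.split()}
    return bool(words & PROTEST_KEYWORDS)

# A genuine political demonstration is flagged...
print(looks_like_protest("Thousands demonstrated in the capital on Sunday"))
# ...but so are the secondary senses the paper's authors describe.
print(looks_like_protest("She protested to her neighbor about the hedge"))
print(looks_like_protest("A salesman demonstrated the latest gadget"))
```

All three sentences come back True: the rigid rule has no way to tell political action from everyday uses of the same words.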

As plausible as this explanation is, it misses the more fundamental reason why the systems failed on both the reliability and validity fronts. That is, it misses the fact that definitions of what constitutes a “protest” or any other social event are necessarily fluid and vague. They change from person to person and from society to society. Hence the systems failed so abjectly to agree on the same protests: their operators had set different parameters for what does or doesn’t count as a political demonstration.

Make no mistake, the basic reason as to why they were set differently from each other was not because there were various technical flaws in their coding, but because people often differ on social categories. To take a blunt example, what may be the systematic genocide of Armenians for some can be unsystematic wartime killings for others. This is why no amount of fine-tuning would ever make such databases as GDELT and ICEWS significantly less fallible, at least not without going to the extreme step of enforcing a single worldview on the people who engineer them.

Much the same could be said for the systems’ shortcomings in the validity department. While the paper’s authors stated that the fabrication of nonexistent protests was the result of the misclassification of words, and that what’s needed is “more reliable event data,” the deeper issue is the inevitable variation in how people classify these words themselves.

It’s because of this variation that, even if big data researchers make their systems better able to recognize subtleties of meaning, these systems will still produce results with which other researchers find issue. Once again, this is because a system might do a very good job of classifying newspaper stories according to how one group of people would classify them, but not according to how another would.

In other words, the systematic recording of masses of data alone won’t be enough to ensure the reproducibility and objectivity of social studies, because these studies need to use often controversial social concepts to make their data significant. They use them to organize “raw” data into objects, categories and events, and in doing so they infect even the most “reliable event data” with their partiality and subjectivity.

What’s more, the implications of this weakness extend far beyond the social sciences. There are some, for instance, who think that big data will “revolutionize” advertising and marketing, allowing these two interlinked fields to reach their “ultimate goal: targeting personalized ads to the right person at the right time.” According to figures in the advertising industry “[t]here is a spectacular change occurring,” as masses of data enable firms to profile people and know who they are, down to the smallest preference.

Yet even if big data might enable advertisers to collect more info on any given customer, this won’t remove the need for such info to be interpreted by models, concepts and theories on what people want and why they want it. And because these things are still necessary, and because they’re ultimately informed by the societies and interests out of which they emerge, they maintain the scope for error and disagreement.

Advertisers aren’t the only ones who’ll see certain things (e.g. people, demographics, tastes) that aren’t seen by their peers.

If you ask the likes of Professor Sandy Pentland from MIT, big data will be applied to everything social, and as such will “end up reinventing what it means to have a human society.” Because it provides “information about people’s behavior instead of information about their beliefs,” it will allow us to “really understand the systems that make our technological society” and allow us to “make our future social systems stable and safe.”

That’s a fairly grandiose ambition, yet the possibility of these realizations will be undermined by the inescapable need to conceptualize information about behavior using the very beliefs Pentland hopes to remove from the equation. When it comes to determining what kinds of objects and events his collected data are meant to represent, there will always be the need for us to employ our subjective, biased and partial social constructs.

Consequently, it’s unlikely that big data will bring about a fundamental change to the study of people and society. It will admittedly improve the relative reliability of sociological, political and economic models, yet since these models rest on socially and politically interested theories, this improvement will be a matter of degree rather than kind. The potential for divergence between separate models won’t be erased, and so, no matter how accurate one model becomes relative to the preconceptions that birthed it, there will always remain the likelihood that it will clash with others.

So there’s little chance of a big data revolution in the humanities, only the continued evolution of the field.
