Social media is giving us trypophobia


Something is rotten in the state of technology.

But amid all the hand-wringing over fake news, the cries of election-deforming Kremlin disinformation plots, the calls from political podia for tech giants to locate a social conscience, a knottier realization is taking shape.

Fake news and disinformation are just a few of the symptoms of what’s wrong and what’s rotten. The problem with platform giants is something far more fundamental.

The problem is these vastly powerful algorithmic engines are blackboxes. And, at the business end of the operation, each individual user only sees what each individual user sees.

The great lie of social media has been to claim it shows us the world. And their follow-on deception: That their technology products bring us closer together.

In truth, social media is not a telescopic lens — as the telephone actually was — but an opinion-fracturing prism that shatters social cohesion by replacing a shared public sphere and its dynamically overlapping discourse with a wall of increasingly concentrated filter bubbles.

Social media is not connective tissue but engineered segmentation that treats each pair of human eyeballs as a discrete unit to be plucked out and separated off from its fellows.

Think about it: it’s a trypophobe’s nightmare.

Or the panopticon in reverse — each user bricked into an individual cell that’s surveilled from the platform controller’s tinted glass tower.

Little wonder lies spread and inflate so quickly via products that are not only hyper-accelerating the rate at which information can travel but deliberately pickling people inside a stew of their own prejudices.

First it panders, then it polarizes, then it pushes us apart.

We aren’t so much seeing through a lens darkly when we log onto Facebook or peer at personalized search results on Google, we’re being individually strapped into a custom-moulded headset that’s continuously screening a bespoke movie — in the dark, in a single-seater theatre, without any windows or doors.

Are you feeling claustrophobic yet?

It’s a movie that the algorithmic engine believes you’ll like. Because it’s figured out your favorite actors. It knows what genre you skew to. The nightmares that keep you up at night. The first thing you think about in the morning.

It knows your politics, who your friends are, where you go. It watches you ceaselessly and packages this intelligence into a bespoke, tailor-made, ever-iterating, emotion-tugging product just for you.

Its secret recipe is an infinite blend of your personal likes and dislikes, scraped off the Internet where you unwittingly scatter them. (Your offline habits aren’t safe from its harvest either — it pays data brokers to snitch on those too.)

No one else will ever get to see this movie. Or even know it exists. There are no adverts announcing it’s screening. Why bother putting up billboards for a movie made just for you? Anyway, the personalized content is all but guaranteed to strap you in your seat.

If social media platforms were sausage factories we could at least intercept the delivery lorry on its way out of the gate to probe the chemistry of the flesh-colored substance inside each packet — and find out if it’s really as palatable as they claim.

Of course we’d still have to do that thousands of times to get meaningful data on what was being piped inside each custom sachet. But it could be done.

Alas, platforms involve no such physical product, and leave no such physical trace for us to investigate.

Smoke and mirrors

Understanding platforms’ information-shaping processes would require access to their algorithmic blackboxes. But those are locked up inside corporate HQs — behind big signs marked: ‘Proprietary! No visitors! Commercially sensitive IP!’

Only engineers and owners get to peer in. And even they don’t necessarily always understand the decisions their machines are making.

But how sustainable is this asymmetry? If we, the wider society on whom platforms depend for data, eyeballs, content and revenue (we are their business model), can’t see how we are being divided by what they individually drip-feed us, how can we judge what the technology is doing to us, one and all? And figure out how it’s systemizing and reshaping society?

How can we hope to measure its impact? Except when and where we feel its harms.

Without access to meaningful data how can we tell whether time spent here or there or on any of these prejudice-pandering advertiser platforms can ever be said to be “time well spent”?

What does it tell us about the attention-sucking power that tech giants hold over us when — just one example — a train station has to put up signs warning parents to stop looking at their smartphones and point their eyes at their children instead?

Is there a new idiot wind suddenly blowing through society? Or have we been unfairly robbed of our attention?

What should we think when tech CEOs confess they don’t want kids in their family anywhere near the products they’re pushing on everyone else? It sure sounds like even they think this stuff might be the new nicotine.

External researchers have been trying their best to map and analyze flows of online opinion and influence in an attempt to quantify platform giants’ societal impacts.

Yet Twitter, for one, actively degrades these efforts by playing pick and choose from its gatekeeper position — rubbishing any studies with results it doesn’t like by claiming the picture is flawed because it’s incomplete.

Why? Because external researchers don’t have access to all its information flows. Why? Because they can’t see how data is shaped by Twitter’s algorithms, or how each individual Twitter user might (or might not) have flipped a content suppression switch which can also — says Twitter — mould the sausage and determine who consumes it.

Why not? Because Twitter doesn’t give outsiders that kind of access. Sorry, didn’t you see the sign?

And when politicians press the company to provide the full picture — based on the data that only Twitter can see — they just get fed more self-selected scraps shaped by Twitter’s corporate self-interest.

(This particular game of ‘whack an awkward question’ / ‘hide the unsightly mole’ could run and run and run. Yet it also doesn’t seem, long term, to be a very politically sustainable one — however much quiz games might be suddenly back in fashion.)

And how can we trust Facebook to create robust and rigorous disclosure systems around political advertising when the company has been shown failing to uphold its existing ad standards?

Mark Zuckerberg wants us to believe we can trust him to do the right thing. Yet he is also the powerful tech CEO who studiously ignored concerns that malicious disinformation was running rampant on his platform. Who even ignored specific warnings that fake news could impact democracy — from some pretty knowledgeable political insiders and mentors too.

Biased blackboxes

Before fake news became an existential crisis for Facebook’s business, Zuckerberg’s standard line of defense to any raised content concern was deflection — that infamous claim ‘we’re not a media company; we’re a tech company’.

Turns out maybe he was right to say that. Because maybe big tech platforms really do require a new type of bespoke regulation. One that reflects the uniquely hypertargeted nature of the individualized product their factories are churning out at — trypophobes, look away now! — 4BN+ eyeball scale.

In recent years there have been calls for regulators to have access to algorithmic blackboxes to lift the lids on engines that act on us yet which we (the product) are prevented from seeing (and thus overseeing).

Rising use of AI certainly makes that case stronger, with the risk of prejudices scaling as fast and far as tech platforms if they get blindbaked into commercially privileged blackboxes.

Do we think it’s right and fair to automate disadvantage? At least until the complaints get loud enough and egregious enough that someone somewhere with enough influence notices and cries foul?

Algorithmic accountability should not mean that a critical mass of human suffering is needed to reverse engineer a technological failure. We should absolutely demand proper processes and meaningful accountability. Whatever it takes to get there.

And if powerful platforms are perceived to be foot-dragging and truth-shaping every time they’re asked to provide answers to questions that scale far beyond their own commercial interests — answers, let me stress again, that only they hold — then calls to crack open their blackboxes will become a clamor, because they will have full-throated public support.

Lawmakers are already alert to the phrase algorithmic accountability. It’s on their lips and in their rhetoric. Risks are being articulated. Extant harms are being weighed. Algorithmic blackboxes are losing their deflective public sheen — a decade+ into platform giants’ huge hyperpersonalization experiment.

No one would now doubt these platforms impact and shape the public discourse. But, arguably, in recent years, they’ve made the public street coarser, angrier, more outrage-prone, less constructive, as algorithms have rewarded trolls and provocateurs who best played their games.

So all it would take is for enough people — enough ‘users’ — to join the dots and realize what it is that’s been making them feel so uneasy and queasy online — and these products will wither on the vine, as others have before.

There’s no engineering workaround for that either. Even if generative AIs get so good at dreaming up content that they could substitute for a significant chunk of humanity’s sweating toil, they’d still never possess the biological eyeballs required to blink forth the ad dollars the tech giants depend on. (The phrase ‘user generated content platform’ should really be bookended with the unmentioned yet entirely salient point: ‘and user consumed’.)

This week the UK prime minister, Theresa May, used a speech at the World Economic Forum in Davos to slam social media platforms for failing to operate with a social conscience.

And after laying into the likes of Facebook, Twitter and Google — for, as she tells it, facilitating child abuse and modern slavery, and spreading terrorist and extremist content — she pointed to an Edelman survey showing a global erosion of trust in social media (and a simultaneous leap in trust for journalism).

Her subtext was clear: Where tech giants are concerned, world leaders now feel both willing and able to sharpen the knives.

Nor was she the only Davos speaker roasting social media.

“Facebook and Google have grown into ever more powerful monopolies, they have become obstacles to innovation, and they have caused a variety of problems of which we are only now beginning to become aware,” said billionaire US philanthropist George Soros, calling — out-and-out — for regulatory action to break the hold platforms have built over us.

And while politicians (and journalists — and most probably Soros too) are used to being roundly hated, tech firms most certainly are not. These companies have basked in the halo that’s perma-attached to the word “innovation” for years. ‘Mainstream backlash’ isn’t in their lexicon. Just like ‘social responsibility’ wasn’t until very recently.

You only have to look at the worry lines etched on Zuckerberg’s face to see how ill-prepared Silicon Valley’s boy kings are to deal with roiling public anger.

Guessing games

The opacity of big tech platforms has another harmful and dehumanizing impact — not just for their data-mined users but for their content creators too.

A platform like YouTube, which depends on a volunteer army of makers to keep content flowing across the countless screens that pull the billions of streams off of its platform (and stream the billions of ad dollars into Google’s coffers), nonetheless operates with an opaque screen pulled down between itself and its creators.

YouTube has a set of content policies which it says its content uploaders must abide by. But Google has not consistently enforced these policies. And a media scandal or an advertiser boycott can trigger sudden spurts of enforcement action that leave creators scrambling not to be shut out in the cold.

One creator, who originally got in touch with TechCrunch because she was given a safety strike on a satirical video about the Tide Pod Challenge, describes being managed by YouTube’s heavily automated systems as an “omnipresent headache” and a dehumanizing guessing game.

“Most of my issues on YouTube are the result of automated ratings, anonymous flags (which are abused) and anonymous, vague help from anonymous email support with limited corrective powers,” Aimee Davison told us. “It will take direct human interaction and negotiation to improve partner relations on YouTube and clear, explicit notice of consistent guidelines.”

“YouTube needs to grade its content adequately without engaging in excessive artistic censorship — and they need to humanize our account management,” she added.

Yet YouTube has not even been doing a good job of managing its most high profile content creators. Aka its ‘YouTube stars’.

But where does the blame really lie when ‘star’ YouTube creator Logan Paul — an erstwhile Preferred Partner on Google’s ad platform — uploads a video of himself making jokes beside the dead body of a suicide victim?

Paul must manage his own conscience. But blame must also scale beyond any one individual who is being algorithmically managed (read: manipulated) on a platform to produce content that literally enriches Google because people are being guided by its reward system.

In Paul’s case YouTube staff had also manually reviewed and approved his video. So even when YouTube claims it has human eyeballs reviewing content those eyeballs don’t appear to have adequate time and tools to be able to do the work.

And no wonder, given how massive the task is.

Google has said it will increase headcount of staff who carry out moderation and other enforcement duties to 10,000 this year.

Yet that number is nothing next to the amount of content being uploaded to YouTube. (According to Statista, 400 hours of video were being uploaded to YouTube every minute as of July 2015; it could easily have risen to 600 or 700 hours per minute by now.)

The sheer size of YouTube’s free-to-upload content platform makes it all but impossible to meaningfully moderate.

And that’s an existential problem when the platform’s massive size, pervasive tracking and individualized targeting technology also gives it the power to influence and shape society at large.

The company itself says its 1BN+ users constitute one-third of the entire Internet.

Throw in Google’s preference for hands-off (read: lower cost) algorithmic management of content and some of the societal impacts flowing from the decisions its machines are making are questionable — to put it politely.

Indeed, YouTube’s algorithms have been described by its own staff as having extremist tendencies.

The platform has also been accused of essentially automating online radicalization — by pushing viewers towards increasingly extreme and hateful views. Click on a video about a populist right wing pundit and end up — via algorithmic suggestion — pushed towards a neo-nazi hate group.

And the company’s suggested fix for this AI extremism problem? Yet more AI…

Yet it’s AI-powered platforms that have been caught amplifying fakes, accelerating hate and incentivizing sociopathy.

And it’s AI-powered moderation systems that are too stupid to judge context and understand nuance like humans do. (Or at least can when they’re given enough time to think.)

Zuckerberg himself said as much a year ago, as the scale of the existential crisis facing his company was beginning to become clear. “It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more,” he wrote then. “At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years.”

‘Many years’ is tech CEO speak for ‘actually we might not EVER be able to engineer that’.

And if you’re talking about the very hard, very editorial problem of content moderation, identifying terrorism is actually a relatively narrow challenge.

Understanding satire — or even just knowing whether a piece of content has any kind of intrinsic value at all vs being purely worthless, algorithmically groomed junk? Frankly speaking, I wouldn’t hold my breath waiting for the robot that can do that.

Especially not when — across the spectrum — people are crying out for tech firms to show more humanity. And tech firms are still trying to force-feed us more AI.
