UK to set up security unit to combat state disinformation campaigns

The UK government has announced plans to set up a dedicated national security unit to combat state-led disinformation campaigns — raising questions about how broadly its ‘fake news’ target will be drawn.

Last November UK prime minister Theresa May publicly accused Russia of seeking to meddle in elections by weaponizing information and spreading fake news online.

“The UK will do what is necessary to protect ourselves, and work with our allies to do likewise,” she said in her speech at the time.

The new unit is intended to tackle what the PM’s spokesperson described in comments yesterday as the “interconnected complex challenges” of “fake news and competing narratives”.

The decision to set it up was taken after a meeting this week of the National Security Council — a Cabinet committee tasked with overseeing issues related to national security, intelligence and defense.

“We will build on existing capabilities by creating a dedicated national security communications unit. This will be tasked with combating disinformation by state actors and others. It will more systematically deter our adversaries and help us deliver on national security priorities,” the prime minister’s spokesperson told reporters (via Reuters).

According to Press Gazette, the new unit will be named the National Security Communications Unit and will be based in the Cabinet Office.

“The government is committed to tackling false information and the Government Communications Service (GCS) plays a crucial role in this,” a Cabinet Office spokesperson told the publication. “Digital communications is constantly evolving and we are looking at ways to meet the challenging media landscape by harnessing the power of new technology for good.”

Monitoring social media platforms is expected to form a key part of the unit’s work as it seeks to deter adversaries by flagging up their fakes. But operational details are thin on the ground at this point. The UK defense secretary, Gavin Williamson, is expected to give a statement to parliament later this week with more details about the unit.

Writing last week (in PR Week) about the challenges GCS faces this year, Alex Aiken, executive director of the service, named “build[ing] a rapid response social media capability to deal quickly with disinformation and reclaim[ing] a fact-based public debate with a new team to lead this work in the Cabinet Office” as the second item on his eight-strong list.

A key phrase there is “rapid response” — given the highly dynamic and bi-directional nature of some of the disinformation campaigns that have, to date, been revealed spreading via social media. A report in the Times, however, suggests insiders are doubtful that Whitehall civil servants will have the capacity to respond rapidly enough to online disinformation.

Another key phrase in Aiken’s list is “fact-based” — because governments and power-wielding politicians denouncing ‘fake news’ is a situation replete with irony and littered with pitfalls. So a crucial factor regarding the unit will be how narrowly (or otherwise) its ‘fake news’ efforts are targeted.

If its work is largely focused on identifying and unmasking state-level disinformation campaigns — such as the Russian-backed bots which sought to interfere in the UK’s 2016 Brexit referendum — it’s hard to dispute that’s necessary and sensible.

Although there are still lots of follow-on considerations, including diplomatic ones — such as whether the government will expend resources to monitor all states for potential disinformation campaigns, even political allies.

And whether it will make public every disinformation effort it identifies, or only selectively disclose activity from certain states.

But the PM’s spokesperson’s use of the phrase ‘fake news’ risks implying the unit will have a rather broader intent, which is concerning — from a freedom of the press and freedom of speech perspective.

Certainly it’s a very broad concept to be deploying in this context, especially when government ministers stand accused of being less than honest in how they present information. (For one extant example, just Google the phrase: “brexit bus”.)

Indeed, even the UK PM herself has been accused domestically on that front.

So there’s a pretty clear risk of ‘fake news’ being interpreted by some as equating to any heavy political spin.

But presumably the government is not intending the new unit to police its own communications for falsities. (Though, if it’s going to ignore its own fakes, well, it opens itself up to easy accusations of double standards — aka: ‘domestic political lies, good; foreign political lies, bad’… )

Earlier this month the French president, Emmanuel Macron — who in recent months has also expressed public concern about Russian disinformation — announced plans to introduce an anti-fake news election law to place restrictions on social media during election periods.

And while that looks like a tighter angle to approach the problem of malicious and politically divisive disinformation campaigns, it’s also clear that a state like Russia has not stopped spreading fake news just because a particular target country’s election is over.

Indeed, the Kremlin has consistently demonstrated very long-term thinking in its propaganda efforts, coupled with considerable staying power around its online activity — aimed at building plausibility for its disinformation cyber agents.

Sometimes these agents are seeded multiple years ahead of actively deploying them as ‘fake news’ conduits for a particular election or political event.

So just focusing on election ‘fake news’ risks being too narrow to effectively combat state-level disinformation, unless combined with other measures. Even as generally going after ‘fake news’ opens the UK government to criticism that it’s trying to shut down political debate and criticism.

Disinformation is clearly a very hard problem for governments to tackle, with no easy answers — even as the risks to democracy are clear enough for even Facebook to admit them.

Yet it’s also a problem that’s not being helped by the general intransigence and lack of transparency from the social media companies that control the infrastructure being used to spread disinformation.

These are also the only entities that have full access to the data that could be used to build patterns and help spot malicious bot-spreading agents of disinformation.

Last week, in the face of withering criticism from a UK committee that’s looking into the issue of fake news, Facebook committed to taking a deeper look into its own data around the Brexit referendum.

At this point it’s not clear whether Twitter — which has been firmly in the committee’s crosshairs — will also agree to conduct a thorough investigation of Brexit bot activity.

A spokeswoman for the committee told us it received a letter from Twitter on Friday and will be publishing that, along with its response, later this week. She declined to share any details ahead of that.

The committee is running an evidence session in the US, scheduled for February 8, when it will be putting questions to representatives from Facebook and Twitter, according to the spokeswoman. Its full report on the topic is unlikely to be published for some months yet, she added.

At the same time, the UK’s Electoral Commission has been investigating social media to consider whether campaign spending rules might have been broken at the time of the EU referendum vote — and whether to recommend the government drafts any new legislation. That effort is also ongoing.