The U.K. published its first-ever national AI strategy this week. The decade-long commitment by the government to levelling up domestic artificial intelligence capabilities — by directing resource and attention toward skills, talent, compute power and data access — has been broadly welcomed by the country’s tech ecosystem, as you’d expect.
But there is a question mark over how serious government is about turning the U.K. into a “global AI superpower” given the lack of a funding announcement to accompany the publication.
A better hint is likely to come shortly, with the spending review tabled for October 27 — which will set out public spending plans for the next three years.
Ahead of that, TechCrunch spoke to Marc Warner, CEO of U.K. AI startup Faculty, who said government needs to show it’s serious about providing long-term support to develop the U.K.’s capabilities and global competitiveness with an appropriate level of funding — while welcoming the “genuine ambition” he believes the government is showing to support AI.
Warner’s startup, which closed a $42 million growth round earlier this year, has started its own internal education program to attract PhDs into the company and train future data scientists. Warner is also a member of the U.K.’s AI Council, an expert advisory group that advises the government and was consulted on the strategy.
“I think this is a really pretty good strategy,” he told TechCrunch. “There’s a genuine ambition in it which is relatively rare for government and they recognize some of the most important things that we need to fix.
“The problem — and it’s a huge problem — is that there are currently no numbers attached to this.
“So while in principle there’s lots of great stuff in there, in practice it’s totally critical that it’s actually backed by the funding that it needs — and has the commitment of the wider government to the high-quality execution that doing some of these things is going to require.”
Warner warned of the risk of the promising potential of a “pretty serious strategy” fading away if it’s not matched with an appropriate — dare we say it, “world beating” — level of funding.
“That’s a question for the spending review but it seems to me very easy now that having done — what looks like a really pretty serious strategy — then… it fades into a much more generic strategy off the back of not really willing to make the funding commitments, not really willing to push through on the execution side and actually make these things happen.”
Asked what level of funding he’d like to see government putting behind the strategy to deliver on its long-term ambitions, Warner said the U.K. needs to aim high — and do so on a global stage.
“We can look around the world and look at the commitments that other countries are making to their AI strategies, which are in the hundreds of millions to low billions,” he suggested. “And if we are serious about being globally competitive — which the strategy is, and I think we should be — then we’re talking at least matching the funding of other countries, if not exceeding it.”
“Ultimately it comes down to where does this rank in their priority list and if they want to deliver on an ambitious strategy it’s got to be high,” he added.
Access to talent
Discussing the broad detail of what the strategy says is needed for the U.K. to up its AI game, Warner highlighted talent as a key component.
“For a technical field like AI talent is a huge deal. There’s a global competition for that talent. And it seems like the government is taking that seriously and hopefully going to take actions to make sure the U.K. has all the talent it needs for this kind of stuff — from a skills perspective and training up people but also from a visa perspective.”
“From our perspective it’s just wonderful to be able to access some of the most talented people from across the world to come and work on important problems and so the easier that it can be made for those people — or for organizations, whether it’s universities or charities or companies like us or even government departments to start to be able to hire those people it’s just a massive step forward,” he added.
“It’s nice that they’re taking computing and data seriously,” he went on, discussing other elements of the strategy. “Obviously those are the two fuels for the set of techniques of machine learning that are sort of the foundation of modern AI. And having the government think about how we can make that more accessible is clearly a great thing.”
“I think the fact that they’re thinking about the long-term risks of AI is novel and basically important,” he also said.
“Then I think they’re relatively honest that our adoption is weaker than we’d like, as a country, as a set of businesses. And hopefully recognizing that and thinking seriously about how we might go about fixing it — so, all in all, from a strategy perspective it’s actually very good.”
The strategy also talks about the need to establish “clear rules, applied ethical principles and a pro-innovation regulatory environment” for AI. But the U.K. is already lagging on that front — with the European Union proposing an AI Regulation earlier this year.
Asked for his views on AI regulation Warner advocated for domain-specific rules.
Domain specific AI rules
“We think it would be a big mistake to regulate at the level of just artificial intelligence. Because that’s sort of equivalent to regulating steel where you don’t know whether the steel is going to be used in girders or in a knife or a gun,” he suggested.
“Either you pick the kind of legislation that we have around girders and it becomes incredibly lax around the people who are using the steel to make guns or you pick the kind of legislation that we have around guns and it becomes almost impossible to make the girders.
“So while it’s totally critical that we regulate AI effectively that is almost certainly done in a domain-specific fashion.”
He gave the example of AIs used in health contexts, such as for diagnosis, as a domain that would naturally require tighter regulation — whereas a use-case like e-commerce would likely not need such guardrails, he suggested.
“I think the government recognizes this in the strategy,” he added. “It does talk about making sure the regulation is really thoughtfully attuned to the domain. And that just seems very sensible to me.
“We think it’s extremely important that AI is done well and safely and for the benefit of society.”
The EU’s proposed risk-based framework for regulating applications of AI does focus on certain domains and use cases — which are classified as higher or lower risk, with regulatory requirements varying accordingly. But Warner said he hasn’t yet studied the EU proposal in enough detail to have a view on its approach.
TechCrunch also asked the Faculty CEO for his views on the U.K. government’s simultaneous push to “reform” the current data protection framework — which includes consulting on changes that could weaken protections for people’s information.
Critics of the reform plan suggest it risks a race to the bottom on privacy standards.
“My view would be that it’s absolutely critical that uses of AI are both legal and legitimate,” said Warner. “As in, if people knew what was being done with their data they would be completely comfortable with what’s going on.”
Faculty’s AI business was in existence (albeit under a different name) before the U.K.’s version of the EU General Data Protection Regulation (GDPR) was transposed into national law — although the prior regime was broadly similar. So existing rules don’t appear to have harmed its prospects as a high value and growing U.K. AI business.
Given that, might the government’s appetite to reduce the level of data protection that U.K. citizens enjoy — with the claim that doing so would somehow be “good for innovation” — actually be rather counterproductive for AI businesses which need the trust of users to flourish? (Plus, of course, if any U.K. AI businesses want to do business in the European Union they would need to comply with the GDPR.)
“GDPR is not perfect,” argued Warner. “If you speak to anyone I think that’s widely recognized — so I don’t agree with the way it’s being framed as a choice between one or the other. I think we can do better than both and I think that’s what we should aim for.
“I think there are lots of ways that we can — over time — be better at regulating these things. So that we maintain the absolute best in class for legitimacy around the use of these technologies which is obviously totally critical for companies like us that want to do business in a way that’s widely accepted and even encouraged in society.
“Basically I don’t think we should compromise but I don’t think it’s a choice between just following GDPR or not. It’s more complicated than that.”
It’s also worth noting there have been a number of high-profile data scandals emanating from the U.K. in recent years.
And Faculty — in its pre-rebranding guise as ASI Data Science — was intimately involved in the controversial use of data for targeting ads at voters during the U.K.’s Brexit vote, for example — although it has since said it will never do political work again.
ASI Data Science’s corporate rebranding followed revelations around the data-mining activities of the now defunct and disgraced data company, Cambridge Analytica — which broke into a global scandal in 2018, and led to parliamentarians around the world asking awkward questions about the role of data and predictive modelling to try to sway voters.
The U.K.’s information commissioner even called for an “ethical pause” on the use of data and AI tools for political ad targeting, warning that trust in democracy was being undermined by big data techniques opaquely targeting voters with custom political messaging.
During the Brexit referendum, Warner worked with the U.K. government’s former special advisor, Dominic Cummings, who was a director of the Vote Leave campaign. Cummings has written extensively about the crucial role data scientists played in winning the Brexit vote — writing, for instance, in a 2016 blog post on how data science and AI were used in the referendum, that:
One of our central ideas was that the campaign had to do things in the field of data that have never been done before. This included a) integrating data from social media, online advertising, websites, apps, canvassing, direct mail, polls, online fundraising, activist feedback, and some new things we tried such as a new way to do polling… and b) having experts in physics and machine learning do proper data science in the way only they can – i.e. far beyond the normal skills applied in political campaigns. We were the first campaign in the UK to put almost all our money into digital communication then have it partly controlled by people whose normal work was subjects like quantum information (combined with political input from Paul Stephenson and Henry de Zoete, and digital specialists AIQ). We could only do this properly if we had proper canvassing software. We built it partly in-house and partly using an external engineer who we sat in our office for months.
Given this infamous episode in his company’s history, we asked Warner whether he would support AI rules that limit how the technology can be used for political campaigning.
The U.K. government has not made such a proposal, but it is eyeing changes to election law — such as disclosure labels for online political campaigning.
“Faculty as an organization is not interested in politics anymore — it’s not something we’re thinking about,” was Warner’s response on this.
Pushed again on whether he would support limits on AI in the political campaigning domain, he added: “From Faculty’s perspective we don’t do politics anymore. I think it’s up to the government what they think it’s best around that area.”
Post-publication, the company also sent us this statement:
“Faculty/ASI never worked formally or informally with Cambridge Analytica or its parent company SCL in any capacity. In 2016 we were engaged by Vote Leave for a specific project, the aim of which was to provide polling analysis and advice on reducing advertising costs. None of Faculty/ASI’s work has ever involved the use of private Facebook data or so-called ‘micro-targeting’.”
This report was updated with an additional statement from Faculty.