The European Union’s executive body, the EC, has taken a first pass at drawing up a strategy to respond to the myriad socio-economic challenges around artificial intelligence technology — including setting out steps intended to boost investment, support education and training, and draw up an ethical and legal framework for steering AI developments by the end of the year.
It says it’s hoping to be able to announce a “coordinated plan on AI” by the end of 2018, working with the bloc’s 28 Member States to get there.
“The main aim is to maximise the impact of investment at the EU and national levels, encourage cooperation across the EU, exchange best practices, and define the way forward together, so as to ensure the EU’s global competitiveness in this sector,” writes the Commission, noting it will also continue to invest in initiatives it views as “key” for AI (specifically name-checking the development of components, systems and chipsets designed to run AI operations; high-performance computers; projects related to quantum technologies; and ongoing work to map the human brain).
Commenting on the strategy in a statement, the EC VP for the Digital Single Market Andrus Ansip said: “Without data, we will not make the most of artificial intelligence, high-performance computing and other technological advances. These technologies can help us to improve healthcare and education, transport networks and make energy savings: this is what the smart use of data is all about.
“Our proposal will free up more public sector data for re-use, including for commercial purposes, driving down the cost of access to data and helping us to create a common data space in the EU that will stimulate our growth.”
Below is a breakdown of what the Commission is proposing in the various areas it’s focusing on.
Regional industry bodies’ responses to the plan include the usual mix of welcoming platitudes combined with calls for “a cautious approach to regulation” to “allow AI to have the space to grow”, as tech advocacy association the CCIA puts it.
Consumer advocacy group BEUC, meanwhile, criticizes the Commission for postponing what it dubs “hard decisions to later”, calling for a clear commitment to update the bloc’s product safety and liability rules to ensure they are fit for the risks of the AI age.
Target of €20BN+ into AI research by end of 2020
On the investment front the Commission says its target is to increase investments in “AI research and innovation” in the bloc by at least €20BN between now and the end of 2020 — across both public and private sectors.
To support that it says it will increase its investment to €1.5BN for the period 2018-2020 under the Horizon 2020 research and innovation program — and is expecting this to trigger an additional €2.5BN of funding from existing public-private partnerships, such as on big data and robotics.
“[This] will support the development of AI in key sectors, from transport to health; it will connect and strengthen AI research centres across Europe, and encourage testing and experimentation,” it writes.
The EC also says it will support the development of an “AI-on-demand platform” to “provide access to relevant AI resources in the EU for all users”.
And it says it intends to use the European Fund for Strategic Investments to provide companies and start-ups with “additional support” to invest in AI — aiming to, as it puts it, “mobilise more than €500M in total investments by 2020 across a range of key sectors”.
Push to open up public sector data-sets
The Commission is also eyeing a range of ways to open up access to data — as a strategy to stimulate AI developments.
On this it’s proposing legislation to open up more data for re-use, including public sector data, via a review of the rules that govern this (aka the PSI Directive). That sits alongside a package of other measures geared towards making data sharing easier: a new set of recommendations for sharing scientific data; guidance for the public sector on data-sharing collaborations with the private sector; and guidance on business-to-business data sharing (it says it will come out with guidance to help companies on this front, and will also call for proposals to set up a support center this year, funded via the Connecting Europe Facility).
In a Communication entitled ‘Towards a common European data space’, the Commission writes that its intention is to build on the foundation provided by the incoming GDPR data protection framework — and move towards what it couches as “a seamless digital area with the scale that will enable the development of new products and services based on data”. So full marks for buzzwords.
The changes it’s proposing to the PSI Directive are intended to reduce market entry barriers (especially for SMEs) by lowering charges for the re-use of public sector info; and to increase the availability of data by bringing new types of public and publicly funded data into the scope of the Directive (specifically the utilities and transport sectors, and research data).
It also says it wants to “minimize the risk of excessive first-mover advantage” — arguing this benefits large companies — by “requiring a more transparent process for the establishment of public-private arrangements”.
Encouraging the publication of “dynamic data” and APIs is another intention — and another strategy to ramp up business opportunities around data.
It has a factsheet on these plans here, where it also writes: “Data is of utmost importance to the European economy” — citing a study which predicts the total direct economic value of public sector information to increase from €52BN in 2018 (across all Member States) to €194BN in 2030.
Support for eHealth research and cross-border services
Yet another Communication published by the Commission today deals with health data specifically.
On this type of data the EC says it has three priorities:
- Citizens’ secure access to their health data, also across borders — enabling citizens to access their health data across the EU;
- Personalised medicine through shared European data infrastructure — allowing researchers and other professionals to pool resources (data, expertise, computing processing and storage capacities) across the EU;
- Citizen empowerment with digital tools for user feedback and person-centred care — using digital tools to empower people to look after their health, stimulate prevention and enable feedback and interaction between users and healthcare providers.
In its eHealth communication the Commission enthuses about the potential for digital solutions to transform healthcare before lamenting: “Market fragmentation and lack of interoperability across health systems stand in the way of an integrated approach to disease prevention, care and cure better geared to people’s needs” — i.e. as a result of Member States retaining control over their own national healthcare systems.
Hence it’s focusing efforts here on encouraging Member States to “improve the complementarity of their health services cross-border”; putting money into “research and innovation in digital health and care solutions” (via the Horizon 2020 program); and “assist[ing] Member States in pursuing the reforms of their health and care systems”.
Ethical guidelines for AI development coming this year
On the legal and ethical framework front, the Commission says it intends to publish ethical guidelines on AI development by the end of the year — which it says will be based on the EU’s Charter of Fundamental Rights, “taking into account principles such as data protection and transparency, and building on the work of the European Group on Ethics in Science and New Technologies”.
In the UK the upper house of parliament recently published its own report into the economic, ethical and social implications of artificial intelligence — which urged action to avoid biases being baked into algorithms and recommended a cross-sector AI Code to try to steer AI developments in a positive, societally beneficial direction.
To draw up EU-wide guidelines, the EC says it will “bring together all relevant stakeholders in a European AI Alliance”.
“As with any transformative technology, artificial intelligence may raise new ethical and legal questions, related to liability or potentially biased decision-making. New technologies should not mean new values,” it also writes.
But it’s waiting until mid-2019 before issuing AI-related guidance on the interpretation of the EU’s Product Liability Directive — leaving consumers without legal clarity in the case of defective products for at least another year.
Training schemes and business-education partnerships
In terms of socio-economic prep for AI-fueled transformations coming to the job market the Commission says it’s encouraging Member States to “modernise their education and training systems and support labour market transitions, building on the European Pillar of Social Rights”.
More specifically it says it will support business-education partnerships to attract and keep more AI talent in Europe; set up dedicated training schemes with financial support from the European Social Fund; and support digital skills, competencies in STEM, entrepreneurship and creativity.
“Proposals under the EU’s next multiannual financial framework (2021-2027) will include strengthened support for training in advanced digital skills, including AI-specific expertise,” the Commission also adds.
So nothing very revolutionary on this front as yet, with the opportunity to expand financial support for skills put on ice until the bloc’s next major financing framework.
In another factsheet on its proposals the Commission flags some existing skills initiatives, such as the Digital Opportunity traineeship — saying this will provide cross-border traineeships for “up to 6,000 students and recent graduates as of summer 2018”. Although this is more broadly aimed at digital skills gaps.
The AI strategy comes in the same week as an open letter from a group of EU-based scientists warning that the region is falling behind North America and China on AI research. The letter proposes establishing a European AI research institute, linked to industry, to attract and retain AI talent — also arguing that “the distinction between academic research and industrial labs is vanishing”.
Albeit several of the academics who signed the letter also hold positions with tech giants — including Uber’s chief scientist, Zoubin Ghahramani; Google’s head of machine learning research, Olivier Bousquet; and Amazon’s director of machine learning, Ralf Herbrich.