Marietje Schaake is ‘very concerned about the future of democracy’

'Europe’s most wired' politician outlines her top cyber issues

In the ten years she spent as a member of the European Parliament, Marietje Schaake became one of Brussels’ leading voices on technology policy issues.

A Dutch politician from the centrist-liberal Democrats 66 party, Schaake has been called “Europe’s most wired” politician. Since stepping down at the last European Parliament elections in 2019, she has doubled down on her work on cyber policy, becoming president of the CyberPeace Institute in Geneva and moving to the heart of Silicon Valley, where she has joined Stanford University as both the International Director of Policy at its Cyber Policy Center and an International Policy Fellow at its Institute for Human-Centered Artificial Intelligence.

I spoke with her about her top cyber policy concerns, the prospects of greater U.S.-EU cooperation on technology and much more.

Can you tell me about your journey from MEP in Brussels to a think tank role in academia?

There were a variety of reasons why I thought a third term was not the best thing for me to do. I started thinking about what would be a good way to continue, focusing on the fight for justice, for universal human rights and, increasingly, for the rule of law. A number of academic institutions, especially in the U.S., reached out, and we started a conversation about what the options might be and what I thought would be worthwhile. [My goal] was to understand where tech is going and what it means for society, for democracy, for human rights and the rule of law, but also how the politics of Silicon Valley work.

I feel like there’s a huge opportunity, if not to say gap, on the West Coast when it comes to a policy shop — both to scrutinize policy that the companies are making and to look at what government is doing because Sacramento is super interesting. 

So from a policy perspective, what areas of tech are you thinking about most?

I’m very concerned about the future of democracy in the broadest sense of the word. I feel like we need to understand better how the architecture of information flows and how it impacts our offline democratic world. The more people get steered in a certain direction, the more the foundations of actual liberalism and liberal democracy are challenged. And I feel like we just don’t look at that enough.

I’m [also] concerned about how little we know about what happens in commercially governed tech environments and how the lack of transparency basically hinders accountability. Then bringing that to the sort of global perspective, the stuff that I’m focused on in Geneva is much more about norms [around] attacks, hybrid conflict, and about how nonstate actors are becoming immensely powerful and are able to really undermine and attack, often without accountability and without a focus on how it’s a real human problem. It’s not about how many billions [of dollars] have been stolen or how many terabytes [of data] have been leaked. It’s very much about human beings that are suffering. 

So for me, it’s about shedding light where there’s a lot of opaqueness, with the aim of improving democracy and the rule of law.

What stands out to you when it comes to cyberattacks?

Well, I think there is a spectrum of attacks — anything from ransomware attacks on hospitals to commercial spyware used against journalists or civil society activists to dis/misinformation. Some are purely criminal or for monetary, not geopolitical, objectives. [But] different tactics have the same objective: to erode trust and to undermine liberal democracy.

I think through COVID we’ve understood much more how various actors may be using the opportunity, with more people online and more people fearful, to erode trust in experts, in institutions and in one another. We’ve seen this whole agenda of polarization very clearly in the United States from 2016 onward, but I think [it’s] becoming more and more sophisticated.

[And] if you have unsolved [cyber] crimes, I think it erodes people’s trust. [For example], my mom might read in the newspaper that a hospital was attacked, but she never reads that the attacker has been arrested, put on a sanctions list or made to pay a fine.

What’s the biggest obstacle in countering attacks?

I would say the biggest problem is we don’t know enough. There are now academics petitioning the social media platforms to retain their data sets so that researchers can later study them.

We may agree that it is not okay to hold a hospital ransom, hack into a vaccine-developing pharmaceutical company or hack elections. [But when] we don’t know enough about the dynamics of the attacks, it makes it easy for governments to say, “Well, attribution is difficult,” and for companies to not talk about it because they care more about their reputation than about transparency. And you get this whole dynamic where there is no accountability. So I feel like it’s a double attack: on the technological level, but also, again, on the level of trust.

Do governments understand the nature of the challenge?

I think the notion that we need to regulate principles, that laws should apply online as they do offline, is very well understood in the European Union. The question might be how that should play out, like how much room for business [versus the] public interest. But fundamentally people [in Europe] don’t wonder whether government or independent regulators should play a role.

In the U.S. I would say that it’s still very much contested, [so] companies are taking it upon themselves to take [on] governance initiatives. So, I mean, anything from the speech decisions that we see a company like Twitter now making, to Microsoft, [which was] behind a facial recognition law in the United States.

I think it shows how big the gap is and how, if there is no threshold, no standard, no accountability, others — including Chinese companies, authoritarian governments and other coalitions of nondemocratic actors — will fill in this space.

Has private oversight been effective? 

Whenever I see companies acknowledge the responsibility that comes with power and that they need to deal with that, it’s a step in the right direction. Now, does it always get executed rightly? Maybe not. But when companies are honest about the challenges that they’re facing, I think that they’re both doing the public and themselves a favor. [For example], I’m surprised that companies like Facebook that have been in the line of fire about a lot of their content moderation decisions have not been more open about the dilemmas that they have to weigh against each other.

What I think is not at all helpful is the lack of investment in local knowledge and capacity, whether it’s in different communities in the United States or in countries where there’s a lot at stake, like election violence, hate speech or other kinds of ethnic violence that we’ve seen. Not putting the resources in place to understand and possibly take action, and making very bland statements with the benefit of hindsight, really erodes the credibility of a lot of the tech platforms.

I would add that some of the worst aggressors, or most powerful and intrusive companies, are basically completely out of the spotlight. For example, the whole commercial surveillance industry and the data brokers are very important, but nobody really knows about them.

How are government regulators doing?

In the United States, I see an acknowledgment after a long reluctance that regulators have a role to play and that fundamental principles are at stake. So for example, the antitrust efforts by 47 attorneys general — that’s a huge effort, and I think that says something.

In the EU, we see similar movements. In the European data strategy I saw a very helpful redefinition of not only the market, but also societal impacts of technology and I think that that’s really important.

One of the challenges is that a lot of regulatory instruments are based on consumer rights in the market, on economic harms, but not so much on democratic harms, societal harms, on the public interest, on children, on public health, on discrimination. There are so many other issues at stake now because of digitization, or exacerbated by digitization, where the regulatory instruments are just not fit for purpose.

Obviously there’s been a lot of comparison of the different approaches in the U.S. and Europe. So in your view, where is the most alignment, and where do you see obstacles?

There’s a growing convergence when it comes to challenges coming from China and on 5G. I think that there’s a growing understanding that this is a challenge that we would best deal with together.

I would say a big difference is that in the United States, [there’s] a heavy-handed national security argument that can be used, while in the EU, the single market is unified but national security is still handled by the individual member states. I think it’s a big problem for the EU [because] for some Americans, it’s hard to understand why it’s so hard for Europe to step up when it comes to some national security concerns [like] China.

There needs to be a meeting of minds, but to be frank, the way in which this president has approached both the EU and multilateralism has really taken away a lot of opportunity for meaningful cooperation, [whether that be] in NATO, the Council of Europe, the OECD or the WTO. There are so many fora where, in a different political climate, a lot more could be done together toward this democratic model of governance that I think we need, and where I would like to see the EU and the U.S. leading together instead of standing with their backs toward each other.

The EU has been called a regulatory superpower, and the debate seems to be between, on one side, acting as a pivot between the U.S. and China and, on the other, aligning with the U.S. to put stakes in the ground for a free internet versus an authoritarian one. Where do you see things headed?

Well, I think Europe lacks sufficient acknowledgment of the geopolitical dynamic around technology. For example, one thing that I think is really an omission is that the EU AI white paper explicitly leaves out the military use of AI. It’s the wrong signal at the wrong moment because [at the same time] you have [European Commission President Ursula] von der Leyen saying this is a geopolitical European Commission. Now I agree; it should be a geopolitical commission because there is so much at stake in the global competition.

It’s just remarkable how there is a mismatch in the tech discussions, where, you’re right, Europe oftentimes proudly claims it’s a superregulator, even though I think it has barely begun. Just because GDPR passed doesn’t make it a superregulator yet. I would hate to see the EU become complacent.

It desperately needs to increase its geopolitical and strategic handling of everything including technology because technology is now a part of everything. Ideally the U.S. and the EU would take the lead together on developing a democratic regulatory model of technology. The U.S. has long advanced together with the EU and [yet] that stops with digitization, which is very peculiar.

So I think it’s a matter of really articulating what the principles, the values and the interests are, and then making sure that they are enshrined in a model that applies to anything from markets to strategic objectives to bilateral relations, whether in development or diplomacy. It has to be a very comprehensive vision. And I hope that the EU and the U.S. can come together, but certainly not alone. I mean, countries like Japan and India are really needed for this to scale up and to have a chance.

AI is a big focus of your work — what are the top-level challenges in your mind as government and society move forward and think about it?

I think the gap in understanding, knowledge and talent between the public and the private sector is a huge challenge. There’s an asymmetry of power between companies that have armies of lawyers who appreciate what is going on versus public authorities that are sort of peeking in from the outside and have a really hard time understanding what’s really happening. And frankly, I’m not sure how many people within companies know what’s really happening. I mean, I make this analogy to engineers who say, “Do you think we know where the head and the tail of the algorithms are?” And if you know that Google’s search algorithm changes, what, 3,200 times a year? Then what does it mean for oversight?

Let’s say you feel like you’ve been mistreated or you’ve been discriminated against, or you’ve been met with unfair behavior by one party or the other. How are you supposed to know? Access to information is a huge theme for me, which is really exacerbated by AI. And so it links to questions about transparency and accountability and it puts a lot of structural issues in the spotlight with no evident answers. I think it’s important that we get there as soon as possible because as we speak, the technologies are advancing and advancing and this mismatch and asymmetry is growing and growing.

And of course, AI is a whole field that still needs to take shape, both with horizontal efforts to understand AI and vertical ones in its applications — AI in healthcare, AI in transport, AI in antitrust, AI in HR and whatnot.

We’re in an election year in the U.S. What would you like to see the Trump administration or a Biden administration do differently?

Well, the U.S. needs to stop being allergic to regulation. Regulation is necessary. 

I think it should embrace a democracy-first approach to how it deals with technology. And it should invite tech companies to make a clear choice, kind of what we’re seeing now with Twitter versus Facebook: Are they going to facilitate more authoritarian elements, [either] through their supply chains [or by working with] governments that are asking [for] compromises in very fundamental ways, like access to source code, or on human rights? Or do they stand with democracy and the rule of law?