As AI accelerates, Europe’s flagship privacy principles are under attack, warns EDPS


The European Data Protection Supervisor (EDPS) has warned that key planks of the bloc’s data protection and privacy regime are under attack from industry lobbyists and could face a critical reception from lawmakers in the next parliamentary mandate.

“We have quite strong attacks on the principles themselves,” warned Wojciech Wiewiórowski, who heads the regulatory body that oversees European Union institutions’ own compliance with the bloc’s data protection rules, on Tuesday. He was responding to questions from members of the European Parliament’s civil liberties committee who are concerned that the European Union’s General Data Protection Regulation (GDPR) risks being watered down.

“Especially I mean the [GDPR] principles of minimization and purpose limitation. Purpose limitation will be definitely questioned in the next years.”

Elections to the parliament are coming up in June, and the Commission’s mandate expires at the end of 2024, so changes to the EU’s executive are also looming. Any shift of approach by incoming lawmakers could have implications for the bloc’s high standard of protection for people’s data.

The GDPR has only been up and running since May 2018, but Wiewiórowski, who fleshed out his views on incoming regulatory challenges during a lunchtime press conference following publication of the EDPS’ annual report, said the next parliament will contain few lawmakers who were involved in drafting and passing the flagship privacy framework.

“We can say that these people who will work in the European Parliament will see GDPR as a historic event,” he suggested, predicting there will be an appetite among the incoming cohort of parliamentarians to debate whether the landmark legislation is still fit for purpose. Though he also noted that revisiting past laws is a recurring process, one that happens every time the make-up of the elected parliament turns over.

But he particularly highlighted industry lobbying, especially complaints from businesses targeting the GDPR’s purpose limitation principle. Some in the scientific community also see this element of the law as a limit on their research, per Wiewiórowski.

“There is a kind of expectation from some of the [data] controllers that they will be able to reuse the data which are collected for reason ‘A’ in order to find things which we don’t know even that we will look for,” he said. “There is an old saying of one of the representatives of business who said that the purpose limitation is one of the biggest crimes against humanity, because we will need this data and we don’t know for which purpose.

“I don’t agree with it. But I cannot close my eyes to the fact that this question is asked.”

Any shift away from the GDPR’s purpose limitation and data minimization principles could have significant implications for privacy in the region, which was the first to pass a comprehensive data protection framework. The EU is still considered to have some of the strongest privacy rules anywhere in the world, even as the GDPR has inspired similar frameworks elsewhere.

Included in the GDPR is an obligation on those wanting to use personal data to process only the minimum information necessary for their purpose (aka data minimization). Additionally, personal data that’s collected for one purpose cannot simply be re-used, willy-nilly, for any other purpose that comes along.

The GDPR’s purpose limitation principle implies that a data operation should be attached to a specific use. Further processing may be possible — but, for example, it may require obtaining permission from the person whose information it is, or having another valid legal basis. So the purpose limitation approach injects intentional friction into data operations.

This element of the law requires up-front clarity about an intended use of personal data, rather than collecting and holding people’s information just in case some future purpose might emerge, while data minimization encourages holding as little data as possible. But with the current industry-wide push to develop ever more powerful generative AI tools there’s a huge scramble for data to train AI models — an impetus that runs directly counter to the EU’s approach.
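To make those two principles concrete in engineering terms, the sketch below is a purely illustrative Python example (the record structure, the purpose labels and the PurposeLimitationError are hypothetical, not part of any GDPR tooling): a pipeline stores only the fields a declared purpose needs and refuses to re-use a record for a purpose that was never declared at collection time.

# Illustrative only: hypothetical names, not real GDPR tooling.
from dataclasses import dataclass, field

@dataclass
class PersonalRecord:
    subject_id: str
    fields: dict                        # keep only what the declared purpose needs (data minimization)
    declared_purposes: set = field(default_factory=set)

class PurposeLimitationError(Exception):
    pass

def process(record: PersonalRecord, purpose: str) -> dict:
    # Purpose limitation: processing is allowed only for a purpose declared at collection time.
    if purpose not in record.declared_purposes:
        # Re-use for a new purpose would need fresh consent or another valid legal basis.
        raise PurposeLimitationError(f"{purpose!r} was not declared when this data was collected")
    return record.fields

record = PersonalRecord(
    subject_id="u123",
    fields={"email": "user@example.com"},       # nothing beyond what order fulfilment requires
    declared_purposes={"order_fulfilment"},
)

process(record, "order_fulfilment")             # allowed
try:
    process(record, "model_training")           # blocked: this purpose was never declared
except PurposeLimitationError as exc:
    print(exc)

The point of the sketch is simply the friction described above: re-using the same record to train a model is a separate purpose, and it stays blocked unless a new legal basis is established.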

OpenAI, the maker of ChatGPT, has already run into trouble here. It’s facing a raft of GDPR compliance issues and investigations — including related to the legal basis claimed for processing people’s data for model training.

Wiewiórowski did not explicitly blame generative AI for driving the “strong attacks” on the GDPR’s purpose limitation principle. But he did name AI as one of the key challenges facing the region’s data protection regulators as a result of fast-paced tech developments.

“The problems connected with artificial intelligence and neuroscience will be the most important part of the next five years,” he predicted on nascent tech challenges.

“The technological part of our challenges is quite obvious at the time of the revolution of AI despite the fact that this is not the technological revolution that much. We have rather the democratization of the tools. But we have to remember as well, that in times of great instability, like the ones that we have right now — with Russia’s war in Ukraine — is the time when technology is developing every week,” he also said on this.

Wars are playing an active role in driving use of data and AI technologies — such as in Ukraine where AI has been playing a major role in areas like satellite imagery analysis and geospatial intelligence — with Wiewiórowski saying battlefield applications are driving AI uptake elsewhere in the world. The effects will be pushed out across the economy in the coming years, he further predicted.

On neuroscience, he pointed to regulatory challenges arising from the transhumanism movement, which aims to enhance human capabilities by physically connecting people with information systems. “This is not science fiction,” he said. “[It’s] something which is going on right now. And we have to be ready for that from the legal and human rights point of view.”

Examples of startups targeting transhumanist ideas include Elon Musk’s Neuralink, which is developing implanted chips that can read brain signals. Facebook owner Meta has also been reported to be working on AI that can interpret people’s thoughts.

Privacy risks in an age of increasing convergence of technology systems and human biology could be grave indeed. So any AI-driven weakening of EU data protection laws in the near term is likely to have long term consequences for citizens’ human rights.
