Europol 2.0: The project that will generalize surveillance in the European Union

George Marinescu
English Section / 14 November

Europol's roadmap set for 2023 foresees 25 potential AI models, from object detection and geolocation in images to identifying deepfakes and extracting personal characteristics

Europol is quietly strengthening its digital architecture, accumulating huge amounts of data and training algorithms capable of redefining the way police work across the bloc, which will lead to the generalization of surveillance in the EU, according to a journalistic investigation by Investigative Journalism for Europe (IJ4EU) and Lighthouse Reports.

According to the cited source, Europol calls the project "Strategic Objective No. 1”, whose goal is to transform the agency into the "European crime intelligence center”, a huge hub of personal data collected from all member states, third countries and private partners.

Critics see this strategy less as necessary coordination and more as a covert experiment in massive data acquisition and extensive surveillance. Internal documents obtained in the cited investigation and analyzed by experts in data protection and artificial intelligence clearly show the driver of this ambition: artificial intelligence (AI) has become the solution that Europol's management considers essential for deciphering increasingly abundant streams of information, from chat app captures to biometric databases.

An investigation by Computer Weekly, Netzpolitik and the website Solomon shows that since 2021 Europol has embarked on a largely secret campaign to develop machine learning models capable of decisively influencing the way policing will be done in the European Union and even beyond its borders. Beyond the documents and interviews with officials, a fundamental question arises: how much is a police agency allowed to collect in the name of security, and what happens when automation penetrates law enforcement without real controls?

Europol claims in a written response to the cited sources that it maintains "an impartial position towards all stakeholders, in order to fulfill its mandate - supporting national authorities in the fight against serious, organized crime and terrorism” - and that the agency "will be at the forefront of innovation and research in law enforcement”.

87 million messages analyzed by Europol

The large-scale data experiment began as a side effect of massive hacking operations in 2020 and 2021, which gave European police access to millions of messages sent via encrypted phones used by criminal networks. The targets were EncroChat, SKY ECC and ANOM. Europol's role was supposed to be limited to transferring hacked data between national authorities, but the agency kept complete copies of the sets, some 60 million messages from EncroChat alone and 27 million from ANOM, and began analyzing them on its own servers. Europol specialists quickly realized that the volume was completely beyond human capacity. But this very limitation opened up a perspective: if humans can't sift through the data, maybe algorithms can. Lives were at stake and criminals could get away with it. The sources cited state that internal documents show that by the end of 2020, Europol planned to train seven machine learning models on the EncroChat data to automatically flag suspicious conversations, which would be the agency's first real experiment with artificial intelligence. The legality of storing and analyzing this data has been challenged, and a case is currently before the Court of Justice of the EU.

The cited sources also recall the episode in September 2021, when ten inspectors from the European Data Protection Supervisor (EDPS) descended on Europol's headquarters, where they found a project almost completely devoid of safeguards: the documentation on monitoring the training of the models had been drafted only after the development was completed. They noted the lack of assessment of the risks of bias, statistical accuracy and procedural foundations. Europol stopped the EncroChat project in February 2021 after the EDPS signaled the need for stricter control, control that the agency seemed eager to avoid. However, the experiment revealed both Europol's ambition and its willingness to push the legal limits. Inside the agency, there was total silence: the risk of a person being mistakenly implicated was considered minimal, and the models were not used operationally. There had also been no explicit mandate until then that would allow Europol to develop and use AI in investigations - but that was about to change.

Thorn partnership opens the door for Americans to Europol's internal plans

In June 2022, a quietly adopted new regulation gave the agency sweeping powers to develop and use AI technologies, as well as the possibility of exchanging operational data directly with private companies. Europol immediately found an ideal cause to make its new tools politically untouchable: combating online child sexual abuse. In a moment perfectly synchronized with the European Commission's proposal requiring digital platforms to scan private messages for abuse material, Europol leaders pressed the Commission to allow these technologies to be adapted for other purposes, sending a clear message: "All data is useful and should be passed on to law enforcement,” because "quality data is needed to train algorithms.” They asked to be excluded from the restrictions of the upcoming AI Act, even though many of their systems would fall into the category of intrusive and high-risk.

The same rhetoric was also found in the private sector, and Europol's relationship with technology developers is notorious. One important partner: Thorn, an American non-profit organization that creates AI tools to detect child abuse images. The sources claim that emails between Thorn and Europol, dated between 2022 and 2025, show how the agency requested access to confidential technical materials to develop its own classifier. In one such message, a representative of Thorn warned: "I must emphasize that the document is confidential and should not be redistributed.”

Europol later asked for help in accessing classifiers from a joint project. Expert Nuno Moniz told the cited sources that the email conversations raise serious questions, because Thorn was treated as a privileged partner, with unprecedented access to Europol's internal plans. Informal meetings, lunches and presentations at Europol's headquarters reinforce the image of close collaboration. Europol insists that it has not purchased Thorn products and that it does not intend to use them, but some of the correspondence remains heavily redacted or undisclosed.

The journalistic investigation states that the lack of transparency is not limited to the relationship with Thorn. Europol consistently refuses to publish crucial documents about its AI program, and those provided are so redacted that they become useless.

FRO - a dysfunctional European institution?

The European Ombudsman has challenged Europol's reasoning, and several complaints are still under analysis. The weak point of the supervision is its own internal architecture: the Fundamental Rights Officer (FRO), created to prevent abuse, has no enforcement powers. "The role is institutionally weak,” says Barbara Simao of the organization Article 19, who warns that the FRO functions largely symbolically; the office's own reports admit that there are no adequate tools for evaluating AI systems, the procedures being inspired, among other things, by a 1998 manual, "The Responsible Administrator.” The parliamentary group overseeing Europol has no real power, and the EDPS, while vital, has limited resources to control the exponential growth of AI in law enforcement.

The sources also mention that in the summer of 2023, Europol's top priority had become the creation of its own classifier for child abuse material. The FRO documents acknowledge risks such as racial or gender bias in the training data, but offer only brief recommendations. The datasets were to include known abuse material and "non-CSE” images, whose sources were not specified, and the CSE material was to come mainly from NCMEC, the American organization that receives reports from companies such as Meta and Google.

Even though Europol's own project was later put on hold, the NCMEC data fed the first automated model actually used by Europol: EU-CARES, launched in October 2023, capable of automatically downloading files, checking them against Europol's internal databases and transmitting results to EU police forces within minutes. Automation has removed bottlenecks, but introduced new risks: "incorrect data reported by NCMEC”, "mismatches” and the potential involvement of innocent people. The EDPS warned of "severe consequences”, forcing Europol to introduce "unconfirmed” flags, alerts for anomalies and better mechanisms for deleting retracted reports.
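The core technique behind automated pipelines of this kind is matching incoming files against a database of identifiers for previously confirmed material. The sketch below is purely illustrative, using plain SHA-256 hashes and hypothetical names; it is not Europol's actual implementation, which is not publicly documented:

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Return a hex digest identifying the file's exact contents."""
    return hashlib.sha256(data).hexdigest()

def triage(reports: list[bytes], known_hashes: set[str]) -> dict:
    """Split incoming report files into database matches and unknowns.

    Files whose hash appears in the reference set are flagged as matches;
    everything else stays 'unconfirmed', mirroring the flag the EDPS
    pushed Europol to introduce for unverified reports.
    """
    matches, unconfirmed = [], []
    for data in reports:
        (matches if file_hash(data) in known_hashes else unconfirmed).append(data)
    return {"matches": matches, "unconfirmed": unconfirmed}

# Example: one file already in the reference database, one new file.
known = {file_hash(b"previously-seen file")}
result = triage([b"previously-seen file", b"new file"], known)
```

Note that exact-hash matching only catches bit-identical files; real systems reportedly use perceptual hashing to survive re-encoding, which is also where the "mismatch" risks the article mentions come in.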

Germany: 48.3% of reports had no operational value

In February 2025, Catherine De Bolle announced that the system had processed over 780,000 reports, without their accuracy being known. The German Federal Police, which receives NCMEC reports directly, said that 48.3% of the 205,728 received in 2024 had no operational value.

In parallel, Europol is expanding automation into an extremely sensitive area: facial recognition. Since 2016 the agency has been testing and purchasing commercial tools, and its most recent acquisition is NeoFace Watch, produced by the company NEC, intended to replace or supplement the FACE system, which had access to about one million facial images in 2020. Email correspondence shows that discussions on NeoFace date back to May 2023. The EDPS warned of the risk of low accuracy in processing the faces of minors and of incompatibility between different systems.

Europol decided to exclude data on children under 12. The NIST studies cited by Europol did not use images captured in field conditions; with poor lighting, the error rate could rise to 38%. The contract with NEC was signed in October 2024. In its opinion, the FRO recognized serious risks regarding the right to defense and the intrusive nature of the system, classifying it as high-risk under the AI Act, but still approved it, requesting only transparency. NEC claims its algorithm is "the most accurate in the world,” but experts like Luc Rocher point out to the cited sources that accuracy degrades in real-world conditions, particularly affecting minorities and young people. Barbara Simao warned that the focus on technical performance downplays the real dangers.

An internal Europol roadmap, dated 2023, reveals the true magnitude: 25 potential AI models, from object detection and geolocation in images to identifying deepfakes and extracting personal characteristics. The proposed architecture would make Europol the hub of police automation in the EU, with its models being able to be used across the Union.

In February 2025, Catherine De Bolle announced that ten impact assessments had been submitted, seven for models under development and three for new models. But the response sent to MEPs, a four-page document with generic descriptions, did not clarify anything. MEP Saskia Bricmont says the systems developed by Europol "may involve very strong risks and consequences for fundamental rights” and that "strong and effective oversight is crucial”, but admits that it is almost impossible for MEPs to fulfill their monitoring role.

Meanwhile, the European Commission is proposing to transform Europol into a "truly operational police agency” and wants to double its budget to €3 billion.

This is a portrait of an agency in a state of rapid expansion, driven by technology, protected by opacity and politically supported in the name of security, while control mechanisms remain fragmented and powerless.

As Europe prepares for a new era of automated law enforcement, the crucial question remains: who supervises the supervisors?
