Digital Omnibus: How Big Tech Lobbying Is Gutting the GDPR

Last week we at EFRI wrote about the Digital Omnibus leak and warned that the European Commission was preparing a stealth attack on the GDPR.
Since then, two things have happened:
The Commission has now officially published its Digital Omnibus proposal.
noyb (Max Schrems’ organisation) has released a detailed legal analysis and new campaigning material that confirms our worst fears: this is not harmless “simplification”, it is a deregulation package that cuts into the core of the GDPR and ePrivacy.
What noyb has now put on the table
On 19 November 2025, noyb published a new piece with the blunt headline: “Digital Omnibus: EU Commission wants to wreck core GDPR principles”.
Here’s a focused summary of the four core points from noyb’s announcement, in plain language:
New GDPR loophole via “pseudonyms” and IDs
The Commission wants to narrow the definition of “personal data” so that much data under pseudonyms or random IDs (ad-tech, data brokers, etc.) might no longer fall under the GDPR.
This would mean a shift from an objective test (“can a person be identified, directly or indirectly?”) to a subjective test (“does this company currently want or claim to be able to identify someone?”).
Therefore, whether the GDPR applies would depend on what a company says about its own capabilities and intentions.
Different companies handling the same dataset could fall inside or outside the GDPR.
For users and authorities, it becomes almost impossible to know ex ante whether the GDPR applies – endless arguments over a company’s “true intentions”.
Schrems’ analogy: it’s like a gun law that only applies if the gun owner admits he can handle the gun and intends to shoot – obviously absurd as a regulatory concept.
Weakening ePrivacy protection for data on your device
Today, Article 5(3) ePrivacy protects against remote access to data on your devices (PCs, smartphones, etc.) – based on the Charter right to the confidentiality of communications.
The Commission now wants to add broad “white-listed” exceptions for access to terminal equipment, including “aggregated statistics” and “security purposes”.
Max Schrems finds the wording of the new rule extremely permissive: it could effectively allow extensive remote scanning or “searches” of user devices, as long as they are framed as minimal “security” or “statistics” operations – undermining the current strong protection against device-level snooping.
Opening the door for AI training on EU personal data (Meta, Google, etc.)
Despite clear public resistance (only a tiny minority wants Meta to use their data for AI), the Commission wants to allow Big Tech to train AI on highly personal data, e.g. 15+ years of social-media history.
Schrems’ core argument:
People were told their data is for “connecting” or advertising – now it is fed into opaque AI models, enabling those systems to infer intimate details and manipulate users.
The main beneficiaries are US Big Tech firms building base models from Europeans’ personal data.
The Commission relies on an opt-out approach, but in practice:
Companies often don’t know which specific users’ data are in a training dataset.
Users don’t know which companies are training on their data.
Realistically, people would need to send thousands of opt-outs per year – impossible.
Schrems calls this opt-out a “fig leaf” to cover fundamentally unlawful processing.
On top of training, the proposal would also privilege the “operation” of AI systems as a legal basis – effectively a wildcard: processing that would be illegal under normal GDPR rules becomes legal if it’s done “for AI”. The result is an inversion of normal logic: riskier technology (AI) gets lower, not higher, legal standards.
Cutting user rights back to almost zero – driven by German demands
The starting point for this attack on user rights is a debate in Germany about people using GDPR access rights in employment disputes, for example to prove unpaid overtime. The German government chose to label such use as “abuse” and pushed in Brussels for sharp limits on these rights. The Commission has now taken over this line of argument and proposes to restrict the GDPR access right to situations where it is exercised for “data protection purposes” only.
In practice, this would mean that employees could be refused access to their own working-time records in labour disputes. Journalists and researchers could be blocked from using access rights to obtain internal documents and data that are crucial for investigative work. Consumers who want to challenge and correct wrong credit scores in order to obtain better loan conditions could be told that their request is “not a data-protection purpose” and therefore can be rejected.
This approach directly contradicts both CJEU case law and Article 8(2) of the Charter of Fundamental Rights. The Court has repeatedly confirmed that data-subject rights may be exercised for any purpose, including litigation and gathering evidence against a company. As Max Schrems points out, there is no evidence of widespread abuse of GDPR rights by citizens; what we actually see in practice is widespread non-compliance by companies. Cutting back user rights in this situation shifts the balance even further in favour of controllers and demonstrates how detached the Commission has become from the day-to-day reality of users trying to defend themselves.
EFRI’s take: when Big Tech lobbying becomes lawmaking
For EFRI, the message is clear: the Commission has decided that instead of forcing Big Tech and financial intermediaries to finally comply with the GDPR, it is easier to move the goalposts and rewrite the rules in their favour. The result is a quiet but very real redistribution of power – away from citizens, victims, workers and journalists, and towards those who already control the data and the infrastructure. If this package goes through in anything like its current form, it will confirm that well-organised corporate lobbying can systematically erode even the EU’s flagship fundamental-rights legislation. That makes it all the more important for consumer organisations, victim groups and digital-rights advocates to push back – loudly, publicly and with concrete case stories – before the interests of Big Tech are permanently written into EU law.
The EU has no digital dependency on corporate AI – and corporate AI seems to be the biggest beneficiary of unwinding the GDPR and other legislation protecting personal data.
I partly agree with the point you're making but I don't think it actually applies that much to weakening this specific legislation.
Further, your point doesn't negate the corruption – nothing impedes both things happening at once, and in fact corruption explains the current digital dependency. That dependency was made worse by EU decisions such as treating the US as a “safe harbour” for the data of EU citizens, even though the Snowden revelations, as well as the US's own legislation, had already made it very clear that the US was not safe for any data stored there or held by US companies, since US authorities could secretly force those companies to give them access to that data.
We are in the hole we are in partly because, over the years, people in positions of power in the EU were “friendly” to and “understanding of the concerns” of large US tech companies and, “by an amazing coincidence”, were later given millionaire “jobs” with those companies. EU policy in the digital domain was shaped by something other than the interests of EU citizens or even EU businesses – and, last I checked, US companies were not the ones EU politicians were elected to represent.
Last but not least, further caving to the US will just dig that hole deeper, making us even more susceptible to such blackmail in the future – it is literally the very opposite of the direction we should be going in.
As I see it, the EU is a 470-million-person market that could seriously fuck up US tech companies doing things like cutting the accounts of EU judges who weren't “nice” to them – and do so using already existing regulatory tools (for example, launching investigations into their non-compliance with several EU regulations), which could go all the way to huge penalties and even blocking their access to the EU market, which is huge and represents a massive chunk of their profits. There simply isn't the will to do so, and I fully believe that lack of will is related to the personal upside maximisation of people in positions of power in the EU, since, as you describe, they're already attacking the judiciary in the EU.
Frankly, the only explanation I see for these measures that isn't either some form of corruption or massive incompetence would be if this were just one big smoke-and-mirrors show to delay actions by the current US administration, with no actual intention of ultimately doing anything meaningful – all to buy time within the EU to move to alternatives that are not dependent on the US, and/or for events (for example, the AI bubble crashing) to make the whole thing irrelevant. Even then, this doesn't explain the strange unwillingness of EU regulators to deploy the big guns when the US attacks members of the judiciary in the EU, as you describe, nor the various measures taken in the past at the EU level that have helped create and deepen the current digital dependency.