The Swedish data protection authority, Datainspektionen, has fined Google 75 million Swedish kronor (about 7 million euros) for failing to comply with its delisting orders.
According to the regulator, which is affiliated with Sweden's Ministry of
Justice, Google violated the terms of the right-to-be-forgotten rule, an EU-mandated regulation introduced in 2014 that allows individuals to request the removal of links to potentially harmful private information from internet searches and directories.
Datainspektionen says an internal audit showed that Google failed to properly remove two search results it was ordered to delist back in 2017, either interpreting too narrowly what content needed to be removed, or failing
to remove a link to content without undue delay.
The watchdog has also slapped Google with a cease-and-desist order over its practice of notifying website owners of delisting requests, arguing that this practice defeats the purpose of link
removal in the first place.
Google has promised to appeal the fine, with a spokesperson for the company saying that it disagrees with this decision on principle.
The European Union's controversial new copyright rules are on a collision course with EU data privacy rules. The GDPR guards data protection, privacy, and other fundamental rights in the handling of personal data. Such rights are likely to be affected by
an automated decision-making system that's guaranteed to be used, and abused, under Article 17 to find and filter out unauthorized copyrighted material. Here we take a deep dive into how the EU got here, and why Member States should act now to
embrace enforcement policies for the Copyright Directive that steer clear of automated filters, which would violate the GDPR by censoring and discriminating against users.
Platforms Become the New Copyright Police
Article 17 of the EU's Copyright Directive (formerly Article 13) makes online services liable for user-uploaded content that infringes someone's copyright. To escape liability, online service operators have to show that they made
best efforts to obtain rightsholders' authorization and ensure infringing content is not available on their platforms. Further, they must show they acted expeditiously to remove content and prevent its re-upload after being notified by rightsholders.
Prior to passage of the Copyright Directive, user rights advocates alerted lawmakers that operators would have to employ upload filters to keep infringing content off their platforms. They warned that then-Article 13 would turn online
services into copyright police with a special license to scan and filter billions of users' social media posts, videos, audio clips, and photos for potential infringements.
While not everyone agreed about the features of the
controversial overhaul of outdated copyright rules, there was little doubt that any automated system for catching and blocking copyright infringement would impact users, who would sometimes find their legitimate posts erroneously removed or blocked.
Instead of unreservedly safeguarding user freedoms, the compromise worked out focuses on procedural safeguards to counter over-blocking. Although complaint and redress mechanisms are supposed to offer a quick fix, chances are that censored Europeans will
have to join a long queue of fellow victims of algorithmic decision-making and await the chance to plead their case.
Can't See the Wood for the Trees: The GDPR
There's something awfully familiar
about the idea of an automated black-box judgment system that weighs user-generated content and has a significant effect on the position of individuals. At recent EU copyright dialogue debates on the technical and legal limits of copyright filters, EU data
protection rules, which restrict the use of automated decision-making processes involving personal data, were not put on the agenda by EU officials. Nor were academic experts on the GDPR who have raised this issue in the past (read this analysis by
Sophie Stalla-Bourdillon, or have a look at this year's CPDP panel on copyright filters).
Under Article 22 of the GDPR, users have a right "not to be subject to a decision based solely on automated processing, including
profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." Save for exceptions, which will be discussed below, this provision protects users from detrimental decisions made by algorithms, such
as being turned down for an online loan by a service that uses software, not humans, to accept or reject applicants. In the language of the regulation, the word "solely" means a decision-making process that is totally automated and excludes any
real human influence on the outcome.
The Copyright-Filter Test

Personal Data
The GDPR generally applies if a provider is processing personal data, which is defined as any information relating to an
identified or identifiable natural person ("data subject," Article 4(1) GDPR). Virtually every post that Article 17 filters analyze will have come from a user who had to create an account with an online service before making their post. The
required account registration data make it inevitable that Copyright Directive filters must respect the GDPR. Even anonymous posts will have metadata, such as IP addresses (C-582/14, Breyer v Germany), which can be used to identify the poster.
Anonymization is technically fraught, and even purportedly anonymized content will not satisfy the GDPR if it is connected with a user profile, such as a social media profile on Facebook or YouTube.
Defenders of copyright
filters might counter that these filters do not evaluate metadata. Instead, they'll say that filters merely compare uploaded content with information provided by rightsholders. However, the Copyright Directive's algorithmic decision-making is
about much more than content-matching. It is the decision whether a specific user is entitled to post a specific work. Whether the user's upload matches the information provided by rightsholders is just a step along the way. Filters might not
always use personal data to determine whether to remove content, but the decision is always about what a specific individual can do. In other words: how can monitoring and removing people's uploads, which express views they seek to share,
not involve a decision about that individual?
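To make the distinction between content-matching and decision-making concrete, here is a minimal sketch of how an upload filter of this kind might work. Everything in it is hypothetical (the names, the use of simple SHA-256 hashes where real systems use perceptual fingerprinting, the record layout); the point it illustrates is the article's: even though the matching step looks only at the content, the resulting decision is recorded against an identifiable account and IP address, and so involves personal data.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical reference database supplied by rightsholders: fingerprints
# of protected works. Real filters use perceptual fingerprints rather than
# exact hashes, but the structure of the decision is the same.
RIGHTSHOLDER_FINGERPRINTS = {
    hashlib.sha256(b"protected work").hexdigest(): "Example Rightsholder Ltd.",
}

@dataclass
class FilterDecision:
    user_id: str                  # account identifier: personal data, Art. 4(1) GDPR
    upload_ip: str                # metadata that can identify the poster (C-582/14 Breyer)
    blocked: bool
    matched_owner: Optional[str]  # who claimed the work, if anyone
    timestamp: str

def filter_upload(user_id: str, upload_ip: str, content: bytes) -> FilterDecision:
    """Match an upload against rightsholder fingerprints and record the outcome.

    The matching itself is content-only, but the decision that is stored and
    enforced is keyed to the uploader's account and IP address.
    """
    fingerprint = hashlib.sha256(content).hexdigest()
    owner = RIGHTSHOLDER_FINGERPRINTS.get(fingerprint)
    return FilterDecision(
        user_id=user_id,
        upload_ip=upload_ip,
        blocked=owner is not None,
        matched_owner=owner,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# A matching upload is blocked; the record ties the block to the individual.
decision = filter_upload("alice_1984", "203.0.113.7", b"protected work")
```

Note that no field of the stored decision is free of personal data: even if `user_id` were dropped, the IP address and timestamp alone could identify the poster.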
Moreover, the concept of "personal data" is very broad. The EU Court of Justice (Case C-434/16, Nowak v Data Protection Commissioner) held that "personal
data" covers any information "provided that it 'relates' to the data subject," whether through the content (a selfie uploaded on Facebook), through the purpose (a video is processed to evaluate a person's preferences), or
through the effect (a person is treated differently due to the monitoring of their uploads). A copyright filter works by removing any content that matches materials from anyone claiming to be a rightsholder. The purpose of filtering is to decide
whether a work will or won't be made public. The consequence of using filtering as a preventive measure is that users' works will be blocked in error, while other (luckier) users' works will not be blocked, meaning the filter creates a significant effect
or even discriminates against some users.
Even more importantly, the Guidelines on automated decision-making developed by the WP29, an official European data protection advisory body (now the EDPB), provide a user-focused
interpretation of the requirements for automated individual decision-making. Article 22 applies to decisions based on any type of data. That means that Article 22 of the GDPR applies to algorithms that evaluate user-generated content that is
uploaded to a platform.
Do copyright filters result in "legal" or "significant" effects as envisioned in the GDPR? The GDPR doesn't define these terms, but the
guidelines endorsed by the European Data Protection Board enumerate some "legal effects," including denial of benefits and the cancellation of a contract.
The guidelines explain that even where a filter's judgment does
not have legal impact, it still falls within the scope of Article 22 of the GDPR if the decision-making process has the potential to significantly affect the behaviour of the individual concerned, has a prolonged impact on the user, or leads to
discrimination against the user. For example, having your work erroneously blocked could lead to adverse financial circumstances or denial of economic opportunities. The more intrusive a decision is and the more reasonable expectations are frustrated,
the higher the likelihood for adverse effects.
Consider a takedown or block of an artistic video by a creator whose audience is waiting to see it (they may have backed the creator's crowdfunding campaign). This could result in
harming the creator's freedom to conduct business, leading to financial loss. Now imagine a critical essay about political developments. Blocking this work is censorship that impairs the author's right of free expression. There are many more examples
that show that adverse effects will often be unavoidable.
Legitimate Grounds for Automated Individual Decision-Making
There are three grounds under which automated decision-making may be allowed
under the GDPR's Article 22(2). Users may be subjected to automated decision-making if one of three exceptions applies:
it's necessary for entering into or performance of a contract,
authorized by the EU or member state law, or
based on the user's explicit consent.
Copyright filters cannot justly be considered "necessary" under this rule. "Necessity" is narrowly construed in the data protection framework, and can't merely be
something that is required under terms of service. Rather, a "necessity" defence for automated decision-making must be in line with the objectives of data protection law, and can't be used if there are more fair or less intrusive measures
available. The mere participation in an online service does not give rise to this "necessity," and thus provides no serious justification for automated decision-making.
Perhaps proponents of upload filters will argue that filters are
authorized by the EU member state laws that implement the Copyright Directive. Whether this is what the directive requires has been ambiguous from the very beginning.
Copyright Directive rapporteur MEP Axel Voss insisted
that the Copyright Directive would not require upload filters and dismissed claims to the contrary as mere scare-mongering by digital rights groups. Indeed, after months of negotiation between EU institutions, the final language version of the directive
conspicuously avoided any explicit reference to filter technologies. Instead, Article 17 requires "preventive measures" to ensure the non-availability of copyright-protected content and makes clear that its application should not lead to any
identification of individual users, nor to the processing of personal data, except where provided under the GDPR.
Even if the Copyright Directive does "authorize" the use of filters, Article 22(2)(b) of the GDPR says
that regulatory authorization alone is not sufficient to justify automated decision-making. The authorizing law (the law that each EU Member State will enact to implement the Copyright Directive) must include "suitable" measures to safeguard
users' rights, freedoms, and legitimate interests. It is unclear whether Article 17 provides enough leeway for member states to meet these standards.
Without "necessity" or
"authorization," the only remaining path for justifying copyright filters under the GDPR is explicit consent by users. For data processing based on automated decision-making, a high level of individual control is required. The GDPR
demands that consent be freely given, specific, informed, and unambiguous. As take-it-or-leave-it situations are against the rationale of true consent, it must be assessed whether the decision-making is necessary for the offered service. And consent must
be explicit, which means that the user must give an obvious express statement of consent. It seems likely that few users will be interested in consenting to onerous filtering processes.
Article 22 says that even if automated
decision-making is justified by user consent or by contractual necessity, platforms must safeguard user rights and freedoms. Users always have the right to obtain "human intervention" from platforms, to express their opinion about the content
removal, and to challenge the decision. The GDPR therefore requires platforms to be fully transparent about why and how users' work was taken down or blocked.
Conclusion: Copyright Filters Must Respect Users' Privacy Rights
The significant negative effects on users subjected to automated decision-making, and the legal uncertainties about the situations in which copyright-filters are permitted, should best be addressed by a policy of legislative
self-restraint. Whatever decision national lawmakers take, they should ensure safeguards for users' privacy, freedom of speech and other fundamental rights before any uploads are judged, blocked or removed.
If Member States
adopt this line of reasoning and fulfill their legal obligations in the spirit of EU privacy rules, it could choke off any future for EU-mandated, fully-automated upload filters. This will set the groundwork for discussions about general monitoring and
filtering obligations in the upcoming Digital Services Act.
(Many thanks to Rossana Ducato for the exchange of legal arguments, which inspired this article).