The European Commission has drafted new laws to force ISPs to block child porn. The measure will be voted on by the European Parliament next month. The technical solutions envisaged are broadly based on arrangements in the UK, where all major
ISPs block access to child abuse websites named on a list maintained by the Internet Watch Foundation (IWF).
If the laws are passed as proposed, the UK government will get powers to force the small ISPs that do not use the IWF blocklist – which serve fewer than 2% of British internet users – to fall into line. Last year the Home Office abandoned
a pledge to enforce 100% compliance.
Although voluntary, the British system is not without controversy, and EuroISPA, the European ISP trade association, is lobbying MEPs to reject the move to enforce it across the bloc.
Malcolm Hutty, the President of EuroISPA, said: In order to make the Directive on child sexual exploitation as strong as possible, emphasis must be placed on making swift notice and takedown of child sexual abuse material
focused and effective. Blocking, as an inefficient measure, should be avoided. Law enforcement authorities' procedures for rapid communication to internet hosting providers of such illegal material must be reviewed and bottlenecks eliminated.
Cecilia Malmstrom, the European commissioner for home affairs, is worried that MEPs' amendments to a draft directive on the sexual abuse and exploitation of children would make it more difficult for EU member states to block access to websites
carrying child pornography.
The European Parliament's civil liberties committee is to vote on the European Commission's proposal and MEPs' amendments on 14th February.
At present, it is up to member states whether they want to block websites carrying such content. The Commission is seeking to introduce an obligation on all member states to block access in cases where removal of the material is impossible.
A majority of member states back the mandatory blocking of internet sites, but the measure has run into trouble with MEPs, and Germany, Ireland and Luxembourg have openly rejected it.
Some of the hundreds of amendments to the draft directive put forward by MEPs would introduce EU-wide rules that would make it more difficult for member states to continue blocking websites. Many MEPs are concerned about the implications of
website blocking for freedom of speech.
I am a liberal, I consider free speech as a fundamental value and I have fought for that all my life, so accusations that I'm trying to censor the internet and limit freedom of speech really go to my heart because that is absolutely not what
I'm trying to do, Malmström said. But I have seen those pictures; they have nothing to do with freedom of speech. This is a horrible violation.
She also rejected the slippery-slope argument -- the notion that once the EU imposed rules on blocking access to one type of website, it could do so for other types in the future. I intend in no way to propose any other type of blocking for
any other thing, but this particular crime demands particular attention.
The European parliament's civil liberties, justice and home affairs committee (LIBE) will meet in Strasbourg tomorrow, when it is expected to approve a controversial measure that would compel EU member states to inform internet publishers that
their images are to be deleted from the internet or blocked for reasons of child pornography.
Publishers will also have to be informed of their right to appeal against any removal or blocking.
The measure would make illegal the UK's system of blocking and removing child pornography without informing the publisher.
MEPs seem more concerned with the rights of child pornographers than they do with the rights of children who have been sexually abused to make their foul, illegal images, said John Carr, the secretary of the Children's Charities Coalition on Internet Safety (and an adviser to the UK government on child internet safety!)
Surely it is only non-child porn publishers who would appeal. If they can show that their sites are legal, then it is absolutely correct that they should be able to prove their point. Child pornographers, on the other hand, would simply have no case on which to appeal: their material is illegal and will stay removed or blocked.
The EU has taken a step towards common rules against those who sexually abuse children and post images of the abuse on the internet.
A committee of Euro MPs backed an EU draft directive calling for child abuse images to be removed at source.
Where removal is impossible - for example, because web pages are hosted outside the EU - then the abuse images may be blocked by national authorities.
MEPs aim to adopt the new rules later this year, after further negotiations.
MEPs insisted that any moves to block access to images on the web must be accompanied by transparent procedures and provide adequate safeguards so that the restriction is limited to what is necessary and proportionate.
The safeguards would include informing users of the reason for the block and informing content providers and users of their right to appeal.
The European Court of Justice has given a preliminary opinion that will have far-reaching implications in the fight against overaggressive copyright monopoly abusers. It is not a final verdict, but the Advocate General's position; the Court
generally follows this. The Advocate General says that no ISP can be required to filter the Internet, and particularly not to enforce the copyright monopoly.
The opinion is very clear: Advocate General Cruz Villalon considers that the installation of that filtering and blocking system is a restriction on the right to respect for the privacy of communications and the right to protection of personal
data, both of which are rights protected under the Charter of Fundamental Rights. By the same token, the deployment of such a system would restrict freedom of information, which is also protected by the Charter of Fundamental Rights.
Broadband providers have voiced alarm over an EU proposal to create a Great Firewall of Europe by blocking illicit web material at the borders of the bloc.
The proposal emerged at an obscure meeting of the Council of the European Union's Law Enforcement Work Party (LEWP), a forum for cooperation on issues such as counter-terrorism, customs and fraud.
The minutes from the meeting state:
The Presidency of the LEWP presented its intention to propose concrete measures towards creating a single secure European cyberspace with a certain virtual Schengen border and virtual access points whereby the
Internet Service Providers (ISP) would block illicit contents on the basis of the EU black-list. Delegations were also informed that a conference on cyber-crime would be held in Budapest on 12-13 April 2011.
Malcolm Hutty, head of public affairs at LINX, a cooperative of British ISPs, said the plan appeared ill thought-out and confused. We take the view that network level filtering of the type proposed has been proven ineffective.
Broadband providers say that illegal content should be removed at the source by cooperation between police and web hosting firms because network blocking can easily be circumvented.
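To see why providers say network-level blocking is so easily circumvented, consider DNS blacklisting, the technique most commonly used for this. A toy sketch in Python (all domain names and addresses are invented for the example): the block lives only in the ISP's own resolver, so it vanishes the moment a user points their device at any other resolver.

```python
# Toy model of DNS-based blocking and its trivial circumvention.
# All names and addresses here are invented for illustration.

BLOCKLIST = {"blocked.example"}                    # the ISP's blacklist
DNS_RECORDS = {"blocked.example": "203.0.113.7"}   # the real DNS record

def isp_resolver(domain):
    """The ISP's resolver refuses to answer for blacklisted names."""
    if domain in BLOCKLIST:
        return None  # NXDOMAIN: the block is enforced here, and only here
    return DNS_RECORDS.get(domain)

def third_party_resolver(domain):
    """Any resolver outside the ISP's control answers normally."""
    return DNS_RECORDS.get(domain)

print(isp_resolver("blocked.example"))          # None -- blocked
print(third_party_resolver("blocked.example"))  # '203.0.113.7' -- bypassed
```

Removal at the hosting source, by contrast, takes the material away from every resolver and every user at once, which is the providers' point.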
The Committee on Civil Liberties, Justice and Home Affairs (LIBE) of the European Parliament has adopted a compromise text agreed with the Council and the Commission on the draft Child Sexual Exploitation Directive. The compromise text allows
Member States to introduce mandatory blocking measures for Internet sites containing child abuse images, but does not require them as the Council had proposed.
Article 21: Measures against websites containing or disseminating child pornography:
Member States shall take the necessary measures to ensure the prompt removal of webpages containing or disseminating child pornography hosted in their territory and to endeavour to obtain the removal of such pages hosted
outside of their territory.
Member States may take measures to block access to webpages containing or disseminating child pornography towards the Internet users in their territory. These measures must be set by transparent procedures and provide
adequate safeguards, in particular to ensure that the restriction is limited to what is necessary and proportionate, and that users are informed of the reason for the restriction. These safeguards shall also include the possibility of judicial redress.
Civil liberties groups will be pleased at having defeated mandatory blocking across Europe, but disappointed at having failed to ensure that judicial authority is required before an ISP can be forced to block an Internet address.
The draft Directive is due to be adopted in the Autumn.
The European Parliament has approved new rules that will implement tough penalties for offences related to child porn online. The resolution was adopted by the European Parliament with 541 votes in favor and two against.
The directive will require EU countries to remove child porn websites hosted on their territory, and allows them to block access to pages they cannot have removed. EU member states will have two years to transpose the rules into national law.
The new rules will outline requirements on prevention, prosecution of offenders and protection of victims.
Association of Sites Advocating Child Protection Executive Director Tim Henning said:
It covers all the major bases and will make it less difficult for EU authorities to prosecute these heinous crimes against children. It will also help to reduce the proliferation and consumption of child pornography content.
But he noted one troubling aspect of blocking suspected website pages:
This needs to be completely transparent in order to prevent EU territories from blocking legal adult entertainment that may be mistaken for illegal child porn. The directive has stated this will be the case.
The rules set out penalties for about 20 criminal offenses. For instance, coercing a child into sexual actions or forcing a child into prostitution will be punishable by at least 10 years in prison. Child pornography producers will face at least
3 years, and viewers of online child pornography will face at least 1 year.
The Council of the EU has adopted a directive aimed at combating sexual abuse and exploitation of children as well as child pornography.
The directive will harmonise around twenty relevant criminal offences, at the same time setting a high level of penalties.
The new rules, which have to be transposed into national law within two years, also include provisions to fight against online child pornography and sex tourism. They also aim to prevent convicted paedophiles moving to another EU member state from
exercising professional activities involving regular contacts with children. Finally, the directive introduces measures to protect the child victim during investigations and legal proceedings.
Concerning online child pornography, the text obliges member states to ensure the prompt removal of such websites hosted in their territory and to endeavour to obtain their removal if hosted outside of their territory.
In addition, member states may block access to such web pages, but must follow transparent procedures and provide safeguards if they make use of this possibility.
Job vetting will also extend to a Europe-wide level, with a reliable check for EU nationals when applying for jobs related to the care of children. In addition, within the EU, higher protection of children will be achieved once member states implement the directive and fully commit themselves to circulating data on disqualifications from their criminal records. It is currently very difficult to vet foreign EU nationals applying for jobs related to the care of children.
In legal advice to the EU Court of Justice, Advocate General Pedro Cruz Villalon has advised that EU law allows for ISPs to be ordered to block their customers from accessing known copyright infringing sites.
The opinion, which relates to a dispute between a pair of movie companies and an Austrian ISP over the now-defunct site Kino.to, is not legally binding. However, the advice of the Advocate General is usually followed in such cases.
The current dispute involves Austrian ISP UPC Telekabel Wien and movie companies Constantin Film Verleih and Wega Filmproduktionsgesellschaft. The film companies complained that the ISP was providing its subscribers with access to Kino.to which
enabled them to access their copyrighted material without permission.
Interim injunctions were granted in the movie companies' favor which required the ISP to block the site. However, the Austrian Supreme Court later issued a request to the Court of Justice to clarify whether a provider that provides Internet access to users of an illegal website should be regarded as an intermediary, in the same way that the host of an illegal site might be.
In his opinion, Advocate General Pedro Cruz Villalon said that the ISP of a user accessing a website said to be infringing copyright should also be regarded as an intermediary whose services are used by a third party, such as the operator of an
infringing website. This means that the ISP of an infringing site user can be subjected to a blocking injunction, as long as the injunction contains specifics on the technical measures required.
The European Parliament is currently considering EU wide website blocking powers.
The latest draft of the directive on combating terrorism contains proposals on blocking websites that promote or incite terror attacks. Member states may take all necessary measures to remove or to block access to webpages publicly inciting to
commit terrorist offences, says text submitted by German MEP and rapporteur Monika Hohlmeier.
Digital rights activists have argued that it leaves the door wide open to over-blocking and censorship, as safeguards defending proportionality and fundamental rights can be skipped if governments opt for voluntary schemes implemented by internet companies.
Amendments have been proposed that would require any takedown or web blocking to be subject to full judicial oversight and approval.
Last week, Estonian MEP Marju Lauristin told Ars she was very disappointed with the text, saying it was jeopardising freedom of expression as enshrined in the Charter of Fundamental Rights of the EU.
The measure will be up for a vote by the civil liberties committee on 27th June.
ISPs that block access to websites with adult content or block ads could be breaking EU guidelines on net neutrality even if customers opt in. EU regulations only allow providers to block content for three reasons: to comply with a member state's
laws, to manage levels of traffic across a network, or for security.
Blocking websites with adult content has no clear legal framework in UK legislation, and providers have relied on providing the ability to opt in to protect themselves from falling foul of the rules. However, an update to guidelines issued by EU
body Berec says that even if a person indicates they want certain content to be blocked, it should be done on their device, rather than at a network level. The updated guidelines say:
With regard to some of the suggestions made by stakeholders about traffic management features that could be requested or controlled by end-users, Berec notes that the regulation does not consider that end-user consent enables ISPs to engage in
such practices at the network level.
End-users may independently choose to apply equivalent features, for example via their terminal equipment or more generally on the applications running at the terminal equipment, but Berec considers that management of such features at the
network level would not be consistent with the regulation.
Frode Sorensen, co-chair of the Berec expert working group on net neutrality, said the updated guidance made it clear that Berec had found no legal basis for using customer choice to justify blocking any content: blocking is permissible only under national legislation, for traffic management, or for security.
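What Berec's network-level versus device-level distinction amounts to in practice can be sketched in a few lines of Python (a hypothetical illustration; the domains and the blocklist are invented): the network carries all traffic unchanged, and any filtering happens in software the end-user has chosen to install and controls.

```python
from urllib.parse import urlparse

# A user-chosen blocklist stored on the user's own device (hypothetical names).
USER_BLOCKLIST = {"adult-site.example", "ads.example"}

def allowed(url):
    """Device-level check: the ISP carries all traffic untouched; filtering
    happens only at the terminal equipment, under the end-user's control."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in USER_BLOCKLIST)

print(allowed("https://news.example/story"))     # True  -- passes the filter
print(allowed("https://ads.example/banner.js"))  # False -- filtered locally
```

The same blocklist enforced inside the ISP's network, even with the customer's consent, is what the updated guidelines rule out.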
David Cameron said in October last year that he had secured an opt-out from the rules enabling British internet providers to introduce porn filters. However, Sorensen said he was not aware of any opt-out, and the net neutrality rules introduced in November, after Cameron made his claim, state that they apply to the whole European Economic Area, which includes the UK.
Social media giants Facebook, Google and Twitter will be forced to change their terms of service for EU users within a month, or face hefty fines from European authorities, an official said on Friday.
The move was initiated after politicians have decided to blame their unpopularity on 'fake news' rather than their own incompetence and their failure to listen to the will of the people.
The EU Commission sent letters to the three companies in December, stating that some terms of service were in breach of EU protection laws and urged them to do more to prevent fraud on their platforms. The EU has also urged social media companies
to do more when it comes to assessing the suitability of user generated content.
The letters, seen by Reuters, explained that the EU Commission also wanted clearer signposting for sponsored content, and that mandatory rights, such as cancelling a contract, could not be interfered with.
Germany said this week it is working on a new law that would see social media sites face fines of up to $53 million if they failed to strengthen their efforts to remove material that the EU does not like. German censorship minister Heiko Maas said:
There must be as little space for criminal incitement and slander on social networks as on the streets. Too few criminal comments are deleted and they are not erased quickly enough. The biggest problem is that networks do not take the complaints
of their own users seriously enough...it is now clear that we must increase the pressure on social networks.
Article 13: Monitoring and filtering of internet content is unacceptable. Index on Censorship joined with 56 other NGOs to call for the deletion of Article 13 from the proposal on the Digital Single Market, which includes obligations on internet
companies that would be impossible to respect without the imposition of excessive restrictions on citizens' fundamental rights.
Dear President Juncker,
Dear President Tajani,
Dear President Tusk,
Dear Prime Minister Ratas,
Dear Prime Minister Borissov,
Dear MEP Voss, MEP Boni
The undersigned stakeholders represent fundamental rights organisations.
Fundamental rights, justice and the rule of law are intrinsically linked and constitute core values on which the EU is founded. Any attempt to disregard these values undermines the mutual trust between member states required for the EU to
function. Any such attempt would also undermine the commitments made by the European Union and national governments to their citizens.
Article 13 of the proposal on Copyright in the Digital Single Market includes obligations on internet companies that would be impossible to respect without the imposition of excessive restrictions on citizens' fundamental rights.
Article 13 introduces new obligations on internet service providers that share and store user-generated content, such as video or photo-sharing platforms or even creative writing websites, including obligations to filter uploads to their
services. Article 13 appears to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens' communications if they are to have any chance of staying in business.
Article 13 contradicts existing rules and the case law of the Court of Justice. The Directive on Electronic Commerce (2000/31/EC) regulates the liability of those internet companies that host content on behalf of their users. According
to the existing rules, there is an obligation to remove any content that breaches copyright rules, once this has been notified to the provider.
Article 13 would force these companies to actively monitor their users' content, which contradicts the 'no general obligation to monitor' rules in the Electronic Commerce Directive. The requirement to install a system for filtering electronic
communications has twice been rejected by the Court of Justice, in the cases Scarlet Extended (C-70/10) and Netlog/Sabam (C-360/10). Therefore, a legislative provision that requires internet companies to install a filtering system would
almost certainly be rejected by the Court of Justice because it would contravene the requirement that a fair balance be struck between the right to intellectual property on the one hand, and the freedom to conduct business and the right to
freedom of expression, such as to receive or impart information, on the other.
In particular, the requirement to filter content in this way would violate the freedom of expression set out in Article 11 of the Charter of Fundamental Rights. If internet companies are required to apply filtering mechanisms in order to
avoid possible liability, they will. This will lead to excessive filtering and deletion of content and limit the freedom to impart information on the one hand, and the freedom to receive information on the other.
If EU legislation conflicts with the Charter of Fundamental Rights, national constitutional courts are likely to be tempted to disapply it and we can expect such a rule to be annulled by the Court of Justice. This is what happened with the
Data Retention Directive (2006/24/EC), when EU legislators ignored compatibility problems with the Charter of Fundamental Rights. In 2014, the Court of Justice declared the Data Retention Directive invalid because it violated the Charter.
Taking into consideration these arguments, we ask the relevant policy-makers to delete Article 13.
European Digital Rights (EDRi)
Associação D3 -- Defesa dos Direitos Digitais
Associação Nacional para o Software Livre (ANSOL)
Association for Progressive Communications (APC)
Association for Technology and Internet (ApTI)
Association of the Defence of Human Rights in Romania (APADOR)
Bangladesh NGOs Network for Radio and Communication (BNNRC)
Bits of Freedom (BoF)
Bulgarian Helsinki Committee
Center for Democracy & Technology (CDT)
Centre for Peace Studies
Coalizione Italiana Libertà e Diritti Civili (CILD)
Code for Croatia
Culture Action Europe
Electronic Frontier Foundation (EFF)
Estonian Human Rights Centre
Freedom of the Press Foundation
Frënn vun der Ënn
Helsinki Foundation for Human Rights
Hermes Center for Transparency and Digital Human Rights
Human Rights Monitoring Institute
Human Rights Watch
Human Rights Without Frontiers
Hungarian Civil Liberties Union
Index on Censorship
International Partnership for Human Rights (IPHR)
International Service for Human Rights (ISHR)
Justice & Peace
La Quadrature du Net
Media Development Centre
Miklos Haraszti (Former OSCE Media Representative)
Modern Poland Foundation
Netherlands Helsinki Committee
One World Platform
Open Observatory of Network Interference (OONI)
Open Rights Group (ORG)
Plataforma en Defensa de la Libertad de Información (PDLI)
Reporters without Borders (RSF)
Rights International Spain
South East Europe Media Organisation (SEEMO)
South East European Network for Professionalization of Media (SEENPM)
The Right to Know Coalition of Nova Scotia (RTKNS)
The European Union is in the process of creating an authority to monitor and censor so-called fake news. It is setting up a High-Level 'Expert' Group. The EU is currently consulting media professionals and the public to decide what powers to
give to this EU body, which is to begin operation next spring.
World Socialist Web Site has its own colourful view on the intentions of the body, but I don't suppose it is too far from the truth:
An examination of the EU's announcement shows that it is preparing mass state censorship aimed not at false information, but at news reports or political views that encourage popular opposition to the European ruling class.
It aims to create conditions where unelected authorities control what people can read or say online.
EU Vice-President Frans Timmermans explained the move in ominous terms:
We live in an era where the flow of information and misinformation has become almost overwhelming. The EU's task is to protect its citizens from fake news and to manage the information they receive.
According to an EU press release, the EU Commission, another unelected body, will select the High-Level Expert Group, which is to start in January 2018 and will work over several months. It will discuss possible future actions to strengthen
citizens' access to reliable and verified information and prevent the spread of disinformation online.
Who will decide what views are verified, who is reliable and whose views are disinformation to be deleted from Facebook or removed from Google search results? The EU, of course.
The European Union voted on November 14 to pass the new internet censorship regulation, nominally in the name of consumer protection. But of course censorship often hides behind consumer protection, e.g. the UK's upcoming internet porn ban is enacted in the name of protecting under-18 internet consumers.
The new EU-wide law gives extra power to national consumer protection agencies, but it also contains a vaguely worded clause that grants them the power to block and take down websites without judicial oversight.
Member of the European Parliament Julia Reda said in a speech in the European Parliament Plenary during a last ditch effort to amend the law:
The new law establishes overreaching Internet blocking measures that are neither proportionate nor suitable for the goal of protecting consumers and come without mandatory judicial oversight,
According to the new rules, national consumer protection authorities can order any unspecified third party to block access to websites without requiring judicial authorization, Reda added later in the day on her blog.
This new law is an EU regulation and not a directive, meaning it is obligatory for all EU states.
The new law proposal started out with good intentions, but sometime in the spring of 2017, the proposed regulation received a series of amendments that watered down some consumer protections but kept intact the provisions that ensured national
consumer protection agencies can go after and block or take down websites.
Presumably multinational companies had been lobbying for new weapons in their battle against copyright infringement. For instance, the new law gives national consumer protection agencies the legal power to inquire and obtain information about
domain owners from registrars and Internet Service Providers.
Besides the website blocking clause, authorities will also be able to request information from banks to detect the identity of the responsible trader, to freeze assets, and to carry out mystery shopping to check for geographical discrimination.
Comment: European Law Claims to Protect Consumers... By Blocking the Web
The Consumer Protection Regulation provides in Article 8(3)(e) that consumer protection authorities must have the power:
where no other effective means are available to bring about the cessation or the prohibition of the infringement including by requesting a third party or other public authority to implement such measures, in order to prevent the risk of serious
harm to the collective interests of consumers:
to remove content or restrict access to an online interface or to order the explicit display of a warning to consumers when accessing the online interface;
to order a hosting service provider to remove, disable or restrict the access to an online interface; or
where appropriate, order domain registries or registrars to delete a fully qualified domain name and allow the competent authority concerned to register it;
The risk of unelected public authorities being given the power to block websites was powerfully demonstrated in 2014, when the Australian company regulator ASIC
accidentally blocked 250,000 websites in an attempt to block just a handful of sites alleged to be defrauding Australian consumers.
This likelihood of unlawful overblocking is just one of the reasons that the United Nations Special Rapporteur for Freedom of Expression and Opinion has underlined how web blocking often contravenes international human rights law. In a
2011 report [PDF], then Special Rapporteur Frank La Rue set out how extremely limited are the circumstances in which blocking of websites can be justified, noting that where:
the specific conditions that justify blocking are not established in law, or are provided by law but in an overly broad and vague manner, [this] risks content being blocked arbitrarily and excessively. ... [E]ven where justification is provided,
blocking measures constitute an unnecessary or disproportionate means to achieve the purported aim, as they are often not sufficiently targeted and render a wide range of content inaccessible beyond that which has been deemed illegal. Lastly,
content is frequently blocked without the intervention of or possibility for review by a judicial or independent body.
This describes exactly what the new Consumer Protection Regulation will do. It hands over a power that should only be exercised, if at all, under the careful scrutiny of a judge in the most serious of cases, and allows it to be wielded at the
whim of an unelected consumer protection agency. As explained by Member of the European Parliament (MEP) Julia Reda, who voted against the legislation, it sets the stage for the construction of a censorship infrastructure that could be misused for purposes that we cannot even anticipate,
ranging from copyright enforcement through to censorship of political protest.
Regrettably, the Regulation is now law--and is required to be enforced by all European states. It is both ironic and tragic that a law intended to protect consumers actually poses such a dire threat to their right to freedom of expression.
The third evaluation of the EU's 'Code of Conduct' on censoring 'illegal online hate speech' carried out by NGOs and public bodies shows that IT companies removed on average 70% of posts claimed to contain 'illegal hate speech'.
However, some further challenges still remain, in particular the lack of systematic feedback to users.
Google+ announced today that they are joining the Code of Conduct, and Facebook confirmed that Instagram would also do so, thus further expanding the numbers of actors covered by it.
Vera Jourová, with the oxymoronic title of EU Commissioner for Justice, Consumers and Gender Equality, said:
The Internet must be a safe place, free from illegal hate speech, free from xenophobic and racist content. The Code of Conduct is now proving to be a valuable tool to tackle illegal content quickly and efficiently. This shows that where there is
a strong collaboration between technology companies, civil society and policy makers we can get results, and at the same time, preserve freedom of speech. I expect IT companies to show similar determination when working on other important
issues, such as the fight with terrorism, or unfavourable terms and conditions for their users.
On average, IT companies removed 70% of all the 'illegal hate speech' notified to them by the NGOs and public bodies participating in the evaluation. This rate has steadily increased, from 28% in the first monitoring round in 2016 to 59% in the second monitoring exercise in May 2017.
The Commission will continue to monitor regularly the implementation of the Code by the participating IT Companies with the help of civil society organisations and aims at widening it to further online platforms. The Commission will consider
additional measures if efforts are not pursued or slow down.
Of course no mention of the possibility that some of the reports of supposed 'illegal hate speech' are not actioned because they are simply wrong and may be just the politically correct being easily offended. We seem to live in an unjust age
where the accuser is always considered right and the merits of the case count for absolutely nothing.
A few MEPs produce a YouTube video highlighting the corporate and state censorship that will be enabled by an EU proposal to require social media posts to be approved by an automated censorship machine before posting
In a new campaign video, several Members of the European Parliament warn that the EU's proposed mandatory upload filters pose a threat to freedom of speech. The new filters would function as censorship machines which are "completely
disproportionate," they say. The MEPs encourage the public to speak up, while they still can.
Through a series of new proposals, the European Commission is working hard to
modernize EU copyright law. Among other things, it will require online services to do more to fight piracy.
These proposals have not been without controversy. Article 13 of the proposed Copyright Directive, for example, has been widely criticized as it would require online services to monitor and filter uploaded content.
This means that online services, which deal with large volumes of user-uploaded content, must use fingerprinting or other detection mechanisms -- similar to YouTube's Content-ID system -- to block copyright infringing files.
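As a rough illustration of the shape of such a system (a deliberately crude sketch: a real fingerprinting system like Content-ID uses perceptual signatures that survive re-encoding, cropping and trimming, whereas the plain cryptographic hash used here only catches byte-identical copies):

```python
import hashlib

# Reference hashes registered against protected works. A real fingerprinting
# system stores robust perceptual signatures, not plain SHA-256 digests.
REGISTERED_WORKS = set()

def register(work_bytes):
    """A rightsholder registers a protected work with the platform."""
    REGISTERED_WORKS.add(hashlib.sha256(work_bytes).hexdigest())

def screen_upload(upload_bytes):
    """Return True if the upload matches a registered work and must be blocked."""
    return hashlib.sha256(upload_bytes).hexdigest() in REGISTERED_WORKS

register(b"...bytes of a protected film...")
print(screen_upload(b"...bytes of a protected film..."))  # True  -- blocked
print(screen_upload(b"holiday video of my cat"))          # False -- allowed
```

Even this toy version shows the structural point the critics make: the match happens automatically at upload time, with no human judging context, licence or fair use.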
The Commission believes that more stringent control is needed to support copyright holders. However, many legal scholars, digital activists, and members of the public worry that the proposals will violate the rights of regular Internet users.
In the European Parliament, there is fierce opposition as well. Today, six Members of Parliament (MEPs) from across the political spectrum released a new campaign video warning their colleagues and the public at large.
The MEPs warn that such upload filters would act as censorship machines, something they've made clear to the Council's working group on intellectual property, where the controversial proposal was discussed today.
Imagine if every time you opened your mouth, computers controlled by big companies would check what you were about to say, and have the power to prevent you from saying it, Greens/EFA MEP Julia Reda says.
A new legal proposal would make this a reality when it comes to expressing yourself online: Every clip and every photo would have to be pre-screened by some automated 'robocop' before it could be uploaded and seen online, ALDE MEP Marietje Schaake says.
Stop censorship machines!
Schaake notes that she has dealt with the consequences of upload filters herself. When she uploaded a recording of a political speech to YouTube, the site took it down without explanation. Until this day, the MEP still doesn't know on what
grounds it was removed.
These broad upload filters are completely disproportionate and a danger for freedom of speech, the MEPs warn. The automated systems make mistakes and can't properly detect whether something's fair use, for example.
Another problem is that the measures will be relatively costly for smaller companies, which puts them at a competitive disadvantage. "Only the biggest platforms can afford them -- European competitors and small businesses will
struggle," ECR MEP Dan Dalton says.
The plans can still be stopped, the MEPs say. They are currently scheduled for a vote in the Legal Affairs Committee at the end of March, and the video encourages members of the public to raise their voices.
Speak out... while you can still do so unfiltered! S&D MEP Catherine Stihler says.
Illegal content and terrorist propaganda are still spreading rapidly online in the European Union -- just not on mainstream platforms, new analysis shows.
Twitter, Google and Facebook all play by EU rules when it comes to illegal content, namely hate speech and terrorist propaganda, policing their sites voluntarily.
But with increased scrutiny on mainstream sites, alt-right and terrorist sympathizers are flocking to niche platforms where illegal content is shared freely, security experts and anti-extremism activists say.
The European Union has given Google, YouTube, Facebook, Twitter and other internet companies three months to show that they are removing extremist content more rapidly or face legislation forcing them to do so.
The European Commission said on Thursday that internet firms should be ready to remove extremist content within an hour of being notified and recommended measures they should take to stop its proliferation. Digital commissioner Andrus Ansip said:
While several platforms have been removing more illegal content than ever before ... we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens' security, safety and fundamental rights.
The EC said that it would assess the need for legislation of technology firms within three months if demonstrable improvement is not made on what it describes as terrorist content. For all other types of 'illegal' content the EC will assess the
technology firms' censorship progress within six months.
It also urged the predominantly US-dominated technology sector to adopt a more proactive approach, with automated systems to detect and censor 'illegal' content.
The European Commission proposes designating internet censors, which it euphemistically calls 'trusted flaggers', and then requiring internet hosting companies to censor whatever the 'trusted flaggers' say
The EU Commission has recommended an internet censorship decision sounding like something straight out of China. The system consists of designating police, state censors, commercial censors acting for the state, and perhaps independent groups
like the IWF. These are euphemistically known as trusted flaggers.
Website and content hosting companies will then be required to remove any content (nominally illegal content) in a timely manner.
The IWF usefully summarises the proposals as follows:
The EU Commission's proposals to tackle illegal content online include:
Hosting providers and Member States being prepared to submit all monitoring information to the Commission, upon request, within six months (three months for terrorist content) in order for the Commission to assess whether
further legislation is required.
Recommends introducing definitions for "illegal content" and "trusted flaggers".
Fast track procedures should be introduced for materials referred by trusted flaggers.
Hosting providers to publish a list of who they consider to be a "trusted flagger".
Automated takedown of content is encouraged, but should have safeguards such as human oversight.
Terrorist content should be removed within one hour.
A loss of trust in Facebook in the light of the Cambridge Analytica scandal could prompt the EU to scrap its voluntary code of conduct on the removal of online hate speech in favour of legislation and heavy sanctions, European commissioner Vera Jourová has said. The EU's executive is examining how to have hateful content censored swiftly by social media platforms, with legislation being one option that could replace the current system.
Jourová said she would be grilling Sheryl Sandberg, Facebook's chief operating officer, later this week over unanswered questions about the company's past errors and future plans.
Jourová said she was wary of following the German path, because of the thin line between removing offensive material and censorship, but said all options were on the table.
Brussels may threaten social media companies with censorship laws unless they move urgently to tackle supposed 'fake news' and Cambridge Analytica-style data abuse.
The EU security commissioner, Julian King, said short-term, concrete plans needed to be in place before the elections, when voters in 27 EU member states will elect MEPs.
Under King's ideas, social media companies would sign a voluntary code of conduct to prevent the misuse of platforms to pump out misleading information.
The code would include a pledge for greater transparency, so users would be made aware why their Facebook or Twitter feed was presenting them with certain adverts or stories. Another proposal is for political adverts to be accompanied with
information about who paid for them.
The EU's plans to modernize copyright law in Europe are moving ahead. With a crucial vote coming up later this month, protests from various opponents are on the rise as well. They warn that the proposed plans will result in Internet filters which
threaten people's ability to freely share content online. According to Pirate Party MEP Julia Reda, these filters will hurt regular Internet users, but also creators and businesses.
In September 2016, the European Commission published its proposal for a modernized copyright law. Among other things, it proposed measures to require online services to do more to fight piracy.
Specifically, Article 13 of the proposed Copyright Directive will require online services to track down and delete pirated content, in collaboration with rightsholders.
The Commission stressed that the changes are needed to support copyright holders. However, many legal scholars, digital activists, politicians, and members of the public worry that they will violate the rights of regular Internet users.
Last month the EU Council finalized the latest version of the proposal. This means that the matter now goes to the Legal Affairs Committee of the Parliament (JURI), which must decide how to move ahead. This vote is expected to take place in two weeks.
Although the term filter is commonly used to describe Article 13, it is not directly mentioned in the text itself. According to Pirate Party Member of Parliament (MEP) Julia Reda, the filter keyword is avoided in the proposal to prevent a possible violation of EU law and the Charter of Fundamental Rights. However, the outcome is essentially the same.
In short, the relevant text states that online services are liable for any uploaded content unless they take effective and proportionate action to prevent copyright infringements, identified by copyright holders. That also includes preventing
these files from being reuploaded.
The latter implies some form of hash filtering and continuous monitoring of all user uploads. Several companies, including Google Drive, Dropbox, and YouTube already have these types of filters, but many others don't.
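A minimal sketch of what such "staydown" filtering might look like (hypothetical code; real deployments would use perceptual hashes so that re-encoded copies still match, which the plain digest below does not):

```python
import hashlib

# Staydown sketch: once a file is removed after a complaint, its digest is
# remembered, and byte-identical re-uploads are refused automatically.
removed_digests = set()

def take_down(file_bytes):
    removed_digests.add(hashlib.sha256(file_bytes).hexdigest())

def accept_upload(file_bytes):
    """Every user upload is checked against the removal list."""
    return hashlib.sha256(file_bytes).hexdigest() not in removed_digests

clip = b"video removed after a rightsholder notice"
take_down(clip)
print(accept_upload(clip))                # False -- the same file stays down
print(accept_upload(b"a different clip")) # True
```

Every single user upload passes through a check like accept_upload, which is exactly the continuous monitoring of communications that critics object to.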
A main point of critique is that the automated upload checks will lead to overblocking, as they are often ill-equipped to deal with issues such as fair use.
The proposal would require platforms to filter all uploads by their users for potential copyright infringements -- not just YouTube and Facebook, but also services like WordPress, TripAdvisor, or even Tinder. We know from experience that these
algorithmic filters regularly make mistakes and lead to the mass deletion of legal uploads, Julia Reda tells TF.
Especially small independent creators frequently see their content taken down because others wrongfully claim copyright on their works. There are no safeguards in the proposal against such cases of copyfraud.
Besides affecting uploads of regular Internet users and smaller creators, many businesses will also be hit. They will have to make sure that they can detect and prevent infringing material from being shared on their systems.
This will give larger American Internet giants, who already have these filters in place, a competitive edge over smaller players and new startups, the Pirate Party MEP argues.
It will make those Internet giants even stronger, because they will be the only ones able to develop and sell the filtering technologies necessary to comply with the law. A true lose-lose situation for European Internet users, authors and
businesses, Reda tells us.
Based on the considerable protests in recent days, the current proposal is still seen as a clear threat by many.
In fact, the Save Your Internet campaign, backed by prominent organizations such as Creative Commons, EFF, and Open Media, is ramping up again. They urge the European public to reach out to their Members of Parliament before it's too late.
Should Article 13 of the Copyright Directive proposal be adopted, it will impose widespread censorship of all the content you share online. The European Parliament is the only one that can step in and Save your Internet, they write.
The full Article 13 text includes some language to limit its scope. The nature and size of online services must be taken into account, for example. This means that a small and legitimate niche service with a few dozen users might not be directly
liable if it operates without these anti-piracy measures.
Similarly, non-profit organizations will not be required to comply with the proposed legislation, although there are calls from some member states to change this.
In addition to Article 13, there is also considerable pushback from the public against Article 11, which is regularly referred to as the link tax.
At the moment, several organizations are planning a protest day next week, hoping to mobilize the public to speak out. A week later, following the JURI vote, it will be judgment day.
If they pass the Committee, the plans will progress towards the final vote on copyright reform next spring. This also means that they'll become much harder to stop or change. That has been done before, such as with ACTA, but achieving that type of
momentum will be a tough challenge.
As Europe's latest copyright proposal
heads to a critical vote on June 20-21, more than 70 Internet and computing luminaries have spoken out against a dangerous provision, Article 13, that would require Internet platforms to automatically filter uploaded content. The group,
which includes Internet pioneer Vint Cerf, the inventor of the World Wide Web Tim Berners-Lee, Wikipedia co-founder Jimmy Wales, co-founder of the Mozilla Project Mitchell Baker, Internet Archive founder Brewster Kahle, cryptography expert Bruce
Schneier, and net neutrality expert Tim Wu, wrote in a joint letter that was released today:
By requiring Internet platforms to perform automatic filtering all of the content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet, from an open platform for sharing and innovation, into
a tool for the automated surveillance and control of its users.
The prospects for the elimination of Article 13 have continued to worsen. Until late last month, there was hope that Member States (represented by the Council of the European Union) would find a compromise. Instead, their final negotiating mandate doubled down on it.
The last hope for defeating the proposal now lies with the European Parliament. On June 20-21 the Legal Affairs (JURI) Committee will vote on the proposal. If it votes against upload filtering, the fight can continue in the Parliament's
subsequent negotiations with the Council and the European Commission. If not, then automatic filtering of all uploaded content may become a mandatory requirement for all user content platforms that serve European users. Although this will pose
little impediment to the largest platforms such as YouTube, which already uses its
Content ID system to filter content, the law will create an expensive barrier to entry for smaller platforms and startups, which may choose to establish or move their operations overseas in order to avoid the European law.
For those platforms that do establish upload filtering, users will find that their contributions--including video, audio, text, and even
source code -- will be monitored and potentially blocked if the automated system detects what it believes to be a copyright infringement. Inevitably, mistakes will happen. There is no way for an automated system to reliably determine when the use of a copyright work falls within a copyright limitation or exception under European law, such as quotation or parody.
Moreover, because these exceptions are not consistent across Europe, and because there is no broad fair use right as in the United States, many harmless uses of copyright works in memes, mashups, and remixes probably are technically
infringing even if no reasonable copyright owner would object. If an automated system monitors and filters out these technical infringements, then the permissible scope of freedom of expression in Europe will be radically curtailed, even without
the need for any substantive changes in copyright law.
The upload filtering proposal stems from a
misunderstanding about the purpose of copyright. Copyright isn't designed to compensate creators for each and every use of their works. It is meant to incentivize creators as part of an effort to promote the public interest in innovation and expression. But that public interest isn't served unless there are limitations on copyright that allow new generations to build on and comment on previous contributions. Those limitations are both legal, like fair dealing, and practical,
like the zone of tolerance for harmless uses. Automated upload filtering will undermine both.
The authors of today's letter write:
We support the consideration of measures that would improve the ability for creators to receive fair remuneration for the use of their works online. But we cannot support Article 13, which would mandate Internet platforms to embed an automated
infrastructure for monitoring and censorship deep into their networks. For the sake of the Internet's future, we urge you to vote for the deletion of this proposal.
What began as a bad idea offered up to copyright lobbyists as a solution to an imaginary "value gap" has now become an outright crisis for the future of the Internet as we know it. Indeed, if those who created and sustain the operation of the Internet recognize the scale of this threat, we should all be sitting up and taking notice.
If you live in Europe or have European friends or family, now could be your last opportunity to avert the upload filter. Please take action by clicking the button below, which will take you to a campaign website where you can phone, email, or
Tweet at your representatives, urging them to stop this threat to the global Internet before it's too late.
David Kaye, the UN's Special Rapporteur on freedom of expression, has now chimed in with a very thorough report, highlighting how Article 13 of the Directive -- the part about mandatory copyright filters -- would be a disaster for free speech and would violate the UN's Universal Declaration of Human Rights, in particular Article 19, which says:
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
As Kaye's report notes, the upload filters of Article 13 of the Copyright Directive would almost certainly violate this principle.
Article 13 of the proposed Directive appears likely to incentivize content-sharing providers to restrict at the point of upload user-generated content that is perfectly legitimate and lawful. Although the latest proposed versions of Article 13
do not explicitly refer to upload filters and other content recognition technologies, it couches the obligation to prevent the availability of copyright protected works in vague terms, such as demonstrating best efforts and taking effective and
proportionate measures. Article 13(5) indicates that the assessment of effectiveness and proportionality will take into account factors such as the volume and type of works and the cost and availability of measures, but these still leave
considerable leeway for interpretation.
The significant legal uncertainty such language creates does not only raise concern that it is inconsistent with the Article 19(3) requirement that restrictions on freedom of expression should be provided by law. Such uncertainty would also
raise pressure on content sharing providers to err on the side of caution and implement intrusive content recognition technologies that monitor and filter user-generated content at the point of upload. I am concerned that the restriction of
user-generated content before its publication subjects users to restrictions on freedom of expression without prior judicial review of the legality, necessity and proportionality of such restrictions. Exacerbating these concerns is the reality
that content filtering technologies are not equipped to perform context-sensitive interpretations of the valid scope of limitations and exceptions to copyright, such as fair comment or reporting, teaching, criticism, satire and parody.
Kaye further notes that copyright is not the kind of thing that an algorithm can readily determine, and the fact-specific and context-specific nature of copyright requires much more than just throwing algorithms at the problem -- especially when
a website may face legal liability for getting it wrong.
The designation of such mechanisms as the main avenue to address users' complaints effectively delegates content blocking decisions under copyright law to extrajudicial mechanisms, potentially in violation of minimum due process guarantees under
international human rights law. The blocking of content -- particularly in the context of fair use and other fact-sensitive exceptions to copyright -- may raise complex legal questions that require adjudication by an independent and impartial
judicial authority. Even in exceptional circumstances where expedited action is required, notice-and-notice regimes and expedited judicial process are available as less invasive means for protecting the aims of copyright law.
In the event that content blocking decisions are deemed invalid and reversed, the complaint and redress mechanism established by private entities effectively assumes the role of providing access to remedies for violations of human rights law. I
am concerned that such delegation would violate the State's obligation to provide access to an effective remedy for violations of rights specified under the Covenant. Given that most of the content sharing providers covered under Article 13 are
profit-motivated and act primarily in the interests of their shareholders, they lack the qualities of independence and impartiality required to adjudicate and administer remedies for human rights violations. Since they also have no incentive to
designate the blocking as being on the basis of the proposed Directive or other relevant law, they may opt for the legally safer route of claiming that the upload was a terms of service violation -- this outcome may deprive users of even the
remedy envisioned under Article 13(7). Finally, I wish to emphasize that unblocking, the most common remedy available for invalid content restrictions, may often fail to address financial and other harms associated with the blocking of time-sensitive content.
He goes on to point out that while large platforms may be able to deal with all of this, smaller ones are going to be in serious trouble:
I am concerned that the proposed Directive will impose undue restrictions on nonprofits and small private intermediaries. The definition of an online content sharing provider under Article 2(5) is based on ambiguous and highly subjective
criteria such as the volume of copyright protected works it handles, and it does not provide a clear exemption for nonprofits. Since nonprofits and small content sharing providers may not have the financial resources to establish licensing
agreements with media companies and other right holders, they may be subject to onerous and legally ambiguous obligations to monitor and restrict the availability of copyright protected works on their platforms. Although Article 13(5)'s criteria
for effective and proportionate measures take into account the size of the provider concerned and the types of services it offers, it is unclear how these factors will be assessed, further compounding the legal uncertainty that nonprofits and
small providers face. It would also prevent a diversity of nonprofit and small content-sharing providers from potentially reaching a larger size, and result in strengthening the monopoly of the currently established providers, which could be an
impediment to the right to science and culture as framed in Article 15 of the ICESCR.
On June 20, the EU's legislative committee will vote on the
new Copyright directive, and decide whether it will include the controversial "Article 13" (automated censorship of anything an algorithm identifies as a copyright violation) and "Article 11" (no linking to news stories
without paid permission from the site).
These proposals will make starting new internet companies effectively impossible -- Google, Facebook, Twitter, Apple, and the other US giants will be able to negotiate favourable rates and build out the infrastructure to comply with these
proposals, but no one else will. The EU's regional tech success stories -- say
Seznam.cz, a successful Czech search competitor to Google -- don't have $60-100,000,000 lying around to build out their filters, and lack the leverage to extract favorable linking licenses from news sites.
If Articles 11 and 13 pass, American companies will be in charge of Europe's conversations, deciding which photos and tweets and videos can be seen by the public, and who may speak.
So far, the focus in the debate has been on the intended consequences of the proposals: the idea that a certain amount of free expression and competition must be sacrificed to enable rightsholders to force Google and Facebook to share their profits.
But the unintended -- and utterly foreseeable -- consequences are even more important. Article 11's link tax allows news sites to decide who gets to link to them, meaning that they can exclude their critics. With election cycles dominated by
hoaxes and fake news, the right of a news publisher to decide who gets to criticise it is carte blanche to lie and spin.
Article 13's copyright filters are even more vulnerable to attack: the proposals contain no penalties for false claims of copyright ownership, but they do mandate that the filters must accept copyright claims in bulk, allowing
rightsholders to upload millions of works at once in order to claim their copyright and prevent anyone from posting them.
That opens the doors to all kinds of attacks. The obvious one is that trolls might sow mischief by uploading millions of works they don't hold the copyright to, in order to prevent others from quoting them: the works of Shakespeare, say, or
everything ever posted to Wikipedia, or my novels, or your family photos.
More insidious is the possibility of targeted strikes during crisis: stock-market manipulators could use bots to claim copyright over news about a company, suppressing its sharing on social media; political actors could suppress key articles
during referendums or elections; corrupt governments could use arms-length trolls to falsely claim ownership of footage of human rights abuses.
It's asymmetric warfare: falsely claiming a copyright will be easy (because the rightsholders who want this system will not tolerate jumping through hoops to make their claims) and instant (because rightsholders won't tolerate delays when their
new releases are being shared online at their moment of peak popularity). Removing a false claim of copyright will require that a human at an internet giant looks at it, sleuths out the truth of the ownership of the work, and adjusts the database
-- for millions of works at once. Bots will be able to pollute the copyright databases much faster than humans could possibly clear them.
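To make that asymmetry concrete, here is a minimal sketch (Python, all names hypothetical) of how a claim-first filter behaves: the claim side is an instant, unverified set insert that a bot can drive in bulk, while the dispute side is a queue that drains only as fast as human reviewers can work.

    import hashlib

    class ClaimFirstFilter:
        """Toy model of a claim-first upload filter: claims take
        effect instantly and without proof, while disputes pile up
        in a queue awaiting human review."""

        def __init__(self):
            self.claimed = set()   # fingerprints of "claimed" works
            self.disputes = []     # each entry needs a human reviewer

        @staticmethod
        def fingerprint(work: bytes) -> str:
            return hashlib.sha256(work).hexdigest()

        def claim(self, work: bytes) -> None:
            # No verification step: the block is live immediately.
            self.claimed.add(self.fingerprint(work))

        def allows_upload(self, work: bytes) -> bool:
            return self.fingerprint(work) not in self.claimed

        def dispute(self, work: bytes) -> None:
            # Unblocking waits until a human reaches this entry.
            self.disputes.append(self.fingerprint(work))

    f = ClaimFirstFilter()
    public_domain = [b"To be, or not to be", b"All the world's a stage"]
    for work in public_domain:    # a bot "claims" a whole corpus in bulk
        f.claim(work)
    print(f.allows_upload(b"To be, or not to be"))   # False: now blocked

The claim loop runs at machine speed; the dispute queue drains at payroll speed. That gap is the asymmetry in two method calls.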
I spoke with Wired UK's KG Orphanides about this, and their
excellent article on the proposal is the best explanation I've seen of the uses of these copyright filters to create unstoppable disinformation campaigns.
Doctorow highlighted the potential for unanticipated abuse of any automated copyright filtering system to make false copyright claims, engage in targeted harassment and even silence public discourse at sensitive times.
"Because the directive does not provide penalties for abuse -- and because rightsholders will not tolerate delays between claiming copyright over a work and suppressing its public display -- it will be trivial to claim copyright over key
works at key moments or use bots to claim copyrights on whole corpuses.
"The nature of automated systems, particularly if powerful rightsholders insist that they default to initially blocking potentially copyrighted material and then releasing it if a complaint is made, would make it easy for griefers to make copyright claims over, for example, relevant Wikipedia articles on the eve of a Greek debt-default referendum or, more generally, over public domain content such as the entirety of Wikipedia or the complete works of Shakespeare.
"Making these claims will be MUCH easier than sorting them out -- bots can use cloud providers all over the world to file claims, while companies like Automattic (WordPress) or Twitter, or even projects like Wikipedia, would have to
marshall vast armies to sort through the claims and remove the bad ones -- and if they get it wrong and remove a legit copyright claim, they face unbelievable copyright liability."
The European Parliament's Committee on Legal Affairs (JURI) has officially approved Articles 11 and 13 of a Digital Single Market (DSM) copyright proposal, mandating censorship machines and a link tax.
Articles 11 and 13 of the Directive of the European Parliament and of the Council on Copyright in the Digital Single Market have been the subject of considerable campaigning from pro-copyleft groups including the Open Rights Group and Electronic
Frontier Foundation of late.
Article 11, as per the final version of the proposal, discusses the implementation of a link tax - the requirement that any site citing third-party materials do so in a way that adheres to the exemptions and restrictions of a total of 28 separate
copyright laws or pays for a licence to use and link to the material;
Article 13, meanwhile, requires any site which allows users to post text, sound, program code, still or moving images, or any other work which can be copyrighted to automatically scan all such uploads against a database of copyright works - a
database which they will be required to pay to access.
Neither Article 11 nor Article 13 will become official legislation until passed by the entire European Parliament in a plenary vote. There's no definite timetable for when such a vote might take place, but it would likely happen sometime between December of this year and the first half of 2019.
What is the mysterious hold that US Big Music has over Euro politicians?
Article 13, the proposed EU legislation that aims to restrict safe harbors for online platforms, was crafted to end the so-called "Value Gap" on YouTube.
Music piracy was traditionally viewed as an easy to identify problem, one that takes place on illegal sites or via largely uncontrollable peer-to-peer networks. In recent years, however, the lines have been blurred.
Sites like YouTube allow anyone to upload potentially infringing content which is then made available to the public. Under the safe harbor provisions of US and EU law, this remains legal -- provided YouTube takes content down when told to do so.
It complies constantly but there's always more to do.
This means that in addition to being one of the greatest legal platforms ever created, YouTube is also a goldmine of unlicensed content, something unacceptable to the music industry.
They argue that the existence of this pirate material devalues the licensed content on the platform. As a result, YouTube maintains a favorable bargaining position with the labels and the best licensing deal in the industry.
The difference between YouTube's rates and those the industry would actually like is now known as the "Value Gap" and it's become one of the hottest topics in recent years.
In fact, it is so controversial that new copyright legislation, currently weaving its way through the corridors of power in the EU Parliament, is specifically designed to address it.
If passed, Article 13 will require platforms like YouTube to pre-filter uploads to detect potential infringement. Indeed, the legislation may as well have been named the YouTube Act, since it's the platform that provoked this entire debate and the whole Value Gap dispute.
With that in mind, it's of interest to consider the words of YouTube's global head of music, Lyor Cohen, this week. In an interview with Music Week, Cohen pledges that his company's new music service, YouTube Music, will not only match the rates the industry achieves from Apple Music and Spotify, but the company's ad-supported free tier viewers will soon be delivering more cash to the labels too. "Of course [rights holders are] going to get more money," he told Music Week.
If YouTube lives up to its pledge, a level playing field will not only be welcomed by the music industry but also YouTube competitors such as Spotify, who currently offer a free tier on less favorable terms.
While there's still plenty of room for YouTube to maneuver, peace breaking out with the labels may be coming a little too late for those deeply concerned about the implications of Article 13.
YouTube's business model and its reluctance to pay full market rate for music is what started the whole Article 13 movement in the first place and, with the Legal Affairs Committee of the Parliament (JURI) adopting the proposals last week, time is running out to have them overturned.
Behind the scenes, however, the labels and their associates are going flat out to ensure that Article 13 passes, whether YouTube decides to "play fair" or not. Their language suggests that force is the best negotiating tactic with the platform.
Yesterday, UK Music CEO Michael Dugher led a delegation to the EU Parliament in support of Article 13. He was joined by deputy Labour leader Tom Watson and representatives from the BPI, PRS, and Music Publishers Association, who urged MEPs to
support the changes.
As we have been covering in the last couple of articles, a controversial EU Copyright Directive has been under discussion at the European Parliament, and in a surprising turn of events, it voted to reject fast-tracking the proposal tabled by the JURI Committee, which contained controversial provisions, particularly in Art 11 and Art 13. The proposed Directive will now get a full discussion and debate in plenary in September.
I say surprising because for those of us who have been witnesses (and participants) to the Copyright Wars for the last 20 years, such a defeat of copyright maximalist proposals is practically unprecedented, perhaps with the exception of SOPA/PIPA. For years we've had a familiar pattern in the passing of copyright legislation: a proposal would be made to enhance protection and/or restrict liberties, and a small group of ageing millionaire musicians would be paraded supporting the changes in the interest of creators. Only copyright nerds and a few NGOs and digital rights advocates would complain, their opinions would be ignored, and the legislation would pass unopposed. Rinse and repeat.
But something has changed, and a wide coalition has managed to defeat powerful media lobbies for the first time in Europe, at least for now. How was this possible?
The main change is that the media landscape is very different thanks to the Internet. In the past, the creative industries were monolithic in their support for stronger protection, and they included creators, corporations, collecting societies,
publishers, and distributors; in other words the gatekeepers and the owners were roughly on the same side. But the Internet brought a number of new players, the tech industry and their online platforms and tools became the new gatekeepers.
Moreover, as people do not buy physical copies of their media and the entire industry has moved towards streaming, online distributors have become more powerful. This has created a perceived imbalance, where the formerly dominating industries
need to negotiate with the new gatekeepers for access to users. This is why creators complain about a value gap between what they perceive they should be getting, and what they actually receive from the giants.
The main result of this change from a political standpoint is that now we have two lobbying sides in the debate, which makes all the difference when it comes to this type of legislation. In the past, policymakers could ignore experts and digital rights advocates because those advocates never had the means to reach them; letters and articles by academics were not taken into account, or were given lip service during some obscure committee discussion just to be hidden away. Tech giants such as Google have provided lobbying access in Brussels, which has at least levelled the playing field when it comes to presenting evidence to legislators.
As a veteran of the Copyright Wars, I have to admit that it has been very entertaining reading the reaction from the copyright industry lobby groups and their individual representatives, some almost going apoplectic with rage at Google's
intervention. These tend to be the same people who spent decades lobbying legislators to get their way unopposed, representing large corporate interests unashamedly and passing laws that would benefit only a few, usually to the detriment of
users. It seems like lobbying must be decried when you lose.
But to see this as a victory for Google and other tech giants completely ignores the large coalition that shares the view that the proposed Articles 11 and 13 are very badly thought out, and could represent a real danger to existing rights. Some of us were fighting this fight when Google did not even exist, or when it was but a small competitor of AltaVista, Lycos, Excite and Yahoo!
At the same time that more restrictive copyright legislation came into place, we also saw the rise of free and open source software, open access, Creative Commons and open data. All of these are legal hacks that allow sharing, remixing and
openness. These were created precisely to respond to restrictive copyright practices. I also remember how they were opposed as existential threats by the same copyright industries, and treated with disdain and animosity. But something wonderful
happened: eventually, open source software started winning (we used to buy operating systems), and Creative Commons became an important part of the Internet's ecosystem by propping up valuable common spaces such as Wikipedia.
Similarly, the Internet has allowed a great diversity of actors to emerge. Independent creators, small and medium enterprises, online publishers and startups love the Internet because it gives them access to a wider audience, and often they can bypass established gatekeepers. Lost in this idiotic "Google v musicians" rhetoric has been the threat that both Art 11 and 13 represent to small entities. Art 11 proposes a new publishing right that has been proven to affect smaller players in Germany and Spain; while Art 13 would impose potentially crippling economic restrictions on smaller companies, as they would have to put in place automated filtering systems AND redress mechanisms against mistakes. In fact, it has often been remarked that Art 13 would benefit existing dominant forces, as they already have filtering in place (think ContentID).
Similarly, Internet advocates and luminaries see the proposals as a threat to the Internet: the people who know the Web best think that this is a bad idea. If you can stomach it, read this thread featuring a copyright lobbyist attacking Neil Gaiman, one of the Internet celebrities who have voiced their concerns about the Directive. Even copyright experts who almost never intervene in digital rights affairs have been vocal in their opposition to the changes.
And finally we have political representatives from various parties and backgrounds who have been vocally opposed to the changes. While the leader of the political opposition has been the amazing Julia Reda, she has managed to bring together a
variety of voices from other parties and countries. The vitriol launched at her has been unrelenting, but futile. It has been quite a sight to see her opponents both try to dismiss her as just another clueless young Pirate commanded by Google,
while at the same time they try to portray her as a powerful enemy in charge of the mindless and uninformed online troll masses ready to do her bidding.
All of the above managed to do something wonderful, which was to convey the threat in easy-to-understand terms so that users could contact their representatives and make their voice heard. The level of popular opposition to the Directive has been
a great sight to behold.
Tech giants did not create this alliance; they just gave various voices access to the table. To dismiss this as Google's doing completely ignores the very real and rich tapestry of those defending digital rights; it is quite clearly patronising and insulting, and precisely the reason why they lost. Only very late did they finally realise that they were losing the debate with the public, and not even the last-minute deployment of musical dinosaurs could save the day.
But the fight continues, keep contacting your MEPs and keep applying pressure.
So who supported internet censorship in the EU parliamentary vote? Mostly the EU Conservative group, plus half of the Social Democrat MEPs and half of the Far Right MEPs.
Internet companies will have to delete content claimed to be extremist on their platforms within an hour or face being fined, under new censorship plans by the European Commission.
The proposals will be set out in draft regulation due to be published next month, according to The Financial Times.
Julian King, the EU's commissioner for security, told the newspaper that Brussels had not seen enough progress when it came to the sites clamping down on terror-related material.
Under the rules, which would have to be agreed by a majority of EU member states, the platforms would have an hour to remove the material, a senior official told the newspaper.
The rules would apply to all websites, regardless of their size. King told the FT:
The difference in size and resources means platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent.
All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform.
Of course the stringent requirements are totally impractical for small companies, and so will no doubt further strengthen the monopolies of US companies with massive workforces.
And of course a one-hour turnaround gives absolutely no one time to even consider whether the censorship requests are fair or reasonable, and so translates into a tool for direct state censorship of the internet.
Tech companies that fail to remove terrorist content quickly could soon face massive fines. The European Commission proposed new rules on Wednesday that would require internet platforms to remove illegal terror content within an hour of it being
flagged by national authorities. Firms could be fined up to 4% of global annual revenue if they repeatedly fail to comply.
Facebook (FB), Twitter (TWTR) and YouTube owner Google (GOOGL) had already agreed to work with the European Union on a voluntary basis to tackle the problem. But the Commission said that progress has not been sufficient.
A penalty of 4% of annual revenue for 2017 would translate to $4.4 billion for Google parent Alphabet and $1.6 billion for Facebook.
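As a quick sanity check on those figures -- taking the companies' reported 2017 full-year revenues of roughly $110.9 billion for Alphabet and $40.7 billion for Facebook -- the 4% ceiling works out as stated:

    # Maximum fine at 4% of 2017 global annual revenue (figures in $bn)
    for company, revenue_bn in [("Alphabet", 110.9), ("Facebook", 40.7)]:
        print(f"{company}: 0.04 * {revenue_bn} = ${0.04 * revenue_bn:.1f}bn")
    # Alphabet: 0.04 * 110.9 = $4.4bn
    # Facebook: 0.04 * 40.7 = $1.6bn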
The proposal is the latest in a series of European efforts to control the activities of tech companies.
The terror content proposal needs to be approved by the European Parliament and EU member states before becoming law.
Vera Jourova is the European Commissioner for justice, consumers and gender equality. She once opened a Facebook account. It did not go well. Jourova said at a news conference:
For a short time, I had a Facebook account. It was a channel of dirt. I didn't expect such an influx of hatred. I decided to cancel the account because I realised there will be less hatred in Europe after I do this.
Jourova's words carry more weight than most. She has a policy beef with Facebook, and also the means to enforce it. Jourova says Facebook's terms of service are misleading, and has called upon the company to clarify them. In a post Thursday on
that other channel of dirt, Twitter.com, she said:
I want #Facebook to be extremely clear to its users about how their service operates and makes money. Not many people know that Facebook has made available their data to third parties or that for instance it holds full copyright about any
picture or content you put on it.
Jourova says European authorities could sanction Facebook next year if they don't like what they hear from the company soon. I was quite clear that we cannot negotiate forever, she said at the news conference. We need to see the result.
Vera Jourova, the European Commissioner for justice, consumers and gender equality, has condemned a series of hard-hitting front pages in the British press after a recent Sun headline described Europe's leaders as 'EU Dirty Rats'.
Jourova bad-mouthed the media again in a press release, saying:
Media can build the culture of dialogue or sow divisions, spread disinformation and encourage exclusion.
The Brexit debate is the best example of that. Do you remember the front page of a popular British daily calling the judges the 'enemy of the people'? Or just last week, the EU leaders were called 'Dirty Rats' on another front page.
Fundamental rights must be a part of public discourse in the media. They have to belong to the media. Media are also instrumental in holding politicians to account and in defining the limits of what is 'unacceptable' in a society.
Offsite Comment: Now the EU wants to turn off the Sun
They dream of stopping populism by curbing press freedom.
The European Commission has come up with a new way to prevent people backing Brexit -- not by winning the argument, but by curbing press freedom. They want to stop the British press encouraging hatred of EU leaders and judges, and impose a European approach of smart regulation to control the views expressed by the tabloids and their supposedly non-smart readers.
Index on Censorship shares the widespread concerns about the proposed EU regulation on preventing the dissemination of terrorist content online. The regulation would endanger freedom of expression and would create huge practical challenges for
companies and member states. Jodie Ginsberg, CEO of Index, said We urge members of the European Parliament and representatives of EU member states to consider if the regulation is needed at all. It risks creating far more problems than it solves.
At a minimum the regulation should be completely revised.
Following the recent agreement by the European Council on a draft position for the proposed regulation on preventing the dissemination of terrorist content online, which adopted the initial draft presented by the European Commission with some changes, the Global Network Initiative (GNI) is concerned about the potential unintended effects of the proposal and would therefore like to put forward a number of issues we urge the European Parliament to address as it considers the proposal further.
GNI members recognize and appreciate the European Union (EU) and member states' legitimate roles in providing security, and share the aim of tackling the dissemination of terrorist content online. However, we believe that, as drafted, this
proposal could unintentionally undermine that shared objective by putting too much emphasis on technical measures to remove content, while simultaneously making it more difficult to challenge terrorist rhetoric with counter-narratives. In
addition, the regulation as drafted may place significant pressure on a range of information and communications technology (ICT) companies to monitor users' activities and remove content in ways that pose risks for users' freedom of expression
and privacy. We respectfully ask that EU officials, Parliamentarians, and member states take the time necessary to understand these and other significant risks that have been identified, by consulting openly and in good faith with affected
companies, civil society, and other experts.
Governments around the world are grappling with the threat of terrorism, but their efforts aimed at curbing the dissemination of terrorist content online all too often result in censorship. Over the past five years, we've seen a number of governments -- from the US Congress to the French government and now the European Commission (EC) -- seek to implement measures that place an undue burden on technology companies to remove terrorist speech or face financial liability.
This is why EFF has joined forces with dozens of organizations to call on members of the European Parliament to oppose the EC's proposed regulation, which would require companies to take down terrorist content within one hour. We've added our voice to two letters -- one from Witness and another organized by the Center for Democracy and Technology -- asking that MEPs consider the serious consequences that the passing of this regulation could have on human rights defenders and on freedom of expression.
We share the concerns of dozens of allies that requiring the use of proactive measures, such as the terrorism hash database (already voluntarily in use by a number of companies), will restrict expression and have a disproportionate impact on marginalized groups. We know from years of experience that filters just don't work.
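The brittleness is easy to demonstrate. The shared industry database actually uses perceptual hashes rather than the exact cryptographic hash sketched below (a deliberate simplification here), but the failure mode is the same in kind: altered copies can slip through, while the match itself is blind to context, so a verbatim copy uploaded by a journalist or researcher is blocked just as readily as the original propaganda.

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Hypothetical shared blocklist of known "terrorist content"
    banned = {sha256(b"propaganda clip v1")}

    # A re-encoded copy (here, one extra byte) evades the filter...
    print(sha256(b"propaganda clip v1 ") in banned)  # False: no match
    # ...while an identical copy shared as evidence or news is blocked.
    print(sha256(b"propaganda clip v1") in banned)   # True: context-blind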
Furthermore, the proposed requirement that companies must respond to reports of terrorist speech within an hour is, to put it bluntly, absurd. As the letter organized by Witness states, this regulation essentially forces companies to bypass due
process and make rapid and unaccountable decisions on expression through automated means and furthermore doesn't reflect the realities of how violent groups recruit and share information online.
We echo these and other calls from defenders of human rights and civil liberties for MEPs to reject proactive filtering obligations and to refrain from enacting laws that will have unintended consequences for freedom of expression.
The idea is that the government of any European member state will be able to order any website to remove content considered "terrorist". No independent judicial authorisation will be needed to do so, letting governments abuse the wide definition of "terrorism". The only thing IMCO agreed to add is that governments' orders be subject to "judicial review", which can mean anything.
In France, the government's orders to remove "terrorist content" are already subject to "judicial review", where an independent body is notified of all removal orders and may ask judges to assess them. This has not been of much help: only once has this censorship been submitted to a judge's review. It was found to be unlawful, but more than a year and a half after it was ordered. During this time, the French government was able to abusively censor content, in this case, far-left publications by two French Indymedia outlets.
Far from simplifying matters, this Regulation will add confusion, as authorities from one member state will be able to order removals in another, without necessarily understanding the context.
Unrealistic removal delays
Regarding the one-hour deadline within which the police can order a hosting service provider to block any content reported as "terrorist", there was no real progress either. It has been replaced by a deadline of at least eight hours, with a small exception for "microenterprises" that have not previously been subject to a removal order (in this case, the "deadline shall be no sooner than the end of the next working day").
This narrow exception will not allow the vast majority of Internet actors to comply with such a strict deadline. Even though the IMCO Committee has removed any mention of proactive measures that can be imposed on Internet actors, and has stated that "automated content filters" shall not be used by hosting service providers, this very tight deadline and the threat of heavy fines will only incite them to adopt the moderation tools developed by the Web's juggernauts (Facebook and Google) and to use the broadest possible definition of terrorism to avoid the risk of penalties. The impossible obligation to provide a point of contact reachable 24/7 has not been modified either. The IMCO opinion has even worsened the financial penalties that can be imposed: they are now "at least" 1% and up to 4% of the hosting service provider's turnover.
The next step will be on 11 March, when the CULT Committee (Culture and Education) will adopt its opinion.
The last real opportunity to obtain the rejection of this dangerous text will be on 21 March 2019, in the LIBE Committee (Civil Liberties, Justice and Home Affairs). European citizens must contact their MEPs to demand this rejection. We have a dedicated page on our website with an analysis of this Regulation and a tool to directly contact the MEPs in charge.
Starting today, and for the weeks to come, call your MEPs and demand they reject this text.
A group of some of the best-known internet pioneers have written an open letter explaining how the EU's censorship law, nominally targeting terrorism, will chill the non-terrorist internet while advantaging US internet giants over smaller European businesses. The group writes:
EU Terrorist Content regulation will damage the internet in Europe without meaningfully
contributing to the fight against terrorism
Dear MEP Dalton,
Dear MEP Ward,
Dear MEP Reda,
As a group of pioneers, technologists, and innovators who have helped create and sustain today's internet, we write to you to voice our concern at proposals under consideration in the EU Terrorist Content Regulation.
Tackling terrorism and the criminal actors who perpetrate it is a necessary public policy objective, and the internet plays an important role in achieving this end. The tragic and harrowing incident in Christchurch, New
Zealand earlier this month has underscored the continued threat terrorism poses to our fundamental freedoms, and the need to confront it in all its forms. However, the fight against terrorism does not preclude lawmakers from their responsibility
to implement evidence-based law that is proportionate, justified, and supportive of its stated aim.
The EU Terrorist Content regulation, if adopted as proposed, will restrict the basic rights of European internet users and undercut innovation on the internet without meaningfully contributing to the fight against terrorism.
We are particularly concerned by the following aspects of the proposed Regulation:
Unclear definition of terrorist content: The definition of 'terrorist content' is extremely broad, and includes no clear exemption for educational, journalistic, or research purposes. This creates the risk of over-removal of lawful and important public interest speech.
Lack of proportionality: The regulation applies equally to all internet hosting services, bringing thousands of services into scope that have no relevance to terrorist content. By not taking any account of the different
types and sizes of online services, nor their exposure to such illegal content, the new rules would be far out of proportion with the stated aim of the proposal.
Unworkable takedown timeframes: The obligation to remove content within a mere 60 minutes of notification will likely lead to significant over-removal of lawful content and place a catastrophic compliance burden on micro, small, and medium-sized companies offering services within Europe. At the same time, it will greatly favour large multinational platforms that have already developed highly sophisticated content moderation operations.
Reliance on upload filters and other 'proactive measures': The draft regulation frames automated upload filters as 'the' solution for terrorist content moderation at scale, and provides government agencies with the power to mandate how such upload filters and other proactive measures are designed and implemented. But upload filtering of 'terrorist content' is fraught with challenges and risks, and only a handful of online services have the resources and capacity to build or license such technology. As such, the proposal is setting a benchmark that only the largest platforms can meet. Moreover, upload filtering and related proactive measures risk suppressing important public interest content, such as news reports about terrorist incidents and dispatches from warzones.
We fully support efforts to combat dangerous and illegal information on the internet, including through new legislation where appropriate. Yet as currently drafted, this Regulation risks inflicting harm on free expression and due process, competition and the possibility to innovate online.
Given these likely ramifications we urge you to undertake a proper assessment of the proposal and make the necessary changes to ensure that the perverse outcomes described above are not realised. At the very least, any legislation of this nature
must include far greater rights protection and be built around a proportionality criterion that ensures companies of all sizes and types can comply and compete in Europe.
Citizens in Europe look to you for leadership in developing progressive policy that protects their rights, ensures their companies can compete, and protects their public interest. This legislation in its current form runs contrary to those
ambitions. We urge you to amend it, for the sake of European citizens and for the sake of the internet.
Yours sincerely,
Mitchell Baker, Executive Chairwoman, The Mozilla Foundation and Mozilla Corporation
Tim Berners-Lee, Inventor of the World Wide Web and Founder of the Web Foundation
Vint Cerf, Internet Pioneer
Brewster Kahle, Founder & Digital Librarian, Internet Archive
Jimmy Wales, Founder of Wikipedia and Member of the Board of Trustees of the Wikimedia Foundation
Markus Beckedahl, Founder, Netzpolitik; Co-founder, re:publica
Brian Behlendorf, Member of the EFF Board of Directors; Executive Director of Hyperledger at the Linux Foundation
Cindy Cohn, Executive Director, Electronic Frontier Foundation
Cory Doctorow, Author; Co-Founder of Open Rights Group; Visiting Professor at Open University (UK)
Rebecca MacKinnon, Co-founder, Global Voices; Director, Ranking Digital Rights
Katherine Maher, Chief Executive Officer of the Wikimedia Foundation
Bruce Schneier, Public-interest technologist; Fellow, Berkman Klein Center for Internet & Society; Lecturer, Harvard Kennedy School
The European Parliament is set to vote on legislation that would require websites that host user-generated content to take down material reported as terrorist content within one hour. We have some examples of current notices sent to the Internet
Archive that we think illustrate very well why this requirement would be harmful to the free sharing of information and freedom of speech that the European Union pledges to safeguard.
In the past week, the Internet Archive has received a series of email notices from Europol's European Union Internet Referral Unit (EU IRU) falsely identifying hundreds of URLs on archive.org as terrorist propaganda. At least one of these mistaken URLs was also identified as terrorist content in a separate takedown notice from the French government's L'Office Central de Lutte contre la Criminalité liée aux Technologies de l'Information et de la Communication (OCLCTIC).
The Internet Archive has a few staff members who process takedown notices from law enforcement, and they operate in the Pacific time zone. Most of the falsely identified URLs mentioned here (including the report from the French government) were sent to us in the middle of the night -- between midnight and 3am Pacific -- and all of the reports were sent outside of the business hours of the Internet Archive.
The one-hour requirement essentially means that we would need to take reported URLs down automatically and do our best to review them after the fact.
It would be bad enough if the mistaken URLs in these examples were for a set of relatively obscure items on our site, but the EU IRU's lists include some of the most visited pages on archive.org and materials that obviously have high scholarly
and research value. See a summary below with specific examples.
The European Parliament has approved a draft version of new EU internet censorship law targeting terrorist content.
In particular, the MEPs approved the imposition of a one-hour deadline to remove content marked for censorship by various national organisations. However, the MEPs did not approve a key section of the law requiring internet companies to pre-process and censor terrorist content prior to upload.
A European Commission official told the BBC that changes made to the text by parliament had made the law ineffective. The Commission will now try to restore the pre-censorship requirement with the new parliament when it is elected.
The law would affect social media platforms including Facebook, Twitter and YouTube, which could face fines of up to 4% of their annual global turnover. What does the law say?
In amendments, the European Parliament said websites would not be forced to monitor the information they transmit or store, nor have to actively seek facts indicating illegal activity. It said the competent authority should give the website
information on the procedures and deadlines 12 hours before the agreed one-hour deadline the first time an order is issued.
In February, German MEP Julia Reda of the European Pirate Party said the legislation risked the surrender of our fundamental freedoms [and] undermines our liberal democracy. Ms Reda welcomed the changes brought by the European Parliament but said the one-hour deadline was unworkable for platforms run by individuals or small providers.