Ofcom has published a prospectus angling for a role as the UK internet censor. It writes:
Ofcom has published a discussion document examining the area of harmful online content.
In the UK and around the world, a debate is underway about whether regulation is needed to address a range of problems that originate online, affecting people, businesses and markets.
The discussion document is intended as a contribution to that debate, drawing on Ofcom's experience of regulating the UK's communications sector, and broadcasting in particular. It draws out the key lessons from the regulation of content
standards for broadcast and on-demand video services, and the insights that these might provide to policy makers into the principles that could underpin any new models for addressing harmful online content.
The UK Government intends to legislate to improve online safety, and to publish a White Paper this winter. Any new legislation is a matter for Government and Parliament, and Ofcom has no view about the institutional arrangements that might be put in place.
Alongside the discussion paper, Ofcom has published joint research with the Information Commissioner's Office on people's perception, understanding and experience of online harm. The survey of 1,686 adult internet users finds that 79% have
concerns about aspects of going online, and 45% have experienced some form of online harm. The study shows that protection of children is a primary concern, and reveals mixed levels of understanding around what types of media are regulated.
The sales pitch is more or less that Ofcom's TV censorship has 'benefited' viewers so would be a good basis for internet censorship.
Ofcom particularly makes a point of pushing the results of a survey of internet users and their 'concerns'. The survey is very dubious and ends up suggesting that 79% of users have concerns about going online.
And maybe this claim is actually true. After all, the Melon Farmers are amongst the 79% who have concerns about going online. The Melon Farmers are concerned that:
There are vast amounts of scams and viruses waiting to be filtered out of the Melon Farmers' email inbox every day.
The authorities never seem interested in doing anything whatsoever about protecting people from being scammed out of their life savings. Have you EVER heard of the police investigating a phishing scam?
On the other hand, the police devote vast resources to prosecuting internet insults and jokes, whilst never investigating the scams that see old folks lose their life savings.
So yes, there is concern about the internet. BUT, it would be a lie to infer that these concerns mean support for Ofcom's proposals to censor websites along the lines of TV.
In fact, looking at the figures, some of the larger categories of 'concern' are more about fears of real crime than about issues like fake news.
Interestingly, Ofcom has published how the 'concerns' were hyped up by prompting the survey respondents a little. For instance, Ofcom reports that 12% of internet users say they are 'concerned' about fake news without being prompted. With a little prompting by
the interviewer, the number of people reporting being concerned about fake news magically increases to 29%.
It also has to be noted that there are NO reports in the survey of internet users concerned about a lack of news balancing opinions, a lack of algorithm transparency, a lack of trust ratings for news sources, or indeed most of the other
suggestions that Ofcom addresses.
I've seen more fake inferences in the Ofcom discussion document than I have seen fake news items on the internet in the last ten years.
Tony Hall, the BBC's director general, has repeated his call for global streaming companies Netflix and Amazon to suffer the same censorship as the UK's traditional broadcasters -- or else risk killing off distinctive British content. He said to
the Royal Television Society's London conference:
It cannot be right that the UK's media industry is competing against global giants with one hand tied behind its back.
In so many ways -- prominence, competition rules, advertising, taxation, content regulation, terms of trade, production quotas -- one set of rules applies to UK companies, and barely any apply to the new giants. That needs rebalancing, too. We
stand ready to help, where we can.
Hall will use the speech to warn that young British audiences now spend almost as much time watching Netflix -- which only launched its UK streaming service in 2012 -- as watching BBC television and iPlayer combined.
Citing Ofcom figures, Hall warned that Britain's public service broadcasters have cut spending on content in real terms by around £1bn since 2004. He said that global streaming companies are not spending enough on British productions to make up
the difference, while their UK-based productions tend to focus on material which has a global appeal rather than a distinctly British flavour. Hall added:
This isn't just an issue for us economically, commercially or as institutions. There is an impact on society. The content we produce is not an ordinary consumer good. It helps shape our society. It brings people together, it helps us understand
each other and share a common national story.
Tech companies that fail to remove terrorist content quickly could soon face massive fines. The European Commission proposed new rules on Wednesday that would require internet platforms to remove illegal terror content within an hour of it being
flagged by national authorities. Firms could be fined up to 4% of global annual revenue if they repeatedly fail to comply.
Facebook (FB), Twitter (TWTR) and YouTube owner Google (GOOGL) had already agreed to work with the European Union on a voluntary basis to tackle the problem. But the Commission said that progress has not been sufficient.
A penalty of 4% of annual revenue for 2017 would translate to $4.4 billion for Google parent Alphabet and $1.6 billion for Facebook.
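The arithmetic behind those headline figures is straightforward. A minimal sketch, using the companies' approximate reported 2017 revenues (rounded, in billions of US dollars):

```python
# Sketch of the proposed penalty cap: up to 4% of global annual revenue.
# Revenue figures are approximate 2017 totals in billions of US dollars.
PENALTY_RATE = 0.04

revenues_2017_bn = {
    "Alphabet": 110.9,  # Google's parent company
    "Facebook": 40.7,
}

for company, revenue in revenues_2017_bn.items():
    max_fine = revenue * PENALTY_RATE
    print(f"{company}: up to ${max_fine:.1f}bn")  # Alphabet ~$4.4bn, Facebook ~$1.6bn
```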
The proposal is the latest in a series of European efforts to control the activities of tech companies.
The terror content proposal needs to be approved by the European Parliament and EU member states before becoming law.
The European Court of Human Rights (ECtHR) has found that the UK's mass surveillance programmes, revealed by NSA whistleblower Edward Snowden, did not meet the quality of law requirement and were incapable of keeping the interference
to what is necessary in a democratic society.
The landmark judgment marks the Court's first ruling on UK mass surveillance programmes revealed by Mr Snowden. The case was started in 2013 by campaign groups Big Brother Watch, English PEN, Open Rights Group and computer science expert Dr
Constanze Kurz following Mr Snowden's revelation of GCHQ mass spying.
Documents provided by Mr Snowden revealed that the UK intelligence agency GCHQ were conducting population-scale interception, capturing the communications of millions of innocent people. The mass spying programmes included TEMPORA, a bulk data
store of all internet traffic; KARMA POLICE, a catalogue including a web browsing profile for every visible user on the internet; and BLACK HOLE, a repository of over 1 trillion events including internet histories, email and instant messenger
records, search engine queries and social media activity.
The applicants argued that the mass interception programmes infringed UK citizens' rights to privacy protected by Article 8 of the European Convention on Human Rights as the population-level surveillance was effectively indiscriminate, without
basic safeguards and oversight, and lacked a sufficient legal basis in the Regulation of Investigatory Powers Act (RIPA).
In its judgment, the ECtHR acknowledged that bulk interception is by definition untargeted; that there was a lack of oversight of the entire selection process, and that safeguards were not sufficiently robust to provide adequate
guarantees against abuse.
In particular, the Court noted concern that the intelligence services can search and examine "related communications data" apparently without restriction -- data that identifies senders and recipients of communications, their
location, email headers, web browsing information, IP addresses, and more. The Court expressed concern that such unrestricted snooping could be capable of painting an intimate picture of a person through the mapping of social networks,
location tracking, Internet browsing tracking, mapping of communication patterns, and insight into who a person interacted with.
The Court acknowledged the importance of applying safeguards to a surveillance regime, stating:
In view of the risk that a system of secret surveillance set up to protect national security may undermine or even destroy democracy under the cloak of defending it, the Court must be satisfied that there are adequate and effective guarantees against abuse.
The Government passed the Investigatory Powers Act (IPA) in November 2016, replacing the contested RIPA powers and controversially putting mass surveillance powers on a statutory footing.
However, today's judgment that indiscriminate spying breaches rights protected by the ECHR is likely to provoke serious questions as to the lawfulness of bulk powers in the IPA.
Jim Killock, Executive Director of Open Rights Group said:
Viewers of the BBC drama, the Bodyguard, may be shocked to know that the UK actually has the most extreme surveillance powers in a democracy. Since we brought this case in 2013, the UK has actually increased its powers to indiscriminately
surveil our communications whether or not we are suspected of any criminal activity.
In light of today's judgment, it is even clearer that these powers do not meet the criteria for proportionate surveillance and that the UK Government is continuing to breach our right to privacy.
Silkie Carlo, director of Big Brother Watch said:
This landmark judgment confirming that the UK's mass spying breached fundamental rights vindicates Mr Snowden's courageous whistleblowing and the tireless work of Big Brother Watch and others in our pursuit of justice.
Under the guise of counter-terrorism, the UK has adopted the most authoritarian surveillance regime of any Western state, corroding democracy itself and the rights of the British public. This judgment is a vital step towards protecting millions
of law-abiding citizens from unjustified intrusion. However, since the new Investigatory Powers Act arguably poses an ever greater threat to civil liberties, our work is far from over.
Antonia Byatt, director of English PEN said:
This judgment confirms that the British government's surveillance practices have violated not only our right to privacy, but our right to freedom of expression too. Excessive surveillance discourages whistle-blowing and discourages investigative
journalism. The government must now take action to guarantee our freedom to write and to read freely online.
Dr Constanze Kurz, computer scientist, internet activist and spokeswoman of the German Chaos Computer Club said:
What is at stake is the future of mass surveillance of European citizens, not only by UK secret services. The lack of accountability is not acceptable when the GCHQ penetrates Europe's communication data with their mass surveillance techniques.
We all have to demand now that our human rights and more respect of the privacy of millions of Europeans will be acknowledged by the UK government and also by all European countries.
Dan Carey of Deighton Pierce Glynn, the solicitor representing the applicants, stated as follows:
The Court has put down a marker that the UK government does not have a free hand with the public's communications and that in several key respects the UK's laws and surveillance practices have failed. In particular, there needs to be much
greater control over the search terms that the government is using to sift our communications. The pressure of this litigation has already contributed to some reforms in the UK and this judgment will require the UK government to look again at
its practices in this most critical of areas.
The European Parliament has voted to approve new copyright powers enabling the big media industry to control how their content is used on the internet.
Article 11 introduces the link tax which lets news companies control how their content is used. The target of the new law is to make Google pay newspapers for its aggregating Google News service. The collateral damage is that millions of
websites can now be harangued for linking to and quoting articles, or even just sharing links to them.
Article 13 introduces the requirements for user content sites to create censorship machines that pre-scan all uploaded content and block anything copyrighted. The original proposal, voted on in June, directly specified content hosts use
censorship machines (or filters as the EU prefers to call them). After a cosmetic rethink since June, the law no longer specifies automatic filters, but instead specifies that content hosts are responsible for any copyrighted material they publish. And of course
the only feasible way that content hosts can ensure they are not publishing copyrighted material is to use censorship machines anyway. The law was introduced, really with just the intention of making YouTube and Facebook pay more for content from
the big media companies. The collateral damage to individuals and small businesses was clearly of no concern to the well lobbied MEPs.
Both articles will introduce profound new levels of censorship for all users of the internet, and will also mean that there will be reduced opportunities for people to get their contributions published or noticed on the internet. This is simply
because the large internet companies are commercial organisations and will always make decisions with costs and profitability in mind. They are not state censors with a budget to spend on nuanced decision making. So the net outcome will be to
block vast swathes of content being uploaded just in case it may contain copyrighted material.
An example to demonstrate the point is the US censorship law, FOSTA. It requires content hosts to block content facilitating sex trafficking. Internet companies generally decided that it was easier to block all adult content rather than to try
and distinguish sex trafficking from non-trafficking sex related content. So sections of websites for dating and small ads, personal services etc were shut down overnight.
The EU has, however, introduced a few amendments to the original law to slightly lessen the impact on individuals and small-scale content creators.
Article 13 will now only apply to platforms where the main purpose ... is to store and give access to the public or to stream significant amounts of copyright protected content uploaded / made available by its users and
that optimise content and promotes for profit making purposes.
When defining best practices for Article 13, special account must now be taken of fundamental rights, the use of exceptions and limitations. Special focus should also be given to ensuring that the burden on SMEs remains
appropriate and that automated blocking of content is avoided (effectively an exception for micro/small businesses). Article 11 shall not extend to mere hyperlinks, which are accompanied by individual words (so it seems links are safe, but
quoted snippets of text must be very short), and the protection shall also not extend to factual information which is reported in journalistic articles from a press publication and will therefore not prevent anyone from reporting such factual information.
Article 11 shall not prevent legitimate private and non-commercial use of press publications by individual users.
Article 11 rights shall expire 5 years after the publication of the press publication. This term shall be calculated from the first day of January of the year following the date of publication. The right referred to in
paragraph 1 shall not apply with retroactive effect.
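The expiry rule described above is slightly unintuitive, so here is a minimal sketch of how the term would be computed; the publication date used is a made-up example:

```python
from datetime import date

TERM_YEARS = 5  # Article 11 rights last 5 years

def article11_expiry(published: date) -> date:
    # The term is calculated from 1 January of the year
    # following the date of publication.
    term_start = date(published.year + 1, 1, 1)
    return date(term_start.year + TERM_YEARS, 1, 1)

# Example: an article published mid-2019 would be protected
# from 2020-01-01, with rights expiring on 2025-01-01.
print(article11_expiry(date(2019, 6, 15)))  # 2025-01-01
```

In practice this means an article published on 2 January enjoys almost a full extra year of protection compared to one published the previous 31 December.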
Individual member states will now have to decide how Article 11 is implemented, which could create some confusion across borders.
At the same time, the EU rejected the other modest proposals to help out individuals and small creators:
No freedom of panorama. When we take photos or videos in public spaces, we're apt to incidentally capture copyrighted works: from stock art in ads on the sides of buses to t-shirts worn by protestors, to building facades claimed by architects
as their copyright. The EU rejected a proposal that would make it legal Europe-wide to photograph street scenes without worrying about infringing the copyright of objects in the background.
No user-generated content exemption, which would have made EU states carve out an exception to copyright for using excerpts from works for criticism, review, illustration, caricature, parody or pastiche.
A final round of negotiation with the EU Council and European Commission is now due to take place before member states make a decision early next year. But this is historically more of a rubber stamping process and few, if any, significant
changes are expected.
However, anybody who mistakenly thinks that Brexit will stop this from impacting the UK should be cautious. Regardless of what the EU approves, the UK might still have to implement it, and in any case the current UK Government supports many of
the controversial new measures.
Despite waves of calls and emails from European Internet users, the European Parliament today voted to accept the principle of a universal pre-emptive copyright filter for content-sharing sites, as well as the idea that news publishers should
have the right to sue others for quoting news items online -- or even using their titles as links to articles. Out of all of the potential amendments offered that would fix or ameliorate the damage caused by these proposals, they voted for the worst on offer.
There are still opportunities, at the EU level, at the national level, and ultimately in Europe's courts, to limit the damage. But make no mistake, this is a serious setback for the Internet and digital rights in Europe.
It also comes at a trepidatious moment for pro-Internet voices in the heart of the EU. On the same day as the vote on these articles, another branch of the European Union's government, the Commission, announced plans to introduce a new regulation
on preventing the dissemination of terrorist content online. Doubling down on speedy unchecked censorship, the proposals will create a new removal order, which will oblige hosting service providers to remove content within one hour of being
ordered to do so. Echoing the language of the copyright directive, the Terrorist Regulation aims at ensuring the smooth functioning of the digital single market in an open and democratic society, by preventing the misuse of hosting services for
terrorist purposes; it encourages the use of proactive measures, including the use of automated tools.
Not content with handing copyright law enforcement to algorithms and tech companies, the EU now wants to expand that to defining the limits of political speech too.
And as bad as all this sounds, it could get even worse. Elections are coming up in the European Parliament next May. Many of the key parliamentarians who have worked on digital rights in Brussels will not be standing. Marietje Schaake, author of
some of the better amendments for the directive, announced this week that she would not be running again. Julia Reda, the German Pirate Party representative, is moving on; Jan Philipp Albrecht, the MEP behind the GDPR, has already left Parliament
to take up a position in domestic German politics. The European Parliament's reserves of digital rights expertise, never that full to begin with, are emptying.
The best that can be said about the Copyright in the Digital Single Market Directive, as it stands, is that it is so ridiculously extreme that it looks set to shock a new generation of Internet activists into action -- just as the DMCA, SOPA/PIPA
and ACTA did before it.
If you've ever considered stepping up to play a bigger role in European politics or activism, whether at the national level, or in Brussels, now would be the time.
It's not enough to hope that these laws will lose momentum or fall apart from their own internal incoherence, or that those who don't understand the Internet will refrain from breaking it. Keep reading and supporting EFF, and join Europe's
powerful partnership of digital rights groups, from Brussels-based EDRi to your local national digital rights organization. Speak up for your digital business, open source project, for your hobby or fandom, and as a contributor to the global conversation.
This was a bad day for the Internet and for the European Union: but we can make sure there are better days to come.
The EU's upcoming ePrivacy Regulation is set to be an even bigger test for businesses than GDPR. It's a regulation that will create a likely deficit in the customer information they collect even post-GDPR.
Current cookie banner notifications, where websites inform users of cookie collection, will make way for cookie request pop-ups that deny cookie collection until a user has opted in or out of different types of cookie collection. Such a pop-up is
expected to cause a drop in web traffic as high as 40 per cent. The good news is that it will only appear should the user not have already set their cookie preferences at browser level.
The outcome for businesses whose marketing and advertising lies predominantly online is the inevitable reduction in their ability to track, re-target and optimise experiences for their visitors.
For any business with a website and dependent on cookies, the new regulations put them at severe risk of losing this vital source of consumer data. As a result, businesses must find a practical, effective and legal alternative to alleviate the
burden on the shoulders of all teams involved and to offset any drastic shortfall in this crucial data.
Putting the power in the hands of consumers when it comes to setting browser-level cookie permissions will limit a business's ability to extensively track the actions users take on company websites and progress targeted cookie-based advertising.
Millions of internet users will have the option to withdraw their dataset from the view of businesses, one of the biggest threats ePrivacy poses.
MEPs approve copyright law requiring Google and Facebook to use censorship machines to block user uploads that may contain snippets of copyright material, including headlines, article text, pictures and video
The European Parliament has approved a disgraceful copyright law that threatens to destroy the internet as we know it.
The rule hands more power to news and record companies against Internet giants like Google and Facebook. But it also allows companies to make sweeping blocks of user-generated content, such as internet memes or reaction GIFs that use copyrighted
material. The tough approach could spell the end for internet memes, which typically lay text over copyrighted photos or video from television programmes, films, music videos and more.
MEPs voted 438 in favour of the measures, 226 against, with 39 abstentions. The vote introduced Articles 11 and 13 to the directive, dubbed the link tax and censorship machines.
Article 13 puts the onus of policing for copyright infringement on the websites themselves. This forces web giants like YouTube and Facebook to scan uploaded content to stop the unlicensed sharing of copyrighted material. If the internet
companies find that such scanning does not work well, or makes the service unprofitable, the companies could pull out of allowing users to post at all on topics where the use of copyright material is commonplace.
The second amendment to the directive, Article 11, is intended to give publishers and newspapers a way to make money when companies like Google link to their stories. Search engines and online platforms like Twitter and Facebook will have to pay a
license to link to news publishers when quoting portions of text from these outlets.
Following Wednesday's vote, EU lawmakers will now take the legislation to talks with the European Commission and the 28 EU countries.
Niche porn producers Pandora Blake and Misha Mayfair, campaigning lawyer Myles Jackman and Backlash are campaigning to back a legal challenge to the upcoming internet porn censorship regime in the UK. They write on a new crowdfunding page:
We are mounting a legal challenge.
Do you lock your door when you watch porn, or do you publish a notice in the paper? The new UK age verification law means you may soon have to upload a proof of age to visit adult sites. This would connect your legal identity to a database of
all your adult browsing. Join us to prevent the damage to your privacy.
The UK Government is bringing in age verification for adults who want to view adult content online, yet it has failed to provide privacy and security obligations to ensure your private information is securely protected.
The law does not currently limit age verification software to holding only the data needed to verify your age. Hence, other identifying data could be held about you, including anything from your passport information to your credit card
details, up to your full search history. This is highly sensitive data.
What are the Privacy Risks?
Data Misuse - Since age verification providers are legally permitted to collect this information, what is to stop them from increasing revenue through targeting advertising at you, or even selling your personal data?
Data Breaches - No database is perfectly secure, despite good intentions. The leaking or hacking of your sensitive personal information could be truly devastating. The Ashley Madison hack led to suicides. Don't let the Government allow your
private sexual preferences to be leaked into the public domain.
What are we asking money for?
We're asking you to help us crowdfund legal fees so we can challenge the new age verification rules under the Digital Economy Act 2017. We're asking for £10,000 to cover the cost of initial legal advice, since it's a complicated area of law.
Ultimately, we'd like to raise even more money, so we can send a message to Government that your personal privacy is of paramount importance.
Lucy Powell writes in the Guardian, (presumably intended as an open comment):
Closed forums on Facebook allow hateful views to spread unchallenged among terrifyingly large groups. My bill would change that
You may wonder what could bring Nicky Morgan, Anna Soubry, David Lammy, Jacob Rees-Mogg and other senior MPs from across parliament together at the moment. Yet they are all sponsoring a bill I'm proposing that will tackle online hate, fake news
and radicalisation. It's because, day-in day-out, whatever side of an argument we are on, we see the pervasive impact of abuse and hate online, and increasingly offline, too.
Social media has given extremists a new tool with which to recruit and radicalise. It is something we are frighteningly unequipped to deal with.
Worryingly, it is on Facebook, which most of us in Britain use, where people are being exposed to extremist material. Instead of small meetings in pubs or obscure websites in the darkest corners of the internet, our favourite social media site is
increasingly where hate is cultivated.
Online echo chambers are normalising and allowing extremist views to go viral unchallenged. These views are spread as the cheap thrill of racking up Facebook likes drives behaviour and reinforces a binary worldview. Some people are being groomed
unwittingly as unacceptable language is treated as the norm. Others have a more sinister motive.
While in the real world, alternative views would be challenged by voices of decency in the classroom, staffroom, or around the dining-room table, there are no societal norms in the dark crevices of the online world. The impact of these bubbles of
hate can be seen, in extreme cases, in terror attacks from radicalised individuals. But we can also see it in the rise of the far right, with Tommy Robinson supporters rampaging through the streets this summer, or in increasing Islamophobia and antisemitism.
Through Facebook groups (essentially forums), extremists can build large audiences. There are many examples of groups that feature anti-Muslim or antisemitic content daily, in an environment which, because critics are removed from the groups,
normalises these hateful views. If you see racist images, videos and articles in your feed but not the opposing argument, you might begin to think those views are acceptable and even correct. If you already agree with them, you might be motivated to act.
This is the thinking behind Russia's interference in the 2016 US presidential election. The Russian Internet Research Agency set up Facebook groups, amassed hundreds of thousands of members, and used them to spread hate and fake news, organise
rallies, and attack Hillary Clinton. Most of its output was designed to stoke the country's racial tensions.
It's not only racism that is finding a home on Facebook. Marines United was a secret group of 30,000 current and former servicemen in the British armed forces and US Marines. Members posted nude photos of their fellow servicewomen, taken in
secret. A whistleblower described the group as revenge porn, creepy stalker-like photos taken of girls in public, talk about rape. It is terrifying that the group grew so large before anyone spoke out, and that Facebook did nothing until someone
informed the media.
Because these closed forums can be given a secret setting, they can be hidden away from everyone but their members. This locks out the police, intelligence services and charities that could otherwise engage with the groups and correct
disinformation. This could be particularly crucial with groups where parents are told not to vaccinate their children against diseases.
Despite having the resources to solve the problem, Facebook lacks the will. In fact, at times it actively obstructs those who wish to tackle hate and disinformation. Of course, it is not just Facebook, and the proliferation of online platforms
and forums means that the law has been much too slow to catch up with our digital world.
We should educate people to be more resilient and better able to spot fake news and recognise hate, but we must also ensure there are much stronger protections to spread decency and police our online communities. The responsibility to regulate
these social media platforms falls on the government. It is past time to act.
That's why I am introducing a bill in parliament which will do just that. By establishing legal accountability for what's published in large online forums, I believe we can force those who run these echo chambers to stamp out the evil that is
currently so prominent. Social media can be a fantastic way of bringing people together, which is precisely why we need to prevent it being hijacked by those who instead wish to divide.
On Wednesday, the EU will vote on whether to accept two controversial proposals in the new Copyright Directive; one of these clauses, Article 13, has the potential to allow anyone, anywhere in the world, to effect mass, rolling waves of
censorship across the Internet.
The way things stand today, companies that let their users communicate in public (by posting videos, text, images, etc) are required to respond to claims of copyright infringement by removing their users' posts, unless the user steps up to
contest the notice. Sites can choose not to remove work if they think the copyright claims are bogus, but if they do, they can be sued for copyright infringement (in the United States at least), alongside their users, with huge penalties at
stake. Given that risk, the companies usually do not take a stand to defend user speech, and many users are too afraid to stand up for their own speech because they face bankruptcy if a court disagrees with their assessment of the law.
This system, embodied in the United States' Digital Millennium Copyright Act (DMCA) and exported to many countries around the world, is called notice and takedown, and it offers rightsholders the ability to unilaterally censor the Internet on
their say-so, without any evidence or judicial oversight. This is an extraordinary privilege without precedent in the world of physical copyright infringement (you can't walk into a cinema, point at the screen, declare I own that, and get the
movie shut down!).
But rightsholders have never been happy with notice and takedown. Because works that are taken down can be reposted, sometimes by bots that automate the process, rightsholders have called notice and takedown a game of whac-a-mole, where they
have to keep circling back to remove the same infringing files over and over.
Rightsholders have long demanded a notice and staydown regime. In this system, rightsholders send online platforms digital copies of their whole catalogs; the platforms then build copyright filters that compare everything a user wants to post to
this database of known copyrights, and block anything that seems to be a match.
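The matching step such a filter performs can be sketched in a few lines. This is a hypothetical illustration only: real systems like Content ID use fuzzy perceptual fingerprints of audio and video, not exact hashes, and the class and method names here are invented for the example.

```python
import hashlib

# Hypothetical sketch of a "notice and staydown" filter. Real systems
# use perceptual fingerprints that tolerate re-encoding and editing;
# exact SHA-256 hashing is used here only to keep the sketch short.

class StaydownFilter:
    def __init__(self):
        self.claimed = set()  # fingerprints of works claimed by rightsholders

    def register_catalogue(self, works):
        """Bulk-load a rightsholder's catalogue into the blocklist."""
        for work in works:
            self.claimed.add(hashlib.sha256(work).hexdigest())

    def allow_upload(self, content):
        """Block any upload whose fingerprint matches a claimed work."""
        return hashlib.sha256(content).hexdigest() not in self.claimed

f = StaydownFilter()
f.register_catalogue([b"new blockbuster film"])
print(f.allow_upload(b"new blockbuster film"))   # False: blocked
print(f.allow_upload(b"my holiday video"))       # True: allowed
```

Even this toy version shows where the trouble starts: the filter knows only whether bytes match, not whether the uploader holds a licence, is quoting under fair dealing, or is the rightsholder themselves.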
Tech companies have voluntarily built versions of this system. The most well-known of the bunch is YouTube's Content ID system, which cost $60,000,000 to build, and which works by filtering the audio tracks of videos to categorise them.
Rightsholders are adamant that Content ID doesn't work nearly well enough, missing all kinds of copyrighted works, while YouTube users report rampant overmatching, in which legitimate works are censored by spurious copyright claims: NASA gets
blocked from posting its own Mars rover footage; classical pianists are blocked from posting their own performances, birdsong results in videos being censored, entire academic conferences lose their presenters' audio because the hall they rented played music at the lunch break--you can't even post silence without triggering copyright enforcement. Besides that, there is no bot that can judge whether something that does use copyrighted material is fair dealing. Fair dealing is
protected under the law, but not under Content ID.
If Content ID is a prototype, it needs to go back to the drawing board. It overblocks (catching all kinds of legitimate media) and underblocks (missing stuff that infuriates the big entertainment companies). It is expensive, balky, and ineffective.
It's coming soon to an Internet near you.
On Wednesday, the EU will vote on whether the next Copyright Directive will include Article 13, which makes Content-ID-style filters mandatory for the whole Internet, and not just for the soundtracks of videos--also for the video portions, for
audio, for still images, for code, even for text. Under Article 13, the services we use to communicate with one another will have to accept copyright claims from all comers, and block anything that they believe to be a match.
This measure will censor the Internet and it won't even help artists to get paid.
Let's consider how a filter like this would have to work. First of all, it would have to accept bulk submissions. Disney and Universal (not to mention scientific publishers, stock art companies, real-estate brokers, etc) will not pay an army of
data-entry clerks to manually enter their vast catalogues of copyrighted works, one at a time, into dozens or hundreds of platforms' filters. For these filters to have a hope of achieving their stated purpose, they will have to accept thousands
of entries at once--far more than any human moderator could review.
But even if the platforms could hire, say, 20 percent of the European workforce to do nothing but review copyright database entries, this would not be acceptable to rightsholders. Not because those workers could not be trained to accurately
determine what was, and was not, a legitimate claim--but because the time it would take for them to review these claims would be absolutely unacceptable to rightsholders.
It's an article of faith among rightsholders that the majority of sales take place immediately after a work is released, and that therefore infringing copies are most damaging when they're available at the same time as a new work is released
(they're even more worried about pre-release leaks).
If Disney has a new blockbuster that's leaked onto the Internet the day it hits cinemas, they want to pull those copies down in seconds, not after precious days have trickled past while a human moderator plods through a queue of copyright claims
from all over the Internet.
Combine these three facts:
Anyone can add anything to the blacklist of copyrighted works that can't be published by Internet users;
The blacklists have to accept thousands of works at once; and
New entries to the blacklist have to go into effect instantaneously.
It doesn't take a technical expert to see how ripe for abuse this system is. Bad actors could use armies of bots to block millions of works at a go (for example, jerks could use bots to bombard the databases with claims of ownership over the
collected works of Shakespeare, adding them to the blacklists faster than they could possibly be removed by human moderators, making it impossible to quote Shakespeare online).
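The failure mode here follows directly from the design: nothing verifies that a claimant actually owns what they register. A minimal sketch (all names hypothetical) shows how a bot can blacklist public-domain text on its own say-so:

```python
# Hypothetical sketch: a claims database with no ownership verification.
# Anyone -- or any bot -- can register any text, after which the platform
# must block matching uploads, including public-domain works.

claims_db = set()

def file_claim(text, claimant):
    # No evidence or judicial oversight required: the claim is
    # accepted purely on the claimant's say-so.
    claims_db.add(text.lower())

def upload_allowed(post):
    return post.lower() not in claims_db

# A troll's bot claims "ownership" of a public-domain line.
file_claim("To be, or not to be, that is the question", claimant="troll-bot")

print(upload_allowed("To be, or not to be, that is the question"))  # False
```

Because claims can be filed faster than human moderators can review them, the blacklist only ever grows.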
But more disturbing is targeted censorship: politicians have long abused takedown to censor embarrassing political revelations or take critics offline, as have violent cops and homophobic trolls.
These entities couldn't use Content ID to censor the whole Internet: instead, they had to manually file takedowns and chase their critics around the Internet. Content ID only works for YouTube -- plus it only allows trusted rightsholders to add
works wholesale to the notice and staydown database, so petty censors are stuck committing retail copyfraud.
But under Article 13, everyone gets to play wholesale censor, and every service has to obey their demands: just sign up for a rightsholder account on a platform and start telling it what may and may not be posted. Article 13 has no teeth for stopping this from happening; and in any event, if you get kicked off the service, you can just pop up under a new identity and start again.
Some rightsholder lobbyists have admitted that there is potential for abuse here, but they insist that it will all be worth it, because it will get artists paid. Unfortunately, this is also not true.
For all that these filters are prone to overblocking and ripe for abuse, they are not very effective against someone who actually wants to defeat them.
Let's look at the most difficult-to-crack content filters in the world: the censoring filters used by the Chinese government to suppress politically sensitive materials. These filters have a much easier job than the ones European companies will
have to implement: they only filter a comparatively small number of items, and they are built with effectively unlimited budgets, subsidized by the government of one of the world's largest economies, which is also home to tens of millions of
skilled technical people, and anyone seeking to subvert these censorship systems is subject to relentless surveillance and risks long imprisonment and even torture for their trouble.
Those Chinese censorship systems are really, really easy to break, as researchers from the University of Toronto's Citizen Lab demonstrated in a detailed research report released a few weeks ago.
People who want to break the filters and infringe copyright will face little difficulty. The many people who want to stay on the right side of copyright law--but find themselves inadvertently on the wrong side of the filters--will find themselves in insurmountable trouble, begging for appeal from a tech giant whose help systems all dead-end in brick walls. And any attempt to tighten the filters to catch these infringers will, of course, make it more likely that they will block legitimate material too.
A system that allows both censors and infringers to run rampant while stopping legitimate discourse is bad enough, but it gets worse for artists.
Content ID cost $60,000,000 and does a tiny fraction of what the Article 13 filters must do. When operating an online platform in the EU requires a few hundred million in copyright filtering technology, the competitive landscape gets a lot more barren. Certainly, none of the smaller EU competitors to the US tech giants can afford this.
On the other hand, US tech giants can afford this (indeed, have pioneered copyright filters as a solution, even as groups like EFF protested it), and while their first preference is definitely to escape regulation altogether, paying a few
hundred million to freeze out all possible competition is a pretty good deal for them.
The big entertainment companies may be happy with a deal that sells a perpetual Internet Domination License to US tech giants for a bit of money thrown their way, but that will not translate into gains for artists. The fewer competitors there are
for the publication, promotion, distribution and sale of creative works, the smaller the share will be that goes to creators.
We can do better: if the problem is monopolistic platforms (and indeed, monopolistic distributors ), tackling that directly as a matter of EU competition law would stop those companies from abusing their market power to squeeze creators.
Copyright filters are the opposite of antitrust, though: it will make the biggest companies much bigger, to the great detriment of all the little guys in the entertainment industry and in the market for online platforms for speech.
Many thanks to my local MEP Anthea McIntyre, who responded to my email about the rise of the censorship machines:
I appreciate your concerns regarding the new Copyright reform proposals. However, the objective of Article 13 is to make sure authors, such as musicians, are appropriately paid for their work, and to ensure that platforms fairly share revenues
which they derive from creative works on their sites with creators. I will be voting for new text which seeks to exclude small and microenterprise platforms from the scope and to introduce greater proportionality for SMEs.
In the text under discussion, if one of the main purposes of a platform is to share copyright works, if they optimise these works and also derive profit from them, the platform would need to conclude a fair license with the rightholders, if
rightholders request this. If not, platforms will have to check for and remove specific copyright content once this is supplied from rightholders. This could include pirated films which are on platforms at the same time as they are shown at the
cinema. However, if a platform's main purpose is not to share protected works, and it neither optimises copyright works nor makes a profit from them, it would not be required to conclude a license. There are exemptions for online encyclopaedias (Wikipedia), sites where rightholders have approved the uploading of their works, and software platforms, while online marketplaces (including eBay) are also out of scope.
Closing this value gap is an essential part of the Copyright Directive, which Secretary of State Matthew Hancock supports addressing. My Conservative colleagues and I support the general policy justification behind it, which is to make sure that platforms are responsible for their sites and that authors are fairly rewarded and
incentivised to create work. Content recognition will help to make sure creators, such as song writers, can be better identified and paid fairly for their work. Nevertheless, this should not be done at the expense of users' rights. We are
dedicated to striking the right balance between adequately rewarding rightholders and safeguarding users' rights. There are therefore important safeguards to protect users' rights, respect data protection, and to make sure that only proportionate
measures are taken.
I will therefore be supporting the mandate to enter into trilogue negotiations tomorrow so that the Directive can become law.
[Surely one understands that musicians are getting a bit of a rough deal from the internet giants, and one can see where McIntyre is coming from. However, it is clear that little thought has been given to how these rules will pan out in the real, profit-driven world, where the key stakeholders are doing their best for their shareholders, not the European peoples. It surely drives the west into poverty when laws are so freely passed just to do a few nice things, whilst totally ignoring the cost of destroying people's businesses and incomes.]
Offsite Comment: ...And from the point of view of the internet giants
ARTICLE 19 is leading a coalition of international human rights organisations, who will tell the European Court of Justice (CJEU) that the de-listing of websites under the right to be forgotten should be limited in order to protect global
freedom of expression. The hearing will take place on September 11 with a judgment expected in early 2019.
The CJEU hearing in Google vs CNIL is taking place after France's highest administrative court asked for clarification in relation to the 2014 ruling in Google Spain. This judgment allows European citizens to ask search engines like Google to
remove links to inadequate, irrelevant or ... excessive content -- commonly known as the right to be forgotten (RTBF). While the content itself remains online, it cannot be found through online searches of the individual's name.
The CJEU has been asked to clarify whether a court or data regulator should require a search engine to de-list websites only in the country where it has jurisdiction or across the entire world.
France's data regulator, the Commission Nationale de l'Informatique et des Libertes (CNIL), has argued that if it upholds a complaint by a French citizen, search engines such as Google should be compelled to remove links not only from google.fr but from all Google domains.
ARTICLE 19 and the coalition of intervening organisations have warned that forcing search engines to de-list information on a global basis would be disproportionate. Executive Director of ARTICLE 19, Thomas Hughes said:
This case could see the right to be forgotten threatening global free speech. European data regulators should not be allowed to decide what Internet users around the world find when they use a search engine. The CJEU must limit the scope of the
right to be forgotten in order to protect the right of Internet users around the world to access information online.
ARTICLE 19 argues that rights to privacy and rights to freedom of expression must be balanced when deciding whether websites should be de-listed. Hughes added:
If European regulators can tell Google to remove all references to a website, then it will be only a matter of time before countries like China, Russia and Saudi Arabia start to do the same. The CJEU should protect freedom of expression not set
a global precedent for censorship.
The bill threatens investigative journalism and academic research by making it a crime to view material online that could be helpful to a terrorist. This would deter investigative journalists from doing their work and would make academic
research into terrorism difficult or impossible.
New border powers in the bill could put journalists' confidential sources at risk. The bill's border security measures would mean that journalists could be forced to answer questions or hand over material that would reveal the identity of a
confidential source. These new powers could be exercised without any grounds for suspicion.
The bill also endangers freedom of expression in other ways. It would make it an offence to express an opinion in support of a proscribed (terrorist) organisation in a way that is reckless as to whether this could encourage another person to
support the organisation. This would apply even if the reckless person was making the statement to one other person in a private home.
The bill would criminalise the publication of a picture or video clip of an item of clothing or for example a flag in a way that aroused suspicion that the person is a member or supporter of a terrorist organisation. This would cover, for
example, someone taking a picture of themselves at home and posting it online.
Joy Hyvarinen, head of advocacy, said: The fundamentally flawed Counter-Terrorism and Border Security Bill should be sent back to the drawing board. It is not fit for purpose and it would limit freedom of expression, journalism and academic
research in a way that should be completely unacceptable in a democratic country.
Pornhub's age verification system AgeID has announced an exclusive partnership with OCL and its Portes solution, providing an anonymous face-to-face age verification service in which retailers confirm the age of customers who buy a card enabling porn access. The similar AVSecure scheme allows over-25s to buy the access card without showing any ID, but may require unrecorded ID from those appearing to be under 25.
According to the company, the PortesCard is available to purchase from selected high street retailers and any of the U.K.'s 29,000 PayPoint outlets as a voucher. Each PortesCard will cost £4.99 for use on a single device, or £8.99 for use across
multiple devices. This compares with £10 for the AVSecure card.
Once a card or voucher is purchased, its unique validation code must be activated via the Portes app within 24 hours or it expires. Once the user has been verified they will automatically be granted access to all adult sites using AgeID. Maybe this 24-hour limit is something to do with an attempt to restrict secondary sales of porn access codes by ensuring that they get tied to devices almost immediately. It all sounds a little hasslesome.
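The activation window described above amounts to a simple timestamp check that binds a code to a device. This is a speculative sketch of that logic -- the real AgeID/Portes implementation is not public, and every name and rule here is an assumption:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a 24-hour voucher activation window with
# device binding, as the PortesCard scheme is described. Not the
# actual AgeID/Portes implementation.

ACTIVATION_WINDOW = timedelta(hours=24)

class Voucher:
    def __init__(self, code, purchased_at):
        self.code = code
        self.purchased_at = purchased_at
        self.device_id = None  # set on first activation

    def activate(self, device_id, now):
        """Activate within 24 hours of purchase, binding to one device."""
        if now - self.purchased_at > ACTIVATION_WINDOW:
            return False  # voucher has expired unused
        if self.device_id is not None and self.device_id != device_id:
            return False  # already tied to another device
        self.device_id = device_id
        return True

bought = datetime(2018, 9, 1, 12, 0)
v = Voucher("ABC123", bought)
print(v.activate("phone-1", bought + timedelta(hours=2)))   # True
print(v.activate("phone-2", bought + timedelta(hours=3)))   # False: bound
```

Tying the code to a device within a day would indeed make resale of activated codes largely pointless, which fits the speculation above.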
As an additional layer of protection, parents can quickly and simply block access on their children's devices to sites using Portes, so PortesCards cannot be associated with AgeID.
But note that an anonymously bought card is not quite a 100% safe solution. One has to consider whether, if the authorities get hold of a device, they can then see a complete history of all websites accessed using the app or access code. One also has to consider whether someone can remotely correlate an 'anonymous' access code with all the tracking cookies holding one's ID.
The radio host and colourful conspiracy theorist Alex Jones has been permanently censored by Twitter.
One month after it distinguished itself from the rest of the tech industry by declining to bar the rightwing shock jock from its platform, Twitter fell in line with the other major social networks in banning Jones.
Twitter justified the censorship, saying:
We took this action based on new reports of Tweets and videos posted yesterday that violate our abusive behavior policy, in addition to the accounts' past violations. We will continue to evaluate reports we receive regarding other accounts
potentially associated with @realalexjones or @infowars and will take action if content that violates our rules is reported or if other accounts are utilized in an attempt to circumvent their ban.
The government is amending its Counter-Terrorism and Border Security Bill with regards to criminalising accessing terrorism related content on the internet.
MPs, peers and the United Nations have already raised human rights concerns over pre-existing measures in the Counter-Terrorism and Border Security Bill, which proposed to make accessing propaganda online on three or more different occasions a criminal offence.
The Joint Human Rights Committee found the wording of the law vague and told the government it violated Article 10 of the European Convention on Human Rights (ECHR). The committee concluded in July:
This clause may capture academic and journalistic research as well as those with inquisitive or even foolish minds.
The viewing of material without any associated intentional or reckless harm is, in our view, an unjustified interference with the right to receive information...unless amended, this implementation of this clause would clearly risk breaching
Article 10 of the ECHR and unjustly criminalising the conduct of those with no links to terrorism.
The committee called for officials to narrow the new criminal offence so it requires terrorist intent and defines how people can legally view terrorist material.
The United Nations Special Rapporteur on the right to privacy also chipped in, accusing the British government of straying towards thought crime with the law.
In response, the government scrapped the three clicks rule entirely and broadened the concept of viewing to make the draft law read:
A person commits an offence if...the person views or otherwise accesses by means of the internet a document or record containing information of that kind.
It also added a clause saying a reasonable excuse includes:
Having no reason to believe, that the document or record in question contained, or was likely to contain, information of a kind likely to be useful to a person committing or preparing an act of terrorism.
In exactly one week, the European Parliament will hold a crucial debate and vote on a proposal so terrible, it can only be called an extinction-level event for the Internet as we know it.
At issue is the text of the new EU Copyright Directive, which updates the 17-year-old copyright regulations for the 28 member-states of the EU. It makes a vast array of technical changes to EU copyright law, each of which has stakeholders rooting
for it, guaranteeing that whatever the final text says will become the law of the land across the EU.
The Directive was pretty uncontroversial, right up to the day last May when the EU started enforcing the General Data Protection Regulation (GDPR), a seismic event that eclipsed all other Internet news for weeks afterward. On that very day, a
German MEP called Axel Voss quietly changed the text of the Directive to
reintroduce two long-discarded proposals -- "Article 11" and "Article 13" -- proposals that had been evaluated by the EU's own experts and dismissed as dangerous and unworkable.
Under Article 11 -- the "link tax" -- online services are banned from allowing links to news services on their platforms unless they get a license to make links to the news; the rule does not define "news service" or "link," leaving 28 member states to make up their own definitions and leaving it to everyone else to comply with 28 different rules.
Under Article 13 -- the "censorship machines" -- anyone who allows users to communicate in public by posting audio, video, stills, code, or anything that might be copyrighted -- must send those posts to a copyright enforcement
algorithm. The algorithm will compare it to all the known copyrighted works (anyone can add anything to the algorithm's database) and censor it if it seems to be a match.
These extreme, unworkable proposals represent a grave danger to the Internet. The link tax means that only the largest, best-funded companies will be able to offer a public space where the news can be discussed and debated. The censorship
machines are a gift to every petty censor and troll (just claim copyright in an embarrassing recording and watch as it disappears from the Internet!), and will add hundreds of millions to the cost of operating an online platform, guaranteeing
that Big Tech's biggest winners will never face serious competition and will rule the Internet forever.
That's terrible news for Europeans, but it's also alarming for all the Internet's users, especially Americans.
The Internet's current winners -- Google, Facebook, Twitter, Apple, Amazon -- are overwhelmingly American, and they embody the American regulatory indifference to surveillance and privacy breaches.
But the Internet is global, and that means that different regions have the power to export their values to the rest of the world. The EU has been a steady source of pro-privacy, pro-competition, public-spirited Internet rules and regulations, and
European companies have a deserved reputation for being less prone to practicing "surveillance capitalism" and for being more thoughtful about the human impact of their services.
In the same way that California is a global net exporter of lifesaving emissions controls for vehicles, the EU has been a global net exporter of privacy rules, anti-monopoly penalties, and other desperately needed corrections for an Internet that
grows more monopolistic, surveillant, and abusive by the day.
Many of the cheerleaders for Articles 11 and 13 talk like these are a black eye for Google and Facebook and other U.S. giants, and it's true that these would result in hundreds of millions in compliance expenditures by Big Tech, but it's money
that Big Tech (and only Big Tech) can afford to part with. Europe's much smaller Internet companies need not apply.
It's not just Europeans who lose when the EU sells America's tech giants the right to permanently rule the Internet: it's everyone, because Europe's tech companies, co-operatives, charities, and individual technologists have the potential to make
everyone's Internet experience better. The U.S. may have a monopoly on today's Internet, but it doesn't have a monopoly on good ideas about how to improve tomorrow's net.
The global Internet means that we have friends and colleagues and family all over the world. No matter where you are in the world today, please take ten minutes to get in touch with two friends in the EU, send them this article, and then ask them to get in touch with their MEPs by visiting Save Your Internet.
There's only one Internet and we all live on it. Europeans rose up to kill ACTA, the last brutal assault on Internet freedom, helping Americans fight our own government's short-sighted foolishness; now the rest of the world can return the favor to our friends in the EU.
A number of TV broadcasters, mobile networks and internet service providers have urged the UK government to introduce a new internet censor of social media companies. In a letter to The Sunday Telegraph, executives from the BBC, ITV and Channel 4,
as well as Sky, BT and TalkTalk, called for a new censor to help tackle fake news, child exploitation, harassment and other growing issues online. The letter said:
We do not think it is realistic or appropriate to expect internet and social media companies to make all the judgment calls about what content is and is not acceptable, without any independent oversight.
There is an urgent need for independent scrutiny of the decisions taken, and greater transparency.
This is not about censoring the internet:[ ...BUT... ] it is about making the most popular internet platforms safer, by ensuring there is accountability and transparency over the decisions these private companies are already
taking. The UK government is aware of the problems on Facebook, Twitter, and other social media platforms. Last October, it introduced an Internet Safety Green Paper as part of its digital charter manifesto pledge. Following a consultation
period, then digital secretary Matt Hancock (he's now the health secretary) said a white paper would be introduced later in 2018.
And in a comment suggesting that maybe the call is more about righting market imbalances than concern over societal problems, the letter noted that its signatories all pay high and fair levels of tax. The letter also notes that broadcasters and telcos are held to account by Ofcom, while social media firms are not, which again gives the internet companies an edge in the market.
Back in 2001, the European Parliament came together to pass regulations and set up copyright laws for the internet, a technology that was just finding its footing after the dot com boom and bust. Wikipedia had just been born, and there were 29
million websites. No one could imagine the future of this rapidly growing ecosystem -- and today, the internet is even more complex. Over a billion websites, countless mobile apps, and billions of additional users. We are more interconnected than
ever. We are more global than ever. But 17 years later, the laws that protect this content and its creators have not kept up with the exponential growth and evolution of the web.
Next week, the European Parliament will decide how information online is shared in a vote that will significantly affect how we interact in our increasingly connected, digital world. We are in the last few moments of what could be our last
opportunity to define what the internet looks like in the future. The next wave of proposed rules under consideration by the European Parliament will either permit more innovation and growth, or stifle the vibrant free web that has allowed
creativity, innovation, and collaboration to thrive. This is significant because copyright does not only affect books and music, it profoundly shapes how people communicate and create on the internet for years to come.
This is why we must remember the original objective for this update to the law: to make copyright rules that work for better access to a quickly-evolving, diverse, and open internet.
The very context in which copyright operates has changed completely. Consider Wikipedia, a platform which like much of the internet today, is made possible by people who act as consumers and creators. People read Wikipedia, but they also write
and edit articles, take photos for Wikimedia Commons, or contribute to other Wikimedia free knowledge projects. Content on Wikipedia is available under a free license for anyone to use, copy, or remix.
Every month, hundreds of thousands of volunteers make decisions about what content to include on Wikipedia, what constitutes a copyright violation, and when those decisions need to be revised. We like it this way -- it allows people, not
algorithms, to make decisions about what knowledge should be presented back to the rest of the world.
Changes to the EU Directive on Copyright in the Digital Single Market could have serious implications for Wikipedia and other independent and nonprofit websites like it.
The internet today is collaborative and open by nature. And that is why our representatives to the EU must institute policies that promote the free exchange of information online for everyone.
We urge EU representatives to support reform that adds critical protections for public domain works of art, history, and culture, and to limit new exclusive rights to existing works that are already free of copyright.
The world should be concerned about new proposals to introduce a system that would automatically filter information before it appears online. Through pre-filtering obligations or increased liability for user uploads, platforms would be forced to
create costly, often biased systems to automatically review and filter out potential copyright violations on their sites. We already know that these systems are historically faulty and often lead to false positives. For example, consider the
experience of a German professor who
repeatedly received copyright violation notices when using public domain music from Beethoven, Bartók, and Schubert in videos on YouTube.
The internet has already created alternative ways to manage these issues. For instance, Wikipedia contributors already work hard to catch and remove infringing content if it does appear. This system, which is largely driven by human efforts, is
very effective at preventing copyright infringement.
Much of the conversation surrounding EU copyright reform has been dominated by the market relationships between large rights holders and for-profit internet platforms. But this small minority does not reflect the breadth of websites and users on
the internet today. Wikipedians are motivated by a passion for information and a sense of community. We are entirely nonprofit, independent, and volunteer-driven. We urge MEPs to consider the needs of this silent majority online when designing
copyright policies that work for the entire internet.
As amendments to the draft for a new Copyright Directive are considered, we urge the European Parliament to create a copyright framework that reflects the evolution of how people use the internet today. We must remember the original problem
policymakers set out to solve: to bring copyright rules in line with a dramatically larger, more complex digital world and to remove cross-border barriers. We should remain true to the original vision for the internet -- to remain an open,
accessible space for all.
Met Police Commissioner Cressida Dick believes detectives should have access to material from social media companies within minutes. She said UK police forces had faced a very protracted procedure in such cases.
The call comes after a suspect in the murder of Lucy McHugh, 13, was jailed for withholding his Facebook password from police. Last week, Stephen Nicholson was jailed for 14 months, having admitted failing to comply with an order under the
Regulation of Investigatory Powers Act requiring him to disclose a Facebook password.
Detectives investigating her murder say it is taking an inordinate amount of time to access evidence from Facebook.
Angus Crawford, BBC News Correspondent, explained:
Facebook is a US company and so has to abide by US laws on data protection and due process. This means they have no duty to hand any information over to a foreign police force.
Only a request via the US Department of Justice using something called the Mutual Legal Assistance Treaty will oblige disclosure, but this is cumbersome, expensive and can take months.
A spokeswoman for the social media company said Facebook is working closely with law enforcement and following well-established legal mechanisms. Facebook says it already has a team that works with law enforcement and has been cooperating
with Hampshire Police on the Lucy McHugh case.
[Of course the police should get instant access to social media when pursuing people guilty of a serious crime. But of course they need to be denied the facility when pursuing innocent people being harassed for trivial reasons.]
In July MEPs voted down plans to fast-track the Copyright Directive, derailing Article 13's plan to turn Internet platforms into copyright enforcers.
Yet the fight to stop Article 13's vision of the Internet - one where all speech is approved or rejected by an automated upload filter - is not over.
On 12 September MEPs will vote once again, but this time as-yet-unknown amendments will be added to the mix. Bad ideas like Article 13 - and perhaps worse - will be voted on individually, so it's not a simple up or down vote. To identify and
oppose bad amendments, MEPs must understand exactly why Article 13 threatens free speech.
Many MEPs are undecided. Please write to them now. You can use the points below to construct your own unique message.
Oppose changes to Internet platform liability. If platforms become liable for user content, they will have no choice but to scan all uploads with automated filters.
Say no to upload filters. Filters struggle to identify the vital legal exceptions to copyright that enable research, commentary, creative works, parody and more. Poor judgement means innocent speech gets blocked along with copyright violations.
Internet companies do not make good copyright enforcers. To avoid liability penalties, platforms will err on the side of caution and over-block.
Free speech takes precedence over copyright. Threatening free expression is way too high a price to pay for the sake of copyright enforcement.
General monitoring of all content is technically infeasible. No filter can possibly review every form of content covered in Article 13's extraordinarily wide mandate, which includes text, audio, video, images and software.
If you are part of a tech business, or a creator, like a musician, photographer, video editor or a writer, let your MEP know!
We need copyright reform that does not threaten free expression. The controversial Copyright Directive is fast approaching another pivotal vote on 12 September. For the third time in half a year MEPs will decide whether Article 13 -
or something even worse - will usher in a new era, where all content is approved or rejected by automated gatekeepers.
Seen through an economic lens, the Directive's journey looks like a battle between rights holders and tech giants. Yet a growing chorus of ordinary citizens, Internet luminaries, human rights organisations and creatives have rightly expanded the
debate to encompass the missing human dimension.
Open Rights Group opposes Article 13 - or any new amendments proposing similar ideas - because it poses a real threat to the fundamental right to free speech online.
Free speech defenders claimed a victory over industry lobbyists this summer when MEPs rejected plans to fast-track the Directive and a lasting triumph is now in reach. UK residents are in an especially strong position to make a difference because
many of their MEPs remain undecided. Unlike some other EU states, voting patterns aren't falling strictly on party lines in the UK.
This time new amendments will be added, and the underlying principles of Article 13 will again face a vote. They include:
Changes to Internet platform liability
If Internet platforms become directly liable for user content, they will become de facto copyright enforcers. This will leave them little choice but to introduce general monitoring of all user content with automated filters. Companies are not fit
to police free speech. To avoid penalties they will err on the side of caution and over-block user content.
The implicit or explicit introduction of upload filters
Everything we know about automated filters shows
they struggle to comprehend context. Yet identifying the vital legal exceptions to copyright that enable research, commentary, creative works, parody and more requires a firm grasp of context. An algorithm's poor judgement will cause
innocent speech to be routinely blocked along with copyright violations.
The introduction of general monitoring
General monitoring of all user content is a step backwards for a free and open Internet. It is also technically infeasible to monitor every form of content covered in Article 13's extraordinarily wide mandate which includes text, audio, video,
images and software.
Outspoken Article 13 cheerleader Axel Voss MEP said "Censorship machines is not what we intend to implement and no one in the European Parliament wants that." Yet censorship machines are precisely what mandatory upload filters amount to in practice. This is what happens when copyright reform is pursued with little
consideration for human rights.
The proposals within Article 13 would change the way that the Internet works, from free and creative sharing, to one where anything can be removed without warning, by computers. This is far too high a price to pay for copyright enforcement. We
need a copyright reform which does not sacrifice fundamental human and digital rights.
The Five Eyes governments of the UK, US, Canada, Australia and New Zealand have warned the tech industry to voluntarily create backdoor access to their systems, or be compelled to do so by law.
The move is a final warning to platform holders such as WhatsApp, Apple and Google who deploy encryption to guarantee user privacy on their services. A statement by the Five Eyes governments says:
Encryption is vital to the digital economy and a secure cyberspace, and to the protection of personal, commercial and government information... However, the increasing use and sophistication of certain encryption designs present
challenges for nations in combating serious crimes and threats to national and global security.
Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate
investigations and avoid detection and prosecution.
If the industry does not voluntarily establish lawful access solutions to their products, the statement continued, governments may pursue technological, enforcement, legislative or other measures to guarantee entry.