Theresa May has made a speech at the Lord Mayor's Banquet saying that fake news and Russian propaganda are threatening the international
order. She said:
It is seeking to weaponise information. Deploying its state-run media organisations to plant fake stories and photo-shopped images in an attempt to sow discord in the west and undermine our institutions.
The UK did not want to return to the Cold War, or to be in a state of perpetual confrontation, she said, but it would have to act to protect the interests of the UK, Europe and the rest of the world if Russia continued on its current path.
May did not say whether she was concerned about Russian intervention in any UK democratic processes, but Ben Bradshaw, a leading Labour MP, is among those to have called for a judge-led inquiry into the possibility that Moscow tried to influence
the result of the Brexit referendum.
Russia has been accused of running troll factories that disseminate fake news and divisive posts on social media. It emerged on Monday that a Russian bot account was one of those that shared a viral image that claimed a Muslim woman ignored
victims of the Westminster terror attack as she walked across the bridge.
Surely declining wealth and poor economic prospects are a more likely root cause of public discontent rather than a little trivial propaganda.
Three countries are using the European Council to put dangerous pro-censorship amendments into the already controversial Copyright Directive.
The copyright law that OpenMedia has been campaigning on -- the one pushing the link tax and censorship machines -- is facing some dangerous sabotage from the European Council, with France, Spain and Portugal leading the charge.
The Bill is currently being debated in the European Parliament, but the European Council also gets to produce its own proposed version of the law, and the two versions eventually have to be reconciled. The Council is made up of ministers from the governments of all EU member states, who are usually represented by staff who do most of the negotiating on their behalf. It is not a transparent body, but it has a lot of power.
The Council can choose to agree with Parliament's amendments, but it doesn't look like that's going to happen in this case. In fact they've been taking worrying steps, particularly when it comes to the censorship machine proposals.
As the proposal stands before the Council's intervention, it encourages sites where users upload and share content to install filtering mechanisms -- a kind of censorship machine which would use algorithms to look for copyrighted content and then block the post. This is despite the fact that there are many legal reasons to use copyrighted content.
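To see why such filtering is context-blind, here is a minimal sketch of a fingerprint-based upload filter (the function names and data are invented for illustration, not taken from any real system). The filter only ever sees the bytes of the upload, so a review quoting a clip is blocked exactly like piracy:

```python
# Illustrative sketch: an upload filter matches content fingerprints and
# has no notion of WHY the content is being used.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A rightsholder registers a clip once; it is blocked forever after.
blocked_fingerprints = {fingerprint(b"<copyrighted clip>")}

def allow_upload(data: bytes, context: str) -> bool:
    # `context` ("film review", "parody", "piracy"...) is ignored:
    # the filter only compares bytes, which is the whole problem.
    return fingerprint(data) not in blocked_fingerprints

print(allow_upload(b"<copyrighted clip>", context="film review"))  # False
print(allow_upload(b"<copyrighted clip>", context="piracy"))       # False
```

Legal uses such as quotation or parody produce exactly the same fingerprint as infringement, so a machine working at this level cannot tell them apart.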
These new changes want to go a step further. They firstly want to make the censorship machine demand even more explicit. As Julia Reda puts it:
They want to add to the Commission proposal that platforms need to automatically remove media that has once been classified as infringing, regardless of the context in which it is uploaded.
Then, they go all in with a suggested rewrite of existing copyright law to end the liability protections which are vital for a functioning web.
Liability protection laws mean we (not websites) are responsible for what we say and post online. This is so that websites are not obliged to monitor everything we say or do. If they were liable there would be much overzealous blocking and
censorship. These rules made YouTube, podcast platforms, social media, all possible. The web as we know it works because of these rules.
But the governments of France, Spain and Portugal, and the Estonian Presidency of the Council, want to undo them. It would mean all these sites could be sued for any infringement posted there. It would deter new sites from developing. And it
would cause huge legal confusion -- given that the exact opposite is laid out in a different EU law.
Home Secretary Amber Rudd told an audience at New America, a Washington think tank, on Thursday night that there was an
online arms race between militants and the forces of law and order.
She said that social media companies should press ahead with development and deployment of AI systems that could spot militant content before it is posted on the internet and block it from being disseminated.
Since the beginning of 2017, violent militant operatives have created 40,000 new internet destinations, Rudd said. As of 12 months ago, social media companies were taking down about half of the violent militant material from their sites within two
hours of its discovery, and lately that proportion has increased to two thirds, she said.
YouTube is now taking down 83% of violent militant videos it discovers, Rudd said, adding that UK authorities have evidence that the Islamic State was now struggling to get some of its materials online.
She added that, in the wake of an increasing number of vehicle attacks by Islamic terrorists, British security authorities were reviewing rental car regulations and considering ways for authorities to collect more relevant data from car hire companies.
YouTube has announced an extension of its age restriction policy to cover parody videos that use children's characters but feature inappropriate themes.
The new policy was announced on Thursday and will see age restrictions apply on content featuring inappropriate use of family entertainment characters like unofficial videos depicting Peppa Pig. The company already had a policy that rendered such
videos ineligible for advertising revenue, in the hope that doing so would reduce the motivation to create them in the first place. Juniper Downs, YouTube's director of policy, explained:
Earlier this year, we updated our policies to make content featuring inappropriate use of family entertainment characters ineligible for monetisation. We're in the process of implementing a new policy that age restricts this content in the YouTube
main app when flagged. Age-restricted content is automatically not allowed in YouTube Kids. The YouTube team is made up of parents who are committed to improving our apps and getting this right.
Age-restricted videos can't be seen by users who aren't logged in, or by those who have entered their age as below 18 on both the site and the app. More importantly, they also don't show up on YouTube Kids, a separate app aimed at parents who want
to let their children under 13 use the site unsupervised.
The Senate Commerce Committee just approved a slightly modified version of SESTA, the Stop Enabling Sex Traffickers Act ( S. 1693 ).
SESTA was and continues to be a deeply flawed bill. It is intended to weaken the section commonly known as CDA 230 or simply Section 230, one of the most important laws protecting free expression online. Section 230 says that for purposes of
enforcing certain laws affecting speech online, an intermediary cannot be held legally responsible for any content created by others.
SESTA would create an exception to Section 230 for laws related to sex trafficking, thus exposing online platforms to an immense risk of civil and criminal litigation. What that really means is that online platforms would be forced to take drastic
measures to censor their users.
Some SESTA supporters imagine that compliance with SESTA would be easy--that online platforms would simply need to use automated filters to pinpoint and remove all messages in support of sex trafficking and leave everything else untouched. But
such filters do not and cannot exist: computers aren't good at recognizing subtlety and context, and with severe penalties at stake, no rational company would trust them to.
Online platforms would have no choice but to program their filters to err on the side of removal, silencing a lot of innocent voices in the process. And remember, the first people silenced are likely to be trafficking victims themselves: it would
be a huge technical challenge to build a filter that removes sex trafficking advertisements but doesn't also censor a victim of trafficking telling her story or trying to find help.
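The over-blocking problem is easy to demonstrate. A minimal sketch of a keyword filter (the blocklist and posts below are invented for illustration, not from any real platform) shows why an advertisement, a victim's plea for help and a news report all look identical to the machine:

```python
# Illustrative sketch of why naive keyword filtering over-blocks.
BLOCKLIST = {"sex trafficking", "for sale"}

def naive_filter(post: str) -> bool:
    """Return True if the post would be blocked."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKLIST)

ad = "Young girls for sale, call now"               # the content SESTA targets
victim = "I escaped sex trafficking and need help"  # a victim seeking help
news = "Senate debates sex trafficking bill"        # ordinary reporting

# All three trip the same filter: the machine cannot see context.
print([naive_filter(p) for p in (ad, victim, news)])  # [True, True, True]
```

Real filters are more sophisticated than a keyword list, but the underlying failure mode is the same: the signal that marks an ad also marks the victim's story about it.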
Along with the Center for Democracy and Technology, Access Now, Engine, and many other organizations, EFF signed a letter yesterday urging the Commerce Committee to change course. We explained the silencing effect that SESTA would have on online speech:
Pressures on intermediaries to prevent trafficking-related material from appearing on their sites would also likely drive more intermediaries to rely on automated content filtering tools, in an effort to conduct comprehensive content moderation at
scale. These tools have a notorious tendency to enact overbroad censorship, particularly when used without (expensive, time-consuming) human oversight. Speakers from marginalized groups and underrepresented populations are often the hardest hit by
such automated filtering.
It's ironic that supporters of SESTA insist that computerized filters can serve as a substitute for human moderation: the improvements we've made in filtering technologies in the past two decades would not have happened without the safety provided
by a strong Section 230, which provides legal cover for platforms that might harm users by taking down, editing or otherwise moderating their content (in addition to shielding platforms from liability for illegal user-generated content).
We find it disappointing, but not necessarily surprising, that the Internet Association has endorsed this deeply flawed bill. Its member companies--many of the largest tech companies in the world--will not feel the brunt of SESTA in the same way
as their smaller competitors. Small Internet startups don't have the resources to police every posting on their platforms, which will uniquely pressure them to censor their users--that's particularly true for nonprofit and noncommercial platforms
like the Internet Archive and Wikipedia. It's not surprising when a trade association endorses a bill that would give its own members a massive competitive advantage.
If you rely on online communities in your day-to-day life; if you believe that your right to speak matters just as much on the web as on the street; if you hate seeing sex trafficking victims used as props to advance an agenda of censorship;
please take a moment to write your members of Congress and tell them to oppose SESTA.
The UK's domestic pornography industry is being screwed by age verification laws unveiled by the Government.
New laws passed as part of the Digital Economy Act will require websites hosting pornographic material to verify the ages of visitors from the UK or face being blocked by ISPs.
Pandora/Blake, who describes themself as a feminist pornographer, and Myles Jackman, obscenity lawyer and legal officer at Open Rights Group, told Sky News that this posed an enormous privacy risk to viewers.
They argue the age verification requirements may harm small businesses and curtail the freedom of expression by allowing multinational pornography giants to monopolise the industry.
Many of the most popular pornographic websites (Pornhub, RedTube, YouPorn) and production studios (Brazzers, Digital Playground) are owned by one company: MindGeek.
MindGeek stands to increase its already considerable market share by offering age verification services to smaller sites.
Pandora/Blake said the Government is refusing to engage with pornographers who are concerned the laws will harm their business.
Age checks are going to be expensive, they said, noting that figures given to them ranged from £0.05 to £1.50 per age check. If you know anything about the economics of porn, you realise that if you're paying a cost per viewer, rather than per customer, then you're going to be making a loss by orders of magnitude.
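The arithmetic behind that claim can be made concrete with illustrative numbers. Only the £0.05 to £1.50 per-check range comes from Pandora/Blake; the visitor count, conversion rate and subscription price below are assumptions for the sake of the sketch:

```python
# Back-of-envelope sketch of per-viewer costs vs per-customer revenue.
# Assumed figures: 10,000 visitors, 1% of whom subscribe at £10.
visitors = 10_000
conversion_rate = 0.01        # 1 in 100 visitors becomes a paying customer
subscription = 10.00          # £ per customer

revenue = visitors * conversion_rate * subscription   # £1,000

for cost_per_check in (0.05, 1.50):
    cost = visitors * cost_per_check   # every visitor must be age-checked
    print(f"£{cost_per_check:.2f}/check: cost £{cost:,.0f} vs revenue £{revenue:,.0f}")
# £0.05/check: cost £500 vs revenue £1,000
# £1.50/check: cost £15,000 vs revenue £1,000
```

Even at the cheapest quoted rate, checks eat half the revenue; at the top rate they cost fifteen times what the site earns, which is the orders-of-magnitude loss being described.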
I'm seeing a lot of smaller sites simply giving up pre-emptively. There's already a chilling effect of sites not knowing how they're going to possibly be able to comply, said Pandora/Blake.
A Government spokesperson told Sky News that the BBFC was the intended regulator for the age verification system, and would be required to publish guidance regarding the arrangements for making pornographic material available in a compliant way.
The BBFC said that as it had not yet been appointed the regulator, it could not comment on the concerns raised to Sky News.
On Tuesday 7 November, three joined cases brought by civil liberties and human rights organisations challenging UK Government
surveillance will be heard in the Grand Chamber of the European Court of Human Rights (ECtHR).
Big Brother Watch and Others v UK will be heard alongside 10 Human Rights Organisations and Others v UK and the Bureau of Investigative Journalism and Alice Ross v UK, four years after the initial application to the ECtHR.
Big Brother Watch, English PEN, Open Rights Group and Dr Constanze Kurz made their application to the Court in 2013 following Edward Snowden's revelations that UK intelligence agencies were running a mass surveillance and bulk communications
interception programme, TEMPORA, as well as receiving data from similar US programmes, PRISM and UPSTREAM, interfering with UK citizens' right to privacy.
The case questions the legality of the indiscriminate surveillance of UK citizens and the bulk collection of their personal information and communications by UK intelligence agencies under the Regulation of Investigatory Powers Act (RIPA). The UK
surveillance regime under RIPA was untargeted, meaning that UK citizens' personal communications and information were collected at random, without any element of suspicion or evidence of wrongdoing, and this regime was effective indefinitely.
The surveillance regime is being challenged on the grounds that there was no sufficient legal basis, no accountability, and no adequate oversight of these programmes, and as a result infringed UK citizens' Article 8 right to a private life.
In 2014, the Bureau of Investigative Journalism made an application to the ECtHR, followed by 10 Human Rights Organisations and others in 2015 after they received a judgment from the UK Investigatory Powers Tribunal. All three cases were joined
together, and the Court exceptionally decided that there would be a hearing.
The result of these three cases has the potential to impact the current UK surveillance regime under the Investigatory Powers Act. This legal framework has already been strongly criticized by the Court of Justice of the European Union in Watson.
A favourable judgment in this case will finally push the UK Government to constrain these wide-ranging surveillance powers, implement greater judicial control and introduce greater protections, such as notifying citizens that they have been put under surveillance.
Daniel Carey of Deighton Pierce Glynn, solicitor for Big Brother Watch, Open Rights Group, English PEN and Constanze Kurz, said:
Historically, it has required a ruling from this Court before improvements in domestic law in this area are made. Edward Snowden broke that cycle by setting in motion last year's Investigatory Powers Act, but my clients are asking the Court to
limit bulk interception powers in a much more meaningful way and to require significant improvements in how such intrusive powers are controlled and reported.
Griff Ferris, Researcher at Big Brother Watch, said:
This case raises long-standing issues relating to the UK Government's unwarranted intrusion into people's private lives, giving the intelligence agencies free rein to indiscriminately intercept and monitor people's private communications without
evidence or suspicion.
UK citizens who are not suspected of any wrongdoing should be able to live their lives in both the physical and the digital world safely and securely without such Government intrusion.
If the Court finds that the UK Government infringed UK citizens' right to privacy, this should put further pressure on the Government to implement measures to ensure that its current surveillance regime doesn't make the same mistakes.
Antonia Byatt, Interim Director of English PEN, said:
More than four years since Edward Snowden's revelations and nearly one year since the Investigatory Powers Act was passed, this is a landmark hearing that seeks to safeguard our privacy and our right to freedom of expression.
The UK now has the most repressive surveillance legislation of any western democracy; this is a vital opportunity to challenge the unprecedented erosion of our private lives and liberty to communicate.
Jim Killock, Executive Director of Open Rights Group, said:
Mass surveillance must end. Our democratic values are threatened by the fact of pervasive, constant state surveillance. This case gives the court the opportunity to rein it back, and to show the British Government that there are clear limits.
Hoovering everything up and failing to explain what you are doing is not acceptable.
The truth is that a lot of the material that terrorists share is not actually illegal at all. Instead, it often comprises news reports about perceived injustices in Palestine, stuff that you could never censor in a free society.
A trade group representing giants of Internet business from Facebook to Microsoft has just endorsed a "compromise" version of the Stop Enabling Sex Traffickers Act (SESTA), a misleadingly named bill that would be disastrous for free
speech and online communities.
Just a few hours after Senator Thune's amended version of SESTA surfaced online, the Internet Association rushed to praise the bill's sponsors for their "careful work and bipartisan collaboration." The compromise bill has all of the same
fundamental flaws as the original. Like the original, it does nothing to fight sex traffickers, but it would silence legitimate speech online.
It shouldn't really come as a surprise that the Internet Association has fallen in line to endorse SESTA. The Internet Association doesn't represent the Internet--it represents the few companies that profit the most off of Internet activity.
The Internet Association can tell itself and its members whatever it wants--that it held its ground for as long as it could despite overwhelming political opposition, that the law will motivate its members to make amazing strides in filtering
technologies--but there is one thing that it simply cannot say: that it has done something to fight sex trafficking.
A serious problem calls for serious solutions, and SESTA is not a serious solution. At the heart of the sex trafficking problem lies a complex set of economic, social, and legal issues. A
broken immigration system
and a torn safety net. A law enforcement regime that puts trafficking victims at risk for reporting their traffickers. Officers who aren't adequately trained to use the online tools at their disposal, or use them against victims. And yes, if there
are cases where online platforms themselves directly contribute to unlawful activity, it's a problem that the Department of Justice won't use the powers Congress has already given it. These are the factors that deserve intense deliberation and debate by lawmakers, not a hamfisted attempt to punish online communities.
The Internet Association let the Internet down today. Congress should not make the same mistake.
A federal court in California has rendered an order from the Supreme Court of Canada unenforceable. The order in question required Google to remove
a company's websites from search results globally, not just in Canada. This ruling violates US law and puts free speech at risk, the California court found.
When the Canadian company Equustek Solutions requested that Google remove competing websites claimed to be illegally using its intellectual property, Google refused to do so globally.
This resulted in a legal battle that came to a climax in June, when the Supreme Court of Canada ordered Google to remove a company's websites from its search results. Not just in Canada, but all over the world.
With options to appeal exhausted in Canada, Google took the case to a federal court in the US. The search engine requested an injunction to disarm the Canadian order, arguing that a worldwide blocking order violates the First Amendment.
Surprisingly, Equustek decided not to defend itself, and without opposition a California District Court sided with Google. During a hearing, Google attorney Margaret Caruso stressed that it should not be possible for foreign countries to implement
measures that run contrary to core values of the United States.
The search engine argued that the Canadian order violated Section 230 of the Communications Decency Act, which immunizes Internet services from liability for content created by third parties. With this law, Congress specifically chose not to deter
harmful online speech by imposing liability on Internet services.
In an order, signed shortly after the hearing, District Judge Edward Davila concludes that Google qualifies for Section 230 immunity in this case. As such, he rules that the Canadian Supreme Court's global blocking order goes too far.
The ruling is important in the broader scheme. If foreign courts are allowed to grant worldwide blockades, free speech could be severely hampered. Today it's a relatively unknown Canadian company, but what if the Chinese Government asked Google to
block the websites of VPN providers?
Prager University, a nonprofit that creates educational videos with conservative slants, has filed a lawsuit against YouTube and its
parent company, Google, alleging that the company is censoring its content.
PragerU claims that more than three dozen of its videos have been restricted by YouTube over the past year. As a result, those who browse YouTube in restricted mode -- including many college and high school students -- are prevented from viewing
the content. Furthermore, restricted videos cannot earn any ad revenue.
PragerU says that by limiting access to their videos without a clear reason, YouTube has infringed upon PragerU's First Amendment rights.
YouTube has restricted edgy content in order to protect advertisers' brands. A number of advertisers told Google that they did not want their brand to be associated with edgy content. Google responded by banning all advertising from videos
claimed to contain edgy content. It keeps the brands happy, but it has decimated many a small online business.
The Reddit moderators have explained new censorship rules in the following post:
We want to let you know that we have made some updates to our site-wide rules regarding violent content. We did this to alleviate user and moderator confusion about allowable content on the site. We also are making this update so that Reddit's
content policy better reflects our values as a company.
In particular, we found that the policy regarding inciting violence was too vague, and so we have made an effort to adjust it to be more clear and comprehensive. Going forward, we will take action against any content that encourages, glorifies,
incites, or calls for violence or physical harm against an individual or a group of people; likewise, we will also take action against content that glorifies or encourages the abuse of animals. This applies to ALL content on Reddit, including
memes, CSS/community styling, flair, subreddit names, and usernames.
We understand that enforcing this policy may often require subjective judgment, so all of the usual caveats apply with regard to content that is newsworthy, artistic, educational, satirical, etc, as mentioned in the policy. Context is key.
Whilst speaking about the Government's recently published Internet Safety Strategy green paper, Suzie Hargreaves of the Internet Watch Foundation
noted upcoming changes to the UK Council for Child Internet Safety (UKCCIS). This is a government-run body that includes many members from industry and child protection campaigners. It debates many internet issues concerning the protection of children, which routinely touch on internet control and censorship. Hargreaves noted that the UKCCIS looks set to expand its remit. She writes:
The Government recognises the work of UKCCIS and wants to align it more closely with the Internet Safety Strategy. It proposes renaming the body the UK Council for Internet Safety (UKCIS), broadening the council's remit to adults,
having a smaller and higher-profile executive board, reconsidering the role of the working groups to ensure that there is flexibility to respond to new issues, looking into an independent panel or working group to discuss the social media levy,
and reviewing available online safety resources.
There was plenty of strong language flying around on Twitter in response to the Harvey Weinstein scandal. Twitter got a bit
confused about who was harassing whom, and ended up suspending Weinstein critic Rose McGowan for harassment. Twitter ended up being boycotted over its wrong call, and so Twitter bosses have been banging their heads together to do something.
Wired has got hold of an email outlining an expansion of the content liable to Twitter censorship, and of more severe sanctions for errant tweeters. Twitter's head of safety policy wrote of new measures to be rolled out in the coming weeks:
Our definition of "non-consensual nudity" is expanding to more broadly include content like upskirt imagery, "creep shots," and hidden camera content. Given that people appearing in this content often do not know the material
exists, we will not require a report from a target in order to remove it.
While we recognize there's an entire genre of pornography dedicated to this type of content, it's nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually. We would rather err on the
side of protecting victims and removing this type of content when we become aware of it.
Unwanted sexual advances
Pornographic content is generally permitted on Twitter, and it's challenging to know whether or not sexually charged conversations and/or the exchange of sexual media may be wanted. To help infer whether or not a conversation is consensual, we
currently rely on and take enforcement action only if/when we receive a report from a participant in the conversation.
We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation.
Hate symbols and imagery (new)
We are still defining the exact scope of what will be covered by this policy. At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence).
More details to come.
Violent groups (new)
We are still defining the exact scope of what will be covered by this policy. At a high level, we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause. More details to come
here as well.
Tweets that glorify violence (new)
We already take enforcement action against direct violent threats ("I'm going to kill you"), vague violent threats ("Someone should kill you") and wishes/hopes of serious physical harm, death, or disease ("I hope someone
kills you"). Moving forward, we will also take action against content that glorifies ("Praise be to for shooting up. He's a hero!") and/or condones ("Murdering makes sense. That way they won't be a drain on social
services"). More details to come.
Offsite Article: Changes to the way that 'sensitive' content is defined and blocked from Twitter search
Facebook and Google, along with other online publishers, may soon be required in the US to disclose funding for paid political ads.
Two US senators, Amy Klobuchar and Mark Warner, proposed a bill called The Honest Ads Act to extend the funding disclosure requirements for political ads on TV, radio, and in print, to online ads. Similar legislation is expected to be introduced
in the US House of Representatives.
Under these disclosure requirements, traditional media has to produce and reveal lists identifying organizations that have bought political adverts. If the Honest Ads Act is passed into law, top online sites, from Facebook to Twitter, will fall
under these requirements, too.
The bill is an attempt to respond to Russian efforts to influence the 2016 Presidential Election through social media.
Facebook , Google , and Twitter have all said they sold politically-oriented ads to accounts linked to Russia. Facebook has characterized the ads it sold as amplifying divisive social and political messages.
If the bill becomes law, the rules would require digital platforms averaging 50 million monthly viewers to maintain a public list of political ads purchased by a person or organization spending more than $500 cumulatively on such ads, on a
per-platform basis. And it would direct digital platforms to make all reasonable efforts to prevent foreign individuals and organizations from purchasing domestic political ads.
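As a rough sketch of how the two thresholds described above would interact (the function names are illustrative; the bill's actual text may define the tests differently):

```python
# Hypothetical sketch of the Honest Ads Act thresholds as reported:
# platforms averaging 50 million+ monthly viewers must keep a public
# record of political ads from buyers spending over $500 cumulatively,
# measured per platform.

def platform_covered(avg_monthly_viewers: int) -> bool:
    return avg_monthly_viewers >= 50_000_000

def buyer_must_be_listed(cumulative_spend_usd: float) -> bool:
    return cumulative_spend_usd > 500.00

# A large platform with a $600 buyer would need a public listing;
# a $499 buyer on the same platform would not.
print(platform_covered(60_000_000), buyer_must_be_listed(600.00))  # True True
print(platform_covered(60_000_000), buyer_must_be_listed(499.00))  # True False
```

The per-platform cumulation means a buyer spending $400 on each of two platforms would, on this reading, stay under the threshold on both.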
Article 13: Monitoring and filtering of internet content is unacceptable. Index on Censorship joined with 56 other NGOs to call for the deletion of Article
13 from the proposal on the Digital Single Market, which includes obligations on internet companies that would be impossible to respect without the imposition of excessive restrictions on citizens' fundamental rights.
Dear President Juncker,
Dear President Tajani,
Dear President Tusk,
Dear Prime Minister Ratas,
Dear Prime Minister Borissov,
Dear MEP Voss, dear MEP Boni,
The undersigned stakeholders represent fundamental rights organisations.
Fundamental rights, justice and the rule of law are intrinsically linked and constitute core values on which the EU is founded. Any attempt to disregard these values undermines the mutual trust between member states required for the EU to
function. Any such attempt would also undermine the commitments made by the European Union and national governments to their citizens.
Article 13 of the proposal on Copyright in the Digital Single Market includes obligations on internet companies that would be impossible to respect without the imposition of excessive restrictions on citizens' fundamental rights.
Article 13 introduces new obligations on internet service providers that share and store user-generated content, such as video or photo-sharing platforms or even creative writing websites, including obligations to filter uploads to their services.
Article 13 appears to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens' communications if they are to have any chance of staying in business.
Article 13 contradicts existing rules and the case law of the Court of Justice. The Electronic Commerce Directive (2000/31/EC) regulates the liability of those internet companies that host content on behalf of their users. According to
the existing rules, there is an obligation to remove any content that breaches copyright rules, once this has been notified to the provider.
Article 13 would force these companies to actively monitor their users' content, which contradicts the 'no general obligation to monitor' rules in the Electronic Commerce Directive. The requirement to install a system for filtering electronic
communications has twice been rejected by the Court of Justice, in the cases Scarlet Extended (C-70/10) and Netlog/Sabam (C-360/10). Therefore, a legislative provision that requires internet companies to install a filtering system would
almost certainly be rejected by the Court of Justice because it would contravene the requirement that a fair balance be struck between the right to intellectual property on the one hand, and the freedom to conduct business and the right to freedom
of expression, such as to receive or impart information, on the other.
In particular, the requirement to filter content in this way would violate the freedom of expression set out in Article 11 of the Charter of Fundamental Rights. If internet companies are required to apply filtering mechanisms in order to
avoid possible liability, they will do so. This will lead to excessive filtering and deletion of content and limit the freedom to impart information on the one hand, and the freedom to receive information on the other.
If EU legislation conflicts with the Charter of Fundamental Rights, national constitutional courts are likely to be tempted to disapply it and we can expect such a rule to be annulled by the Court of Justice. This is what happened with the
Data Retention Directive (2006/24/EC), when EU legislators ignored compatibility problems with the Charter of Fundamental Rights. In 2014, the Court of Justice declared the Data Retention Directive invalid because it violated the Charter.
Taking into consideration these arguments, we ask the relevant policy-makers to delete Article 13.
European Digital Rights (EDRi)
Associação D3 -- Defesa dos Direitos Digitais
Associação Nacional para o Software Livre (ANSOL)
Association for Progressive Communications (APC)
Association for Technology and Internet (ApTI)
Association of the Defence of Human Rights in Romania (APADOR)
Bangladesh NGOs Network for Radio and Communication (BNNRC)
Bits of Freedom (BoF)
Bulgarian Helsinki Committee
Center for Democracy & Technology (CDT)
Centre for Peace Studies
Coalizione Italiana Libertà e Diritti Civili (CILD)
Code for Croatia
Culture Action Europe
Electronic Frontier Foundation (EFF)
Estonian Human Rights Centre
Freedom of the Press Foundation
Frënn vun der Ënn
Helsinki Foundation for Human Rights
Hermes Center for Transparency and Digital Human Rights
Human Rights Monitoring Institute
Human Rights Watch
Human Rights Without Frontiers
Hungarian Civil Liberties Union
Index on Censorship
International Partnership for Human Rights (IPHR)
International Service for Human Rights (ISHR)
Justice & Peace
La Quadrature du Net
Media Development Centre
Miklos Haraszti (Former OSCE Media Representative)
Modern Poland Foundation
Netherlands Helsinki Committee
One World Platform
Open Observatory of Network Interference (OONI)
Open Rights Group (ORG)
Plataforma en Defensa de la Libertad de Información (PDLI)
Reporters without Borders (RSF)
Rights International Spain
South East Europe Media Organisation (SEEMO)
South East European Network for Professionalization of Media (SEENPM)
The Right to Know Coalition of Nova Scotia (RTKNS)
After several days of radio silence, VPN provider PureVPN has responded to criticism that it provided information which helped the
FBI catch a cyberstalker. In a fairly lengthy statement, the company reiterates that it never logs user activity. What it does do, however, is log both the real and assigned 'anonymous' IP addresses of users accessing its service.
PureVPN begins by confirming that it definitely doesn't log what websites a user views or what content he or she downloads. However, that's only half the problem. While it doesn't log user activity (what sites people
visit or content they download), it does log the IP addresses that customers use to access the PureVPN service. These, given the right circumstances, can be matched to external activities thanks to logs carried by other web companies.
If for instance a user accesses a website of interest to the authorities, then that website, or various ISPs involved in the route can see the IP address doing the accessing. And if they look it up, they will find that it belongs to PureVPN. They
would then ask PureVPN to identify the real IP address of the user who was assigned the observed PureVPN IP address at the time it was observed.
Now, if PureVPN carried no logs -- literally no logs -- it would not be able to help with this kind of inquiry. That was the case last year when the FBI approached Private Internet Access for information and the company was unable to assist.
But in this case, PureVPN does keep the records of who was assigned each IP address and when, and so the user can be readily identified (albeit with the help of the user's ISP too).
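The matching process described above is mechanically simple. As a purely hypothetical illustration (the log format, IP addresses and times below are invented, not PureVPN's actual records), a session log of assigned IPs and timestamps resolves an observed VPN address straight back to a customer:

```python
from datetime import datetime

# Hypothetical VPN session log: which real (customer) IP held which
# assigned VPN exit IP, and during what window. Illustrative data only.
SESSION_LOG = [
    # (assigned_vpn_ip, real_ip, session_start, session_end)
    ("203.0.113.7", "198.51.100.23",
     datetime(2017, 10, 1, 9, 0), datetime(2017, 10, 1, 11, 30)),
    ("203.0.113.7", "192.0.2.41",
     datetime(2017, 10, 1, 12, 0), datetime(2017, 10, 1, 14, 0)),
]

def match_user(observed_ip, observed_at):
    """Return the real IP that was assigned `observed_ip` at `observed_at`, if any."""
    for vpn_ip, real_ip, start, end in SESSION_LOG:
        if vpn_ip == observed_ip and start <= observed_at <= end:
            return real_ip
    return None

# An investigator who saw 203.0.113.7 active at 13:05 identifies the customer:
print(match_user("203.0.113.7", datetime(2017, 10, 1, 13, 5)))  # 192.0.2.41
```

Note that the same exit IP maps to different customers at different times, which is exactly why the timestamp in the investigator's request matters.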
It is for this reason that in TorrentFreak's annual summary of no-logging VPN providers, the very first question we ask every single company reads as follows:
Do you keep ANY logs which would allow you to match an IP-address and a time stamp to a user/users of your service? If so, what information do you hold and for how long?
Clearly, if a company says yes we log incoming IP addresses and associated timestamps, any claim to total user anonymity is ended right there and then.
While such a service is not completely useless (it will still stop the prying eyes of ISPs and similar surveillance, while also defeating throttling and site-blocking), if you're a whistle-blower with a job or even your life to protect, this level
of protection is entirely inadequate.
At EFF, we see endless attempts
to misuse copyright law in order to silence content that a person dislikes. Copyright law is sadly less protective of speech than other speech regulations like defamation, so plaintiffs are motivated to find ways to turn many kinds of disputes
into issues of copyright law. Yesterday, a federal appeals court rejected one such ploy: an attempt to use copyright to get rid of a negative review.
The website Ripoff Report hosts criticism of a variety of professionals and companies, who doubtless would prefer that those critiques not exist. In order to protect platforms for speech like Ripoff Report, federal law sets a very high bar for
private litigants to collect damages or obtain censorship orders against them. The gaping exception to this protection is intellectual property claims, including copyright, for which a lesser protection applies.
One aggrieved professional named Goren (and his company) went to court to get a negative review taken down from Ripoff Report. If Goren had relied on a defamation claim alone, the strong protection of CDA 230 would protect Ripoff Report. But Goren
sought to circumvent that protection by getting a court order seizing ownership of the copyright from its author for himself, then suing Ripoff Report's owner for copyright infringement. We
filed a brief
explaining several reasons why his claims should fail, and urging the court to prevent the use of copyright as a pretense for suppressing speech.
Fortunately, the Court of Appeals for the First Circuit agreed that Ripoff Report is not liable. It ruled on a narrow basis, pointing out that the person who originally posted the review on Ripoff Report gave the site's owners irrevocable
permission to host that content. Therefore, continuing to host it could not be an infringement, even if Goren did own the copyright.
Goren paid the price for his improper assertion of copyright here: the appeals court upheld an award of over $100,000 in attorneys' fees. The award of fees in a case like this is important both because it deters improper assertion of copyright,
and because it helps compensate defendants who choose to litigate rather than settling for nuisance value simply to avoid the expense of defending their rights.
We're glad the First Circuit acted to limit the ways that private entities can censor speech online.
This summer, the Egyptian government started to block access to news websites. At last count, it had blocked more than 400 websites.
Realising that citizens are using Virtual Private Network (VPN) services to bypass such censorship, the government also started to block access to VPN websites.
In addition to this, ISPs have started using deep packet inspection (DPI) techniques in order to identify and block VPN traffic. Egypt blocked the Point-to-Point Tunneling Protocol (PPTP) and Layer 2 Tunneling Protocol (L2TP) VPN protocols in
August. Until now, however, OpenVPN worked fine, allowing ordinary Egyptians to access the uncensored internet.
On 3 October, however, the situation changed. It was reported on reddit that Egypt has now blocked OpenVPN as well. It seems that ISPs are using DPI techniques to detect OpenVPN packets.
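One well-known way DPI can pick out OpenVPN, sketched below purely as an illustration (real DPI systems inspect far more of the protocol than a single byte): the first byte of an OpenVPN packet packs a 5-bit opcode and a 3-bit key ID, and a fresh client handshake over UDP opens with P_CONTROL_HARD_RESET_CLIENT_V2 (opcode 7, key ID 0), giving a recognisable first byte of 0x38.

```python
# Illustrative sketch of fingerprinting an OpenVPN/UDP handshake packet.
# The first byte encodes a 5-bit opcode (high bits) and a 3-bit key ID
# (low bits); a new client session begins with opcode 7, key ID 0 (0x38).

P_CONTROL_HARD_RESET_CLIENT_V2 = 7

def looks_like_openvpn_handshake(udp_payload):
    """Return True if the payload starts like a fresh OpenVPN client handshake."""
    if not udp_payload:
        return False
    opcode = udp_payload[0] >> 3    # top 5 bits
    key_id = udp_payload[0] & 0x07  # bottom 3 bits
    return opcode == P_CONTROL_HARD_RESET_CLIENT_V2 and key_id == 0

print(looks_like_openvpn_handshake(bytes([0x38]) + bytes(13)))  # True
print(looks_like_openvpn_handshake(bytes([0x17, 0x03, 0x03])))  # False (TLS record)
```

A censor matching on this fingerprint can drop the handshake before a tunnel is ever established, which is why circumvention tools often wrap OpenVPN inside an extra layer of obfuscation.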
Leila has two identities, but Facebook is only supposed to know about one of them.
Leila is a sex worker. She goes to great lengths to keep separate identities for ordinary life and for sex work, to avoid stigma, arrest, professional blowback, or clients who might be stalkers (or worse).
Her "real identity"--the public one, who lives in California, uses an academic email address, and posts about politics--joined Facebook in 2011. Her sex-work identity is not on the social network at all; for it, she uses a different
email address, a different phone number, and a different name. Yet earlier this year, looking at Facebook's "People You May Know" recommendations, Leila (a name I'm using in place of either of the names she uses) was shocked to see
some of her regular sex-work clients.
Despite the fact that she'd only given Facebook information from her vanilla identity, the company had somehow discerned her real-world connection to these people--and, even more horrifyingly, her account was potentially being presented to them as
a friend suggestion too, outing her regular identity to them.
Because Facebook insists on concealing the methods and data it uses to link one user to another, Leila is not able to find out how the network exposed her or take steps to prevent it from happening again.
Social media companies look set to be hit with a new tax to pay for schemes to raise people's awareness of the dangers
of the internet and to tackle what the government considers their worst effects.
Web firms will have a chance to give their views on the levy being proposed by Culture Secretary Karen Bradley in a public consultation.
Among the options proposed in Bradley's internet safety green paper is an industry-wide levy so social media companies and service providers fund schemes that raise awareness and counter internet harms.
The Independent understands that the Government is interested to see what action the private sector takes first -- with a voluntary funded approach possible -- before imposing any new levy on firms.
Offsite Analysis: For the forthcoming 'Digital Charter'
Broadly speaking the new paper , which will help to form a foundation for the Government's forthcoming Digital Charter , doesn't include much that would concern internet access (broadband) providers. Instead it appears to be predominantly focused
upon internet content providers (e.g. social networks like Facebook).
The EU is considering forcing websites to vet uploaded content for pirated material. Of course only the media giants have the capability to do this, and so the smaller players would be killed off (probably as intended).
If you've been following the slow progress of the European Commission's proposal to introduce new upload filtering mandates for Internet platforms, or its equally misguided plans to impose a new link tax on those who publish snippets from news stories, you should know that the end game is close at hand. The LIBE (Civil Liberties) Committee is the last committee of the European Parliament that is due to vote on its opinion on the so-called
"Digital Single Market" proposals this Thursday October 5, before the proposals return to their home committee of the Parliament (the JURI or Legal Affairs Committee) for the preparation of a final draft.
The Confused Thinking Behind the Upload Filtering Mandate
The Commission's rationale for the upload filtering mandate seems to be that in order to address unwelcome behavior online (in this case, copyright infringement), you have to not only make that behavior illegal, but you also have to make it impossible. The same rationale also underpins other similar notice and stay-down schemes, such as one that already exists in Italy; they are meant to stop would-be copyright infringement in its tracks by preventing presumptively-infringing material from being uploaded to begin with, thereby preventing it from being downloaded by anyone else.
But this kind of prior restraint on speech or behavior isn't commonly applied to citizens in any other area of their lives. Your car isn't speed-limited so that it's impossible for you to exceed the speed limit. Neither does your telephone contain
a bugging device that makes it impossible for you to slander your neighbor. Why is copyright treated so differently, that it requires not only that actual infringements be dealt with (Europe's existing DMCA-like
notice and takedown system
already provides for this), but that predicted future infringements also be prevented?
More importantly, what about the rights of those whose uploaded content is flagged as being copyright-infringing, when it really isn't? The European Commission's own research, in a commissioned report that they attempted to bury, suggests that the harm to copyright holders from copyright infringement is much less than has been often assumed. At the very least, this has to give us pause before adopting new extreme copyright enforcement measures that will impact users' rights.
Even leaving aside the human impact of the upload filter, European policymakers should also be concerned about the impact of the mandate on small businesses and startups. A market-leading tool required to implement upload filtering just for
audio files would cost a medium-sized file hosting company between $10,000 and $25,000 per month in license fees
alone. In the name of copyright enforcement, European policymakers would give a market advantage to entrenched large companies at the expense of smaller local companies and startups.
The Link Tax Proposal is Also Confused
The link tax proposal is also based on a false premise. But if you are expecting some kind of doctrinally sound legal argument for why a new link-tax right ought to inhere in news publishers, you will be sorely disappointed. Purely and simply, the
proposal is founded on the premise that because news organizations are struggling to maintain their revenues in the post-millennial digital media space, and because Internet platforms are doing comparatively better, it is politically expedient
that the latter industry be made to subsidize the former. There's nothing more coherent behind this proposal than that kind of base realpolitik.
But the proposal doesn't even work on that level. In fact, we agree that news publishers are struggling. We just don't think that taxing those who publish snippets of news articles will do anything to help them. Indeed, the fact that small news publishers have rejected the link tax proposal, and that previous implementations of the link tax in Spain and Germany were dismal failures, tells you all that you need to know about whether taxing links would really be good for journalism.
So as these two misguided and harmful proposals make their way through the LIBE committee this week, it's time to call an end to this nonsense. Digital rights group OpenMedia has launched a click-to-call tool that you can use, available in several languages, including Polish. If you're a European citizen, the tool will call your representative on the LIBE committee, and if you don't have an MEP, it calls the committee chair, Claude Moraes. As the counter clicks closer to midnight on these regressive and cynical
copyright measures, it's more important than ever for individual users like you to be heard.
11th October 2017. From OpenMedia
With only 48 hours' notice we received word that the vote had been delayed. Why? The content censorship measures have become so controversial that MEPs decided that they needed more work to improve them before they would be ready to go to a vote.
There's never been a better time to call your MEP about these rules. This week they are back in their offices and ready to start thinking with a fresh head. The delay means we have even more time to say no to content censorship, and no to the Link
Tax. With so many people speaking up, it's clear our opponents are rattled. Now we must keep up the pressure.
Home Secretary Amber Rudd has announced a new national hub to tackle online hate crime.
It will be run by police officers for the National Police Chiefs Council (NPCC) with the aim of ensuring that online cases are managed effectively and efficiently.
The hub will receive complaints through Truevision, the police website for reporting hate crime, following which they will be assessed and assigned to the local force for investigation. Specialist officers will provide case management and support
and advice to victims of online hate crime.
Its functions will include combining duplicate reports, trying to identify perpetrators, referring appropriate cases to online platforms hosting relevant content, providing evidence for local recording and response, and updating the complainant on
progress. It will also provide intelligence to the National Intelligence Model, the police database that gathers intelligence on a range of crimes.
The Home Office said the hub will ensure all online cases are properly investigated and will help to increase prosecutions for online hate crimes. It should also simplify processes and help to prevent any duplication in investigations.
Home Secretary Amber Rudd said:
The national online hate crime hub that we are funding is an important step to ensure more victims have the confidence to come forward and report the vile abuse to which they are being subjected.
The hub will also improve our understanding of the scale and nature of this despicable form of abuse. With the police, we will use this new intelligence to adapt our response so that even more victims are safeguarded and perpetrators punished.
The hub is expected to be operational before the end of the year.
The trouble with politicians claiming that censorship is the answer is that when the censorship inevitably fails to solve the problem, they can never admit fallibility, and so their only answer is to censor more.
Home secretary Amber Rudd used her keynote speech at the Conservative party conference in Manchester to announce new laws,
which would see anyone caught repeatedly watching extremist content on the internet face up to 15 years in jail.
At present, laws prohibiting material that could be useful to terrorists only apply to hardcopy or downloaded material. They do not apply to material that is not actually in one's possession.
Security and digital rights experts have dumped on the home secretary's proposal for the new laws, calling the move incredibly dangerous. Jim Killock, Executive Director of Open Rights Group, said:
This is incredibly dangerous. Journalists, anti-terror campaigners and others may need to view extremist content, regularly and frequently.
People tempted towards extremism may fear discussing what they have read or seen with anyone in authority. Even potential informants may be dissuaded from coming forward because they are already criminalised.
Martha Spurrier, director of Liberty, said:
This shocking proposal would make thoughtcrime a reality in the UK. Blurring the boundary between thought and action like this undermines the bedrock principles of our criminal justice system and will criminalise journalists, academics and many
other innocent people.
We have a vast number of laws to tackle terror. The Government's own reviewer of terror legislation Max Hill QC has said repeatedly that we need fewer, not more. A responsible Home Secretary would listen to the evidence -- not grandstand for
cheap political points at the expense of our fundamental freedoms.
In terms of how people would be identified -- it's hard for us to say without seeing more detail about the proposals. It's likely identifying people would mean intrusive surveillance measures like those in the Investigatory Powers Act. In terms
of enforceability -- it's likely to be really difficult because so many people will be caught up who have a legitimate reason and will then run that defence.
Shashank Joshi, a research fellow at the security think tank RUSI, told BuzzFeed News that Rudd's proposal lacked specific detail and ran the risk of criminalising parts of some newspapers:
The risk is that [Rudd] runs into the same problems as her predecessor, Theresa May, did in 2015, when she sought to ban 'extremism', Joshi said. These are broad and nebulous terms, and they require very careful definition in order to avoid
curbing legitimate free speech.
Otherwise we would risk criminalising some of the material that appears in certain mainstream newspaper columns.
Amber Rudd also decided to bang on about prohibiting encryption, even rather haplessly admitting that she did not understand how it worked.
Again campaigners were not impressed. Jim Killock, Executive Director of Open Rights Group, noted:
Amber Rudd needs to be realistic and clear about what she wants. It is no better saying she wishes to deny criminals the use of encryption than to say she wishes to deny them access to gravity. And if she succeeds in pushing them off major
platforms, terrorists may end up being harder to detect.
Lib Dem Ed Davey also weighed in:
Encryption keeps us all secure online. It allows businesses to operate and thrive securely. Any weakening of encryption will ultimately make us all less safe. For if you weaken encryption, you run the risk of letting in the bad guys.
But this Conservative government can only see things in black and white -- ignoring the realities of technology. The Home Secretary's key note speech called on tech giants to work together and, with government, to take down extremist content
faster than ever before. My party completely support her in that mission. The only way we will defeat this scourge is to band together -- exchange information, invest in new technologies and present a united front.
The ruthless efficiency with which the Spanish government censored the Internet ahead of the referendum on Catalonian independence foreshadowed the severity of its crackdown at polling places on October 1. We have previously written about one aspect of that censorship: the raid of the .cat top-level domain registry. But there was much more to it than that, and many of the more than 140 censored domains and Internet services continue to be blocked today.
It began with the seizure of the referendum.cat domain, the official referendum website, on September 13 by the Guardia Civil (Spanish military police), pursuant to a warrant issued by the Supreme Court of Catalonia. Over the ensuing days this order was extended to a number of other official and unofficial mirrors of the website, such as ref1oct.cat and ref1oct.eu, which were seized if they were hosted at a .cat domain, and blocked by ISPs if they were not. (The fact that Spanish ISPs already blocked websites such as the Pirate Bay under court order enabled the blocking of additional websites to be rolled out swiftly.)
One of these subsequent censorship orders, issued on September 23, was especially notable in that it empowered the Guardia Civil to block not only a list of named websites, but also any future sites with content related to the referendum, publicized on any social network by a member of the Catalonian Government. This order accelerated the blocking of further websites without any further court order. These apparently included the censorship of non-partisan citizen collectives (e.g. empaperem.cat) and other non-profit organizations (assemblea.cat, webdelsi.cat, alerta.cat), and campaign websites of legal political parties (prenpartit.cat).
On Friday a separate court order was obtained requiring Google to remove a voting app from the Google Play app store. Similar to the September 23 order, the order also required Google to remove any other apps developed by the same developer. Those violating such orders by setting up mirrors, reverse proxies, or alternative domains for blocked content were summoned to court and face criminal charges. One of these activists also had his GitHub and Google accounts seized.
On the day of the referendum itself, the Internet was shut down at polling places in an effort to prevent votes from being transmitted to returning officers.
Throughout this unrest, a group of activists sharing the Twitter account @censura1oct has been verifying the blocks from multiple ISPs, and sharing information about the technical measures used. All of the censorship measures that were put in
place in the leadup to the referendum appear to remain in place today, though we don't know for how much longer. The Spanish government no doubt hopes that its repression of political speech in Catalonia will be forgotten if the censored sites
come back online quickly. We need to ensure that that isn't the case.
The Spanish government's censorship of online speech during the Catalonian referendum period is so wildly disproportionate and overbroad that its violation of international human rights instruments seems almost beyond dispute.
Germany's new internet censorship law came into force on 1st October. The law nominally targets 'hate speech', but massively high penalties, coupled
with ridiculously short time scales in which to consider the issues, mean that anything the authorities don't like will have to be immediately censored...just in case.
Passed earlier this summer, the law will financially penalize social media platforms, like Facebook, Twitter, and YouTube, if they don't remove hate speech, as defined in Germany's current criminal code, within 24 hours. They will be
allowed up to a week to decide on comments that don't fall into the blatant hate speech category. The top fine for not deleting hate speech within 24 hours is 50 million euro, though that would be for repeatedly breaking the law, not for individual cases.
Journalists, lawyers, and free-speech advocates have been voicing their concerns about the new law for months. They say that, to avoid fines, Facebook and others will err on the side of caution and just delete swathes of comments, including ones
that are not illegal. They worry that social media platforms are being given the power to police and effectively shut down people's right to free opinion and free speech in Germany.
The German Journalists Association (DJV) is calling on journalists and media organizations to start documenting all deletions of their posts on social media as of today. The borders of free speech must not be allowed to be drawn by profit-driven
businesses, said DJV chairman Frank Überall in a recent statement.
Reporters Without Borders also expressed their strong opposition to the law when it was drafted in May, saying it would contribute to the trend to privatize censorship by delegating the duties of judges to commercial online platforms -- as if the
internet giants can replace independent and impartial courts.
In a glass tower in a trendy part of China's eastern city of Tianjin, hundreds of young people sit in front of computer screens, scouring the internet for videos and messages that run counter to Communist Party doctrine