Google is escalating its campaign of internet censorship, announcing that it will expand its workforce of human censors to over 10,000. The censors' primary focus will be videos and other content on YouTube, but they will also work across Google to censor content and to train its automated systems, which remove videos four times faster than human reviewers.
Human censors have reviewed over 2 million videos since June. YouTube has removed over 150,000 videos in that time, 50 percent of them within two hours of upload. The company is working to accelerate takedowns further through machine learning trained on those manual censorship decisions.
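The idea of automated systems learning from manual takedown decisions can be sketched in miniature. This is purely an illustrative toy, not YouTube's actual system: human decisions become labels, and a crude word-count model then prioritises new uploads for human review.

```python
# Toy illustration (not YouTube's real pipeline): human takedown decisions
# become labels that train a simple text scorer, which then prioritises
# new uploads for human review.
from collections import Counter

def train(labelled):
    """Count how often each word appears in removed vs. kept items."""
    removed, kept = Counter(), Counter()
    for text, was_removed in labelled:
        (removed if was_removed else kept).update(text.lower().split())
    return removed, kept

def flag_score(text, removed, kept):
    """Crude score: net count of words seen more often in removed content."""
    return sum(removed[w] - kept[w] for w in text.lower().split())

# Invented example decisions made by human reviewers:
decisions = [
    ("extremist recruitment video", True),
    ("graphic combat footage", True),
    ("cute cat compilation", False),
    ("holiday travel vlog", False),
]
removed, kept = train(decisions)

# New uploads with the highest scores are queued for human review first.
queue = sorted(["combat recruitment clip", "cat travel diary"],
               key=lambda t: flag_score(t, removed, kept), reverse=True)
print(queue[0])  # "combat recruitment clip" ranks first
```

The real systems are vastly more sophisticated, but the feedback loop is the same: every manual censorship decision becomes training data.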
YouTube CEO Susan Wojcicki explained the move in an official blog post:
Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly
2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases
shutting down comments altogether. In the last few weeks we've used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with NCMEC, the IWF,
and other child safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies.
We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.
At the same time, we are expanding the network of academics, industry groups and subject matter experts who we can learn from and support to help us better understand emerging issues.
We will use our cutting-edge machine learning more widely to allow us to quickly and efficiently remove content that violates our guidelines. In June we deployed this technology to flag violent extremist content for human review, and we've seen the following results:
Since June we have removed over 150,000 videos for violent extremism.
Machine learning is helping our human reviewers remove nearly five times as many videos as they were previously.
Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms.
Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload, and nearly half of it within two hours, and we continue to accelerate that speed.
Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.
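The scale of that 180,000-people claim is easy to sanity-check with a little arithmetic. The five-minutes-per-video figure below is purely an illustrative assumption, not from Google:

```python
# Back-of-the-envelope check on the blog's claim that machine flagging
# covered work that "would have taken 180,000 people working 40 hours a week".
people = 180_000
hours_per_week = 40
person_hours_per_week = people * hours_per_week
print(person_hours_per_week)  # 7,200,000 person-hours of review per week

# If (purely as an illustrative assumption) a reviewer assesses one video
# every five minutes, that volume corresponds to:
videos_per_hour = 12
print(person_hours_per_week * videos_per_hour)  # 86,400,000 videos per week
```

Whatever the per-video assumption, the point stands: no human workforce can review content at that volume, which is why the manual reviewers' real role is generating training data.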
The European Commission has joined the list of organisations calling on the likes of Google, Facebook and Twitter to
do more to remove extremist content - or face further legislation.
EU home affairs commissioner Dimitris Avramopoulos warned that the internet is the real battlefield against 21st-century terrorism. He said most of the recent terrorist attackers had never travelled to Syria or Iraq, but most of them had been influenced, groomed and recruited to terrorism on the internet.
Avramopoulos said he believed it was feasible to reduce the time it takes to remove content to a few hours, adding that there is a lot of room for improvement for this cooperation to produce even better results.
Avramopoulos also said he thought it was worthwhile to harness artificial intelligence to complete the task. You know... like Facebook censoring Robin Redbreast Christmas cards because the word 'breast' appeared in filenames.
The Commission said it would make a decision by May next year on whether additional measures -- including legislation -- are required in order to better address the problem of illegal content on the internet.
Back in March, Australia shelved plans to extend its copyright safe harbour provisions to services such as Google and Facebook. Now, following consultations with the entertainment industries, the government has revealed it will indeed exclude such platforms from safe harbour provisions.
Services such as Google, Facebook and YouTube now face massive legal uncertainty as they themselves can be held responsible for copyright infringing posts by users. The logical result would be that the companies will have to check every post
before upload. The vast quantity of posts to check would make this an economically unviable option.
Proposed amendments to the Copyright Act earlier this year would have seen enhanced safe harbour protections for such platforms, but they were withdrawn at the eleventh hour due to lobbying by media companies. Such companies accuse platforms like YouTube of exploiting safe harbour provisions in the US and Europe, which force copyright holders into expensive battles to have infringing content taken down.
Communications Minister Mitch Fifield has confirmed the exclusions, so now it is up to Google and Facebook to consider how they can operate under this law.
Iran's telecommunications minister says that his ministry wants to customize Internet blocking based on users' occupation, age, and other factors.
The attorney general's office has conditionally agreed with this plan, Minister Mohammad Javad Azari Jahromi announced on December 4.
Without providing any details, he said his ministry had reviewed suggestions made by the attorney general and prepared appropriate technical responses. He expressed hope that the office would give its final approval for the implementation of the plan.
Despite the regime's extensive efforts to censor the Internet, Iranian users currently get around the restrictions by using anti-filtering programs or virtual private networks.
Google makes its internal processes difficult to track by design, but the author of a report by Karlaplan states that these changes are fairly recent, suspected to have been implemented on 30 August, with the changes only being discovered in late October.
However, until the publication of this document, little other than anecdotal evidence had been presented, in the form of complaints from YouTube content creators.
Through extensive analysis of the YouTube Data API and other sources, Karlaplan found that YouTube tags demonetized videos according to both severity and type of sensitive content -- neither of which is transparent to the uploader.
The report also notes that videos are more likely to be hidden from viewers if their likely viewership is low, perhaps because higher-viewership videos are more likely to be appealed, or more likely to be spotted as examples of censorship and hence generate bad publicity for Google.
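For readers curious what analysis of the YouTube Data API involves in practice, the officially documented videos.list endpoint can be queried per video. Note that the demonetisation severity and type tags the report describes are not among the documented fields, and the API key and video id below are placeholders:

```python
# Sketch of probing per-video metadata via the official YouTube Data API v3
# videos.list endpoint with part=status. The demonetisation tags Karlaplan
# describes are NOT among the documented fields; this only shows the kind of
# per-video probing involved. API_KEY and VIDEO_ID are placeholders.
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"      # placeholder
VIDEO_ID = "SOME_VIDEO_ID"    # placeholder

url = ("https://www.googleapis.com/youtube/v3/videos?" +
       urlencode({"part": "status", "id": VIDEO_ID, "key": API_KEY}))

# A trimmed example of the JSON shape a videos.list call returns:
sample_response = {
    "items": [{"id": VIDEO_ID,
               "status": {"uploadStatus": "processed",
                          "privacyStatus": "public",
                          "license": "youtube"}}]
}

status = sample_response["items"][0]["status"]
print(status["privacyStatus"])  # public
```

The report's findings came from correlating responses like these across many videos, precisely because none of the sensitive-content tagging is exposed to the uploader directly.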
Google has published an information page that is quite useful in detailing which videos get censored. Google outlines two levels of sensitivity that advertisers can select when not wanting to be associated with sensitive content. Google explains:
While the Standard content filter excludes the most inappropriate content, it doesn't exclude everything that a particular advertiser may find objectionable. The Sensitive content categories allow you to opt out of additional content that many
advertisers find inappropriate. E.g.:
Tragedy and conflict
Standard: Excludes graphic footage of combat or war
Sensitive: Excludes the above plus footage of soldiers marching with weapons
Sensitive social issues
Standard: Excludes videos intended to elicit a response about controversial issues
Sensitive: Excludes the above plus news commentary about controversial issues
Sexually suggestive content
Standard: Excludes videos about sex or sexual products
Sensitive: Excludes the above plus music videos with suggestive themes
Sensational and shocking
Standard: Excludes videos of disasters or accidents that show casualties or death
Sensitive: Excludes the above plus videos of moderate disasters or accidents that show minimal casualties or harm
Profanity and rough language
Standard: Excludes videos with frequent use of profanity
Sensitive: Excludes the above plus videos with profanity that has been bleeped out
Cloudflare's decision to ban the Daily Stormer has led to an increase in censorship requests. Since August, Cloudflare has received more than 7,000 requests from across the political spectrum for removal of content.
Senior police officers are to lose the power to self-authorise access to personal phone and web browsing records under a series of late changes
to the snooper's charter law proposed by ministers in an attempt to comply with a European court ruling on Britain's mass surveillance powers.
A Home Office consultation paper published on Thursday also makes clear that the 250,000 requests each year for access to personal communications data by the police and other public bodies will in future be excluded for investigations into minor crimes that carry a prison sentence of less than six months.
But the government says the 2016 European court of justice (ECJ) ruling in a case brought by Labour's deputy leader, Tom Watson, initially with David Davis, now the Brexit secretary, does not apply to the retention or acquisition of personal phone, email, web history or other communications data by national security organisations such as GCHQ, MI6 or MI5, claiming that national security is outside the scope of EU law.
The Open Rights Group has been campaigning hard on issues of liberty and privacy and writes:
This is a major victory for ORG, although one with dangers. The government has conceded that independent authorisation is necessary for communications data requests, but refused to budge on retained data and is pushing ahead with the Request Filter, to enable rapid interrogation and analysis of the stored communications data.
Adding independent authorisation for communications data requests will make the police more effective, as corruption and abuse will be harder. It will improve operational effectiveness, even if less data is used during investigations, and trust in the police should improve.
Nevertheless the government has disregarded many key elements of the judgment:
It isn't going to reduce the amount of data retained
It won't notify people whose data is used during investigations
It won't keep data within the EU; instead it will continue to transfer it, presumably specifically to the USA
The Home Office has opted for a six month sentence definition of serious crime rather than the Lords' definition of crimes capable of sentences of at least one year.
These are clear evasions and abrogations of the judgment. The mission of the Home Office is to uphold the rule of law. By failing to do what the courts tell them, the Home Office is undermining the very essence of the rule of law.
If the Home Office won't do what the highest courts tell it to do, why should anybody else? By picking and choosing the laws they are willing to care about, they are playing with fire.
There was one final surprise. The Code of Practice covers the operation of the Request Filter. Yet again we are told that this police search engine is a privacy safeguard. We will now run through the code in fine detail to see if any such safeguards are there. On a first glance, there are not.
If the Home Office genuinely believe the Request Filter is a benign tool, they must rewrite this section to make abundantly clear that it is not a mini version of X-Keyscore (the NSA/GCHQ tool to trawl their databases of people linked to their email and web visits) and does not operate as a facility to link and search the vast quantities of retained and collected communications data.
The Russian government is currently discussing plans to build its own independent internet infrastructure that will be used by BRICS member states: Brazil, Russia, India, China, and South Africa.
The Russian Security Council has today formally asked the country's government to start the building of a global DNS system that Russia and fellow BRICS member states could use to take control of the internet as used within the BRICS countries.
Russia and fellow BRICS nations would have the option to flip a switch and move Internet traffic from today's main DNS system to their own private system. The states will then have absolute and direct control of sites to be blocked. Furthermore,
the alternative DNS system also allows oppressive regimes to deanonymize Tor traffic and hunt for dissidents, via an attack called DefecTor.
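Why control of the DNS resolution path translates into direct blocking power can be shown with a toy resolver. All names and addresses below are invented documentation examples:

```python
# Toy illustration of why controlling DNS means direct control over blocking:
# a state-run resolver can simply answer with a block-page address (or
# nothing at all) for any name on its list. Names/addresses are invented.
BLOCKLIST = {"forbidden-news.example"}
RECORDS = {"forbidden-news.example": "203.0.113.7",
           "approved-site.example": "198.51.100.2"}
BLOCK_PAGE = "192.0.2.1"  # address of a government block page

def resolve(name):
    if name in BLOCKLIST:
        return BLOCK_PAGE      # user is silently redirected
    return RECORDS.get(name)   # normal answer (None = name does not resolve)

print(resolve("approved-site.example"))   # 198.51.100.2
print(resolve("forbidden-news.example"))  # 192.0.2.1, the block page
```

A state that operates the resolvers its citizens must use needs no cooperation from site owners or foreign registries; the "flip a switch" described above is exactly a change of which resolvers answer queries.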
Russia, China, and many other countries have criticized the US for hoarding control over the domain naming system (DNS), a position they claim has allowed the US to intercept and tap global Internet traffic. Last year, the US handed control of the DNS system to ICANN, an independent organization. While Russia and China welcomed the move, they actually wanted the DNS system to be controlled by the United Nations' International Telecommunication Union, because the two countries have more influence in UN matters than over an NGO like ICANN.
Twitter announced yesterday that it would begin removing verification badges from famous tweeters that it does not approve of -- not for what is tweeted, but for offline behaviour Twitter does not like.
The key phrase in Twitter's policy update is this one: Reasons for removal may reflect behaviors on and off Twitter. Before yesterday, the rules explicitly applied only to behavior on Twitter. From now on, holders of verified badges will be held
accountable for their behavior in the real world as well. Twitter has promised further information about the new censorship policy in due course.
Many questions remain unanswered. What will the company's review consist of? How will it examine users' offline behavior? Will it simply respond to reports, or will it actively look for violations? Will it handle the work with its existing team,
or will it expand its trust and safety team?
Twitter has immediately rescinded blue tick verification from accounts belonging to far-right activists, including Jason Kessler, a US white supremacist, and Tommy Robinson, founder of the English Defence League.
Offsite Comment: Twitter has turned its back on free speech
The platform plans to exercise ideological control over its users.
The European Union voted on November 14 to pass a new internet censorship regulation, nominally in the name of consumer protection. But of course censorship often hides behind consumer protection, e.g. the UK's upcoming internet porn ban is enacted in the name of protecting under-18 internet consumers.
The new EU-wide law gives extra power to national consumer protection agencies, but it also contains a vaguely worded clause that grants them the power to block and take down websites without judicial oversight.
Member of the European Parliament Julia Reda said in a speech in the European Parliament Plenary during a last ditch effort to amend the law:
The new law establishes overreaching Internet blocking measures that are neither proportionate nor suitable for the goal of protecting consumers and come without mandatory judicial oversight.
According to the new rules, national consumer protection authorities can order any unspecified third party to block access to websites without requiring judicial authorization, Reda added later in the day on her blog .
This new law is an EU regulation and not a directive, meaning it is obligatory for all EU states.
The new law proposal started out with good intentions, but sometime in the spring of 2017, the proposed regulation received a series of amendments that watered down some consumer protections while keeping intact the provisions that ensure national consumer protection agencies can go after and block or take down websites.
Presumably multinational companies had been lobbying for new weapons in their battle against copyright infringement. For instance, the new law gives national consumer protection agencies the legal power to inquire and obtain information about domain owners from registrars and Internet Service Providers.
Besides the website blocking clause, authorities will also be able to request information from banks to detect the identity of the responsible trader, to freeze assets, and to carry out mystery shopping to check for geographical discrimination.
Comment: European Law Claims to Protect Consumers... By Blocking the Web
The Consumer Protection Regulation provides in Article 8(3)(e) that consumer protection authorities must have the power:
where no other effective means are available to bring about the cessation or the prohibition of the infringement including by requesting a third party or other public authority to implement such measures, in order to prevent the risk of serious
harm to the collective interests of consumers:
to remove content or restrict access to an online interface or to order the explicit display of a warning to consumers when accessing the online interface;
to order a hosting service provider to remove, disable or restrict the access to an online interface; or
where appropriate, order domain registries or registrars to delete a fully qualified domain name and allow the competent authority concerned to register it;
The risks of unelected public authorities being given the power to block websites were powerfully demonstrated in 2014, when the Australian company regulator ASIC
accidentally blocked 250,000 websites
in an attempt to block just a handful of sites alleged to be defrauding Australian consumers.
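The mechanics of such overblocking are simple: with virtual hosting, thousands of unrelated domains can share one server address, so an order to block one site's address takes them all down. A toy sketch with invented domains:

```python
# Why IP-level blocking overblocks: under virtual hosting, many unrelated
# domains share one server address, so blocking that address to hit one
# site takes every co-hosted site down too. Domains/addresses are invented.
hosted = {
    "fraud-site.example":     "203.0.113.50",
    "charity.example":        "203.0.113.50",  # same shared host
    "small-business.example": "203.0.113.50",  # same shared host
    "unrelated.example":      "198.51.100.9",
}

blocked_ip = hosted["fraud-site.example"]  # the order targets one site...
collateral = [d for d, ip in hosted.items() if ip == blocked_ip]
print(len(collateral))  # 3 domains blocked to hit 1 target
```

Scale the shared host up to a large hosting provider and ASIC's 250,000-site mistake follows directly from one IP-level order.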
This likelihood of unlawful overblocking is just one of the reasons that the United Nations Special Rapporteur for Freedom of Expression and Opinion has underlined how web blocking often contravenes international human rights law. In a report [PDF], then Special Rapporteur Frank La Rue set out how extremely limited are the circumstances in which blocking of websites can be justified, noting that where:
the specific conditions that justify blocking are not established in law, or are provided by law but in an overly broad and vague manner, [this] risks content being blocked arbitrarily and excessively. ... [E]ven where justification is provided,
blocking measures constitute an unnecessary or disproportionate means to achieve the purported aim, as they are often not sufficiently targeted and render a wide range of content inaccessible beyond that which has been deemed illegal. Lastly,
content is frequently blocked without the intervention of or possibility for review by a judicial or independent body.
This describes exactly what the new Consumer Protection Regulation will do. It hands over a power that should only be exercised, if at all, under the careful scrutiny of a judge in the most serious of cases, and allows it to be wielded at the whim
of an unelected consumer protection agency. As explained by Member of the European Parliament (MEP) Julia Reda
, who voted against the legislation, it sets the stage for the construction of a censorship infrastructure that could be misused for purposes that we cannot even anticipate, ranging from copyright enforcement through to censorship of political speech.
Regrettably, the Regulation is now law--and is required to be enforced by all European states. It is both ironic and tragic that a law intended to protect consumers actually poses such a dire threat to their right to freedom of expression.
Google News is limiting the reach of two Russian media outlets, RT and Sputnik, according to Alphabet executive chairman Eric Schmidt.
Schmidt said Google is de-ranking sites it claims have been spreading Russian state-sponsored propaganda. We're trying to engineer the systems to prevent it.
However, Schmidt added that he isn't in favor of censorship... BUT... his company also has a responsibility to stop the misinformation.
In response to the censorship, Sputnik quoted research psychologist Robert Epstein:
Google is deciding what people see, which is very dangerous since they are legally a tech company and do not adhere to any type of editorial standards or guidelines.
What we're talking about here is a means of mind control on a massive scale that there is no precedent for in human history, he said at the time. Research participants spent a much larger percentage of web browsing time visiting search results
that were higher up. According to Epstein, biased Google results could have provided an extra 2.6 million votes in support of Democratic candidate Hillary Clinton in the 2016 race.
A group of international broadcasters have come together to support a new website that aims to help internet users around the world access news and information.
The Broadcasting Board of Governors (US), the BBC (UK), Deutsche Welle (Germany) and France Médias Monde (France) have co-sponsored the Bypass Censorship website: bypasscensorship.org
Bypass Censorship provides internet users information on how to access and download security-conscious tools which will enable them to access news websites and social media blocked by governments.
When governments try to block these circumvention tools, the site is updated with information to help users stay ahead of the censors and maintain access to news sites.
BBG CEO, John F. Lansing said:
The right to seek, and impart, facts and ideas is a universal human right which many repressive governments seek to control. This website presents an incredible opportunity to provide citizens around the world with the resources they need to
access a free and open internet for uncensored news and information essential to making informed decisions about their lives and communities.
The broadcasters supporting the Bypass Censorship site are part of the DG7 group of media organisations which are consistent supporters of UN resolutions on media freedom and the safety of journalists.
On 11th November, thousands of people marched in the streets of Warsaw, Poland, to celebrate the
country's Independence Day. The march attracted massive numbers of people from the nationalist or far right end of the political spectrum.
The march proved very photogenic, with images showing the scale of the event, and the stylised symbology proved very powerful and thought-provoking.
But the images caused problems for the likes of Facebook, on what should be censored and what should not.
One could argue that the world needs to see what is going on amongst large segments of the population in Poland, and indeed across Europe. Perhaps if they see the popularity of the far right then communities and politicians can be spurred into addressing some of the fundamental societal breakdowns leading to this mass movement.
On the other hand, there will be those that consider the images to be something that could attract and inspire others to join the cause.
But from just looking at news pictures, it would be hard to know what to think. And that dilemma is exactly what caused confusion amongst censors at Facebook.
Quartz reports on a collection of such images, published on Facebook by a renowned photojournalist in Poland, that was taken down by the social media company's content censors. Chris Niedenthal attended the march to practice his craft, not to participate, and posted his photos on Nov. 12, the day after the march. Facebook took them down. He posted them again the next day. Facebook took them down again on Nov. 14. Niedenthal himself was also blocked from Facebook for 24 hours. The author concludes that a legitimate professional journalist or photojournalist should not be 'punished' for doing his duty.
Facebook told Quartz that the photos, because they contained hate speech symbols, were taken down for violating the platform's community standards policy barring content that shows support for hate groups. The captions on the photos were neutral,
so Facebook's moderators could not tell if the person posting them supported, opposed, or was indifferent about hate groups, a spokesperson said. Content shared that condemns or merely documents events can remain up. But that which is interpreted
to show support for hate groups is banned and will be removed.
Eventually Facebook allowed the photos to remain on the platform. Facebook apologized for the error, in a message, and in a personal phone call.
The European Union is in the process of creating an authority to monitor and censor so-called fake news. It is setting up a High-Level 'Expert'
Group. The EU is currently consulting media professionals and the public to decide what powers to give to this EU body, which is to begin operation next spring.
The World Socialist Web Site
has its own colourful view on the intentions of the body, but I don't suppose it is too far from the truth:
An examination of the EU's announcement shows that it is preparing mass state censorship aimed not at false information, but at news reports or political views that encourage popular opposition to the European ruling class.
It aims to create conditions where unelected authorities control what people can read or say online.
EU Vice-President Frans Timmermans explained the move in ominous terms:
We live in an era where the flow of information and misinformation has become almost overwhelming. The EU's task is to protect its citizens from fake news and to manage the information they receive.
According to an EU press release, the EU Commission, another unelected body, will select the High-Level Expert Group, which is to start in January 2018 and will work over several months. It will discuss possible future actions to strengthen
citizens' access to reliable and verified information and prevent the spread of disinformation online.
Who will decide what views are verified, who is reliable and whose views are disinformation to be deleted from Facebook or removed from Google search results? The EU, of course.
Governments around the world are dramatically increasing their efforts to manipulate information on social media, threatening the notion of the internet as a liberating technology, according to Freedom on the Net 2017, the latest edition of the annual country-by-country assessment of online freedom, released today by Freedom House.
Online manipulation and disinformation tactics played an important role in elections in at least 18 countries over the past year, including the United States, damaging citizens' ability to choose their leaders based on factual news and authentic
debate. The content manipulation contributed to a seventh consecutive year of overall decline in internet freedom, along with a rise in disruptions to mobile internet service and increases in physical and technical attacks on human rights
defenders and independent media.
"The use of paid commentators and political bots to spread government propaganda was pioneered by China and Russia but has now gone global," said Michael J. Abramowitz, president of Freedom House. "The effects of these rapidly
spreading techniques on democracy and civic activism are potentially devastating."
"Governments are now using social media to suppress dissent and advance an antidemocratic agenda," said Sanja Kelly, director of the Freedom on the Net project. "Not only is this manipulation difficult to detect, it is more
difficult to combat than other types of censorship, such as website blocking, because it's dispersed and because of the sheer number of people and bots deployed to do it."
"The fabrication of grassroots support for government policies on social media creates a closed loop in which the regime essentially endorses itself, leaving independent groups and ordinary citizens on the outside," Kelly said.
Freedom on the Net 2017 assesses internet freedom in 65 countries, accounting for 87 percent of internet users worldwide. The report primarily focuses on developments that occurred between June 2016 and May 2017, although some more recent
events are included as well.
Governments in a total of 30 countries deployed some form of manipulation to distort online information, up from 23 the previous year. Paid commentators, trolls, bots, false news sites, and propaganda outlets were among the techniques used by
leaders to inflate their popular support and essentially endorse themselves.
In the Philippines, members of a "keyboard army" are tasked with amplifying the impression of widespread support of the government's brutal crackdown on the drug trade. Meanwhile, in Turkey, reportedly 6,000 people have been enlisted by
the ruling party to counter government opponents on social media.
Most governments targeted public opinion within their own borders, but others sought to expand their interests abroad--exemplified by a Russian disinformation campaign to influence the American election. Fake news and aggressive trolling of
journalists both during and after the presidential election contributed to a score decline in the United States' otherwise generally free environment.
Governments in at least 14 countries actually restricted internet freedom in a bid to address content manipulation. Ukrainian authorities, for example, blocked Russia-based services, including the country's most widely used social network and
search engine, after Russian agents flooded social media with fabricated stories advancing the Kremlin's narrative.
"When trying to combat online manipulation from abroad, it is important for countries not to overreach," Kelly said. "The solution to manipulation and disinformation lies not in censoring websites but in teaching citizens how to
detect fake news and commentary. Democracies should ensure that the source of political advertising online is at least as transparent online as it is offline."
For the third consecutive year, China was the world's worst abuser of internet freedom, followed by Syria and Ethiopia. In Ethiopia, the government shut down mobile networks for nearly two months as part of a state of emergency declared in October
2016 amid large-scale antigovernment protests.
Less than one-quarter of the world's internet users reside in countries where the internet is designated Free, meaning there are no major obstacles to access, onerous restrictions on content, or serious violations of user rights in the form of
unchecked surveillance or unjust repercussions for legitimate speech.
Governments manipulated social media to undermine democracy: Governments in 30 of the 65 countries assessed attempted to control online discussions. The practice has become significantly more widespread and technically sophisticated over the last few years.
State censors targeted mobile connectivity : An increasing number of governments have restricted mobile internet service for political or security reasons. Half of all internet shutdowns in the past year were specific to mobile
connectivity, with most others affecting mobile and fixed-line service simultaneously. Most mobile shutdowns occurred in areas populated with ethnic or religious minorities such as Tibetan areas in China and Oromo areas in Ethiopia.
More governments restricted live video: As live video gained popularity with the emergence of platforms like Facebook Live and Snapchat's Live Stories, internet users faced restrictions or attacks for live streaming in at least nine countries, often to prevent streaming of antigovernment protests. Countries like Belarus disrupted mobile connectivity to prevent livestreamed images from reaching a mass audience.
Technical attacks against news outlets, opposition, and rights defenders increased: Cyberattacks against government critics were documented in 34 out of 65 countries. Many governments took additional steps to restrict encryption, leaving
citizens further exposed.
New restrictions on virtual private networks (VPNs): 14 countries now restrict tools used to circumvent censorship in some form, and six countries introduced new restrictions, either legal bans or technical blocks on VPN websites or network traffic.
Physical attacks against netizens and online journalists expanded dramatically: The number of countries that featured physical reprisals for online speech increased by 50 percent over the past year -- from 20 to 30 of the countries assessed. In eight countries, people were murdered for their online expression. In Jordan, a Christian cartoonist was murdered for mocking Islamist militants' vision of heaven, while in Myanmar, a journalist was murdered after posting notes on Facebook alleging corruption.
Since June 2016, 32 of the 65 countries assessed in Freedom on the Net saw internet freedom deteriorate. The most notable declines were documented in Ukraine, Egypt, and Turkey.
Theresa May has made a speech at the Lord Mayor's Banquet saying that fake news and Russian propaganda are threatening the international
order. She said:
It is seeking to weaponise information. Deploying its state-run media organisations to plant fake stories and photo-shopped images in an attempt to sow discord in the west and undermine our institutions.
The UK did not want to return to the Cold War, or to be in a state of perpetual confrontation, but it would have to act to protect the interests of the UK, Europe and the rest of the world if Russia continued on its current path.
May did not say whether she was concerned with Russian intervention in any UK democratic processes, but Ben Bradshaw, a leading Labour MP, is among those to have called for a judge-led inquiry into the possibility that Moscow tried to influence
the result of the Brexit referendum.
Russia has been accused of running troll factories that disseminate fake news and divisive posts on social media. It emerged on Monday that a Russian bot account was one of those that shared a viral image that claimed a Muslim woman ignored
victims of the Westminster terror attack as she walked across the bridge.
Surely declining wealth and poor economic prospects are a more likely root cause of public discontent than a little trivial propaganda.
Three countries are using the European Council to put dangerous pro-censorship amendments into the already controversial Copyright Directive.
The copyright law that OpenMedia has been campaigning on -- the one pushing the link tax and censorship machines -- is facing some dangerous sabotage from the European Council. In particular, France, Spain and Portugal are pushing to make the proposal even more harmful.
The Bill is currently being debated in the European Parliament, but the European Council also gets to draft its own proposed version of the law, and the two versions eventually have to be reconciled. The European Council is made up of ministers from the governments of all EU member states. Those ministers are usually represented by staff who do most of the negotiating on their behalf. It is not a transparent body, but it does have a lot of power.
The Council can choose to agree with Parliament's amendments, but it doesn't look like that's going to happen in this case. In fact, they have been taking worrying steps, particularly when it comes to the censorship machine proposals.
As the proposal stands before the Council's intervention, it encourages sites where users upload and share content to install filtering mechanisms -- a kind of censorship machine that would use algorithms to look for copyrighted content and then block the post. This is despite the fact that there are many legal reasons to use copyrighted content.
These new changes go a step further. First, they want to make the censorship machine demand even more explicit. As Julia Reda puts it:
They want to add to the Commission proposal that platforms need to automatically remove media that has once been classified as infringing, regardless of the context in which it is uploaded.
Then, they go all in with a suggested rewrite of existing copyright law to end the liability protections which are vital for a functioning web.
Liability protection laws mean that we (not websites) are responsible for what we say and post online, so that websites are not obliged to monitor everything we say or do. If they were liable, there would be much overzealous blocking and censorship. These rules made YouTube, podcast platforms and social media possible. The web as we know it works because of these rules.
But the governments of France, Spain, Portugal and the Estonian President of the Council want to undo them. It would mean all these sites could be sued for any infringement posted there. It would put off new sites from developing. And it
would cause huge legal confusion -- given that the exact opposite is laid out in a different EU law.
Home Secretary Amber Rudd told an audience at New America, a Washington think tank, on Thursday night that there was an
online arms race between militants and the forces of law and order.
She said that social media companies should press ahead with development and deployment of AI systems that could spot militant content before it is posted on the internet and block it from being disseminated.
Since the beginning of 2017, violent militant operatives have created 40,000 new internet destinations, Rudd said. As of 12 months ago, social media companies were taking down about half of the violent militant material from their sites within two
hours of its discovery, and lately that proportion has increased to two thirds, she said.
YouTube is now taking down 83% of violent militant videos it discovers, Rudd said, adding that UK authorities have evidence that the Islamic State was now struggling to get some of its materials online.
She added that in the wake of an increasing number of vehicle attacks by Islamic terrorists, British security authorities were reviewing rental car regulations and considering ways for authorities to collect more relevant data from car hire companies.
YouTube has announced an extension of its age restriction policy for parody videos using children's characters but with inappropriate themes.
The new policy was announced on Thursday and will see age restrictions applied to content featuring inappropriate use of family entertainment characters, such as unofficial videos depicting Peppa Pig. The company already had a policy that rendered such videos ineligible for advertising revenue, in the hope that doing so would reduce the motivation to create them in the first place. Juniper Downs, YouTube's director of policy, explained:
Earlier this year, we updated our policies to make content featuring inappropriate use of family entertainment characters ineligible for monetisation. We're in the process of implementing a new policy that age restricts this content in the YouTube main app when flagged. Age-restricted content is automatically not allowed in YouTube Kids. The YouTube team is made up of parents who are committed to improving our apps and getting this right.
Age-restricted videos can't be seen by users who aren't logged in, or by those who have entered their age as below 18 on both the site and the app. More importantly, they also don't show up on YouTube Kids, a separate app aimed at parents who want
to let their children under 13 use the site unsupervised.
The latest measure to deny Russian people the freedom of expression online will take effect on 1st
November. New laws will require VPNs to comply with the Russian State's online censorship programme and block all websites that are on the government censor's block list.
The Russian State Duma passed the new piece of legislation earlier this year and it was quickly signed into law by President Vladimir Putin.
Most of the major international VPN providers are not expected to comply with the law. Some, including Private Internet Access (PIA), have already confirmed this. PIA also removed all its servers from Russia last year after a number were seized without prior warning. It remains to be seen how the Russian state will try to sanction non-compliant providers, but their websites can certainly be expected to be added to the blacklist.
Online rights activists have also been quick to condemn the new law. Eva Galperin, the Director of Cybersecurity at the Electronic Frontier Foundation (EFF), said she believed the law would only be applied selectively. It is expected that the Russian regime will use the new powers to target opposition activists ahead of next year's Presidential Elections. Overseas companies and businesspeople based in Russia who use VPNs are unlikely to see their service affected.
Update: Small Russian ISPs won't be able to afford new state blocking requirements
A draft order of Roskomnadzor, Russia's federal internet censor, requires the most expensive and invasive method of blocking: deep packet inspection (DPI) of all passing traffic. Because of its high cost, Roskomnadzor's requirements will lead small and medium-sized providers to sell their businesses to large ones, experts say.
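The cost gap the experts describe can be illustrated with a minimal sketch (hypothetical addresses and patterns, not Roskomnadzor's actual system): IP-blocklist filtering needs only one header lookup per packet, while DPI-style blocking must scan every byte of every payload.

```python
# Hypothetical blocklist entries, for illustration only.
BLOCKED_IPS = {"203.0.113.7"}
BLOCKED_PATTERNS = [b"forbidden-site.example"]

def ip_filter(dst_ip: str) -> bool:
    """Cheap blocking: a single hash lookup on the packet header."""
    return dst_ip in BLOCKED_IPS

def dpi_filter(payload: bytes) -> bool:
    """DPI-style blocking: every byte of the payload must be scanned."""
    return any(pattern in payload for pattern in BLOCKED_PATTERNS)

packet = {
    "dst": "198.51.100.9",
    "payload": b"GET / HTTP/1.1\r\nHost: forbidden-site.example\r\n",
}

print(ip_filter(packet["dst"]))       # False: destination IP not listed
print(dpi_filter(packet["payload"]))  # True: pattern found inside payload
```

The second check catches traffic the first misses, but it requires hardware that can inspect full payloads at line rate, which is the expense small providers reportedly cannot bear.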
The law on the prohibition of VPNs, enacted in Russia, has not yet affected access to prohibited sites. As before, you can still access them via anonymizers, VPNs and Tor.
Analysts at Roskomsvoboda, a public organization whose activities are aimed at counteracting censorship on the internet, explain that users will not see any effects before December anyway, as the law allows VPN providers 36 days to respond to blocking requests before any action is taken against them.
Some well-known VPN services have already reacted to this next round of censorship in the Russian segment of the internet. Representatives of ExpressVPN expressed surprise, asking how exactly Russia intends to implement the new regulation in practice.
ExpressVPN will certainly never agree with any standards that would jeopardize the ability of our product to protect the digital rights of users, the company remarked.
TunnelBear noted that the service belongs to a Canadian company and hence operates according to Canadian law, which does not limit it in any way.
The VPN service TorGuard also does not intend to cooperate with Roskomnadzor, directly declaring that it will refuse to block sites if approached with such requests.
The Senate Commerce Committee just approved a slightly modified version of SESTA, the Stop Enabling Sex Traffickers Act ( S. 1693 ).
SESTA was and continues to be a deeply flawed bill. It is intended to weaken the section commonly known as CDA 230 or simply Section 230, one of the most important laws protecting free expression online . Section 230 says that for purposes of
enforcing certain laws affecting speech online, an intermediary cannot be held legally responsible for any content created by others.
SESTA would create an exception to Section 230 for laws related to sex trafficking, thus exposing online platforms to an immense risk of civil and criminal litigation. What that really means is that online platforms would be forced to take drastic
measures to censor their users.
Some SESTA supporters imagine that compliance with SESTA would be easy--that online platforms would simply need to use automated filters to pinpoint and remove all messages in support of sex trafficking and leave everything else untouched. But
such filters do not and cannot exist: computers aren't good at recognizing subtlety and context, and with severe penalties at stake, no rational company would trust them to.
Online platforms would have no choice but to program their filters to err on the side of removal, silencing a lot of innocent voices in the process. And remember, the first people silenced are likely to be trafficking victims themselves: it would
be a huge technical challenge to build a filter that removes sex trafficking advertisements but doesn't also censor a victim of trafficking telling her story or trying to find help.
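A minimal sketch makes the over-blocking problem concrete (the keywords and posts here are hypothetical, not any platform's real filter): a keyword-based filter cannot distinguish an advertisement from a victim asking for help.

```python
# Hypothetical filter terms, for illustration only.
BANNED_KEYWORDS = {"trafficking", "escort"}

def naive_filter(post: str) -> bool:
    """Flag a post for removal if it contains any banned keyword."""
    words = post.lower().split()
    return any(keyword in words for keyword in BANNED_KEYWORDS)

ad = "escort services available tonight"
victim = "i escaped trafficking and need help finding a shelter"
benign = "great article about online speech"

print(naive_filter(ad))      # True  -- the intended removal
print(naive_filter(victim))  # True  -- the victim is silenced too
print(naive_filter(benign))  # False
```

With severe penalties attached to a miss, a platform's only rational tuning is toward more false positives like the second post, which is exactly the silencing effect described above.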
Along with the Center for Democracy and Technology, Access Now, Engine, and many other organizations, EFF signed a letter yesterday urging the Commerce Committee to change course. We explained the silencing effect that SESTA would have on online speech:
Pressures on intermediaries to prevent trafficking-related material from appearing on their sites would also likely drive more intermediaries to rely on automated content filtering tools, in an effort to conduct comprehensive content moderation at
scale. These tools have a notorious tendency to enact overbroad censorship, particularly when used without (expensive, time-consuming) human oversight. Speakers from marginalized groups and underrepresented populations are often the hardest hit by
such automated filtering.
It's ironic that supporters of SESTA insist that computerized filters can serve as a substitute for human moderation: the improvements we've made in filtering technologies in the past two decades would not have happened without the safety provided
by a strong Section 230, which provides legal cover for platforms that might harm users by taking down, editing or otherwise moderating their content (in addition to shielding platforms from liability for illegal user-generated content).
We find it disappointing, but not necessarily surprising, that the Internet Association has endorsed this deeply flawed bill . Its member companies--many of the largest tech companies in the world--will not feel the brunt of SESTA in the same way
as their smaller competitors. Small Internet startups don't have the resources to police every posting on their platforms, which will uniquely pressure them to censor their users--that's particularly true for nonprofit and noncommercial platforms
like the Internet Archive and Wikipedia. It's not surprising when a trade association endorses a bill that would give its own members a massive competitive advantage.
If you rely on online communities in your day-to-day life; if you believe that your right to speak matters just as much on the web as on the street; if you hate seeing sex trafficking victims used as props to advance an agenda of censorship;
please take a moment to write your members of Congress and tell them to oppose SESTA .