Theresa May is to urge internet companies to take down extremist content being shared by terrorist groups within two hours,
during a summit with the French president and the Italian prime minister.
Home Office analysis shows that Isis shared 27,000 links to extremist content in the first five months of 2017 and that, once shared, the material remained available online for an average of 36 hours. The government would like that reduced to two
hours, and ultimately it is urging companies to develop technology to spot material early and prevent it being shared in the first place.
The issue is of particular concern after last week's attack on a London Underground train at Parsons Green, and follows a British thinktank report which found that online jihadist propaganda attracts more clicks in Britain than anywhere else in Europe.
Extremist material is shared very rapidly when it is first published in what experts call a starburst effect: more than two-thirds of shares take place within the first two hours, so reducing the amount of time the material is visible can
drastically squeeze the number of users who see it.
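The effect of a faster takedown deadline can be illustrated with a toy model. This is only an illustrative sketch, not the experts' actual model: it assumes sharing activity decays exponentially, calibrated so that two-thirds of shares fall within the first two hours, as described above.

```python
import math

# Calibrate an exponential-decay model of sharing activity so that
# two-thirds of all shares happen within the first two hours
# (the "starburst" pattern described above).
rate = math.log(3) / 2  # solves 1 - exp(-2 * rate) = 2/3

def share_fraction(hours_visible: float) -> float:
    """Fraction of total shares that occur while the material is still online."""
    return 1 - math.exp(-rate * hours_visible)

print(f"visible 36h: {share_fraction(36):.1%} of shares")  # effectively all of them
print(f"visible  2h: {share_fraction(2):.1%} of shares")
```

Under these assumptions, cutting visibility from 36 hours to two hours reduces the share activity a piece of material can accumulate from essentially all of it to about two-thirds, which is the squeeze the government is counting on.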
A government source noted that once an internet user has shown interest in extremist content, the web giants' algorithms keep pushing similar material towards them online. "We want them to break the echo chambers," he said.
Allegedly Islamophobic terms used by Chinese Internet users to stigmatize Muslims have been censored by authorities on Chinese social media amid a
backlash against national policies considered overly favorable to Muslim minorities.
Searches for "green religion" and "peaceful religion", terms often used by Internet users to refer to Islam and to circumvent censorship of online speech, showed no results on China's Weibo microblog. Posts containing the phrases cannot be
published, citing violations of Weibo's complaint-related rules. More offensive insults against Islam are also blocked in Weibo's search engine.
Discontent with, and fear of, Muslims have been on the rise on China's Internet in recent years. There is unease at Chinese authorities' preferential policies favouring ethnic minorities, especially Muslim groups.
To achieve "national unity and social stability", ethnic minorities including the Hui and Uyghur peoples enjoy favorable policies, including extra points in China's college entrance examinations, more lenient family planning rules and a guaranteed share of positions in government. These policies are aimed at helping ethnic minorities who lag behind in economic and educational development, and are intended to accelerate development toward greater ethnic unity.
Al Jazeera is a Middle Eastern news service based in Qatar that competes with the likes of BBC World News. It
seems to provide a balanced view of world news, perhaps modelled on the BBC, as opposed to more propaganda-based services along the lines of RT from Russia.
Balanced reporting on Middle Eastern affairs doesn't seem to go down well in Middle Eastern countries, which would clearly prefer something a little more under their control. So these other countries are putting a lot of pressure on Qatar to silence the channel.
The latest example of such censorship pressure is that the social network service Snap has been censored in Saudi Arabia over its inclusion of Al Jazeera in its Discover app.
Snap has now complied with Saudi censorship demands to remove news outlet Al Jazeera's curated content from its Discover Publisher Channels in Saudi Arabia. Discover is Snap's digital media selection of content tailored to a young audience.
Al Jazeera is less than thrilled. Acting Director-General Mostefa Souag said in a statement:
We find Snapchat's action to be alarming and worrying. This sends a message that regimes and countries can silence any voice or platform they don't agree with by exerting pressure on the owners of social media platforms and content distribution
companies. This step is a clear attack on the rights of journalists and media professionals to report and cover stories freely from around the world.
Companies including Google and Facebook could face repressive legislation if they don't proactively remove
content from their platforms that is deemed illegal. That's according to draft EU censorship rules due to be published at the end of the month, which will require internet service providers to significantly step up their actions to address the issue.
In the current climate, creators and distributors are forced to play a giant game of whac-a-mole to limit the unlicensed spread of their content on the Internet.
The way the law stands today in the United States, EU, and most other developed countries, copyright holders must wait for content to appear online before sending targeted takedown notices to hosts, service providers, and online platforms.
After sending several billion of these notices, patience is wearing thin, so a new plan is beginning to emerge. Rather than taking down content after it appears, major entertainment industry groups would prefer companies to take proactive action.
The upload filters currently under discussion in Europe are a prime example, but are already causing controversy.
The guidelines are reportedly non-binding but further legislation in this area isn't being ruled out for Spring 2018, if companies fail to address the EU's demands.
Interestingly, however, a Commission source told Reuters that any new legislation would not change the liability exemption for online platforms. Maintaining these so-called safe harbors is a priority for online giants such as Google and Facebook; anything less would almost certainly be a deal-breaker.
The guidelines, due to be published at the end of September, will also encourage online platforms to publish transparency reports. These should detail the volume of notices received and actions subsequently taken. The guidelines contain some
safeguards against excessive removal of content, such as giving its owners a right to contest such a decision.
EFF opposes the Senate's Stop Enabling Sex Traffickers Act (S. 1693) ("SESTA"), and its House counterpart the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865), because they would open up liability for Internet intermediaries--the ISPs, web hosting companies, websites, and social media platforms that enable users to share and access content online--by amending Section 230's immunity for user-generated content (47 U.S.C. § 230). While both bills have the laudable goal of curbing sex trafficking, including of minor children, they would greatly weaken Section 230's protections for online free speech and innovation.
Proponents of SESTA and its House counterpart view Section 230 as a broken law that prevents victims of sex trafficking from seeking justice. But Section 230 is not broken. First, existing federal criminal law allows federal prosecutors to go
after bad online platforms, like Backpage.com, that knowingly play a role in sex trafficking. Second, courts have allowed civil claims against online platforms--despite Section 230's immunity--when a platform had a direct hand in creating the
illegal user-generated content.
Thus, before Congress fundamentally changes Section 230, lawmakers should ask whether these bills are necessary to begin with.
Why Section 230 Matters
Section 230 is the part of the Telecommunications Act of 1996 that provides broad immunity to Internet intermediaries from liability for the content that their users create or post (i.e., user-generated content or third-party content).
Section 230 can be credited with creating today's Internet--with its abundance of unique platforms and services that enable a vast array of user-generated content. Section 230 has provided the legal buffer online entrepreneurs need to experiment
with new ways for users to connect online--and this is just as important for today's popular platforms with billions of users as it is for startups.
Congress' rationale for crafting Section 230 is just as applicable today as when the law was passed in 1996: if Internet intermediaries are not largely shielded from liability for content their users create or post--particularly given their huge
numbers of users--existing companies risk being prosecuted or sued out of existence, and potential new companies may not even enter the marketplace for fear of being prosecuted or sued out of existence (or because venture capitalists fear this).
This massive legal exposure would dramatically change the Internet as we know it: it would not only thwart innovation in online platforms and services, but free speech as well. As companies fall or fail to be launched in the first place, the
ability of all Internet users to speak online would be disrupted. For those companies that remain, they may act in ways that undermine the open Internet. They may act as gatekeepers by preventing whole accounts from being created in the first
place and pre-screening content before it is even posted. Or they may over-censor already posted content, pursuant to very strict terms of service in order to avoid the possibility of any user-generated content on their platforms and services that
could get them into criminal or civil hot water. Again, this would be a disaster for online free speech. The current proposals to gut Section 230 raise the exact same problems that Congress dealt with in 1996.
By guarding online platforms from being held legally responsible for what thousands or millions or even billions of users might say online, Section 230 has protected online free speech and innovation for more than 20 years.
But Congress did not create blanket immunity. Section 230 reflects a purposeful balance that permits Internet intermediaries to be on the hook for their users' content in certain carefully considered circumstances, and the courts have expanded
upon these rules.
Section 230 Does Not Bar Federal Prosecutors From Targeting Criminal Online Platforms
Section 230 has never provided immunity to Internet intermediaries for violations of federal criminal law--like the federal criminal sex trafficking statute (
18 U.S.C. § 1591
). In 2015, Congress passed the SAVE Act, which amended Section 1591 to expressly include "advertising" as a criminal action. Congress intended to go after websites that host ads knowing that such ads involve sex trafficking. If these
companies violate federal criminal law, they can be criminally prosecuted in federal court alongside their users who are directly engaged in sex trafficking.
In a parallel context, a federal judge in the Silk Road case
correctly ruled that Section 230 did not provide immunity against federal prosecution to the operator of a website that hosted other people's ads for illegal drugs.
By contrast, Section 230 does provide immunity to Internet intermediaries from liability for user-generated content under state criminal law. Congress deliberately chose not to expose these companies to criminal prosecutions in 50
different states for content their users create or post. Congress fashioned this balance so that federal prosecutors could bring to justice culpable companies while still ensuring that free speech and innovation could thrive online.
However, SESTA and its House counterpart would expose Internet intermediaries to liability under state criminal sex trafficking statutes. Although EFF understands the desire of state attorneys general to have more tools at their disposal to combat
sex trafficking, such an amendment to Section 230 would upend the carefully crafted policy balance Congress embodied in Section 230.
More fundamentally, it cannot be said that Section 230's current approach to criminal law has failed. A Senate report released earlier this year and a recent news article both uncovered information suggesting that Backpage.com not only knew that its users were posting sex trafficking ads to its website, but that the company also took affirmative steps to help those ads get posted. Additionally, it has been reported that a federal grand jury has been empaneled in Arizona to investigate Backpage.com. Congress should wait and see what comes of these developments before it exposes Internet intermediaries to additional criminal liability.
Civil Litigants Are Not Always Without a Remedy Against Internet Intermediaries
Section 230 provides immunity to Internet intermediaries from liability for user-generated content under civil law--whether federal or state civil law. Again, Congress made this deliberate policy choice to protect online free speech and innovation.
Congress recognized that exposing companies to civil liability would put the Internet at risk even more than criminal liability because: 1) the standard of proof in criminal cases is "beyond a reasonable doubt," whereas in civil cases it
is merely "preponderance of the evidence," making the likelihood higher that a company will lose a civil case; and 2) criminal prosecutors as agents of the government tend to exercise more restraint in filing charges, whereas civil
litigants often exercise less restraint in suing other private parties, making the likelihood higher that a company will be sued in the first place for third-party content.
However, Section 230's immunity against civil claims is not absolute. The courts have interpreted this civil immunity as creating a presumption of civil immunity that plaintiffs can rebut if they have evidence that an Internet intermediary
did not simply host illegal user-generated content, but also had a direct hand in creating the illegal content. In a seminal 2008 decision, the U.S. Court of Appeals for the Ninth Circuit in
Fair Housing Council v. Roommates.com
held that a website that helped people find roommates violated fair housing laws by "inducing third parties to express illegal preferences." The website had required users to answer profile questions related to personal characteristics
that may not be used to discriminate in housing (e.g., gender, sexual orientation, and the presence of children in the home). Thus, the court held that the website lost Section 230 civil immunity because it was "directly involved with
developing and enforcing a system that subjects subscribers to allegedly discriminatory housing practices." Although EFF is concerned with some of the implications of the Roommates.com decision and its potential to chill online free speech and innovation, it is the law.
Thus, even without new legislation, victims of sex trafficking may bring civil cases against websites or other Internet intermediaries under the federal civil cause of action (
18 U.S.C. § 1595
), and overcome Section 230 civil immunity if they can show that the websites had a direct hand in creating ads for illegal sex. As mentioned above, the Senate report and recent news article both strongly indicate that Backpage.com would not enjoy Section 230 civil immunity today.
SESTA and its House counterpart would expose Internet intermediaries to liability under federal and state civil sex trafficking laws. Removing Section 230's rebuttable presumption of civil immunity would, as with the criminal amendments, disrupt
the carefully crafted policy balance found in Section 230. Moreover, victims of sex trafficking can already bring civil suits against the pimps and "johns" who harmed them, as these cases against the direct perpetrators do not implicate Section 230 at all.
Therefore, the bills' amendments to Section 230 are not necessary--because Section 230 is not broken. Rather, Section 230 reflects a delicate policy balance that allows the most egregious online platforms to bear responsibility along with their
users for illegal content, while generally preserving immunity so that free speech and innovation can thrive online.
SESTA opens websites up to civil suits over user posts that promote sex trafficking. "What you're going to end up seeing is mass
lawsuits," said Julie Samuels of Engine Advocacy, a nonprofit organization perhaps best known for its work around patent reform. She expressed concern that the lawsuits would sweep up legitimate good actors and end up crushing small startups.
It's obvious why companies like Google, Facebook, and Twitter would fear SESTA. But most tech companies have been circumspect about their opposition to the bill, choosing to voice their concerns by proxy through trade groups like the Internet Association, which includes Google, Facebook, Microsoft, Amazon, and Twitter among its members.
The problem, of course, is that they're taking a stand against a bill that purports to fight sex trafficking. "Obviously no one supports human trafficking," said Samuels. "But you're going to start to play with fire when you play
with how the internet works."
Facebook touts its partnership with outside fact-checkers as a key prong in its fight against fake news, but a major new Yale University
study finds that fact-checking and then tagging inaccurate news stories on social media doesn't work.
The study, reported for the first time by POLITICO, found that tagging false news stories as disputed by third-party fact-checkers has only a small impact on whether readers perceive their headlines as true. Overall, the existence of disputed
tags made participants just 3.7 percentage points more likely to correctly judge headlines as false, the study said.
The researchers also found that, for some groups--particularly Trump supporters and adults under 26--flagging bogus stories could actually end up increasing the likelihood that users will believe fake news. This is because not all fake stories are
fact-checked, and the absence of a warning tends to add to the credibility of an unchecked, but fake, story.
Researchers Gordon Pennycook & David G. Rand of Yale University write in their abstract:
Assessing the effect of disputed warnings and source salience on perceptions of fake news accuracy
What are effective techniques for combatting belief in fake news? Tagging fake articles with "Disputed by 3rd party fact-checkers" warnings and making articles' sources more salient by adding publisher logos are two approaches that have received
large-scale rollouts on social media in recent months.
Here we assess the effect of these interventions on perceptions of accuracy across seven experiments [involving 7,534 people].
With respect to disputed warnings, we find that tagging articles as disputed did significantly reduce their perceived accuracy relative to a control without tags, but only modestly (d=.20, a 3.7 percentage point decrease in headlines judged as accurate).
Furthermore, we find a backfire effect -- particularly among Trump supporters and those under 26 years of age -- whereby untagged fake news stories are seen as more accurate than in the control.
We also find a similar spillover effect for real news, whose perceived accuracy is increased by the presence of disputed tags on other headlines.
With respect to source salience, we find no evidence that adding a banner with the logo of the headline's publisher had any impact on accuracy judgments whatsoever.
Together, these results suggest that the currently deployed approaches are not nearly enough to effectively undermine belief in fake news, and new (empirically supported) strategies are needed.
Presented with the study, a Facebook spokesperson questioned the researchers' methodology--pointing out that the study was performed via Internet survey, not on Facebook's platform--and added that fact-checking is just one part of the company's
efforts to combat fake news. Those include disrupting financial incentives for spammers, building new products and helping people make more informed choices about the news they read, trust and share, the spokesperson said.
The Facebook spokesperson added that the articles created by the third-party fact-checkers have uses beyond creating the disputed tags. For instance, links to the fact checks appear in related-article stacks beside other similar stories that
Facebook's software identifies as potentially false. They are powering other systems that limit the spread of hoaxes and misinformation, the spokesperson said.
YouTube's algorithms, which are used to censor and demonetize videos on the platform, are killing its creators, according
to a report.
Most of the initial censorship is left to algorithms, which presumably flag a video for censorship as soon as they detect something politically incorrect, leading to the over-censorship underpinning the complaints.
Creators complain that YouTube has set up a slow and inefficient appeals system to counter cases of unfair censorship. Ad-disabled videos on YouTube must get 1,000 views in the span of seven days just to qualify for a review.
This approach hurts smaller YouTube channels, because it removes the ability for creators to make money during the most important stage of a YouTube video's life cycle: the first seven days, the report explains. Typically, videos receive 70% or more
of their views in the first seven days, according to multiple creators.
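The arithmetic behind this complaint can be sketched in a few lines. The 1,000-view threshold and the 70% first-week figure come from the report above; the steady daily view rates are hypothetical examples.

```python
REVIEW_THRESHOLD = 1000       # views needed within the window to qualify for a review
REVIEW_WINDOW_DAYS = 7
FIRST_WEEK_VIEW_SHARE = 0.70  # share of lifetime views in the first week, per creators

def qualifies_for_review(daily_views: float) -> bool:
    """Whether a steadily-viewed video reaches the threshold inside the window."""
    return daily_views * REVIEW_WINDOW_DAYS >= REVIEW_THRESHOLD

def revenue_lost_if_demonetized_first_week() -> float:
    """Fraction of lifetime ad revenue forfeited, assuming revenue tracks views."""
    return FIRST_WEEK_VIEW_SHARE

print(qualifies_for_review(100))  # False: 700 views in 7 days, never reviewed
print(qualifies_for_review(150))  # True: 1,050 views in 7 days
```

Under these assumptions, a channel averaging fewer than about 143 views a day can never qualify for a review at all, and even a video that does qualify has already forfeited roughly 70% of its potential revenue by the time the window closes.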
Some of the platform's most popular creators are saying that the majority of their videos are being affected, dramatically reducing their revenue. Last week, liberal interviewer Dave Rubin, who has interviewed dozens of prominent political
figures, announced that a large percentage of his videos had been demonetized, cutting him off from making money on the millions of views he typically gets, perhaps due to the politically incorrect leanings of his guests, e.g. ex-Muslim
Ayaan Hirsi Ali, former Minnesota Governor Jesse Ventura, feminist activist and scholar Christina Hoff Sommers, and Larry King.
YouTube issued a response saying little, except that it hopes the algorithms will get better over time.
The UK is just about to introduce internet censorship for porn via onerous and economically unviable
age verification requirements. In what may be a godsend for porn companies, a parliamentary group is considering extending the age verification requirements to a wider range of age-restricted products sold on the internet. If a broader group of
companies becomes involved, it may encourage a more technically feasible and cost-effective solution to be found.
XBIZ writes that online companies that sell e-cigarettes, knives, alcohol and pharmaceuticals, which typically would require identification at brick-and-mortar stores, could be regulated under the law, which focused originally on mandatory age
verification for the consumption of commercial adult content.
London attorney Myles Jackman, who is also the legal director of the Open Rights Group, told XBIZ that the likely expansion of the Digital Economy Act to include other products and services sold online beyond pornography is predictably inevitable.
In fact, later this month the London-based Digital Policy Alliance, a cross party group of parliamentarians, plans on addressing the wider application of age-gating to other sectors at a formal meeting on September 19.
XBIZ also notes that the U.K. has yet to appoint an official regulator, although fingers have pointed to the British Board of Film Classification (BBFC) to assume the role. A decision over the appointment will be announced in coming weeks.
A man who sold VPN software via a website has been sentenced to nine months in prison by China's Supreme People's Court. The decision notes that the
software supplied by the man allowed the public to circumvent China's Great Firewall while granting access to foreign websites.
Back in January, China's Ministry of Industry and Information Technology announced that it would take measures to strengthen network information security management and would embark on a nationwide Internet network access services clean-up.
One of the initial targets was reported as censorship-busting VPNs, which allow citizens to evade the so-called Great Firewall of China. Operating such a service without a corresponding telecommunications business license would constitute an
offense, the government said.
Then early July, a further report suggested that the government would go a step further by ordering ISPs to block VPNs altogether. Apple then banned VPN software and services from its app store.
With an effort clearly underway to target VPNs, news today from China suggests that the government is indeed determined to tackle the anti-censorship threat presented by such tools. According to local media, Chinese man Deng Mouwei who ran a small
website through which he sold VPN software, has been sentenced to prison. He set up a website to sell VPNs; just two products were on offer, but this was enough to spur the authorities into action.
YouTube has introduced a new tier of censorship designed to restrict the audience for videos deemed to be
inappropriate or offensive to some audiences.
The site is now putting videos into a limited state if they are deemed controversial enough to be considered objectionable, but not hateful, pornographic or violent enough to be banned altogether.
This policy was announced several months ago but has come into force in the past week, prompting anger among members of the YouTube community.
YouTube defines Limited Videos as follows:
Our Community Guidelines prohibit hate speech that either promotes violence or has the primary purpose of inciting hatred against individuals or groups based on certain attributes. YouTube also prohibits content intended to recruit for terrorist
organizations, incite violence, celebrate terrorist attacks, or otherwise promote acts of terrorism. Some borderline videos, such as those containing inflammatory religious or supremacist content without a direct call to violence or a primary
purpose of inciting hatred, may not cross these lines for removal. Following user reports, if our review teams determine that a video is borderline under our policies, it may have some features disabled.
These videos will remain available on YouTube, but will be placed behind a warning message, and some features will be disabled, including comments, suggested videos, and likes. These videos are also not eligible for ads.
Having features disabled on a video will not create a strike on your account.
Videos which are put into a limited state cannot be embedded on other websites. They also cannot be easily published on social media using the usual share buttons and other users cannot comment on them. Crucially, the person who made the video
will no longer receive any payment.
Earlier this week, Julian Assange wrote:
'Controversial' but contract-legal videos [i.e. videos which do not break YouTube's terms and conditions] cannot be liked, embedded or earn [money from advertising revenue].
What's interesting about the new method deployed is that it is a clear attempt at social engineering. It isn't just turning off the ads. It's turning off the comments, embeds, etc too. Everything possible to strangle the reach without deleting the video.
As of October 1, 2017, Chinese netizens who have not registered their user accounts with online platforms under a new real name system will not be able to post comments on online content, while bans await trouble-makers.
The Regulation on the Management of Internet Comments was announced by the Cyberspace Administration of China on August 25. The regulation specifies that platforms that provide services for netizens to comment on original content, including
films, posts, online games or news, should force users to provide their authentic identity via an individual user account system before posting. Platform operators should not offer such services to those who have not verified their identity.
The regulation will dramatically reduce the space for online comments, as a large number of unauthenticated users will not be able to write original posts or leave comments. Moreover, many platforms will be unable to bear the burden of identity verification.
According to Article 2 of the regulation, commenting services refer to websites, mobile applications, interactive platforms, news sites, and other social platforms that allow or facilitate users to create original content, reply to posts, leave
comments on news threads or other items in the form of written text, symbols, emojis, images, voice messages or video.
The responsibilities of comment service operators, according to Article 5, include the verification of user identities, the setting up of a comment management system to pre-screen comments on news, preventing the spread of illegal information and
reporting comments to the authorities.
Controversially, the regulation also specifies in Article 9 that comment service operators should manage their users by rating their social credit, an algorithm to measure a person's overall 'goodness' as a citizen.
Those with low credit should be blacklisted from posting and prevented from registering new accounts to use the service. At the same time, state, province and city-level cyberspace affairs offices will set up a management system to evaluate the
overall social credit of comment service operators on a regular basis.
The Orwellian social credit system for regulating internet users' activities was revealed in 2014, and the Chinese government authorized a number of credit service agencies to collect, evaluate and manage people's credit information the following year.
According to the Chinese government's Planning Outline for the Construction of a Social Credit System, the system aims to measure and enhance 'trust' between and among government, commercial sectors and citizens and to strengthen sincerity in
government affairs, commercial sincerity, social sincerity and the construction of judicial credibility. However, the allocation of individual credit is not transparent and the current regulation on comment services indicates that individual
online speech is a key factor in its calculation.
Thus far only national and large-scale social media and content service operators have implemented real name registration and they have not introduced measures to penalize unauthenticated users beyond limiting the circulation of their posts.
The majority of small-to-medium-size local websites and forums have not implemented real name registration because they simply don't have the capital and infrastructure to do so. The new regulation compels such websites to shut down their comment sections.
Tech-blogger William Long, who has discussed the issue with regulators in the past, wrote in his blog:
I have discussed with the relevant authorities how small forums and websites can implement real name registration. Their view is, they can either shut the comment section down or ask their users to verify their identity by providing mobile phone numbers.
Owners of small websites can only afford a few hundred yuan to hire a server. The cost of mobile verification is RMB 6 cents per message. They would have to spend RMB 6 yuan per 100 comments. If their competitors deliberately overload them by
posting a few thousand comments a day, they will not be able to afford the cost [of verification]. In the end they will be forced to ban comments.
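The cost arithmetic in Long's quote can be checked with a small calculation. The per-message price is the figure he cites; the 3,000-comment flooding volume is the hypothetical attack he describes.

```python
SMS_COST_CENTS = 6  # RMB 6 cents (0.06 yuan) per verification message, per the quote

def daily_cost_yuan(comments: int) -> float:
    """Daily SMS-verification cost in yuan for a given comment volume."""
    return comments * SMS_COST_CENTS / 100

print(daily_cost_yuan(100))   # 6.0 yuan per 100 comments, matching the quote
print(daily_cost_yuan(3000))  # 180.0 yuan per day under a flooding attack
```

At a few hundred yuan a month for hosting, a hostile flood of a few thousand comments a day would cost a small site more in verification fees than its entire server budget, which is why Long expects such sites simply to ban comments.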