Culture Secretary Matt Hancock has issued the following press release from the Department for Digital, Culture, Media and Sport:
New laws to make social media safer
New laws will be created to make sure that the UK is the safest place in the world to be online, Digital Secretary Matt Hancock has announced.
The move is part of a series of measures included in the government's response to the Internet Safety Strategy green paper, published today.
The Government has been clear that much more needs to be done to tackle the full range of online harm.
Our consultation revealed users feel powerless to address safety issues online and that technology companies operate without sufficient oversight or transparency. Six in ten people said they had witnessed inappropriate or harmful content online.
The Government is already working with social media companies to protect users and while several of the tech giants have taken important and positive steps, the performance of the industry overall has been mixed.
The UK Government will therefore take the lead, working collaboratively with tech companies, children's charities and other stakeholders to develop the detail of the new legislation.
Matt Hancock, DCMS Secretary of State, said:
Digital technology is overwhelmingly a force for good
across the world and we must always champion innovation and change for the better. At the same time I have been clear that we have to address the Wild West elements of the Internet through legislation, in a way that supports innovation. We
strongly support technology companies to start up and grow, and we want to work with them to keep our citizens safe.
People increasingly live their lives through online platforms so it's more important than ever that people are safe and parents can have confidence they can keep their children from harm. The measures we're taking forward today will help make
sure children are protected online and balance the need for safety with the great freedoms the internet brings just as we have to strike this balance offline.
DCMS and Home Office will jointly work on a White Paper with other government departments, to be published later this year. This will set out legislation to be brought forward that tackles a range of both legal and illegal harms, from
cyberbullying to online child sexual exploitation. The Government will continue to collaborate closely with industry on this work, to ensure it builds on progress already made.
Home Secretary Sajid Javid said:
Criminals are using the internet to further their exploitation and abuse of children, while terrorists are abusing these platforms to recruit people and incite atrocities. We need to protect our communities from these heinous crimes and vile
propaganda and that is why this Government has been taking the lead on this issue.
But more needs to be done and this is why we will continue to work with the companies and the public to do everything we can to stop the misuse of these platforms. Only by working together can we defeat those who seek to do us harm.
The Government will be considering where legislation will have the strongest impact, for example whether transparency or a code of practice should be underwritten by legislation, but also a range of other options to address both legal and illegal harms.
We will work closely with industry to provide clarity on the roles and responsibilities of companies that operate online in the UK to keep users safe.
The Government will also work with regulators, platforms and advertising companies to ensure that the principles that govern advertising in traditional media -- such as preventing companies targeting unsuitable advertisements at children -- also
apply and are enforced online.
It seems that the latest call for internet censorship is driven by some sort of revenge for having been snubbed by the internet companies.
The culture secretary said he does not have enough power to police social media firms after admitting only four of 14 invited to talks showed up.
Matt Hancock told the BBC it had given him a big impetus to introduce new laws to tackle what he has called the internet's Wild West culture.
He said self-policing had not worked and legislation was needed.
He told BBC One's Andrew Marr Show, presented by Emma Barnett, that the government just doesn't know how many of the millions of children using social media are not old enough for an account, and that he was very worried about age verification. He told the programme he hopes to get to a position where all users of social media have to have their age verified.
Two government departments are working on a White Paper expected to be brought forward later this year. Asked about the same issue on ITV's Peston on Sunday, Hancock said the government would be legislating in the next couple of years because we want to get the details right.
For its updated news application, Google is claiming it is using artificial intelligence as part of an effort to weed out
disinformation and feed users with viewpoints beyond their own filter bubble.
Google chief Sundar Pichai, who unveiled the updated Google News earlier this month, said the app now surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. It marks Google's latest
effort to be at the centre of online news and includes a new push to help publishers get paid subscribers through the tech giant's platform.
In reality Google has just banned news from the likes of the Daily Mail whilst all the 'trusted sources' are just the likes of the politically correct papers such as the Guardian and Independent.
According to product chief Trystan Upstill, the news app uses the best of artificial intelligence to find the best of human intelligence -- the great reporting done by journalists around the globe. While the app will enable users to get
personalised news, it will also include top stories for all readers, aiming to break the so-called filter bubble of information designed to reinforce people's biases.
Nicholas Diakopoulos, a Northwestern University professor specialising in computational and data journalism, said the impact of Google's changes remains to be seen. Diakopoulos said algorithmic and personalised news can be positive for engagement
but may only benefit a handful of news organisations. His research found that Google concentrates its attention on a relatively small number of publishers: it's quite concentrated. Google's effort to identify and prioritise trusted news
sources may also be problematic, according to Diakopoulos. Maybe it's good for the big guys, or the (publishers) who have figured out how to game the algorithm, he said. But what about the local news sites, what about the new news sites that
don't have a long track record?
I tried it out and no matter how many times I asked it not to provide stories about the royal wedding and the cup final, it just served up more of the same. And indeed as Diakopoulos said, all it wants to do is push news stories from the
politically correct papers, most notably the Guardian. I can't see it proving very popular. I'd rather have an app that feeds me what I actually like, not what I should like.
Beginning on May 10, Spotify users will no longer be able to find R. Kelly's music on any of the streaming service's editorial or algorithmic playlists. Under the terms of a new public hate content and hateful conduct policy Spotify is
putting into effect, the company will no longer promote the R&B singer's music in any way, removing his songs from flagship playlists like RapCaviar, Discover Weekly or New Music Friday, for example, as well as its other genre- or mood-based playlists.
"We are removing R. Kelly's music from all Spotify owned and operated playlists and algorithmic recommendations such as Discover Weekly," Spotify told Billboard in a statement. "His music will still be available on the
service, but Spotify will not actively promote it. We don't censor content because of an artist's or creator's behavior, but we want our editorial decisions -- what we choose to program -- to reflect our values. When an artist or creator does
something that is especially harmful or hateful, it may affect the ways we work with or support that artist or creator."
Over the past several years, Kelly has been accused by multiple women of sexual violence, coercion and running a "sex cult," including two additional women who came forward to Buzzfeed this week. Though he has never been convicted of a
crime, he has come under increasing scrutiny over the past several weeks, particularly with the launch of the #MuteRKelly movement at the end of April. Kelly has vociferously defended himself, saying the accusations against him are an "attempt to
distort my character and to destroy my legacy." And while RCA Records has thus far not dropped Kelly from his recording contract, Spotify has distanced itself from promoting his music.
New Zealand's Chief Censor David Shanks warned parents and caregivers of vulnerable children and
teenagers to be prepared for the release of Season 2 of Netflix's 13 Reasons Why, scheduled to screen this week on Friday, May 18, at 7pm.
The Office of Film and Literature Classification consulted with the Mental Health Foundation in classifying 13 Reasons Why: Season 2 as RP18 with a warning that it contains rape, suicide themes, drug use, and bullying. Shanks said:
"There is a strong focus on rape and suicide in Season 2, as there was in Season 1. We have told Netflix it is really important to warn NZ audiences about that."
"Rape is an ugly word for an ugly act. But young New Zealanders have told us that if a series contains rape -- they want to know beforehand."
An RP18 classification means that someone under 18 must be supervised by a parent or guardian when viewing the series. A guardian is considered to be a responsible adult (18 years and over), for example a family member or teacher who can provide
guidance. Shanks said:
"This classification allows young people to access it in a similar fashion to the first season, while requiring the support from an adult they need to stay safe and to process the challenging topics in the series."
Netflix is required to clearly display the classification and warning.
"If a child you care for is planning to watch the show, you should sit down and watch it with them -- if not together then at least around the same time. That way you can at least try to have informed and constructive discussions with them
about the content."
"The current picture about what our kids can be exposed to online is grim. We need to get that message across to parents that they need to help young people with this sort of content."
For parents and caregivers who don't have time to watch the entire series, the Classification Office and Mental Health Foundation have produced an episode-by-episode guide with synopses of problematic content, and conversation starters to have
with teens. This will be available on both organisations' websites from 7pm on Friday night.
In response to the continued restriction and censorship of
conservatives and their organizations by tech giants Facebook, Twitter, Google and YouTube, the Media Research Center (MRC) along with 18 leading conservative organizations announced Tuesday, May 15, 2018, the formation of a new, permanent
coalition, Conservatives Against Online Censorship.
Conservatives Against Online Censorship will draw attention to the issue of political censorship on social media. This new coalition will urge Facebook, Twitter, Google and YouTube to address the four following key areas of concern:
Provide Transparency: We need detailed information so everyone can see if liberal groups and users are being treated the same as those on the right. Social media companies operate in a black-box environment, only releasing anecdotes
about reports on content and users when they think it necessary. This needs to change. The companies need to design open systems so that they can be held accountable, while giving weight to privacy concerns.
Provide Clarity on 'Hate Speech': "Hate speech" is a common concern among social media companies, but no two firms define it the same way. Their definitions are vague and open to interpretation, and their interpretation often
looks like an opportunity to silence thought. Today, hate speech means anything liberals don't like. Silencing those you disagree with is dangerous. If companies can't tell users clearly what it is, then they shouldn't try to regulate it.
Provide Equal Footing for Conservatives: Top social media firms, such as Google and YouTube, have chosen to work with dishonest groups that are actively opposed to the conservative movement, including the Southern Poverty Law Center.
Those companies need to make equal room for conservative groups as advisers to offset this bias. That same attitude should be applied to employment diversity efforts. Tech companies need to embrace viewpoint diversity.
Mirror the First Amendment: Tech giants should afford their users nothing less than the free speech and free exercise of religion embodied in the First Amendment as interpreted by the U.S. Supreme Court. That standard, the result of
centuries of American jurisprudence, would enable the rightful blocking of content that threatens violence or spews obscenity, without trampling on free speech liberties that have long made the United States a beacon for freedom.
"Social media is the most expansive and most game-changing form of communication today. It is these facts that make online political censorship one of the largest threats to free speech we have ever seen. Conservatives should be given the
same ability to express their political ideas online as liberals, without the fear of being suppressed or censored," said Media Research Center President Brent Bozell.
"Meaningful debate only happens when both sides are given equal footing. Freedom of speech, regardless of ideological leaning, is something Americans hold dear. Facebook, Twitter and all other social media companies must acknowledge this and
work to rectify these concerns unless they want to lose all credibility with the conservative movement. As leaders of this effort, we are launching this coalition to make sure that the recommendations we put forward on behalf of the conservative
movement are followed through."
The Media Research Center sent letters to representatives at Facebook, Twitter, Google and YouTube last week asking each company to address these complaints and begin a conversation about how they can repair their credibility within the
conservative movement. As of Tuesday, May 15, 2018, only Facebook has issued a formal response.
Twitter has outlined further censorship measures in a blog post:
In March, we introduced our new approach to improve the health of the public conversation on Twitter. One important issue we've been working to address is what some might refer to as "trolls." Some troll-like behavior is fun, good and
humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our
policies, and, in those cases, we take action on them. Others don't but are behaving in ways that distort the conversation.
To put this in context, less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what's reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large --
and negative -- impact on people's experience on Twitter. The challenge for us has been: how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?
A New Approach
Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we're tackling issues of behaviors that distort and detract
from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we're able to improve the health of the conversation, and
everyone's experience on Twitter, without waiting for people who use Twitter to report potential issues to us.
There are many new signals we're taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that
repeatedly Tweet and mention accounts that don't follow them, or behavior that might indicate a coordinated attack. We're also looking at how accounts are connected to those that violate our rules and how they interact with each other.
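The kinds of signals Twitter describes could, in principle, be combined into a simple ranking score that pushes disruptive replies behind "Show more replies" without removing them. The following sketch is purely illustrative: the signal names and weights are our assumptions, not Twitter's actual (non-public) system.

```python
# Toy illustration of combining behavioural signals into a ranking score.
# Signal names and weights are invented for illustration; Twitter's real
# system is not public.

SIGNAL_WEIGHTS = {
    "unconfirmed_email": 0.2,       # account never confirmed its email address
    "simultaneous_signups": 0.3,    # same person registered several accounts at once
    "mentions_non_followers": 0.25, # repeatedly mentions accounts that don't follow it
    "coordinated_activity": 0.4,    # behaviour suggesting a coordinated attack
}

def disruption_score(account_signals: dict) -> float:
    """Sum the weights of whichever signals fire for an account, capped at 1.0."""
    score = sum(w for name, w in SIGNAL_WEIGHTS.items() if account_signals.get(name))
    return min(score, 1.0)

def rank_replies(replies):
    """Sort replies so low-disruption accounts surface first; high scorers
    would sit behind a 'Show more replies' control rather than be deleted."""
    return sorted(replies, key=lambda r: disruption_score(r["signals"]))
```

Note that nothing is removed: ranking low is the only consequence, which matches the post's point that this content "doesn't violate our policies" and remains available.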
These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn't violate our policies, it will remain on Twitter, and will be available if you click on
"Show more replies" or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.
In our early testing in markets around the world, we've already seen this new approach have a positive impact, resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations. That means fewer people are seeing
Tweets that disrupt their experience on Twitter.
Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone's Twitter experience better. This technology and our team will learn over time and will make mistakes. There will be false
positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We'll continue to be open and honest about the mistakes we make and the progress we are making. We're encouraged by the results we've seen so
far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it.
We're often asked how we decide what's allowed on Facebook -- and how much bad stuff is out there. For years, we've had Community Standards
that explain what stays up and what comes down. Three weeks ago, for the first time, we published the internal guidelines we use to enforce those standards. And today we're releasing numbers in a Community Standards Enforcement Report so
that you can judge our performance for yourself.
Alex Schultz, our Vice President of Data Analytics, explains in more detail how exactly we measure what's happening on Facebook in both this Hard Questions post and our guide to Understanding the Community Standards Enforcement Report. But it's
important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important and what works.
This report covers our enforcement efforts between October 2017 and March 2018, and it covers six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts. The numbers show you:
How much content people saw that violates our standards;
How much content we removed; and
How much content we detected proactively using our technology -- before people who use Facebook reported it.
Most of the action we take to remove bad content is around spam and the fake accounts they use to distribute it. For example:
We took down 837 million pieces of spam in Q1 2018 -- nearly 100% of which we found and flagged before anyone reported it; and
The key to fighting spam is taking down the fake accounts that spread it. In Q1, we disabled about 583 million fake accounts -- most of which were disabled within minutes of registration. This is in addition to the millions of fake account
attempts we prevent daily from ever registering with Facebook. Overall, we estimate that around 3 to 4% of the active Facebook accounts on the site during this time period were still fake.
In terms of other types of violating content:
We took down 21 million pieces of adult nudity and sexual activity in Q1 2018 -- 96% of which was found and flagged by our technology before it was reported. Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, 7
to 9 views were of content that violated our adult nudity and pornography standards.
For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 -- 86% of which was identified by our technology before it was reported to Facebook.
For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 -- 38% of which was flagged by our technology.
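The report's headline figures can be restated as simple rates. A quick back-of-the-envelope calculation using only the numbers quoted above (the helper function is ours, not Facebook's):

```python
# Restating the quoted Q1 2018 enforcement figures as rates.
# All input numbers come from the report excerpts above.

def proactive_count(total_removed: int, proactive_rate: float) -> int:
    """Pieces of content flagged by technology before any user report."""
    return round(total_removed * proactive_rate)

# Hate speech: 2.5 million removals, only 38% flagged proactively
hate_proactive = proactive_count(2_500_000, 0.38)      # 950,000 pieces

# Adult nudity: 21 million removals, 96% flagged proactively
nudity_proactive = proactive_count(21_000_000, 0.96)   # 20,160,000 pieces

# Prevalence of nudity violations: 7 to 9 views per 10,000
prevalence = (7 / 10_000, 9 / 10_000)                  # 0.07% to 0.09% of views
```

The gap between 96% proactive detection for nudity and 38% for hate speech illustrates the report's own point: context-dependent judgements remain far harder to automate than pattern-matching on images.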
As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse. It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so
important. For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue. And more generally, as I explained two
weeks ago, technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported. In addition, in many areas -- whether it's spam,
porn or fake accounts -- we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts. It's why we're investing heavily in more people and
better technology to make Facebook safer for everyone.
It's also why we are publishing this information. We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too. This is the
same data we use to measure our progress internally -- and you can now see it to judge our progress for yourselves. We look forward to your feedback.
Here is an update on the Facebook app investigation and audit that Mark Zuckerberg promised on March 21.
As Mark explained, Facebook will investigate all the apps that had access to large amounts of information before we changed our platform policies in 2014 -- significantly reducing the data apps could access. He also made clear that where we had
concerns about individual apps we would audit them -- and any app that either refused or failed an audit would be banned from Facebook.
The investigation process is in full swing, and it has two phases. First, a comprehensive review to identify every app that had access to this amount of Facebook data. And second, where we have concerns, we will conduct interviews, make requests
for information (RFI) -- which ask a series of detailed questions about the app and the data it has access to -- and perform audits that may include on-site inspections.
We have large teams of internal and external experts working hard to investigate these apps as quickly as possible. To date thousands of apps have been investigated and around 200 have been suspended -- pending a thorough investigation into
whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before
2015 -- just as we did for Cambridge Analytica.
There is a lot more work to be done to find all the apps that may have misused people's Facebook data -- and it will take time. We are investing heavily to make sure this investigation is as thorough and timely as possible. We will keep you
updated on our progress.
Adults who want to watch online porn (or maybe buy adults-only products such as alcohol)
will be able to buy codes from newsagents and supermarkets to prove that they are over 18 when online.
One option available to the estimated 25 million Britons who regularly visit such websites will be a 16-digit code, dubbed a 'porn pass'.
While porn viewers will still be able to verify their age using methods such as registering credit card details, the 16-digit code option would be fully anonymous. According to AVSecure, the cards will be sold for £10 to anyone who
looks over 18 without the need for any further identification. It doesn't say on the website, but presumably where there is doubt about a customer's age, they will have to show ID documents such as a passport or driving licence,
but hopefully that ID will not have to be recorded anywhere.
It is hoped the method will be popular among those wishing to access porn online without having to hand over personal details to X-rated sites.
The user will type a 16-digit number into websites that belong to the AVSecure scheme. It should be popular with websites as it offers age verification to them for free (with the £10 card fee being the company's only source of income).
This is a far better proposition for websites than most, if not all, of the other age verification companies offer.
AVSecure also offers an encrypted implementation via blockchain that will not allow websites to use the 16-digit number as a key to track people's website browsing. But saying that, they could still use a myriad of other standard technologies to track users anyway.
The BBFC is assigned the task of deciding whether to accredit different technologies and it will be very interesting to see if they approve the AVSecure offering. It is easily the best solution to protect the safety and privacy of porn viewers,
but it may test the BBFC's pragmatism to accept the most workable and safest solution for adults, which is not quite fully guaranteed to protect children. Pragmatism is required as the scheme has the technical drawback of having no further
checks in place once the card has been purchased. The obvious worry is that an over-18 can go around to other shops to buy several cards to pass on to their under-18 mates. Another possibility is that kids could stumble on their parent's card
and get access. Numbers shared on the web could easily be blocked if used simultaneously from different IP addresses.
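That last safeguard (blocking codes used simultaneously from different IP addresses) would be straightforward to implement. A minimal sketch, assuming each verification attempt is logged with a timestamp and client IP; the window and threshold values are illustrative assumptions, not anything AVSecure has published:

```python
from collections import defaultdict

# Toy sketch: flag a 16-digit pass if it is used from more than a threshold
# number of distinct IP addresses within a sliding time window.
# Window and threshold are illustrative assumptions.

WINDOW_SECONDS = 600   # 10-minute sliding window
MAX_DISTINCT_IPS = 2   # allow e.g. home broadband + mobile, flag beyond that

class PassMonitor:
    def __init__(self):
        self._uses = defaultdict(list)  # code -> [(timestamp, ip), ...]

    def record_use(self, code: str, timestamp: float, ip: str) -> bool:
        """Record a use of the pass; return True if the code should be blocked."""
        uses = self._uses[code]
        uses.append((timestamp, ip))
        # keep only uses that fall inside the sliding window
        recent = [(t, i) for t, i in uses if timestamp - t <= WINDOW_SECONDS]
        self._uses[code] = recent
        # block when too many distinct IPs used the same code recently
        return len({i for _, i in recent}) > MAX_DISTINCT_IPS
```

A check like this catches codes shared openly on the web, though not a card quietly passed between friends who use it at different times.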
Mental health campaigners have criticised the return of the Netflix drama 13 Reasons Why, expressing concern that the second series of the drama about a teenager's suicide is due for release as summer exam stress peaks. The story of
17-year-old Hannah Baker's life and death continues on Friday 18 May.
The Royal College of Psychiatrists described the timing as callous, noting that suicide rates among young people typically rise during exam season and warning that the Netflix drama could trigger a further increase. Dr Helen Rayner, of the Royal
College of Psychiatrists, said:
I feel extremely disappointed and angry. This glamourises suicide and makes it seductive. It also makes it a possibility for young people -- it puts the thought in their mind that this is something that's possible. It's a bad programme that
should not be out there, and it's the timing.
The US-based series was a big hit for Netflix despite -- or perhaps because of -- the controversy surrounding the suicide storyline. The first series of 13 episodes depicted Hannah's friends listening to tapes she had made for each of them
explaining the difficulties she faced that had prompted her to kill herself.
Supporters of the first series said it was an accurate portrayal of high school life that would spark conversations between parents and their children and encourage viewers to seek information on depression, suicide, bullying and sexual assault.
US lawmakers from both political parties have come together to reintroduce a bill that, if passed, would prohibit the US
government from forcing tech product makers to undermine users' safety and security with back-door access.
The bill, known as the Secure Data Act of 2018, was reintroduced in the US House of Representatives by Representatives Zoe Lofgren and Thomas Massie.
The Secure Data Act forbids any government agency from demanding that a manufacturer, developer, or seller of covered products design or alter the security functions in its product or service to allow the surveillance of any user of such product
or service, or to allow the physical search of such product, by any agency. It also prohibits courts from issuing orders to compel access to data.
Covered products include computer hardware, software, or electronic devices made available to the public. The bill makes an exception for telecom companies, which under the 1994 Communications Assistance for Law Enforcement Act (CALEA) would
still have to help law enforcement agencies access their communication networks.
Monday's ban on the popular encrypted Telegram messaging app by Iran's powerful judiciary has not been well received.
Telegram serves many Iranians as a kind of combination of Facebook and Whatsapp, allowing people inside the country to chat securely and to disseminate information to large audiences abroad. Until the court ban, the application was widely used by
Iranian state media, politicians, companies and ordinary Iranians for business, pleasure and political organizing. Telegram is believed to have some 20 million users in Iran out of a total population of 80 million.
The judiciary's Culture and Media Court banned the app citing among its reasons its use by international terrorist groups and anti-government protesters, and the company's refusal to cooperate with Iran's Ministry of Information and
Communications Technology to provide decryption keys.
The move came after extensive public debate in Iran, some conducted via the messaging service itself, about the limits of free expression, government authority and access to information in the Islamic Republic.
President Hassan Rouhani and other prominent reformers, who advocate increased freedom while retaining Iran's current Islamic system of government, argued against the proposed ban, saying that it would make society anxious.
Similarly, in the wake of the judiciary's announcement that the application would be blocked, Information and Communications Technology Minister Muhammad-Javad Azari Jahromi criticized the move on Twitter. Citizens' access to information sources
is unstoppable, he wrote the day after the decision. Whenever one application or program is blocked, another will take its place, he wrote. This is the unique aspect and necessity of the free access to information in the age of communication.
Rouhani was even more forthright in his response to the ban in a message posted to Instagram on Friday. The government policy is... a safe, but not controlled Internet, he wrote. No Internet service or messaging app has been banned by this
government, and none will be. He added that the block was the direct opposite to democracy.
Update: The judicial censorship of Telegram could be challenged by the president
Two lawyers in Tehran told the Center for Human Rights in Iran (CHRI) that the Iranian president has the authority to refuse the prosecutor's order to ban the Telegram messaging app.
An attorney in Tehran specializing in media affairs, who spoke on the condition of anonymity due to the threat of reprisals by the judiciary, told CHRI: From a legal standpoint, orders issued by assistant prosecutors must be enforced but they can
be challenged. As the target of this order, the government can lodge a complaint and ask the provincial court to make a ruling. But the question is, does the government want to take legal action or not? This is more of a political issue. In the same manner, the judiciary invoked security laws to shut down 40 newspapers in 2000.
Razzia is a 2017 France / Morocco / Belgium drama by Nabil Ayouch.
Starring Maryam Touzani, Arieh Worthalter and Amine Ennaji.
The streets of Casablanca provide the centerpiece for five separate narratives that all collide into one.
Egypt's film censors have banned Nabil Ayouch's film Razzia for supposedly encouraging revolution, particularly because the film tells the story of the marginalized poor in search of justice in Morocco.
The film censor specifically referred to events in the movie that recall the 2011 Egyptian revolution. The censor also raised concerns about the film's treatment of religion, strongly believing that screening Razzia would inspire the sympathy and compassion of the audience, as the movie follows the daily life of a Jewish restaurateur.
It's not the first time that the French-Moroccan director Nabil Ayouch has had to deal with censorship, as the Moroccan government banned his controversial film Much Loved in Moroccan cinemas in 2015.
It hasn't taken long for Germany's new internet censorship law to be used against the trivial name-calling of politicians.
A recent German law was intended to put a stop to hate speech, but it's difficult and commercially expensive to consider every case on its merits, so it's just easier and cheaper for internet companies to censor everything asked for.
So of course easily offended politicians are quick to ask for trivial name-calling insults to be taken down. But now there's a twist: for an easily offended politician, it is not enough for Facebook to block an insult in Germany; it must be blocked worldwide.
Courthouse News Service reports that a German court has indulged a politician's hypocritical outrage to demand the disappearance of an insulting comment posted to Facebook.
Alice Weidel, co-leader of the Alternative for Germany (AfD) party, objected to a Facebook post calling her a dirty Nazi swine for her opposition to same-sex marriage. Facebook immediately complied, but Weidel's lawyers complained the post hadn't been vanished hard enough, pointing out that German VPN users could still access the comment.
Facebook's only comment, via Reuters, was to note it had already blocked the content in Germany, which is all the law really requires.
Of course, once you allow mere insults to be censorable, you then hit the issue of fairness. Insults against some PC-favoured groups are totally off limits and treated as the crime of the century, whilst insults against others (e.g. white men) are positively encouraged.
The wildly popular children's character Peppa Pig was recently scrubbed from Douyin, a video-sharing platform in China, which deleted more than 30,000 clips. The hashtag #PeppaPig was also banned, according to the Global Times, a state-run Chinese newspaper.
Chinese authorities have claimed that Peppa Pig has become associated with lowlifes and slackers. The Global Times whinged:
People who upload videos of Peppa Pig tattoos and merchandise and make Peppa-related jokes run counter to the mainstream value and are usually poorly educated with no stable job. They are unruly slackers roaming around and the antithesis of the
young generation the [Communist] party tries to cultivate.
A demonstration in Moscow against the Russian government's effort to block the messaging app Telegram quickly morphed on Monday
into a protest against President Vladimir Putin, with thousands of participants chanting against the Kremlin's increasingly restrictive censorship regime.
The key demand of the rally, with the hashtag #DigitalResistance, was that the Russian internet remain free from government censorship.
One speaker, Sergei Smirnov, editor-in-chief of the online news service Mediazona, asked the crowd whether Putin was to blame for blocking Telegram. The crowd responded with a resounding Yes!
Telegram is just the first step, Smirnov continued. If they block Telegram, it will be worse later. They will block everything. They want to block our future and the future of our children.
Russian authorities blocked Telegram after the company refused to hand over its decryption keys. The censors also briefly blocked thousands of other websites sharing hosting facilities with Telegram in the hope of pressuring the hosts into taking down Telegram.
The censorship effort has provoked anger and frustration far beyond the habitual supporters of the political opposition, especially in the business sector, where the collateral damage continues to hurt the bottom line. There has been a flood of
complaints on Twitter and elsewhere that the government broke the internet.
We, the undersigned 26 international human rights, media and Internet freedom organisations, strongly condemn
the attempts by the Russian Federation to block the Internet messaging service Telegram, which have resulted in extensive violations of freedom of expression and access to information, including mass collateral website blocking.
We call on Russia to stop blocking Telegram and cease its relentless attacks on Internet freedom more broadly. We also call on the United Nations (UN), the Council of Europe (CoE), the Organisation for Security and Cooperation
in Europe (OSCE), the European Union (EU), the United States and other concerned governments to challenge Russia's actions and uphold the fundamental rights to freedom of expression and privacy online as well as offline. Lastly, we call on
Internet companies to resist unfounded and extra-legal orders that violate their users' rights.
Massive Internet disruptions
On 13 April 2018, Moscow's Tagansky District Court granted the request of Roskomnadzor, Russia's communications regulator, to block access to Telegram on the grounds that the company had not complied with a 2017 order to
provide decryption keys to the Russian Federal Security Service (FSB). Since then, the actions taken by the Russian authorities to restrict access to Telegram have caused mass Internet disruption, including:
Between 16 and 18 April 2018, Roskomnadzor ordered almost 20 million Internet Protocol (IP) addresses to be blocked as it attempted to restrict access to Telegram. The majority of the blocked addresses are owned by
international Internet companies, including Google, Amazon and Microsoft. Currently 14.6 million remain blocked.
This mass blocking of IP addresses has had a detrimental effect on a wide range of web-based services that have nothing to do with Telegram, including, but not limited to, online banking, booking and shopping sites.
Agora, the human rights and legal group representing Telegram in Russia, has reported receiving requests for assistance with issues arising from the mass blocking from about 60 companies, including online stores,
delivery services, and software developers.
At least six online media outlets (Petersburg Diary, Coda Story, FlashNord, FlashSiberia, Tayga.info, and 7x7) found access to their websites was temporarily blocked.
On 17 April 2018, Roskomnadzor requested that Google and Apple remove the Telegram app from their app stores, despite having no basis in Russian law to make this request. The app remains available, but Telegram
has not been able to provide updates that would allow better proxy access for users.
Virtual Private Network (VPN) providers, such as TgVPN, Le VPN and VeeSecurity proxy, have also been targeted for providing alternative means to access Telegram. Federal Law 276-FZ bans VPNs and Internet anonymisers
from providing access to websites banned in Russia and authorises Roskomnadzor to order the blocking of any site explaining how to use these services.
Restrictive Internet laws
Over the past six years, Russia has adopted a huge raft of laws restricting freedom of expression and the right to privacy online. These include the creation in 2012 of a blacklist of Internet websites, managed by
Roskomnadzor, and the incremental extension of the grounds upon which websites can be blocked, including without a court order.
The 2016 so-called 'Yarovaya Law', justified on the grounds of "countering extremism", requires all communications providers and Internet operators to store metadata about their users' communications activities,
to disclose decryption keys at the security services' request, and to use only encryption methods approved by the Russian government; in practical terms, to create a backdoor for Russia's security agents to access internet users' data, traffic and communications.
In October 2017, a magistrate found Telegram guilty of an administrative offense for failing to provide decryption keys to the Russian authorities -- which the company states it cannot do due to Telegram's use of end-to-end
encryption. The company was fined 800,000 rubles (approx. 11,000 EUR). Telegram lost an appeal against the administrative charge in March 2018, giving the Russian authorities formal grounds to block Telegram in Russia, under Article 15.4 of the
Federal Law "On Information, Information Technologies and Information Protection".
The Russian authorities' latest move against Telegram demonstrates the serious implications for people's freedom of expression and right to privacy online in Russia and worldwide:
For Russian users, apps such as Telegram that seek to provide secure communications are crucial to their safety. They provide an important source of information on critical issues of politics,
economics and social life, free of undue government interference. For media outlets and journalists based in and outside Russia, Telegram serves not only as a messaging platform for secure communication with sources, but also as a publishing
venue. Through its channels, Telegram acts as a carrier and distributor of content for entire media outlets as well as for individual journalists and bloggers. In light of direct and indirect state control over many traditional Russian media
and the self-censorship many other media outlets feel compelled to exercise, instant messaging channels like Telegram have become a crucial means of disseminating ideas and opinions.
Companies that comply with the requirements of the 'Yarovaya Law' by allowing the government a back-door key to their services jeopardise the security of the online communications of their Russian users and the people they
communicate with abroad. Journalists, in particular, fear that providing the FSB with access to their communications would jeopardise their sources, a cornerstone of press freedom. Company compliance would also signal that communication
services providers are willing to compromise their encryption standards and put the privacy and security of all their users at risk, as a cost of doing business.
Beginning in July 2018, other articles of the 'Yarovaya Law' will come into force requiring companies to store the content of all communications for six months and to make them accessible to the security services without a
court order. This would affect the communications of both people in Russia and abroad.
Such attempts by the Russian authorities to control online communications and invade privacy go far beyond what can be considered necessary and proportionate to countering terrorism and violate international law.
Blocking websites or apps is an extreme measure, analogous to banning a newspaper or revoking the license of a TV station. As such, it is highly likely to constitute a disproportionate interference with freedom of
expression and media freedom in the vast majority of cases, and must be subject to strict scrutiny. At a minimum, any blocking measures should be clearly laid down by law and require the courts to examine whether the wholesale blocking of
access to an online service is necessary and in line with the criteria established and applied by the European Court of Human Rights. Blocking Telegram and the accompanying actions clearly do not meet this standard.
Various requirements of the 'Yarovaya Law' are plainly incompatible with international standards on encryption and anonymity as set out in the 2015 report of the UN Special Rapporteur on Freedom of Expression
(A/HRC/29/32). The UN Special Rapporteur himself has written to the Russian government raising serious concerns that the 'Yarovaya Law' unduly restricts the rights to freedom of expression and privacy online. In the European Union, the Court of
Justice has ruled that similar data retention obligations were incompatible with the EU Charter of Fundamental Rights. Although the European Court of Human Rights has not yet ruled on the compatibility of the Russian provisions for the
disclosure of decryption keys with the European Convention on Human Rights, it has found that Russia's legal framework governing interception of communications does not provide adequate and effective guarantees against the arbitrariness and the
risk of abuse inherent in any system of secret surveillance.
We, the undersigned organisations, call on:
The Russian authorities to guarantee internet users' right to publish and browse anonymously and ensure that any restrictions to online anonymity are subject to requirements of a court order, and comply fully with
Articles 17 and 19(3) of the ICCPR, and articles 8 and 10 of the European Convention on Human Rights, by:
Desisting from blocking Telegram and refraining from requiring messaging services, such as Telegram, to provide decryption keys in order to access users' private communications;
Repealing provisions in the 'Yarovaya Law' requiring Internet Service Providers (ISPs) to store all telecommunications data for six months and imposing mandatory cryptographic backdoors, and the 2014 Data Localisation law,
which grant the security services easy access to users' data without sufficient safeguards;
Repealing Federal Law 241-FZ, which bans anonymity for users of online messaging applications; and Law 276-FZ which prohibits VPNs and Internet anonymisers from providing access to websites banned in Russia;
Amending Federal Law 149-FZ "On Information, IT Technologies and Protection of Information" so that the process of blocking websites meets international standards. Any decision to block access to a website or app
should be undertaken by an independent court and be limited by requirements of necessity and proportionality for a legitimate aim. In considering whether to grant a blocking order, the court or other independent body authorised to issue such an
order should consider its impact on lawful content and what technology may be used to prevent over-blocking.
Representatives of the United Nations (UN), the Council of Europe (CoE), the Organisation for Security and Cooperation in Europe (OSCE), the European Union (EU), the United States and other concerned governments
to scrutinise and publicly challenge Russia's actions in order to uphold the fundamental rights to freedom of expression and privacy both online and offline, as stipulated in binding international agreements to which Russia is a party.
Internet companies to resist orders that violate international human rights law. Companies should follow the United Nations' Guiding Principles on Business & Human Rights, which emphasise that the responsibility
to respect human rights applies throughout a company's global operations regardless of where its users are located and exists independently of whether the State meets its own human rights obligations.
In a verdict with grave implications for press freedom, a Malaysian court has handed down the nation's first conviction under its recently enacted 'fake news' law.
Salah Salem Saleh Sulaiman, a Danish citizen, was sentenced to one week in prison and fined 10,000 ringgit (US$2,500) for posting to the internet a two-minute video criticizing the police response to the April 21 assassination of a member of the
militant group Hamas in Kuala Lumpur.
Shawn Crispin, CPJ's senior Southeast Asia representative, said:
Malaysia's first conviction under its 'fake news' law shows authorities plan to abuse the new provision to criminalize critical reporting. The dangerous precedent should be overturned and this ill-conceived law repealed for the sake of press freedom.