As the introduction of age verification for websites approaches, it seems that the most likely outcome is that Mindgeek, the company behind most of the tube sites, is set to become the self-appointed gatekeeper of porn. Its near monopoly on free porn means that it will be the first port of call for people wanting access, and people who register with them will be able to surf large numbers of porn sites with no verification hassle. They are then not going to be very willing to go through all the hassle again for a company not enrolled in the Mindgeek scheme. Mindgeek is therefore set to become the Amazon/eBay/Google/Facebook of porn.
There is another very promising age verification system, AVSecure, that sounds way better than Mindgeek's AgeID. AVSecure plans to offer age verification passes from supermarkets and post offices. They will give you a card with a code that requires no further identification whatsoever beyond looking obviously over 25. 18-25 year olds will have to show ID, but it need not be recorded in the system. Adult websites can then check the verification code, which will reveal only that the holder is over 18. All website interactions will be further protected by blockchain encryption.
The Mindgeek scheme is the best promoted for the moment and so is seen as the favourite to prevail. The Daily Mail is now having doubts about the merits of trusting a porn company with age verification, on the grounds that its primary motivation is to make money. The Daily Mail has also spotted that vast swathes of worldwide porn are nominally held to be illegal by the government under the Obscene Publications Act. Notably, female ejaculation is held to be obscene, as the government claims it is illegal because the ejaculate contains urine. I think the government is on a hiding to nothing if it persists in its silly claims of obscenity; they are simply years out of date and the world has moved on.
Anyway the Daily Mail spouts:
The moguls behind the world's biggest pornography websites have been entrusted by the Government with policing the internet to keep it safe for children. MindGeek staff have held a series of meetings with officials in preparation for the new age
verification system which is designed to ensure that under-18s cannot view adult material.
Tens of millions of British adults are expected to have to entrust their private details to MindGeek, which owns the PornHub and YouPorn websites.
Critics have likened the company's involvement to entrusting the cigarette industry with stopping underage smoking and want an independent body to create the system instead.
A Mail on Sunday investigation has found that material on the company's porn websites could be in breach of the Obscene Publications Act. A search for one sexual act, which would be considered illegal to publish videos of under the Obscene
Publications Act, returned nearly 20,000 hits on PornHub. The Mail on Sunday did not watch any of the videos.
Shadow Culture Minister Liam Byrne said:
It is alarming that a company given the job of checking whether viewers of pornography are over 18 can't even police publication of illegal material on its own platform.
A DCMS spokesman said:
The Government will not be endorsing individual age-verification solutions but they will need to abide by data protection laws to be compliant.
Google has tweaked its image search to make it slightly more difficult to view images in full size before downloading
them. Google has also added a more prominent copyright warning.
Google acted as part of a peace deal with photo library Getty Images. In 2017, Getty Images complained to the European Commission, accusing Google of anti-competitive practices.
Google said it had removed some features from image search, including the view image button. Images can still be viewed in full size from the right click menu, at least on my Windows version of Firefox. Google also removed the search by image
button, which was an easy way of finding larger copies of photographs. Perhaps the tweaks are more about restricting the finding of high resolution versions of images rather than worrying about standard sized images.
Getty Images is a photo library that sells the work of photographers and illustrators to businesses, newspapers and broadcasters. It complained that Google's image search made it easy for people to find Getty Images pictures and take them, without
the appropriate permission or licence.
In a statement, Getty Images said:
We are pleased to announce that after working cooperatively with Google over the past months, our concerns are being recognised and we have withdrawn our complaint.
Since their ascendance in the 2000s, Google and Facebook have largely defined how ads and other corporate content would appear, where they would flow, and the metrics of online advertising success.
On Monday, one top advertiser, Unilever, went public with its criticism, calling social media little better than a swamp and threatening to pull ads from platforms that leave children unprotected, create social division, or promote anger or hate.
That comes a year after Procter & Gamble adjusted its own ad strategy, voicing similar concerns.
Keith Weed, Unilever's chief marketing and communications officer, said in a speech Monday to internet advertisers.
Fake news, racism, sexism, terrorists spreading messages of hate, toxic content directed at children -- parts of the internet we have ended up with is a million miles from where we thought it would take us. This is a deep and systematic issue --
an issue of trust that fundamentally threatens to undermine the relationship between consumers and brands.
Jason Kint, chief executive of Digital Content Next, a trade group that represents many big entertainment and news organizations, added:
The technology, it appears, is actually allowing bad actors to amplify misinformation and garbage while at the same time squeezing out the economics of the companies that are actually accountable to consumer trust.
Update: Center Parcs outraged at the Daily Mail for its advert placement
Center Parcs has announced it has stopped advertising in the Daily Mail. It took the decision after its advert appeared in an online article by columnist Richard Littlejohn that criticised diver Tom Daley and his husband Dustin Lance Black, who are expecting a child. Littlejohn claimed children benefit most from being raised by a man and a woman.
Center Parcs was responding to a complaint from a person who tweeted:
My son so wants me to book at your parks, but how can I do that if you support homophobia?
Center Parcs responded:
We take where we advertise very seriously and have a number of steps to prevent our advertising from appearing alongside inappropriate content. We felt this placement was completely unacceptable and therefore ceased advertising with the Daily
Mail with immediate effect.
In a ruling of particular interest to those working in the adult entertainment biz, a German court has ruled that Facebook's real name policy is illegal and that users must be allowed to sign up for the service under pseudonyms.
The opinion comes from the Berlin Regional Court and was disseminated by the Federation of German Consumer Organizations, which filed the suit against Facebook. The Berlin court found that Facebook's real name policy was a covert way of obtaining users' consent to share their names, which are one of many pieces of information the court said Facebook did not properly obtain users' permission for.
The court also said that Facebook didn't provide a clear-cut choice to users for other default settings, such as to share their location in chats. It also ruled against clauses that allowed the social media giant to use information such as profile
pictures for commercial, sponsored or related content.
Facebook told Reuters it will appeal the ruling, but also that it will make changes to comply with European Union privacy laws coming into effect in June.
Facebook has been ordered to stop tracking people without consent, by a court in Belgium. The company has been told to delete all the data it had gathered on people who did not use Facebook. The court ruled the data was gathered illegally.
Belgium's privacy watchdog said the website had broken privacy laws by placing tracking code on third-party websites.
Facebook said it would appeal against the ruling.
The social network faces fines of 250,000 euros a day if it does not comply.
The ruling is the latest in a long-running dispute between the social network and the Belgian commission for the protection of privacy (CPP). In 2015, the CPP complained that Facebook tracked people when they visited pages on the site or clicked
like or share, even if they were not members.
The United Kingdom's reputation for online freedom has suffered significantly in recent years, in no small part due to the draconian Investigatory Powers Act, which came into force last year and created what many people have described as the worst surveillance state in the free world.
But despite this, the widely held perception is that the UK still allows relatively free access to the internet, even if the authorities do insist on keeping records of which sites you are visiting. But how true is this perception?
There is undeniably more online censorship in the UK than many people would like to admit to. But is this just the tip of the iceberg? The censorship of one YouTube video suggests that it might just be. The video in question contains footage
filmed by a trucker of refugees trying to break into his vehicle in order to get across the English Channel and into the UK. This is a topic which has been widely reported in British media in the past, but in the wake of the Brexit vote and the
removal of the so-called 'Jungle Refugee Camp', there has been strangely little coverage.
Yet, if you try to access this video in the UK, you will find that it is blocked. It remains accessible to users elsewhere in the world, albeit with content warnings in place.
And it is not alone. It doesn't take too much research to uncover several similar videos which are also censored in the UK. The scale of the issue likely requires further research, but it is safe to say that such censorship is both unnecessary and potentially illegal, as it is undeniably denying British citizens access to content which would feed an informed debate on some crucial issues.
The UK government has unveiled a tool it says can accurately detect jihadist content and block it from being viewed.
Home Secretary Amber Rudd told the BBC she would not rule out forcing technology companies to use it by law. Rudd is visiting the US to meet tech companies to discuss the idea, as well as other efforts to tackle extremism.
The government provided £600,000 of public funds towards the creation of the tool by an artificial intelligence company based in London.
Thousands of hours of content posted by the Islamic State group was run past the tool, in order to train it to automatically spot extremist material.
ASI Data Science said the software can be configured to detect 94% of IS video uploads. Anything the software identifies as potential IS material would be flagged up for a human decision to be taken.
The company said it typically flagged 0.005% of non-IS video uploads. But this figure is meaningless without an indication of how many of those flagged uploads contained any content that has any connection with jihadis.
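To see why the 0.005% figure tells us so little on its own, here is a rough base-rate sketch in Python. The detection and false-positive rates are the ones reported; the 1-in-100,000 prevalence of genuine IS material is purely an assumed figure for illustration, not something from the report:

```python
# Base-rate sketch: what fraction of flagged uploads are actually IS material?
detection_rate = 0.94          # reported: 94% of IS video uploads detected
false_positive_rate = 0.00005  # reported: 0.005% of non-IS uploads flagged
prevalence = 1 / 100_000       # ASSUMPTION for illustration only

uploads = 1_000_000
is_uploads = uploads * prevalence                             # 10 genuine IS videos
true_flags = is_uploads * detection_rate                      # 9.4 correctly flagged
false_flags = (uploads - is_uploads) * false_positive_rate    # ~50 innocent uploads flagged

precision = true_flags / (true_flags + false_flags)
print(f"Of {true_flags + false_flags:.0f} flagged uploads, "
      f"only {precision:.0%} are actually IS material")
```

Under this assumed prevalence, most of what the tool flags for human review would be innocent material; the lower the real prevalence, the worse that ratio gets. That is why the false-positive rate only means something alongside an estimate of how much genuine jihadi content is in the upload stream.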
In London, reporters were given an off-the-record briefing detailing how ASI's software worked, but were asked not to share its precise methodology. However, in simple terms, it is an algorithm that draws on characteristics typical of IS material.
It sounds like the tool is more about analysing data about the uploading account, geographical origin, time of day, name of poster etc rather than analysing the video itself.
Comment: Even extremist takedowns require accountability
Can extremist material be identified at 99.99% certainty as Amber Rudd claims today? And how does she intend to ensure that there is legal accountability for content removal?
The Government is very keen to ensure that extremist material is removed from private platforms, like Facebook, Twitter and Youtube. It has urged use of machine learning and algorithmic identification by the companies, and threatened fines for
failing to remove content swiftly.
Today Amber Rudd claims to have developed a tool to identify extremist content, based on a database of known material. Such tools can have a role to play in identifying unwanted material, but we need to understand that there are some important
caveats to what these tools are doing, with implications about how they are used, particularly around accountability. We list these below.
Before we proceed, we should also recognise that this is often about computers (bots) posting vast volumes of material with a very small audience. Amber Rudd's new machine may then potentially clean some of it up. It is in many ways a propaganda
battle between extremists claiming to be internet savvy and exaggerating their impact, while our own government claims that they are going to clean up the internet. Both sides benefit from the apparent conflict.
The real world impact of all this activity may not be as great as is being claimed. We should be given much more information about what exactly is being posted and removed. For instance, the UK police remove over 100,000 pieces of extremist content by notice to companies: we currently get just this headline figure. We know nothing more about these takedowns. They might have never been viewed, except by the police, or they might have been very influential.
The results of the government's campaign to remove extremist material may be to push extremists towards more private or censor-proof platforms. That may impact the ability of the authorities to surveil criminals and to remove material in the future. We may regret chasing extremists off major platforms, where their activities are in full view and easily used to identify activity and actors.
Whatever the wisdom of proceeding down this path, we need to be worried about the unwanted consequences of machine takedowns. Firstly, we are pushing companies to be the judges of legal and illegal. Secondly, all systems make mistakes and require
accountability for them; mistakes need to be minimised, but also rectified.
Here is our list of questions that need to be resolved.
1 What really is the accuracy of this system?
Small error rates translate into very large numbers of errors at scale. We see this with more general internet filters in the UK, where our blocked.org.uk project regularly uncovers and reports errors.
How are the accuracy rates determined? Is there any external review of its decisions?
The government appears to recognise the technology has limitations. In order to claim a high accuracy rate, they say at least 6% of extremist video content has to be missed. On large platforms that would be a great deal of material needing human
review. The government's own tool shows the limitations of their prior demands that technology "solve" this problem.
Islamic extremists are operating rather like spammers when they post their material. Just like spammers, their techniques change to avoid filtering. The system will need constant updating to keep a given level of accuracy.
2 Machines are not determining meaning
Machines can only attempt to pattern match, with the assumption that content and form imply purpose and meaning. This explains how errors can occur, particularly in missing new material.
3 Context is everything
The same content can, in different circumstances, be legal or illegal. The law defines extremist material as promoting or glorifying terrorism. This is a vague concept. The same underlying material, with small changes, can become news, satire or
commentary. Machines cannot easily determine the difference.
4 The learning is only as good as the underlying material
The underlying database is used to train machines to pattern match. Therefore the quality of the initial database is very important. It is unclear how the material in the database has been deemed illegal, but it is likely that these are police
determinations rather than legal ones, meaning that inaccuracies or biases in police assumptions will be repeated in any machine learning.
5 Machines are making no legal judgment
The machines are not making a legal determination. This means a company's decision to act on what the machine says is made without clear legal knowledge. At the very least, if material is "machine determined" to be illegal, the poster, and users who attempt to see the material, need to be told that a machine determination has been made.
6 Humans and courts need to be able to review complaints
Anyone who posts material must be able to get human review, and recourse to courts if necessary.
7 Whose decision is this exactly?
The government wants small companies to use the database to identify and remove material. If material is incorrectly removed, perhaps appealed, who is responsible for reviewing any mistake?
It may be too complicated for the small company. Since it is the database product making the mistake, the designers need to act to correct it so that it is less likely to be repeated elsewhere.
If the government want people to use their tool, there is a strong case that the government should review mistakes and ensure that there is an independent appeals process.
8 How do we know about errors?
Any takedown system tends towards overzealous takedowns. We hope the identification system is built for accuracy and prefers to miss material rather than remove the wrong things; however, errors will often go unreported. There are strong incentives for legitimate posters of news, commentary, or satire to simply accept the removal of their content. To complain about a takedown would take serious nerve, given that you risk being flagged as a terrorist sympathiser, or perhaps having to enter formal legal proceedings.
We need a much stronger conversation about the accountability of these systems. So far, in every context, this is a question the government has ignored. If this is a fight for the rule of law and against tyranny, then we must not create
arbitrary, unaccountable, extra-legal censorship systems.
The new German law that compels social media companies to remove hate speech and other illegal content can lead to unaccountable, overbroad censorship and should be promptly reversed, Human Rights Watch said today. The law sets a dangerous
precedent for other governments looking to restrict speech online by forcing companies to censor on the government's behalf. Wenzel Michalski, Germany director at Human Rights Watch said:
Governments and the public have valid concerns about the proliferation of illegal or abusive content online, but the new German law is fundamentally flawed. It is vague, overbroad, and turns private companies into overzealous censors to avoid
steep fines, leaving users with no judicial oversight or right to appeal.
Parliament approved the Network Enforcement Act , commonly known as NetzDG, on June 30, 2017, and it took full effect on January 1, 2018. The law requires large social media platforms, such as Facebook, Instagram, Twitter, and YouTube, to promptly
remove "illegal content," as defined in 22 provisions of the criminal code , ranging widely from insult of public office to actual threats of violence. Faced with fines up to 50 million euro, companies are already removing content to
comply with the law.
At least three countries -- Russia, Singapore, and the Philippines -- have directly cited the German law as a positive example as they contemplate or propose legislation to remove "illegal" content online. The Russian draft law,
currently before the Duma, could apply to larger social media platforms as well as online messaging services.
Two key aspects of the law violate Germany's obligation to respect free speech, Human Rights Watch said. First, the law places the burden on companies that host third-party content to make difficult determinations of when user speech violates the
law, under conditions that encourage suppression of arguably lawful speech. Even courts can find these determinations challenging, as they require a nuanced understanding of context, culture, and law. Faced with short review periods and the risk
of steep fines, companies have little incentive to err on the side of free expression.
Second, the law fails to provide either judicial oversight or a judicial remedy should a cautious corporate decision violate a person's right to speak or access information. In this way, the largest platforms for online expression become "no
accountability" zones, where government pressure to censor evades judicial scrutiny.
At the same time, social media companies operating in Germany and elsewhere have human rights responsibilities toward their users, and they should act to protect them from abuse by others, Human Rights Watch said. This includes stating in user
agreements what content the company will prohibit, providing a mechanism to report objectionable content, investing adequate resources to conduct reviews with relevant regional and language expertise, and offering an appeals process for users who
believe their content was improperly blocked or removed. Threats of violence, invasions of privacy, and severe harassment are often directed against women and minorities and can drive people off the internet or lead to physical attacks.
tumblr is an image sharing website. It has just announced that it will change the way that its safe mode works. In an email to users, tumblr writes:
Last year we introduced Safe Mode, which filters sensitive content in your dashboard and search results so you have control over what you see and what you don't. And now that it's been out for a while, we want to make sure everyone has the chance
to try it out.
Over the next couple weeks, you might see some things in your dashboard getting filtered. If you like it that way, that's great. If you don't, no problem. You can go back by turning off Safe Mode any time.
Update: Are the safe mode changes related to impending UK porn censorship?
Tumblr has long been one of the freest spaces on the internet for porn and sex-positive content, thanks to lax guidelines compared to Facebook or Instagram. Porn creators, fetish community artists, and more were able to share work with little
trouble. Tumblr made a major change last year with the introduction of a Safe Mode that initially filtered NSFW content if users chose to enable it. Now though, Tumblr is making Safe Mode the default setting for users.
The Safe Mode feature hides sensitive images -- for example nude images, even, as Tumblr's guidelines note, artistic or educational nudity like classic art or anatomy. As Motherboard reports, it's a function that claims to give users more control over what you see and what you don't, updating the Safe Search option that the platform introduced back in 2012 that removed sensitive stuff from the site's search results.
Rolling out the default setting means users will have to go out of their way to switch back and see unfiltered content. An email sent to Tumblr users last week states that they want to make sure everyone has the chance to try it out.
Many adult content creators are concerned this will affect their work and space on the platform. Tumblr user, freelance artist, and adult comic-maker Kayla-Na told Dazed of her frustrations: I understand wanting to make Tumblr a safer environment
for younger audiences, but Tumblr has to remember that the adult community is still part of the website as a whole, and shouldn't be suppressed into oblivion.
Perhaps the Tumblr safe mode has also been introduced as a step towards the UK's porn censorship by age verification. The next step may be for the safe mode to be mandatorily imposed on internet viewers in Britain, to be turned off only when they subject themselves to age verification.
Some users have reported seeing pop ups in Instagram (IG) informing them that, from now on, Instagram will be flagging when you record or take a screenshot of other people's IG stories and informing the originator that you have snapped or recorded the post.
According to a report by TechCrunch, those who have been selected to participate in the IG trial can see exactly who has been creeping and snapping their stories. Those who have screenshotted an image or recorded a video will have a little
camera shutter logo next to their usernames, much like Snapchat.
Of course, users have already found a nifty workaround to avoid social media stalking exposure. So here's the deal: turning your phone on airplane mode after you've loaded the story and then taking your screenshot means that users won't be
notified of any impropriety (sounds easy for Instagram to fix this by saving the keypress until the next time it communicates with the Instagram server). You could also download the stories from Instagram's website or use an app like Story
Reposter. Maybe PC users just need another small window on the desktop, then move the mouse pointer to the small window before snapping the display.
Clearly, there are concerns on Instagram's part about users' content being shared without their permission, but if a post is shared with someone for viewing, it is pretty tough to stop them from grabbing a copy for themselves as they view it.
The UK's digital and culture secretary, Matt Hancock, has ruled out creating a new internet censor targeting social media
such as Facebook and Twitter.
In an interview on the BBC's Media Show , Hancock said he was not inclined in that direction and instead wanted to ensure existing regulation is fit for purpose. He said:
If you tried to bring in a new regulator you'd end up having to regulate everything. But that doesn't mean that we don't need to make sure that the regulations ensure that markets work properly and people are protected.
Meanwhile the Electoral Commission and the Department for Digital, Culture, Media and Sport select committee are now investigating whether Russian groups used the platforms to interfere in the Brexit referendum in 2016. The DCMS select committee
is in the US this week to grill tech executives about their role in spreading fake news. In a committee hearing in Washington yesterday, YouTube's policy chief said the site had found no evidence of Russian-linked accounts purchasing ads to
interfere in the Brexit referendum.
Amazon's game-streaming site Twitch is cracking down on sexy content in a bid to make the platform more family-friendly. In particular the site is putting a stop to so-called bikini streamers who wear skimpy outfits to increase their subscriber count, or attract donations.
Some streams involved a squats for subs dynamic, where scantily clad game streamers would perform squats in front of a camera in return for new channel subscribers.
The Amazon-owned gaming website, which is the world's most popular place to live-stream video games, has introduced a strict dress code that will come into effect later this month. Transgressors will be banned from the site. Twitch said:
We're updating our moderation framework to review your conduct in its entirety when evaluating if the intent is to be sexually suggestive.
The company is planning to examine a whole host of elements, including stream titles, camera angles, emotes, panels, clothing, overlays, and the chat box too.
As far as clothing goes, Twitch recommends wearing something you'd be comfortable in at a shopping centre.
Attire in gaming streams, most at-home streams, and all profile/channel imagery should be appropriate for a public street, mall, or restaurant.
This week, Senators Hatch, Graham, Coons, and Whitehouse introduced a bill that diminishes the data privacy of people around the world.
The Clarifying Overseas Use of Data ( CLOUD
) Act expands American and foreign law enforcement's ability to target and access people's data across international borders in two ways. First, the bill creates an explicit provision for U.S. law enforcement (from a local police department to
federal agents in Immigration and Customs Enforcement) to access "the contents of a wire or electronic communication and any record or other information" about a person regardless of where they live or where that information is located
on the globe. In other words, U.S. police could compel a service provider--like Google, Facebook, or Snapchat--to hand over a user's content and metadata, even if it is stored in a foreign country, without following that foreign country's privacy laws.
Second, the bill would allow the President to enter into "executive agreements" with foreign governments that would allow each government to acquire users' data stored in the other country, without following each other's privacy laws.
For example, because U.S.-based companies host and carry much of the world's Internet traffic, a foreign country that enters one of these executive agreements with the U.S. could potentially wiretap people located anywhere on the globe (so long
as the target of the wiretap is not a U.S. person or located in the United States) without the procedural safeguards of U.S. law typically given to data stored in the United States, such as a warrant, or even notice to the U.S. government. This is
an enormous erosion of current data privacy laws.
This bill would also moot legal proceedings now before the U.S. Supreme Court. In the spring, the Court will decide whether or not current U.S. data privacy laws allow U.S. law enforcement to serve warrants for information stored outside the
United States. The case, United States v. Microsoft
(often called "Microsoft Ireland"), also calls into question principles of international law, such as respect for other countries' territorial boundaries and their rule of law.
Notably, this bill would expand law enforcement access to private email and other online content, yet the Email Privacy Act, which would create a warrant-for-content requirement, has still not passed the Senate, even though it has enjoyed support in the House for the past two years.
The CLOUD Act and the US-UK Agreement
The CLOUD Act's proposed language is not new. In 2016, the Department of Justice first proposed legislation that would enable the executive branch to enter into bilateral agreements with foreign governments to allow those foreign governments direct access to U.S. companies and U.S. stored data. Ellen Nakashima at the Washington Post broke the story that these agreements (the first iteration has already been negotiated with the United Kingdom) would enable foreign governments to wiretap any communication in the United States, so long as the target is not a U.S. person. In 2017, the Justice Department re-submitted the bill for Congressional review, but added a few changes: this time including broad language to allow the extraterritorial application of U.S. warrants outside the boundaries of the United States.
In September 2017, EFF, with a coalition of 20 other privacy advocates, sent a letter
to Congress opposing the Justice Department's revamped bill.
The executive agreement language in the CLOUD Act is nearly identical to the language in the DOJ's 2017 bill. None of EFF's concerns
have been addressed. The legislation still:
Includes a weak standard for review that does not rise to the protections of the warrant requirement under the 4th Amendment.
Fails to require foreign law enforcement to seek individualized and prior judicial review.
Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.
Fails to place adequate limits on the category and severity of crimes for this type of agreement.
Fails to require notice on any level -- to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill
allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)
The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations.
But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation. This denial of privacy rights is unlike other U.S. privacy laws. For instance, the
Stored Communications Act
protects all members of the "public" from the unlawful disclosure of their personal communications.
An Expansion of U.S. Law Enforcement Capabilities
The CLOUD Act would give unlimited jurisdiction to U.S. law enforcement over any data controlled by a service provider, regardless of where the data is stored and who created it. This applies to content, metadata, and subscriber information --
meaning private messages and account details could be up for grabs. The breadth of such unilateral extraterritorial access creates a dangerous precedent for other countries who may want to access information stored outside their own borders,
including data stored in the United States.
EFF argued on this basis (among others) against unilateral U.S. law enforcement access to cross-border data, in our Supreme Court brief in the Microsoft Ireland case.
When data crosses international borders, U.S. technology companies can find themselves caught in the middle between the conflicting data laws of different nations: one nation might use its criminal investigation laws to demand data located beyond
its borders, yet that same disclosure might violate the data privacy laws of the nation that hosts that data. Thus, U.S. technology companies lobbied for and received provisions in the CLOUD Act allowing them to move to quash or modify U.S. law
enforcement orders for extraterritorial data. The tech companies can quash a U.S. order when the order does not target a U.S. person and might conflict with a foreign government's laws. To do so, the company must object within 14 days, and undergo
a complex "comity" analysis -- a procedure where a U.S. court must balance the competing interests of the U.S. and foreign governments.
Failure to Support Mutual Assistance
Of course, there is another way to protect technology companies from this dilemma, which would also protect the privacy of technology users around the world: strengthen the existing international system of Mutual Legal Assistance Treaties (MLATs).
This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. The MLAT system encourages international cooperation.
It also advances data privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment's warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the
data privacy rules where the data is stored, which may include important "necessary and proportionate" standards. Technology users are most protected when police, in the pursuit of cross-border data, must satisfy the privacy standards of both countries.
While there are concerns from law enforcement that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining.
The CLOUD Act raises dire implications for the international community, especially as the Council of Europe is beginning a process to review the MLAT system that has been supported for the last two decades by the Budapest Convention. Although Senator Hatch has in the past introduced legislation that would support the MLAT system, this new legislation fails to include any provisions that would increase resources for the U.S. Department of Justice to tackle its backlog of MLAT requests, or otherwise improve the MLAT system.
A growing chorus of privacy groups in the United States opposes the CLOUD Act's broad expansion of U.S. and foreign law enforcement's unilateral powers over cross-border data. For example, Sharon Bradford Franklin (the former executive director of the U.S. Privacy and Civil Liberties Oversight Board) objects that the CLOUD Act will move law enforcement access capabilities "in the wrong direction, by sacrificing digital rights." Access Now and other privacy organisations also oppose the bill.
Sadly, some major U.S. technology companies and legal scholars support the legislation. But, to set the record straight, the CLOUD Act is not a "good compromise." Nor does it do a "remarkable job of balancing these interests in ways that promise long-term gains in both privacy and security." Rather, the legislation reduces protections for the personal privacy of technology users in an attempt to mollify tensions between law enforcement and U.S. technology companies.
Legislation to protect the privacy of technology users from government snooping has long been overdue in the United States. But the CLOUD Act does the opposite, and privileges law enforcement at the expense of people's privacy. EFF strongly
opposes the bill. Now is the time to strengthen the MLAT system, not undermine it.
Illegal content and terrorist propaganda are still spreading rapidly online in the European Union -- just not on mainstream platforms, new analysis shows.
Twitter, Google and Facebook all play by EU rules when it comes to illegal content, namely hate speech and terrorist propaganda, policing their sites voluntarily.
But with increased scrutiny on mainstream sites, alt-right and terrorist sympathizers are flocking to niche platforms where illegal content is shared freely, security experts and anti-extremism activists say.
Government outlines next steps to make the UK the safest place to be online
The Prime Minister has announced plans to review laws and make sure that what is illegal offline is illegal online as the Government marks Safer Internet Day.
The Law Commission will launch a review of current legislation on offensive online communications to ensure that laws are up to date with technology.
As set out in the Internet Safety Strategy Green Paper
, the Government is clear that abusive and threatening behaviour online is totally unacceptable. This work will determine whether laws are effective enough in ensuring parity between the treatment of offensive behaviour that happens offline and online.
The Prime Minister has also announced:
That the Government will introduce a comprehensive new social media code of practice this year, setting out clearly the minimum expectations on social media companies
The introduction of an annual internet safety transparency report - providing UK data on offensive online content and what action is being taken to remove it.
Other announcements made today by Secretary of State for Digital, Culture, Media and Sport (DCMS) Matt Hancock include:
A new online safety guide
for those working with children, including school leaders and teachers, to prepare young people for digital life
A commitment from major online platforms including Google, Facebook and Twitter to put in place specific support during election campaigns to ensure abusive content can be dealt with quickly -- and that they will provide advice and guidance to
Parliamentary candidates on how to remain safe and secure online
DCMS Secretary of State Matt Hancock said:
We want to make the UK the safest place in the world to be online and having listened to the views of parents, communities and industry, we are delivering on the ambitions set out in our Internet Safety Strategy.
Not only are we seeing if the law needs updating to better tackle online harms, we are moving forward with our plans for online platforms to have tailored protections in place - giving the UK public standards of internet safety unparalleled
anywhere else in the world.
Law Commissioner Professor David Ormerod QC said:
There are laws in place to stop abuse but we've moved on from the age of green ink and poison pens. The digital world throws up new questions and we need to make sure that the law is robust and flexible enough to answer them.
If we are to be safe both on and off line, the criminal law must offer appropriate protection in both spaces. By studying the law and identifying any problems we can give government the full picture as it works to make the UK the safest place to be online.
The latest announcements follow the publication of the Government's Internet Safety Strategy Green Paper
last year which outlined plans for a social media code of practice. The aim is to prevent abusive behaviour online, introduce more effective reporting mechanisms to tackle bullying or harmful content, and give better guidance for users to identify
and report illegal content. The Government will be outlining further steps on the strategy, including more detail on the code of practice and transparency reports, in the spring.
To support this work, people working with children including teachers and school leaders will be given a new guide for online safety, to help educate young people in safe internet use. Developed by the UK Council for Child Internet Safety (UKCCIS), the toolkit describes the knowledge and skills for staying safe online that children and young people should have at different stages of their lives.
Major online platforms including Google, Facebook and Twitter have also agreed to take forward a recommendation from the Committee on Standards in Public Life (CSPL) to provide specific support for Parliamentary candidates so that they can remain safe and secure while on these sites during election campaigns. These are important steps in safeguarding the free and open elections which are a key part of our democracy.
Included in the Law Commission's scope for their review will be the Malicious Communications Act and the Communications Act. It will consider whether difficult concepts need to be reconsidered in the light of technological change - for example,
whether the definition of who a 'sender' is needs to be updated.
The Government will bring forward an Annual Internet Safety Transparency report, as proposed in our Internet Safety Strategy green paper. The reporting will show:
the amount of harmful content reported to companies
the volume and proportion of this material that is taken down
how social media companies are handling and responding to complaints
how each online platform moderates harmful and abusive behaviour and the policies they have in place to tackle it.
Annual reporting will help to set baselines against which to benchmark companies' progress, and encourage the sharing of best practice between companies.
The new social media code of practice will outline standards and norms expected from online platforms. It will cover:
The development, enforcement and review of robust community guidelines for the content uploaded by users and their conduct online
The prevention of abusive behaviour online and the misuse of social media platforms -- including action to identify and stop users who are persistently abusing services
The reporting mechanisms that companies have in place for inappropriate, bullying and harmful content, and ensuring they have clear policies and performance metrics for taking this content down
The guidance social media companies offer to help users identify illegal content and contact online, and advise them on how to report it to the authorities, to ensure this is as clear as possible
The policies and practices companies apply around privacy issues.
The UK Prime Minister's proposals for possible new laws to stop intimidation against politicians have the potential to prevent legal
protests and free speech that are at the core of our democracy, says Index on Censorship. One hundred years after the suffragette demonstrations won the right for women to have the vote for the first time, a law that potentially silences angry
voices calling for change would be a retrograde step.
No one should be threatened with violence, or subjected to violence, for doing their job, said Index chief executive Jodie Ginsberg. However, the UK already has a host of laws dealing with harassment of individuals both off and online that cover
the kind of abuse politicians receive on social media and elsewhere. A loosely defined offence of 'intimidation' could cover a raft of perfectly legitimate criticism of political candidates and politicians -- including public protest.
Greater transparency for users around news broadcasters
Today we will start rolling out notices below videos uploaded by news broadcasters that receive some level of government or public funding.
Our goal is to equip users with additional information to help them better understand the sources of news content that they choose to watch on YouTube.
We're rolling out this feature to viewers in the U.S. for now, and we don't expect it to be perfect. Users and publishers can give us feedback through the send feedback form. We plan to improve and expand the feature over time.
The notice will appear below the video, but above the video's title, and include a link to Wikipedia so viewers can learn more about the news broadcaster.
I've spent years of study perfecting a spell to turn Hermione into a porn star, and some spotty muggle has beaten me to it.
In recent weeks there has been an explosion in what has become known as deepfakes: pornographic videos manipulated so that the original actress' face is replaced with somebody else's.
As these tools have become more powerful and easier to use, it has enabled the transfer of sexual fantasies from people's imaginations to the internet. It flies past not only the boundaries of human decency, but also our sense of believing what we
see and hear.
There are some celebrities in particular that seem to have attracted the most attention from deepfakers.
It seems, anecdotally, to be driven by the shock factor: the extent to which a real explicit video involving this subject would create a scandal.
Fakes depicting actress Emma Watson are among the most popular on deepfake communities, alongside those involving Natalie Portman.
As the practice draws more ire, some of the sites facilitating the sharing of such content are considering their options - and taking tentative action.
Gfycat, an image hosting site, has removed posts it identified as being deepfakes - a task likely to become much more difficult in the not-too-distant future.
Reddit, the community website that has emerged as a central hub for sharing, is yet to take any direct action - but the BBC understands it is looking closely at what it could do.
Proposal for Designation of Age-verification Regulator
Thursday 1 February 2018
The Minister of State, Department for Digital, Culture, Media and Sport (Margot James)
I beg to move,
That the Committee has considered the Proposal for Designation of Age-verification Regulator.
The Digital Economy Act 2017 introduced a requirement for commercial providers of online pornography to have robust age-verification controls in place to prevent children and young people under the age of 18 from accessing
pornographic material. Section 16 of the Act states that the Secretary of State may designate by notice the age-verification regulator and may specify which functions under the Act the age-verification regulator should hold. The debate will focus
on two issues. I am seeking Parliament's approval to designate the British Board of Film Classification as the age-verification regulator and approval for the BBFC to hold in this role specific functions under the Act.
Liam Byrne (Birmingham, Hodge Hill) (Lab)
At this stage, I would normally preface my remarks with a lacerating attack on how the Government are acquiescing in our place in the world as a cyber also-ran, and I would attack them for their rather desultory position and
attitude to delivering a world-class digital trust regime. However, I am very fortunate that this morning the Secretary of State has made the arguments for me. This morning, before the Minister arrived, the Secretary of State launched his new
app, Matt Hancock MP. It does not require email verification, so people are already posting hardcore pornography on it. When the Minister winds up, she might just tell us whether the age-verification regulator that she has proposed, and that we
will approve this morning, will oversee the app of the Secretary of State as well.
Particulars of Proposed Designation of Age-Verification Regulator
01 February 2018
Motion to Approve moved by Lord Ashton of Hyde
Section 16 of the Digital Economy Act states that the Secretary of State may designate by notice the age-verification regulator, and may specify which functions under the Act the age-verification regulator should hold. I am therefore seeking this
House's approval to designate the British Board of Film Classification as the age-verification regulator. We believe that the BBFC is best placed to carry out this important role, because it has unparalleled expertise in this area.
Lord Stevenson of Balmacara (Lab)
I still argue, and I will continue to argue, that it is not appropriate for the Government to give statutory powers to a body that is essentially a private company. The BBFC is, as I have said before -- I do not want to go into any detail -- a company limited by guarantee. It is therefore a profit-seeking organisation. It is not a charity or body that is there for the public good. It was set up purely as a protectionist measure to try to make sure that people responsible for producing films that were covered by a licensing regime in local authorities that was aggressive towards certain types of films -- it was variable and therefore not good for business -- could be protected by a system that was largely undertaken voluntarily. It was run by the motion picture production industry for itself.
Lord Ashton of Hyde
I will just say that the BBFC is set up as an independent non-governmental body with a corporate structure, but it is a not-for-profit corporate structure. We have agreed funding arrangements for the BBFC for the purposes of
the age-verification regulator. The funding is ring-fenced for this function. We have agreed a set-up cost of just under £1 million and a running cost of £800,000 for the first year. No other sources of funding will be required to carry out this
work, so there is absolutely no question of influence from industry organisations, as there is for its existing work -- it will be ring-fenced.
Frederic Durand-Baissas, a primary school teacher in Paris, has sued Facebook in French court for violating his freedom of
speech in 2011 by abruptly removing his profile.
Durand-Baissas' account was suspended after he posted a photo of Gustave Courbet's The Origin of the World , a painting from 1866 that depicts female genitalia.
The case was heard on Thursday. His lawyers have asked a Paris civil court to order Facebook Inc. to reactivate the account and to pay Durand-Baissas 20,000 euros ($23,500) in damages. Durand-Baissas also wants Facebook to explain why his account was suspended.
Lawyers for Facebook argued the lawsuit should be dismissed on a technicality, that Durand-Baissas didn't sue the right Facebook entity. The teacher should have sued Facebook Ireland, the web host for its service in France, and not the
California-based parent company, Facebook Inc., they claimed. Facebook Inc. can't explain why Facebook Ireland deactivated Mr. Durand-Baissas' account, lawyer Caroline Lyannaz said in court.
Facebook's current policy appears to allow postings such as a photo of the Courbet painting. Its standards page now explicitly states: We also allow photographs of paintings, sculptures, and other art that depicts nude figures.
The civil court's ruling in Durand-Baissas' case is expected on March 15.
China will begin blocking overseas providers of virtual private networks (VPN) used to circumvent its Great Firewall of government
censorship at the end of March, official media reported.
Ministry of Industry and Information Technology (MIIT) chief censor Zhang Feng said VPN operators must be licensed by the government, and that unlicensed VPNs are the target of new rules which come into force on March 31. He said that China wants
to ban VPNs which unlawfully conduct cross-border operational activities.
Any foreign companies that want to set up a cross-border operation for private use will need to set up a dedicated line for that purpose, he said. They will be able to lease such a line or network legally from the telecommunications import and export authorities.
Meanwhile, the American Chamber of Commerce in China said it had carried out a recent survey of U.S. companies in the country that showed that the inability to access certain online tools, internet censorship, and cybersecurity were impeding their business.
An internet user surnamed Zeng told RFA that the new regulations could also hit any Chinese businesses that need unimpeded communications with the outside world. He explained:
I have a friend who is a businessman, and makes things mainly for export, and this has already affected his order book. He usually uses WhatsApp to communicate [with customers] and now it's very hard to log on, and this has really affected
business. In future, he won't be able to log on at all, so he told me he will likely have to shut down his factory.
Although a majority are in favour of verifying age, it seems far fewer people in our survey would be happy to
actually go through verification themselves. Only 19% said they'd be comfortable sharing information directly with an adult site, and just 11% would be comfortable handing details to a third party.
The UK's mass digital surveillance regime preceding the snooper's charter has been found to be illegal by an appeals court.
The case was brought by the Labour deputy leader, Tom Watson in conjunction with Liberty, the human rights campaign group.
The three judges said the Data Retention and Investigatory Powers Act 2014 (Dripa), which paved the way for the snooper's charter legislation, did not restrict the accessing of confidential personal phone and web browsing records to investigations of
serious crime, and allowed police and other public bodies to authorise their own access without adequate oversight. The judges said Dripa was inconsistent with EU law because of this lack of safeguards, including the absence of prior review by a
court or independent administrative authority.
Responding to the ruling, Watson said:
This legislation was flawed from the start. It was rushed through parliament just before recess without proper parliamentary scrutiny. The government must now bring forward changes to the Investigatory Powers Act to ensure that hundreds of
thousands of people, many of whom are innocent victims or witnesses to crime, are protected by a system of independent approval for access to communications data. I'm proud to have played my part in safeguarding citizens' fundamental rights.
Martha Spurrier, the director of Liberty, said:
Yet again a UK court has ruled the government's extreme mass surveillance regime unlawful. This judgement tells ministers in crystal clear terms that they are breaching the public's human rights. She said no politician was above the law. When
will the government stop bartering with judges and start drawing up a surveillance law that upholds our democratic freedoms?
Matthew Rice of the Open Rights Group responded:
Once again, another UK court has found another piece of Government surveillance legislation to be unlawful. The Government needs to admit their legislation is flawed and make the necessary changes to the Investigatory Powers Act to protect the
public's fundamental rights.
The Investigatory Powers Act carves a gaping hole in the public's rights: public bodies are able to access data without proper oversight, and can access that data for reasons other than fighting serious crime. These practices must stop; the courts have now confirmed it. The ball is firmly in the Government's court to set it right.
Two broadband providers, BT and EE, have gone to the Supreme Court in London to appeal two key aspects of an earlier ruling, which
forced major UK ISPs to start blocking websites that were found to sell counterfeit goods.
Previously major ISPs could only be forced, via a court order, to block websites if they were found to facilitate internet copyright infringement. But in 2014 the High Court extended this to include sites that sell counterfeit goods and thus abuse trade marks.
The providers initially appealed this decision, not least by stating that Cartier and Montblanc (which raised the original case) had provided no evidence that their networks were being abused to infringe trade marks and that the UK Trade Mark Act did not include a provision for website blocking. Not to mention the risk that such a law could be applied in an overzealous way, e.g. requiring the blocking of eBay because of one seller.
The ISPs also noted that trademark infringing sites weren't heavily used, and thus they felt as if it would not be proportionate for them to suffer the costs involved.
In April 2016 this case went to the Court of Appeal (London); the ISPs lost, hence the appeal to the Supreme Court.
Firefox is working to protect users from censorship and government control of the Internet. Firefox 59 will recognize new peer to
peer internet protocols such as Dat Project, IPFS, and Secure Scuttlebutt, allowing companies to develop extensions which will deliver the Internet in a way governments will find difficult to control, monitor and censor.
Mozilla believes such freedom is a key ingredient of a healthy Internet, and has sponsored other projects which would offer peer to peer wireless internet which cuts out Internet Service Providers.
While a peer to peer system would never be as fast and easy as the client-server system we have at present, it does provide a baseline level of service below which governments and ISPs could not go without risking an increasing number of users defecting. The mere existence of these systems therefore helps everyone else, even if they never become widespread.
Mozilla has always been a proponent of decentralization , recognizing that it is a key ingredient of a healthy Internet. Starting with Firefox 59, several protocols that support decentralized architectures are approved for
use by extensions. The newly approved protocols are:
Firefox itself does not implement these protocols, but having them on the approved list means the browser recognizes them as valid protocols and extensions are free to provide implementations.
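In practice this means an extension can register itself as the handler for one of the newly approved schemes. As a rough sketch, assuming the standard WebExtensions protocol_handlers manifest key (the gateway URL here is a hypothetical placeholder), an extension routing ipfs:// links might declare:

```json
{
  "manifest_version": 2,
  "name": "IPFS handler sketch",
  "version": "1.0",
  "protocol_handlers": [
    {
      "protocol": "ipfs",
      "name": "Hypothetical IPFS gateway handler",
      "uriTemplate": "https://gateway.example.com/ipfs/%s"
    }
  ]
}
```

When a user clicks an ipfs:// link, Firefox substitutes the address into the uriTemplate and hands it to the registered handler; a more ambitious extension could resolve the content peer-to-peer itself rather than via a gateway.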
A Republican Virginia lawmaker has revived the nonsense idea to impose a state tax charge on every device sold to enable access to adult content.
State Representative Dave LaRock has introduced a bill misleadingly called the Human Trafficking Prevention Act, which would require Virginians to pay a $20 fee to unblock content on adult websites.
LaRock has a track record of being anti-porn and anti-gay. He once tore down advertising for an adult bookstore and railed against recognition for a local LGBTQ pride month.
Opponents point out that the proposal amounts to a tax on media content and would violate the First Amendment. The Media Coalition, which tracks legislation involving the First Amendment, sees the bill as nothing more than a tax on content, which
is unconstitutional, said executive director David Horowitz. People have a First Amendment right to access this content, and publishers have a First Amendment right to provide it.
Claire Guthrie Gastañaga, executive director of the ACLU of Virginia, said the organization just can't take the bill seriously.
China's internet censor has shut down some of the most popular sections of Weibo, a Twitter-like social media platform, saying that the website had
failed in its duty to censor content.
The Beijing office of the Cyberspace Administration of China summoned a Weibo executive, complaining of its serious problems including not censoring vulgar and pornographic content. The censor said:
Sina Weibo has violated the relevant internet laws and regulations and spread illegal information. It has a serious problem in promoting 'wrong' values and has had an adverse influence on the internet environment.
It highlighted as problematic sections of the platform such as the hot topics ranking, most searched, most searched celebrities and most searched relationship topics, as well as its question-and-answer section.
Other problems on Weibo included allowing posts that discriminated against ethnic minorities and content that was not in line with what it deemed appropriate social values.
Weibo said it had since shut down a number of services, including its list of top searches, for a week.
Just a bit of background from Thailand explaining how internet access is priced for mobile phones; it rather explains how Facebook and YouTube are even more dominant than in the west:
We give our littl'un a quid a week to top up her pay as you go mobile phone. She can, and does, spend unlimited time on YouTube, Facebook, Messenger, Skype, Line and a couple of other social media sites. It's as cheap as chips, but the rub is
that she has just a tiny bandwidth allowance to look at any sites apart from the core social media set.
On the other hand wider internet access with enough bandwidth to watch a few videos costs about 15 quid a month (a recently reduced price; it used to be 30 quid a month a few months ago).
Presumably the cheap service is actually paid for by Google and Facebook etc, with the knowledge that people are nearly totally trapped in their walled garden. It's quite useful for kids because they haven't got the bandwidth to go looking round where they shouldn't. But the price makes it very attractive to many adults too.
Anyway Summer Lopez from PEN America considers how this internet monopoly stitch-up makes the announced Facebook feed changes even more sensitive than in the west.
Theresa May is creating a new national security unit to counter supposed fake news and disinformation
spread by Russia and other foreign powers, Downing Street has announced.
The Prime Minister's official spokesman said the new national security communications unit would build on existing capabilities and would be tasked with combating disinformation by state actors and others. The spokesman said:
We are living in an era of fake news and competing narratives. The government will respond with more and better use of national security communications to tackle these interconnected, complex challenges.
To do this we will build on existing capabilities by creating a dedicated national security communications unit. This will be tasked with combating disinformation by state actors and others.
The new unit has already been dubbed the Ministry of Truth.
It is clear that the BBFC is set to censor porn websites, but what about the grey area of non-porn websites about porn and sex work? The BBFC falsely claims it doesn't know yet, as it hasn't begun work on its guidelines.
A few MEPs have produced a YouTube video highlighting the corporate and state censorship that would be enabled by an EU proposal to require social media posts to be approved by an automated censorship machine before posting.
In a new campaign video, several Members of the European Parliament warn that the EU's proposed
mandatory upload filters pose a threat to freedom of speech. The new filters would function as censorship machines which are "completely disproportionate," they say. The MEPs encourage the public to speak up, while they still can.
Through a series of new proposals, the European Commission is working hard to modernize
EU copyright law. Among other things, it will require online services to do more to fight piracy.
These proposals have not been without controversy. Article 13 of the proposed Copyright Directive, for example, has been widely criticized as it would require online services to monitor and filter uploaded content.
This means that online services, which deal with large volumes of user-uploaded content, must use fingerprinting or other detection mechanisms -- similar to YouTube's Content-ID system -- to block copyright infringing files.
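The matching step that such a filter performs can be illustrated with a toy sketch. This is a minimal illustration only: real systems like YouTube's Content-ID use perceptual fingerprints that survive re-encoding and cropping, whereas the plain hash and the blocklist contents below are invented for demonstration.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known copyrighted works.
# A real filter would hold perceptual fingerprints supplied by rightsholders,
# not plain content hashes.
BLOCKED_FINGERPRINTS = {
    hashlib.sha256(b"example copyrighted clip").hexdigest(),
}

def fingerprint(data: bytes) -> str:
    """Reduce uploaded content to a comparable fingerprint."""
    return hashlib.sha256(data).hexdigest()

def allow_upload(data: bytes) -> bool:
    """Pre-screen an upload: reject it if its fingerprint is blocklisted."""
    return fingerprint(data) not in BLOCKED_FINGERPRINTS

print(allow_upload(b"example copyrighted clip"))  # False (blocked)
print(allow_upload(b"original home video"))       # True (allowed)
```

The MEPs' objection is visible even in this toy: the decision is made entirely by the matching machinery, with no step at which fair use or context can be considered.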
The Commission believes that more stringent control is needed to support copyright holders. However, many legal experts, digital activists, and members of the public worry that the plans will violate the rights of regular Internet users.
In the European Parliament, there is fierce opposition as well. Today, six Members of Parliament (MEPs) from across the political spectrum released a new campaign video warning their colleagues and the public at large.
The MEPs warn that such upload filters would act as censorship machines, something they've made clear to the Council's working group on intellectual property, where the controversial proposal was discussed today.
Imagine if every time you opened your mouth, computers controlled by big companies would check what you were about to say, and have the power to prevent you from saying it, Greens/EFA MEP Julia Reda says.
A new legal proposal would make this a reality when it comes to expressing yourself online: every clip and every photo would have to be pre-screened by some automated 'robocop' before it could be uploaded and seen online, adds ALDE MEP Marietje Schaake.
Stop censorship machines!
Schaake notes that she has dealt with the consequences of upload filters herself. When she uploaded a recording of a political speech to YouTube, the site took it down without explanation. To this day, the MEP still doesn't know on what grounds it was removed.
These broad upload filters are completely disproportionate and a danger for freedom of speech, the MEPs warn. The automated systems make mistakes and can't properly detect whether something's fair use, for example.
Another problem is that the measures will be relatively costly for smaller companies, which puts them at a competitive disadvantage. "Only the biggest platforms can afford them -- European competitors and small businesses will struggle,"
ECR MEP Dan Dalton says.
The plans can still be stopped, the MEPs say. They are currently scheduled for a vote in the Legal Affairs Committee at the end of March, and the video encourages members of the public to raise their voices.
Speak out... while you can still do so unfiltered! says S&D MEP Catherine Stihler.
Robert Hannigan, a former director of GCHQ, has joined the clamour for internet censorship by US internet monopolies.
Hannigan accused the web giants of doing too little to remove terrorist and extremist content and he threatened that the companies have a year to reform themselves or face government legislation.
Hannigan suggested tech companies were becoming more powerful than governments, and had a tendency to consider themselves above democracy. But he said he believed their window to change themselves was closing and he feared most were missing the
boat. He predicted that if firms do not take credible action by the end of 2018, governments would start to intervene with legislation.
The government publishes its guidance to the new UK porn censor about notifying websites that they are to be censored, asking payment providers and advertisers to withdraw their services, recourse to ISP blocks, and an appeals process
A person contravenes Part 3 of the Digital Economy Act 2017 if they make
pornographic material available on the internet on a commercial basis to
persons in the United Kingdom without ensuring that the material is not
normally accessible to persons under the age of 18. Contravention could lead
to a range of measures being taken by the age-verification regulator in
relation to that person, including blocking by internet service providers (ISPs).
Part 3 also gives the age-verification regulator powers to act where a person
makes extreme pornographic material (as defined in section 22 of the Digital
Economy Act 2017) available on the internet to persons in the United Kingdom.
This guidance has been written to provide the framework for the operation of
the age-verification regulatory regime in the following areas:
● Regulator's approach to the exercise of its powers;
● Age-verification arrangements;
● Payment-services providers and ancillary service providers;
● Internet service provider blocking; and
● Appeals.
This guidance balances two overarching principles in the regulator's application of its powers under sections 19, 21 and 23 - that it should apply its powers in the way which it thinks will be most effective in ensuring
compliance on a case-by-case basis and that it should take a proportionate approach.
As set out in this guidance, it is expected that the regulator, in taking a proportionate approach, will first seek to engage with the non-compliant person to encourage them to comply, before considering issuing a notice
under section 19, 21 or 23, unless there are reasons why the regulator does not think that is appropriate in a given case.
Regulator's approach to the exercise of its powers
The age-verification consultation Child Safety Online: Age verification for pornography identified that an extremely large number of websites contain pornographic content - circa 5 million sites or parts of sites. All
providers of online pornography, who are making available pornographic material to persons in the United Kingdom on a commercial basis, will be required to comply with the age-verification requirement.
In exercising its powers, the regulator should take a proportionate approach. Section 26(1) specifically provides that the regulator may, if it thinks fit, choose to exercise its powers principally in relation to persons who,
in the age-verification regulator's opinion:
(a) make pornographic material or extreme pornographic material available on the internet on a commercial basis to a large number of persons, or a large number of persons under the age of 18, in the United Kingdom; or
(b) generate a large amount of turnover by doing so.
In taking a proportionate approach, the regulator should have regard to the following:
a. As set out in section 19, before making a determination that a person is contravening section 14(1), the regulator must allow that person an opportunity to make representations about why the determination should not be
made. To ensure clarity and discourage evasion, the regulator should specify a prompt timeframe for compliance and, if it considers it appropriate, set out the steps that it considers that the person needs to take to comply.
b. When considering whether to exercise its powers (whether under section 19, 21 or 23), including considering what type of notice to issue, the regulator should consider, in any given case, which intervention will be most
effective in encouraging compliance, while balancing this against the need to act in a proportionate manner.
c. Before issuing a notice to require internet service providers to block access to material, the regulator must always first consider whether issuing civil proceedings or giving notice to ancillary service providers and
payment-services providers might have a sufficient effect on the non-complying person's behaviour.
To help ensure transparency, the regulator should publish on its website details of any notices under sections 19, 21 and 23.
Section 25(1) provides that the regulator must publish guidance about the types of arrangements for making pornographic material available that the regulator will treat as complying with section 14(1). This guidance is
subject to a Parliamentary procedure
A person making pornographic material available on a commercial basis to persons in the United Kingdom must have an effective process in place to verify a user is 18 or over. There are various methods for verifying whether
someone is 18 or over (and it is expected that new age-verification technologies will develop over time). As such, the Secretary of State considers that rather than setting out a closed list of age-verification arrangements, the regulator's
guidance should specify the criteria by which it will assess, in any given case, that a person has met with this requirement. The regulator's guidance should also outline good practice in relation to age verification to encourage consumer choice
and the use of mechanisms which confirm age, rather than identity.
The regulator is not required to approve individual age-verification solutions. There are various ways to age verify online and the industry is developing at pace. Providers are innovating and providing choice to consumers.
The process of verifying age for adults should be concerned only with the need to establish that the user is aged 18 or above. The privacy of adult users of pornographic sites should be maintained and the potential for fraud
or misuse of personal data should be safeguarded. The key focus of many age-verification providers is on privacy and specifically providing verification, rather than identification of the individual.
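The "verification, rather than identification" design described above can be read as a signed attribute token: a verification provider attests only that the holder is over 18, and a site checks that attestation without ever learning who the user is. The sketch below is a minimal illustration under invented assumptions; it does not describe AgeID, AVSecure, or any real provider's scheme, and a real deployment would use public-key signatures rather than the shared HMAC key used here for brevity.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical key shared between the age-verification provider and the site.
# In practice the provider would sign with a private key and sites would
# verify with the matching public key, so no site holds a signing secret.
PROVIDER_KEY = b"demo-signing-key"

def issue_token() -> str:
    """Provider side: attest 'over 18' only -- no name, no identity."""
    claim = json.dumps({"over_18": True, "nonce": secrets.token_hex(8)})
    sig = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def verify_token(token: str) -> bool:
    """Site side: check the signature; learn only that the holder is 18+."""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(claim).get("over_18") is True

token = issue_token()
print(verify_token(token))             # True  (valid attestation)
print(verify_token(token + "tamper"))  # False (signature check fails)
```

Note what the site sees: a boolean claim and a signature, nothing that identifies the individual, which is the privacy property the guidance asks age-verification arrangements to aim for.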
Payment-services providers and ancillary service providers
There is no requirement in the Digital Economy Act for payment-services providers or ancillary service providers to take any action on receipt of such a notice. However, Government expects that responsible companies will wish
to withdraw services from those who are in breach of UK legislation by making pornographic material accessible online to children or by making extreme pornographic material available.
The regulator should consider on a case-by-case basis the effectiveness of notifying different ancillary service providers (and payment-services providers).
There are a wide-range of providers whose services may be used by pornography providers to enable or facilitate making pornography available online and who may therefore fall under the definition of ancillary service provider
in section 21(5)(a) . Such a service is not limited to where a direct financial relationship is in place between the service and the pornography provider. Section 21(5)(b) identifies those who advertise commercially on such sites as ancillary
service providers. In addition, others include, but are not limited to:
a. Platforms which enable pornographic content or extreme pornographic material to be uploaded;
b. Search engines which facilitate access to pornographic content or extreme pornographic material;
c. Discussion fora and communities in which users post links;
d. 'Cyberlockers' and cloud storage services on which pornographic content or extreme pornographic material may be stored;
e. Services, including websites and App marketplaces, that enable users to download Apps;
f. Hosting services which enable access to websites, Apps or App marketplaces;
g. Domain name registrars; and
h. Set-top boxes, mobile applications and other devices that can connect directly to streaming servers.
Internet Service Provider blocking
The regulator should only issue a notice to an internet service provider having had regard to Chapter 2 of this guidance. The regulator should take a proportionate approach and consider all actions (Chapter 2.4) before
issuing a notice to internet service providers.
In determining those ISPs that will be subject to notification, the regulator should take into consideration the number and the nature of customers, with a focus on suppliers of home and mobile broadband services. The
regulator should consider any ISP that promotes its services on the basis of pornography being accessible without age verification, irrespective of other considerations.
The regulator should take into account the child safety impact that will be achieved by notifying a supplier with a small number of subscribers and ensure a proportionate approach. Additionally, it is not anticipated that
ISPs will be expected to block services to business customers, unless a specific need is identified.
In order to assist with the ongoing review of the effectiveness of the new regime and the regulator's functions, the Secretary of State considers that it would be good practice for the regulator to submit to the Secretary of
State an annual report on the exercise of its functions and their effectiveness.
The US adult trade group, Free Speech Coalition at its inaugural Leadership Conference on Thursday
introduced Murray Perkins, who leads efforts for the UK's new age-verification censorship regime under the Digital Economy Act.
Perkins is the principal adviser for the BBFC, which last year signed on to assume the role of internet porn censor.
Perkins traveled to the XBIZ Show on an informational trip specifically to offer education on the Digital Economy Act's regulatory powers; he continues on to Las Vegas next week and Australia the following week to speak with online adult businesses.
The reason why I am here is to be visible, to give people an opportunity to ask questions about what is happening. I firmly believe that the only way to make this work is to work with, and not against, the adult entertainment industry.
This is a challenge; there is no template, but we will figure it out. I am reasonably optimistic [the legislation] will work.
A team of classification examiners will start screening content for potential violations starting in the spring. (In a separate discussion with XBIZ, Perkins said that his army of examiners will total 15.)
Perkins showed himself to be a bit naive, a bit insensitive, or a bit of an idiot when he spouted:
The Digital Economy Act will affect everyone in this room, one way or the other, Perkins said. However, the Digital Economy Act is not anti-porn -- it is not intended to disrupt an adult's journey or access to their content.
[...BUT... it is likely to totally devastate the UK adult industry and hand over all remaining business to the foreign internet giant Mindgeek, who will become the Facebook/Google/Amazon of porn. Not to mention the Brits served on a platter to
scammers, blackmailers and identity thieves].
The third evaluation of the EU's 'Code of Conduct' on censoring 'illegal online hate speech' carried out by NGOs and
public bodies shows that IT companies removed on average 70% of posts claimed to contain 'illegal hate speech'.
However, some further challenges still remain, in particular the lack of systematic feedback to users.
Google+ announced today that they are joining the Code of Conduct, and Facebook confirmed that Instagram would also do so, thus further expanding the numbers of actors covered by it.
Vera Jourová, with the oxymoronic title of EU Commissioner for Justice, Consumers and Gender Equality, said:
The Internet must be a safe place, free from illegal hate speech, free from xenophobic and racist content. The Code of Conduct is now proving to be a valuable tool to tackle illegal content quickly and efficiently. This shows that where there is
a strong collaboration between technology companies, civil society and policy makers we can get results, and at the same time, preserve freedom of speech. I expect IT companies to show similar determination when working on other important issues,
such as the fight with terrorism, or unfavourable terms and conditions for their users.
On average, IT companies removed 70% of all the 'illegal hate speech' notified to them by the NGOs and public bodies participating in the evaluation. This rate has steadily increased from 28% in the first monitoring round in 2016 and 59% in the
second monitoring exercise in May 2017.
The Commission will continue to monitor regularly the implementation of the Code by the participating IT Companies with the help of civil society organisations and aims at widening it to further online platforms. The Commission will consider
additional measures if efforts are not pursued or slow down.
Of course there is no mention of the possibility that some of the reports of supposed 'illegal hate speech' are not actioned because they are simply wrong, and may be just the politically correct being easily offended. We seem to live in an unjust age where the accuser is always considered right and the merits of the case count for absolutely nothing.
Google is set for its first appearance in a London court over the so-called right to be forgotten in two cases that will test the boundaries between
personal privacy and public interest.
Two anonymous people, who describe themselves in court filings as businessmen, want the search engine to take down links to information about their old convictions.
One of the men had been found guilty of conspiracy to account falsely, and the other of conspiracy to intercept communications. Judge Matthew Nicklin said at a pre-trial hearing that those convictions are old and are now covered by an English law
-- designed to rehabilitate offenders -- that says they can effectively be ignored. With a few exceptions, they don't have to be disclosed to potential employers.
A Google spokeswoman said:
We work hard to comply with the right to be forgotten, but we take great care not to remove search results that are clearly in the public interest and will defend the public's right to access lawful information.