Here's what worries cybersecurity experts: all of the age verification options would create a permanent record that a user had visited a porn site. Such a record could even log which videos the visitor had watched.
Matt Tait, a cybersecurity expert formerly of the GCHQ (the United Kingdom's equivalent of the National Security Agency) who now teaches at the University of Texas, notes that any registration system could be a monumental national security risk.
He adds: "It's beyond insane they're even considering it."
Tait envisions a day, coming soon, when a British government official will have to deliver the following message to the Prime Minister:
"Sorry Prime Minister, Russia now knows what porn every MP, civil servant and clearance holder watches and when, and we don't know how much of it they've given to Wikileaks."
If porn consumers in the United Kingdom are the losers, Tait suggests there is a potential winner: Vladimir Putin.
The Government has formally proposed that the British Board of Film Classification (BBFC) be designated as the regulator for the age verification of
online pornography in the UK.
Age verification will mean anyone who makes pornography available online on a commercial basis must ensure under-18s in the UK cannot access it. This is part of the Government's continuing work to make the UK the safest place in the world to be online.
The BBFC has unparalleled expertise in classifying content and has a proven track record of interpreting and implementing legislation as the statutory authority for age rating videos under the Video Recordings Act.
This, along with its work with industry on the film classification system and, more recently, classifying material for mobile network operators, makes it the preferred choice for regulator.
Digital Minister Matt Hancock said:
One of the missions of age verification is to harness the freedom of the internet while mitigating its harms. Offline, as a society we protect children from viewing inappropriate adult material by ensuring pornography is sold responsibly using
appropriate age checks. It is now time that the online world follows suit. The BBFC are the best placed in the world to do this important and delicate task.
David Austin, Chief Executive Officer at BBFC said:
The BBFC's primary aim is to protect children and other vulnerable groups from harmful content and we are therefore pleased to accept the Government's proposed designation.
Age-verification barriers will help to prevent children accessing or stumbling across pornographic content online. The UK is leading the way with this age-verification regime and will set an international precedent in child protection.
The government's proposal must be approved by Parliament before the BBFC is officially designated as the age-verification regulator.
The regulator will notify non-compliant pornographic providers, and be able to direct internet service providers to prevent customers accessing these sites. It will also notify payment-services providers and other ancillary service providers of
these sites, with the intention that they can withdraw their services.
The Government will shortly also publish guidance on how the regulator should fulfil its duties in relation to age verification.
Response: The BBFC will struggle to ensure that Age Verification is safe, secure and anonymous
Responding to the news that the BBFC is in line to be appointed Age Verification regulator, Jim Killock, Executive Director of the Open Rights Group, said:
The BBFC will struggle to ensure that Age Verification is safe, secure and anonymous. They are powerless to ensure people's privacy.
The major publisher, MindGeek, looks like it will dominate the AV market. We are very worried about their product, AgeID, which could track people's porn use. The way this product develops is completely out of BBFC's hands.
Users will not be able to choose how to access websites. They'll be at the mercy of porn companies. And the blame lies squarely with Theresa May's government for pushing incomplete legislation.
Killock also warned that censorship of porn sites could quickly spiral into hundreds or thousands of sites:
While BBFC say they will only block a few large sites that don't use AV, there are tens of thousands of porn sites. Once MPs work out that AV is failing to make porn inaccessible, some will demand that more and more sites are blocked. BBFC will
be pushed to block ever larger numbers of websites.
Response: How to easily get around the UK's porn censorship
Of course, in putting together this hugely draconian piece of legislation, the British Government has overlooked one rather
glaring point. Any efforts to censor online content in the UK can be easily circumvented by anyone using a VPN.
British-based subscribers to a VPN service such as IPVanish or ExpressVPN will be able to get around any blocked sites simply by connecting to a server in another democratic country which hasn't chosen to block websites with adult content.
As much as governments try to censor online content, VPNs will continue to offer people access to the free and uncontrolled internet they are legally entitled to enjoy.
The US's media censor voted to end rules protecting an open internet on Thursday, a move critics warn will hand control of the future of the web to cable and telecoms companies.
At a packed meeting of the Federal Communications Commission (FCC) in Washington, commissioners voted three to two to dismantle the net neutrality rules that prevent internet service providers (ISPs) from charging websites more for delivering
certain services or blocking others should they, for example, compete with services the cable company also offers.
FCC commissioner Mignon Clyburn, a Democrat, denounced the move. I dissent because I am among the millions outraged, outraged because the FCC pulls its own teeth, abdicating responsibility to protect the nation's broadband consumers, she said.
Fellow Democratic commissioner Jessica Rosenworcel said the FCC had shown contempt for public opinion during the review. She called the process corrupt. As a result of today's misguided actions, our broadband providers will get extraordinary new
powers, she said.
Evan Greer, campaign director for internet activists Fight for the Future, said:
Killing net neutrality in the US will impact internet users all over the world. So many of the best ideas will be lost, squashed by the largest corporations at the expense of the global internet-using public.
Michael Cheah of Vimeo said:
ISPs probably won't immediately begin blocking content outright, given the uproar that this would provoke. What's more likely is a transition to a pay-for-play business model that will ultimately stifle startups and innovation, and lead to higher
prices and less choice for consumers.
Ignoring the millions of Americans who protested against the end of net neutrality
In recent months, millions of people have protested the FCC's plan to repeal U.S. net neutrality rules, which were put in place by the Obama administration.
However, the outpouring of public outrage, criticism from major tech companies, and even warnings from pioneers of the Internet had no effect. Today the FCC voted to repeal the old rules, effectively ending net neutrality.
Under the net neutrality rules that have been in effect during recent years, ISPs were specifically prohibited from blocking, throttling, and paid prioritization of lawful traffic. In addition, Internet providers could be regulated as carriers
under Title II.
Now that these rules have been repealed, Internet providers will have more freedom to experiment with paid prioritization. Under the new guidelines, they can charge customers extra for access to some online services, or throttle certain types of traffic.
Most critics of the repeal fear that, now that the old net neutrality rules are in the trash, fast lanes for some services, and throttling for others, will become commonplace in the U.S.
This could also mean that BitTorrent traffic becomes a target once again. After all, it was Comcast's secretive BitTorrent throttling that started the broader net neutrality debate, now ten years ago.
Despite repeated distortions and biased information, as well as misguided, inaccurate attacks from detractors, our Internet service is not going to change, writes David Cohen, Comcast's Chief Diversity Officer:
We have repeatedly stated, and reiterate today, that we do not and will not block, throttle, or discriminate against lawful content.
It's worth highlighting the term 'lawful' in the last sentence. It is by no means a promise that pirate sites won't be blocked.
Why Net Neutrality Repeal Is Extremely Bad News for Porn
Within minutes of a party-line Federal Communications Commission vote to repeal rules protecting net neutrality, at least three states announced measures to keep the rules, set up to guarantee a level playing field for internet consumers, users and businesses, in place. New York, California and Washington quickly outlined a mixture of legal actions and legislative moves to keep net neutrality in place, with more than a dozen states expected to follow.
Whether the states can succeed in stopping the Donald Trump-era elimination of the Barack Obama-era net neutrality requirements is of special interest to adult content providers and consumers, because porn appears likely to be among the hardest
hit of all industries affected by the rollback.
Why? Because porn comprises about one third of all internet traffic, and there are an estimated 800 million pages of porn on the World Wide Web, meaning that the giant corporations that now control internet access for most Americans will envision
almost unimaginable profits to be reaped from slapping users with extra fees to access their favorite adult content.
Google is escalating its campaign of internet censorship, announcing that it will expand its workforce of human censors to over 10,000. The censors' primary focus will be videos and other content on YouTube, but they will also work across Google to censor content and train its automated systems, which remove videos at a rate four times faster than its human employees.
Human censors have already reviewed over 2 million videos since June. YouTube has already removed over 150,000 videos, 50 percent of which were removed within two hours of upload. The company is working to accelerate the rate of takedown through
machine-learning from manual censorship.
YouTube CEO Susan Wojcicki explained the move in an official blog post:
Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly
2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases
shutting down comments altogether. In the last few weeks we've used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with NCMEC, the IWF,
and other child safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies.
We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.
At the same time, we are expanding the network of academics, industry groups and subject matter experts who we can learn from and support to help us better understand emerging issues.
We will use our cutting-edge machine learning more widely to allow us to quickly and efficiently remove content that violates our guidelines. In June we deployed this technology to flag violent extremist content for human review, and we have seen the following results:
Since June we have removed over 150,000 videos for violent extremism.
Machine learning is helping our human reviewers remove nearly five times as many videos as they were able to previously.
Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms.
Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly half of it in two hours and we continue to accelerate that speed.
Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.
The European Commission has joined the list of organisations calling on the likes of Google, Facebook and Twitter to do
more to remove extremist content - or face further legislation.
EU home affairs commissioner Dimitris Avramopoulos warned the real battlefield is against 21st century terrorism. He said most of the recent terrorist attackers had never travelled to Syria or Iraq. But most of them had been influenced, groomed
and recruited to terrorism on the internet.
Avramopoulos said he believed it was feasible to reduce the time it takes to remove content to a few hours. There is a lot of room for improvement, for this cooperation to produce even better results.
Avramopoulos also said he thought it was worthwhile to harness artificial intelligence to complete the task. You know, like Facebook censoring Robin Redbreast Christmas cards because the word 'breast' appeared in the filenames.
The Commission said it would make a decision by May next year on whether additional measures -- including legislation -- are required in order to better address the problem of illegal content on the internet.
Back in March, Australia shelved plans to extend its copyright safe harbour provisions to services such as Google and Facebook. Now, following consultations with the entertainment industries, the government has revealed it will indeed exclude such platforms from safe harbour provisions.
Services such as Google, Facebook and YouTube now face massive legal uncertainty as they themselves can be held responsible for copyright infringing posts by users. The logical result would be that the companies will have to check every post
before upload. The vast quantity of posts to check would make this an economically unviable option.
Proposed amendments to the Copyright Act earlier this year would have seen enhanced safe harbour protections for such platforms, but they were withdrawn at the eleventh hour due to lobbying by media companies. Such companies accuse platforms like YouTube of exploiting safe harbour provisions in the US and Europe, which force copyright holders into an expensive battle to have infringing content taken down.
Communications Minister Mitch Fifield has confirmed the exclusions, so now it is up to Google and Facebook to consider how they can operate under this law.
Iran's telecommunications minister says that his ministry wants to customize Internet blocking based on users' occupation, age, and other factors.
The attorney general's office has conditionally agreed with this plan, Minister Mohammad Javad Azari Jahromi announced on December 4.
Without providing any details, he said his ministry had reviewed suggestions made by the attorney general and prepared appropriate technical responses. He expressed hope that the office would give its final approval for the implementation of the plan.
Despite the regime's extensive efforts to censor the Internet, Iranian users currently get around the restrictions by using anti-filtering programs or virtual private networks.
Google makes its internal processes difficult to track by design, but the author of a report by Karlaplan states that these changes are fairly recent, suspected to have been implemented on 30 August, and only discovered in late October.
However, until the publication of this document, little other than anecdotal evidence had been presented, in the form of complaints from YouTube content creators.
Through extensive analysis of the YouTube Data API and other sources, Karlaplan found that YouTube tags demonetized videos according to both severity and type of sensitive content -- neither of which is transparent to the uploader.
The report also notes that videos are more likely to be hidden from viewers if their likely viewership is low, perhaps because higher-viewership videos are more likely to be appealed, or to be spotted as examples of censorship and hence generate bad publicity for Google.
Google has published an information page that is quite useful in detailing which videos get censored. It outlines two levels of sensitivity that advertisers can select when they do not want to be associated with sensitive content. Google explains:
While the Standard content filter excludes the most inappropriate content, it doesn't exclude everything that a particular advertiser may find objectionable. The Sensitive content categories allow you to opt out of additional content that many advertisers find inappropriate. For example:
Tragedy and conflict
Standard: Excludes graphic footage of combat or war
Sensitive: Excludes the above plus footage of soldiers marching with weapons
Sensitive social issues
Standard: Excludes videos intended to elicit a response about controversial issues
Sensitive: Excludes the above plus news commentary about controversial issues
Sexually suggestive content
Standard: Excludes videos about sex or sexual products
Sensitive: Excludes the above plus music videos with suggestive themes
Sensational and shocking
Standard: Excludes videos of disasters or accidents that show casualties or death
Sensitive: Excludes the above plus videos of moderate disasters or accidents that show minimal casualties or harm
Profanity and rough language
Standard: Excludes videos with frequent use of profanity
Sensitive: Excludes the above plus videos with profanity that has been bleeped out
Cloudflare's decision to ban the Daily Stormer has led to an increase in censorship requests. Since August, Cloudflare has received more than 7,000 requests from across the political spectrum for the removal of content.
Senior police officers are to lose the power to self-authorise access to personal phone and web browsing records under a series of late changes
to the snooper's charter law proposed by ministers in an attempt to comply with a European court ruling on Britain's mass surveillance powers.
A Home Office consultation paper published on Thursday also makes clear that the 250,000 requests each year for access to personal communications data by the police and other public bodies will in future be excluded for investigations into minor crimes that carry a prison sentence of less than six months.
But the government says the 2016 European court of justice (ECJ) ruling in a case brought by Labour's deputy leader, Tom Watson, initially with David Davis, now the Brexit secretary, does not apply to the retention or acquisition of personal phone, email, web history or other communications data by national security organisations such as GCHQ, MI6 or MI5, claiming that national security is outside the scope of EU law.
The Open Rights Group has been campaigning hard on issues of liberty and privacy and writes:
This is a major victory for ORG, although one with dangers. The government has conceded that independent authorisation is necessary for communications data requests, but has refused to budge on retained data and is pushing ahead with the Request Filter, to enable rapid interrogation and analysis of the stored communications data.
Adding independent authorisation for communications data requests will make the police more effective, as corruption and abuse will be harder. It will improve operational effectiveness, even if less data is used during investigations, and trust in the police should improve.
Nevertheless, the government has disregarded many key elements of the judgment:
It isn't going to reduce the amount of data retained
It won't notify people whose data is used during investigations
It won't keep data within the EU; instead it will continue to transfer it, presumably specifically to the USA
The Home Office has opted for a six month sentence definition of serious crime rather than the Lords' definition of crimes capable of sentences of at least one year.
These are clear evasions and abrogations of the judgment. The mission of the Home Office is to uphold the rule of law. By failing to do what the courts tell them, the Home Office is undermining the very essence of the rule of law.
If the Home Office won't do what the highest courts tell it to do, why should anybody else? By picking and choosing the laws they are willing to care about, they are playing with fire.
There was one final surprise. The Code of Practice covers the operation of the Request Filter. Yet again we are told that this police search engine is a privacy safeguard. We will now run through the code in fine detail to see if any such safeguards are there. At first glance, there are not.
If the Home Office genuinely believes the Request Filter is a benign tool, it must rewrite this section to make abundantly clear that it is not a mini version of X-Keyscore (the NSA/GCHQ tool for trawling their databases of people linked to their email and web visits) and does not operate as a facility to link and search the vast quantities of retained and collected communications data.
The Russian government is currently discussing plans to build its own independent internet infrastructure that will be used by BRICS member states: Brazil, Russia, India, China, and South Africa.
The Russian Security Council has today formally asked the country's government to start building a global DNS system that Russia and fellow BRICS member states could use to take control of the internet as used within the BRICS countries.
Russia and fellow BRICS nations would have the option to flip a switch and move Internet traffic from today's main DNS system to their own private system. The states would then have absolute and direct control over which sites are blocked. Furthermore, an alternative DNS system allows oppressive regimes to deanonymize Tor traffic and hunt for dissidents, via an attack called DefecTor.
Russia, China, and many other countries have criticized the US for hoarding control over the domain name system (DNS), a position they claim has allowed the US to intercept and tap global Internet traffic. Last year, the US handed control of the DNS system to ICANN, an independent organization. While Russia and China welcomed the move, they actually wanted the DNS system to be controlled by the United Nations' International Telecommunication Union, because the two countries have more influence in UN matters than over an NGO like ICANN.
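The "flip a switch" described above reduces, at the level of an individual network or host, to repointing resolvers at name servers that answer from the alternative root zone instead of the ICANN-rooted one. A minimal sketch of what that would look like on a single Linux host (the resolver addresses below are hypothetical placeholders, not real infrastructure):

```
# /etc/resolv.conf -- point this host at resolvers serving an
# alternative DNS root (addresses are hypothetical placeholders)
nameserver 10.66.0.1   # primary resolver for the alternative root
nameserver 10.66.0.2   # secondary resolver
```

Applied network-wide by ISPs under state direction, the same change silently decides which domain names resolve at all, which is what would give the operating states the "absolute and direct control" over blocked sites described above.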