In a survey more about net neutrality than porn censorship, MoneySupermarket noted:
We conducted a survey of over 2,000 Brits on this and it seems that if an ISP decided to block sites, it could result in increasing numbers of Brits switching - 64 per cent of Brits would be likely to switch ISP if they put blocks in place
In reality, this means millions could be considering a switch, with nearly six million having tried to access a site that was blocked in the last week - nearly one in 10 across the country.
It's an issue even more pertinent for those aged 18 to 34, with nearly half (45 per cent) having tried to access a site that was blocked at some point.
While ISPs might block sites for various reasons, a quarter of Brits said they would switch ISP if they were blocked from viewing adult sites - with those living with partners the most likely to do so!
Now switching ISPs isn't going to help much if the BBFC, the government-appointed porn censor, has dictated that all ISPs block porn sites. But maybe this 25% of internet users will take up alternatives such as subscribing to a VPN service.
As far as I can see, if a porn website verifies your age with personal data, it will probably also require you to tick a consent box with a whole load of small print that nobody ever reads. Now if that small print lets it forward all personal data, coupled with porn viewing data, to the Kremlin's dirty tricks and blackmail department, then that's OK under the Government's age verification law. So for sure some porn viewers are going to get burnt because of what the government has legislated and because of what the BBFC has implemented.
So perhaps it is not surprising that the BBFC has asked the government to pick up the tab should the BBFC be sued by people harmed by their decisions. After all it was the government who set up the unsafe environment, not the BBFC.
Margot James, Minister of State at the Department for Digital, Culture, Media and Sport, announced in Parliament:
I am today laying a Departmental Minute to advise that the Department for Digital, Culture, Media and Sport (DCMS) has received approval from Her Majesty's Treasury (HMT) to recognise a new Contingent Liability which will come into effect when
age verification powers under Part 3 of the Digital Economy Act 2017 enter force.
The contingent liability will provide indemnity to the British Board of Film Classification (BBFC) against legal proceedings brought against the BBFC in its role as the age verification regulator for online pornography.
As you know, the Digital Economy Act introduces the requirement for commercial providers of online pornography to have robust age verification controls to protect children and young people under 18 from exposure to online pornography. As the
designated age verification regulator, the BBFC will have extensive powers to take enforcement action against non-compliant sites. The BBFC can issue civil proceedings, give notice to payment-service providers or ancillary service providers, or
direct internet service providers to block access to websites where a provider of online pornography remains non-compliant.
The BBFC expects a high level of voluntary compliance by providers of online pornography. To encourage compliance, the BBFC has engaged with industry, charities and undertaken a public consultation on its regulatory approach. Furthermore, the
BBFC will ensure that it takes a proportionate approach to enforcement and will maintain arrangements for an appeals process to be overseen by an independent appeals body. This will help reduce the risk of potential legal action against the BBFC.
However, despite the effective work with industry, charities and the public to promote and encourage compliance, this is a new law and there nevertheless remains a risk that the BBFC will be exposed to legal challenge on the basis of decisions
taken as the age verification regulator or on grounds of principle from those opposed to the policy.
As this is a new policy, it is not possible to quantify accurately the value of such risks. The Government estimates a realistic risk range to be between £1m and £10m in the first year, based on the likely number and scale of legal challenges. The
BBFC investigated options to procure commercial insurance but failed to do so given difficulties in accurately determining the size of potential risks. The Government therefore will ensure that the BBFC is protected against any legal action
brought against the BBFC as a result of carrying out duties as the age verification regulator.
The Contingent Liability is required to be in place for the duration of the period the BBFC remain the age verification regulator. However, we expect the likelihood of the Contingent Liability being called upon to diminish over time as the regime
settles in and relevant industries become accustomed to it. If the liability is called upon, provision for any payment will be sought through the normal Supply procedure.
It is usual to allow a period of 14 Sitting Days prior to accepting a Contingent Liability, to provide Members of Parliament an opportunity to raise any objections.
The BBFC has made a few changes to its approach since the rather ropey document published prior to the BBFC's public consultation. In general the BBFC seems a little more pragmatic about trying to get adult porn users to buy into the age
verification way of thinking. The BBFC seems supportive of the anonymously bought porn access card from the local store, and has taken a strong stance against age verification providers who reprehensibly want to record people's porn browsing,
claiming a need to provide an audit trail.
The BBFC has also decided to offer a service to certify age verification providers in the way that they protect people's data. This is again probably targeted at making adult porn users a bit more confident in handing over ID.
The BBFC's tone is a little more acknowledging of people's privacy concerns, but it is the government's law, as implemented by the BBFC, that allows the recipients of the data to use it more or less how they like. Once you tick the 'take it or leave it' consent box allowing the AV provider 'to make your user experience better', then they can do what they like with your data (although GDPR does kindly let you later withdraw that consent and see what they have got on you).
Another theme that runs through the site is a rather ironic acceptance that, for all the devastation that will befall the UK porn industry, for all the lives ruined by people having their porn viewing outed, for all the lives ruined by fraud and
identity theft, that somehow the regime is only about stopping young children 'stumbling on porn'... because the older, more determined, children will still know how to find it anyway.
So the BBFC has laid out its stall, and it's a little more conciliatory to porn users, but I for one will never hand over any ID data to anyone connected with servicing porn websites. I suspect that many others will feel the same. If you can't trust the biggest companies in the business with your data, what hope is there for anyone else?
There's no word yet on when all this will come into force, but the schedule seems to be 3 months after the BBFC scheme has been approved by Parliament. This approval seems scheduled to be debated in Parliament in early November, eg on 5th
November there will be a House of Lords session:
Implementation by the British Board of Film Classification of age-verification to prevent children accessing pornographic websites - Baroness Benjamin, Oral questions
So the earliest it could come into force is about mid February.
The BBFC has published its Age Verification Guidance document that will underpin the implementation of internet porn censorship in the UK.
Perhaps a key section is:
5. The criteria against which the BBFC will assess that an age-verification arrangement meets the requirement under section 14(1) to secure that pornographic material is not normally accessible by those under 18 are set out below:
a. an effective control mechanism at the point of registration or access to pornographic content by the end-user which verifies that the user is aged 18 or over at the point of registration or access
b. use of age-verification data that cannot be reasonably known by another person without theft or fraudulent use of data or identification documents, nor readily obtained or predicted by another person
c. a requirement that either a user age-verify each visit or access is restricted by controls, manual or electronic, such as, but not limited to, password or personal identification numbers. A consumer must be logged out by default unless they
positively opt-in for their log in information to be remembered
d. the inclusion of measures which authenticate age-verification data and measures which are effective at preventing use by non-human operators including algorithms
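Criterion (c) amounts to a simple session rule: an age-verified session should not persist beyond the visit unless the user positively opts in. A minimal sketch of that rule, assuming an illustrative 30-minute session lifetime (all names and values here are invented for illustration, not taken from the BBFC guidance):

```python
import time

SESSION_LIFETIME = 30 * 60  # assumed 30-minute visit window for this sketch

class AgeVerifiedSession:
    def __init__(self, verified_at, remember_me=False):
        self.verified_at = verified_at
        self.remember_me = remember_me  # must default to False: opt-in only

    def is_valid(self, now):
        if self.remember_me:
            return True  # user positively opted in to stay logged in
        # default: the session expires, forcing re-verification next visit
        return (now - self.verified_at) < SESSION_LIFETIME

s = AgeVerifiedSession(verified_at=time.time())
print(s.is_valid(time.time()))         # True within the current visit
print(s.is_valid(time.time() + 3600))  # False an hour later
```

The point of the rule is the default: remembering the login must be an affirmative choice, never the starting state.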
It is fascinating to wonder why the BBFC feels that bots need to be banned; perhaps they need to be 18 years old too before they can access porn. I am not sure that porn sites will appreciate Google-bot being banned from their sites. I love the idea that the word 'algorithms' has been elevated to some sort of living entity.
It all smacks of being written by people who don't know what they are talking about.
In a quick read I thought the following paragraph was important:
9. In the interests of data minimisation and data protection, the BBFC does not require that age-verification arrangements maintain data for the purposes of providing an audit trail in order to meet the requirements of the act.
It rather suggests that the BBFC pragmatically accepts that convenience and buy-in from porn users is more important than making life dangerous for everybody, just in case a few teenagers get hold of an access code.
The British Board of Film Classification was designated as the age-verification regulator under Part 3 of the Digital Economy Act on 21 February 2018. The BBFC launched its consultation on the draft Guidance on Age-verification Arrangements and
draft Guidance on Ancillary Service Providers on 26 March 2018. The consultation was available on the BBFC's website and asked for comments on the technical aspects of how the BBFC intends to approach its role and functions as the
age-verification regulator. The consultation ran for 4 weeks and closed on 23 April 2018, although late submissions were accepted until 8 May 2018.
There were a total of 624 responses to the consultation. The vast majority of those (584) were submitted by individuals, with 40 submitted by organisations. 623 responses were received via email, and one was received by post. Where express
consent has been given for their publication, the BBFC has published responses in a separate document. Response summaries from key stakeholders are in part 4 of this document.
Responses from stakeholders such as children's charities, age-verification providers and internet service providers were broadly supportive of the BBFC's approach and age-verification standards. Some responses from these groups asked for
clarification to some points. The BBFC has made a number of amendments to the guidance as a result. These are outlined in chapter 2 of this document. Responses to questions raised are covered in chapter 3 of this document.
A significant number of responses, particularly from individuals and campaign groups, raised concerns about the introduction of age-verification, and set out objections to the legislation and regulatory regime in principle. Issues included
infringement of freedom of expression, censorship, problematic enforcement powers and an unmanageable scale of operation. The government's consultation on age-verification in 2016 addressed many of these issues of principle. More information
about why age-verification has been introduced, and the considerations given to the regulatory framework and enforcement powers, can be found in the 2016 consultation response by the Department for Digital, Culture, Media and Sport.
As the EU advances the new Copyright Directive towards becoming law in its 28 member-states, it's important to realise that the EU's plan will end up censoring the Internet for everyone, not just Europeans.
A quick refresher: Under Article 13 of the new Copyright Directive, anyone who operates a (sufficiently large) platform where people can post works that might be copyrighted (like text, pictures, videos, code, games, audio etc) will have to
crowdsource a database of "copyrighted works" that users aren't allowed to post, and block anything that seems to match one of the database entries.
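The mechanism just described can be made concrete with a deliberately naive sketch. This is illustrative only: real upload filters would use perceptual fingerprinting of audio and video rather than exact matching, and every name below is invented. What the sketch does capture is the structural problem the Directive creates: nothing in the claim step checks that the claimant owns anything.

```python
# A naive model of an Article 13-style upload filter. Illustrative only.
import hashlib

claimed_works = set()  # the crowdsourced blacklist: anyone can add to it

def register_claim(work_bytes):
    # Note: nothing here verifies the claimant actually holds the copyright,
    # mirroring the Directive's lack of any penalty for false claims.
    claimed_works.add(hashlib.sha256(work_bytes).hexdigest())

def upload_allowed(work_bytes):
    # Block anything that matches a database entry.
    return hashlib.sha256(work_bytes).hexdigest() not in claimed_works

register_claim(b"someone else's photo")
print(upload_allowed(b"someone else's photo"))  # False: blocked
print(upload_allowed(b"my own photo"))          # True
```

In practice the matching would be fuzzy, which makes the abuse problem worse: a false claim can then block not just one file but everything that resembles it.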
These blacklist databases will be open to all comers (after all, anyone can create a copyrighted work): that means that billions of people around the world will be able to submit anything to the blacklists, without having to prove that they hold the copyright to their submissions (or, for that matter, that their submissions are copyrighted). The Directive does not specify any punishment for making false claims to a copyright, and a platform that decided to block someone for making repeated fake claims would run the risk of being liable to the abuser if a user posts a work to which the abuser does own the rights.
The major targets of this censorship plan are the social media platforms, and it's the "social" that should give us all pause.
That's because the currency of social media is social interaction between users . I post something, you reply, a third person chimes in, I reply again, and so on.
Now, let's take a hypothetical Twitter discussion between three users: Alice (an American), Bob (a Bulgarian) and Carol (a Canadian).
Alice posts a picture of a political march: thousands of protesters and counterprotesters, waving signs. As is common the world over, these signs include copyrighted images, whose use is permitted under US "fair use" rules that permit parody. Because Twitter enables users to communicate significant amounts of user-generated content, they'll fall within the ambit of Article 13.
Bob lives in Bulgaria, an EU member-state whose copyright law does not permit parody. He might want to reply to Alice with a quote from the Bulgarian dissident Georgi Markov, whose works were translated into English in the late 1970s and are still in copyright.
Carol, a Canadian who met Bob and Alice through their shared love of Doctor Who, decides to post a witty meme from "The Mark of the Rani," a 1985 episode in which Colin Baker travels back to witness the Luddite protests of the 19th century.
Alice, Bob and Carol are all expressing themselves through use of copyrighted cultural works, in ways that might not be lawful in the EU's most speech-restrictive copyright jurisdictions. But because (under today's system) the platform typically
is only required to respond to copyright complaints when a rightsholder objects to the use, everyone can see everyone else's posts and carry on a discussion using tools and modes that have become the norm in all our modern, digital discourse.
But once Article 13 is in effect, Twitter faces an impossible conundrum. The Article 13 filter will be tripped by Alice's lulzy protest signs, by Bob's political quotes, and by Carol's Doctor Who meme, but suppose that Twitter is only required to
block Bob from seeing these infringing materials.
Should Twitter hide Alice and Carol's messages from Bob? If Bob's quote is censored in Bulgaria, should Twitter go ahead and show it to Alice and Carol (but hide it from Bob, who posted it)? What about when Bob travels outside of the EU and
looks back on his timeline? Or when Alice goes to visit Bob in Bulgaria for a Doctor Who convention and tries to call up the thread? Bear in mind that there's no way to be certain where a user is visiting from, either.
The dangerous but simple option is to subject all Twitter messages to European copyright censorship, a disaster for online speech.
And it's not just Twitter, of course: any platform with EU users will have to solve this problem. Google, Facebook, Linkedin, Instagram, Tiktok, Snapchat, Flickr, Tumblr -- every network will have to contend with this.
With Article 13, the EU would create a system where copyright complainants get a huge stick to beat the internet with, where people who abuse this power face no penalties, and where platforms that err on the side of free speech will get that
stick right in the face.
As the EU's censorship plan works its way through the next steps on the way to becoming binding across the EU, the whole world has a stake -- but only a handful of appointed negotiators get a say.
If you are a European, the rest of the world would be very grateful indeed if you would take a moment to contact your MEP and urge them to protect us all in the new Copyright Directive.
The Google+ social network exposed the personal information of hundreds of thousands of people using the site between 2015 and March 2018, according to a report in the Wall Street Journal. But managers at the company chose not to go public with
the failures, because they worried that it would invite scrutiny from regulators, particularly in the wake of Facebook's security failures.
Shortly after the report was published, Google announced that it would be shutting down Google+ by August 2019. In the announcement, Google also unveiled a raft of new security features for Android, Gmail and other Google platforms, introduced as a result of the privacy failures.
Google said it had discovered the issues during an internal audit called Project Strobe. Ben Smith, Google's vice president of engineering, wrote in a blog post:
Given these challenges and the very low usage of the consumer version of Google+, we decided to sunset the consumer version of Google+.
The audit found that Google+ APIs allowed app developers to access the information of Google+ users' friends, even if that data was marked as private by the user. As many as 438 applications had access to the unauthorized Google+ data, according to the Journal.
Now, users will be given greater control over what account data they choose to share with each app. Apps will be required to inform users what data they will have access to, and users must provide explicit permission before apps can gain access to it. Google is also limiting apps' ability to gain access to users' call log and SMS data on Android devices. Additionally, Google is limiting which apps can seek permission to access users' consumer Gmail data. Only email clients, email backup services and productivity services will be able to access this data.
Google will continue to operate Google+ as an enterprise product for companies.
The Online Forums Bill is a Private Members' Bill that was introduced in Parliament on 11th September 2018 under the Ten Minute Rule. The only details published so far are a summary:
A Bill to make administrators and moderators of certain online forums responsible for content published on those forums; to require such administrators and moderators to remove certain content; to require platforms to publish information about
such forums; and for connected purposes.
The next stage for this Bill, Second reading, is scheduled to take place on Friday 26 October 2018.
There is a small petition against the bill
Stop the Online Forums Bill 2017-18 becoming law.
Thought control by politicians, backed by the mainstream media, has led to ever more sinister intrusions into people's freedom to criticise public policy and assemble into campaign groups.
Requiring platforms to publish information about closed forums and making administrators responsible for content is Orwellian and anti-democratic.
In Canada, there have been ongoing discussions and proposals about new levies and fees to compensate creators for supposed missed revenue. There have been calls to levy a tax on mobile devices such as iPhones, for example. This week the Screen
Composers Guild of Canada took things up a notch, calling for a copyright levy on all broadband data use above 15 gigabytes per month.
A proposal from the Screen Composers Guild of Canada (SCGC), put forward during last week's Government hearings, suggests to simply add a levy on Internet use above 15 gigabytes per month.
The music composers argue that this is warranted because composers miss out on public performance royalties. One of the reasons for this is that online streaming services are not paying as much as terrestrial broadcasters.
The composers SCGC represents are not the big music stars. They are the people who write music for TV-shows and other broadcasts. Increasingly these are also shown on streaming services where the compensation is, apparently, much lower. SCGC writes:
With regard to YouTube, which is owned by the advertising company Alphabet-Google, minuscule revenue distribution is being reported by our members. Royalties from the large streaming services, like Amazon and Netflix, are 50 to 95% lower when
compared to those from terrestrial broadcasters.
Statistics like this indicate that our veteran members will soon have to seek employment elsewhere and young screen-composers will have little hope of sustaining a livelihood, the guild adds, sounding the alarm bell.
SCGC's solution to this problem is to make every Canadian pay an extra fee when they use over 15 gigabytes of data per month. This money would then be used to compensate composers and fix the so-called value gap. As a result, all Internet users
who go over the cap will have to pay more. Even those who don't watch any of the programs where the music is used.
However, SCGC doesn't see the problem and believes that 15 gigabytes is enough. People who want to avoid paying can still use email and share photos, they argue. Those who go over the cap are likely streaming videos for which composers are not properly compensated.
An ISP subscription levy that would provide a minimum or provide a basic 15 gigabytes of data per Canadian household a month that would be unlevied. Lots of room for households to be able to do Internet transactions, business, share photos,
download a few things, emails, no problem.
[W]hen you're downloading and consuming over 15 gigabytes of data a month, you're likely streaming Spotify. You're likely streaming YouTube. You're likely streaming Netflix. So we think because the FANG companies will not give us access to the
numbers that they have, we have to apply a broad-based levy. They're forcing us to.
The last comment is telling. The composers guild believes that a levy is the only option because Netflix, YouTube, and others are not paying their fair share. That sounds like a licensing or rights issue between these services and the authors.
Dragging millions of Canadians into this dispute seems questionable, especially when many people have absolutely nothing to do with it.
As someone who has tracked technology and human rights over the past ten years, I am convinced that digital ID, writ large, poses one of the gravest risks to human rights of any technology that we have encountered. By Brett Solomon
The recent Fosta law in the US forces internet companies to censor anything to do with legal, adult and consensual sex work. It holds them liable for abetting sex traffickers even when they can't possibly distinguish the trafficking from the
legal sex work. The only solution is therefore to ban the use of their platforms for any personal hook ups. So indeed adult sex work websites have been duly cleansed from the US internet.
But now a woman is claiming that Facebook facilitated trafficking, when of course it's nigh-on impossible for Facebook to detect such use of its networking systems. But of course that's no excuse under FOSTA.
According to a new lawsuit by an unnamed woman in Houston, Texas, Facebook's morally bankrupt corporate culture is to blame for permitting a sex trafficker to force her into prostitution after beating and raping her. She claims Facebook should be held responsible when a user on the social media platform sexually exploits another Facebook user. The lawsuit says that Facebook should have warned the woman, who was 15 years old at the time she was victimized, that its platform could be used by sex traffickers to recruit and groom victims, including children.
The lawsuit also names Backpage.com, which, according to a Reuters report, hosted pictures of the woman taken by the man who victimized her after he uploaded them to the site.
The classified advertising site Backpage has already been shut down by federal prosecutors in April of this year.
Google's parent company Alphabet has rolled out a new tool aimed at defending against attacks on free speech around the globe.
Jigsaw announced the release of a new app, Intra, designed to protect Android users against the manipulation of DNS resolutions, a practice commonly used by repressive regimes to prevent users from accessing information deemed off-limits. In Iran, for example, certain websites redirect to a government censorship page. The same is true of China's Great Firewall (GFW), which returns false and often seemingly erratic IP addresses in response to DNS queries to government-blocked domains. Hundreds of websites are likewise blocked in Pakistan.
Intra works, according to its creators, by simply encrypting the user's connection to the DNS server. By default, it points to Google's own DNS servers, but for users who prefer to use another (Cloudflare or IBM's Quad9, for example) those settings can be changed within the app.
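The underlying idea can be sketched in a few lines: instead of a plaintext UDP query on port 53, which a censor can intercept or spoof, the lookup is carried inside an ordinary HTTPS request to a resolver such as Google's public DNS-over-HTTPS endpoint. The sketch below only builds the query URL and parses a canned response so it runs offline; the field names follow the dns.google JSON API, but treat the details as an assumption rather than a description of how Intra itself is implemented.

```python
import json
from urllib.parse import urlencode

def doh_url(name, record_type="A", server="https://dns.google/resolve"):
    # The censor on the wire sees only an HTTPS connection to the resolver,
    # not which domain is being looked up.
    return server + "?" + urlencode({"name": name, "type": record_type})

url = doh_url("example.com")
print(url)  # https://dns.google/resolve?name=example.com&type=A

# A truncated example of what a successful JSON response looks like:
canned = '{"Status": 0, "Answer": [{"name": "example.com.", "type": 1, "data": "93.184.216.34"}]}'
answer = json.loads(canned)["Answer"][0]["data"]
print(answer)  # 93.184.216.34
```

The censor can still block the resolver itself, of course, which is why Intra lets users switch to alternative DoH servers.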
According to CNET, DNS queries will be encrypted by default in an updated version of Android Pie. Reportedly, however, around 80 percent of Android users aren't using the latest version of the Android operating system. For those, Intra is now available in Google Play.
New rules on audiovisual media services will apply to broadcasters, and also to video-on-demand and video-sharing platforms
MEPs voted on updated rules on audiovisual media services covering child protection, stricter rules on advertising, and a requirement for 30% European content in video-on-demand.
Following the final vote on this agreement, the revised legislation will apply to broadcasters, but also to video-on-demand and video-sharing platforms, such as Netflix, YouTube or Facebook, as well as to live streaming on video-sharing platforms.
The updated rules will ensure:
Enhanced protection of minors from violence, hatred, terrorism and harmful advertising
Audiovisual media services providers should have appropriate measures to combat content inciting violence, hatred and terrorism, while gratuitous violence and pornography will be subject to the strictest rules. Video-sharing platforms will now be
responsible for reacting quickly when content is reported or flagged by users as harmful.
The legislation does not include any automatic filtering of uploaded content, but, at the request of the Parliament, platforms need to create a transparent, easy-to-use and effective mechanism to allow users to report or flag content.
The new law includes strict rules on advertising, product placement in children's TV programmes and content available on video-on-demand platforms. EP negotiators also secured a personal data protection mechanism for children, imposing measures
to ensure that data collected by audiovisual media providers are not processed for commercial use, including for profiling and behaviourally targeted advertising.
Redefined limits of advertising
Under the new rules, advertising can take up a maximum of 20% of the daily broadcasting period between 6:00 and 18:00, giving broadcasters the flexibility to adjust their advertising periods. A prime-time window between 18:00 and 0:00 was also set out, during which advertising will only be allowed to take up a maximum of 20% of broadcasting time.
30% of European content on the video-on-demand platforms' catalogues
In order to support the cultural diversity of the European audiovisual sector, MEPs ensured that 30% of content in the video-on-demand platforms' catalogues should be European.
Video-on-demand platforms are also asked to contribute to the development of European audiovisual productions, either by investing directly in content or by contributing to national funds. The level of contribution in each country should be
proportional to their on-demand revenues in that country (member states where they are established or member states where they target the audience wholly or mostly).
The legislation also includes provisions regarding accessibility, integrity of a broadcaster's signal, strengthening regulatory authorities and promoting media competences.
The deal still needs to be formally approved by the Council of EU ministers before the revised law can enter into force. Member States have 21 months after its entry into force to transpose the new rules into national legislation.
The text was adopted by 452 votes against 132, with 65 abstentions.
A new section has been added to the AVMS rules regarding censorship:
Member States shall take appropriate measures to ensure that audiovisual media services provided by media service providers under their jurisdiction which may impair the physical, mental or moral development of minors are only made available
in such a way as to ensure that minors will not normally hear or see them. Such measures may include selecting the time of the broadcast, age verification tools or other technical measures. They shall be proportionate to the potential harm of
the programme. The most harmful content, such as gratuitous violence and pornography, shall be subject to the strictest measures.
Personal data of minors collected or otherwise generated by media service providers pursuant to paragraph 1 shall not be processed for commercial purposes, such as direct marketing, profiling and behaviourally targeted advertising.
Member States shall ensure that media service providers provide sufficient information to viewers about content which may impair the physical, mental or moral development of minors. For this purpose, media service providers shall use a system
describing the potentially harmful nature of the content of an audiovisual media service. For the implementation of this paragraph, Member States shall encourage the use of co-regulation as provided for in Article 4a(1).
The Commission shall encourage media service providers to exchange best practices on co-regulatory codes of conduct. Member States and the Commission may foster self-regulation, for the purposes of this Article, through Union codes of conduct as referred to in Article 4a(2).
Article 4a suggests possible organisation of the censors assigned to the task, eg state censors, state-controlled organisations such as Ofcom, or nominally state-controlled co-regulators like the defunct ATVOD.
Article 4a(3) notes that censorial countries like the UK are free to add further censorship rules of their own:
Member States shall remain free to require media service providers under their jurisdiction to comply with more detailed or stricter rules in compliance with this Directive and Union law, including where their national independent regulatory authorities or bodies conclude that any code of conduct or parts thereof have proven not to be sufficiently effective. Member States shall report such rules to the Commission without undue delay.