Gab, the social media site that prides itself on being uncensored, has been forced offline by its service providers after it emerged that the alleged Pittsburgh shooter, Robert Bowers, had a history of antisemitic postings on the site.
Founded in August 2016, after Twitter began cracking down on hate speech on its social network, Gab describes itself as a free speech website and nothing more. But the platform has proved popular among the alt-right and far right, including the man accused of
opening fire on a synagogue in Pennsylvania on Saturday, killing 11.
In the hours following the attack, when the suspect's postings were discovered on the site, Gab's corporate partners abandoned it one by one. PayPal and Stripe, two of the
company's payment providers, dropped it, arguing that it breached policies around hate speech.
Cloud-hosting company Joyent also withdrew service on Sunday, giving Gab 24 hours' notice of its suspension, as did GoDaddy, the site's domain registrar,
which provides the Gab.com address. Both companies said the site had breached their terms of service.
Gab responded in a statement:
We have been systematically no-platformed by app stores, multiple hosting providers, and several payment processors. We have been smeared by the mainstream media for defending free expression and individual liberty for all people and for working with law enforcement to ensure
that justice is served for the horrible atrocity committed in Pittsburgh.
Gab is back online following censorship in the wake of the anti-Semitic shooting at a Pittsburgh synagogue. The social
network had been banned by its hosting provider Joyent and domain registrar GoDaddy, and blacklisted by other services such as PayPal, Stripe and Shopify.
Now, Gab has come back online and has found a new hosting provider in Epik. According to a
blog post published on November 3rd, Epik CEO Robert Monster spoke out against the idea of digital censorship and decided to provide hosting to Gab because he looks forward to partnering with a young, and once brash, CEO who is courageously
doing something that looks useful.
The Government has announced the organisations that will sit on the Executive Board of a new national body to tackle online harms in the UK.
The UK Council for Internet Safety (UKCIS) is the successor
to the UK Council for Child Internet Safety (UKCCIS), with an expanded scope to improve online safety for everyone in the UK.
The Executive Board brings together expertise from a range of organisations in the tech
industry, civil society and public sector.
Margot James, Minister for Digital and the Creative Industries, said:
Only through collaborative action will the UK be the
safest place to be online. By bringing together a wealth of expertise from a wide range of fields, UKCIS can be an example to the world on how we can work together to face the challenges of the digital revolution in an effective and responsible way.
UKCIS has been established to allow these organisations to collaborate and coordinate a UK-wide approach to online safety.
It will contribute to the Government's
commitment to make the UK the safest place in the world to be online, and will help to inform the development of the forthcoming Online Harms White Paper.
Priority areas of focus will include online harms experienced
by children such as cyberbullying and sexual exploitation; radicalisation and extremism; violence against women and girls; hate crime and hate speech; and forms of discrimination against groups protected under the Equality Act, for example on the basis
of disability or race.
Carolyn Bunting, CEO of Internet Matters, said:
We are delighted to sit on the Executive Board of UKCIS, where we are able to represent parents' needs
in keeping their children safe online.
Online safety demands a collaborative approach and by bringing industry together we hope we can bring about real change and help everyone benefit from the opportunities the
digital world has to offer.
The UKCIS Executive Board consists of the following organisations:
Commission for Countering Extremism
End Violence Against Women Coalition
Independent Advisory Group on Hate Crime
Internet Watch Foundation
Internet Service Providers and Mobile Operators (rotating between BT, Sky, TalkTalk, Three, Virgin Media, Vodafone)
National Police Chiefs' Council
National Crime Agency - CEOP Command
Northern Ireland Executive
UKCIS Evidence Group Chair
The UKCIS Executive Board is jointly chaired by Margot James, Minister for Digital and the Creative Industries (Department for Digital, Culture, Media and Sport); Victoria Atkins, Minister for Crime, Safeguarding and
Vulnerability (Home Office); and Nadhim Zahawi, Minister for Children and Families (Department for Education). It also includes representatives from the Devolved Administrations of Scotland, Wales and Northern Ireland. Board membership will be kept under
periodic review, to ensure it represents the full range of online harms that the government seeks to tackle.
Google is set to be fined in Russia for not complying with Russia's list of websites to censor.
Roskomnadzor, the Russian government's internet and media censor, accused Google of ignoring a law requiring search engines to block censored content.
Roskomnadzor has recorded the fact of Google's non-compliance with its duty to connect to the federal state 'information system'.
Google is now subject to a fine of up to 700,000 rubles ($10,600).
Vadim Subbotin, Roskomnadzor deputy
chief censor, said Google had three days to respond to its ruling.
Once states totalling 35% of the EU's population oppose the new Copyright Directive, they can form a "blocking minority" and kill it or cause
it to be substantially refactored. With the Italians opposing the Directive because of its draconian new internet rules (rules introduced at the last moment, which have been hugely controversial), the reputed opponents of the Directive have now crossed
the 35% threshold, thanks to Germany, Finland, the Netherlands, Slovenia, Belgium and Hungary.
Unfortunately, the opponents of Article 11 (the "link tax") and Article 13 (the copyright filters) are not united in their
opposition -- they have different ideas about what they would like to see done with these provisions. If they pull together, that could be the end of these provisions.
If you're a European,
this form will let you contact your MEP quickly and painlessly and let them know how you feel about the proposals.
That's where matters stand
now: a growing set of countries who think copyright filters and link taxes go too far, but no agreement yet on rejecting or fixing them.
The trilogues are not a process designed to resolve such large rifts when both the EU states
and the parliament are so deeply divided.
What happens now depends entirely on how the members states decide to go forward: and how hard they push for real reform of Articles 13 and 11. The balance in that discussion has changed,
because Italy changed its position. Italy changed its position because Italians spoke up. If you reach out to your country's ministry in charge of copyright, and tell them that these Articles are a concern to you, they'll start paying attention too. And
we'll have a chance to stop this terrible directive from becoming terrible law across Europe.
Article 13 as written threatens to shut down the ability of millions of people -- from creators like you to everyday users -- to upload content to platforms like YouTube. And it threatens to
block users in the EU from viewing content that is already live on the channels of creators everywhere. This includes YouTube's incredible video library of educational content, such as language classes, physics tutorials and other how-to's.
This legislation poses a threat to both your livelihood and your ability to share your voice with the world. And, if implemented as proposed, Article 13 threatens hundreds of thousands of jobs, European creators, businesses, artists
and everyone they employ. The proposal could force platforms, like YouTube, to allow only content from a small number of large companies. It would be too risky for platforms to host content from smaller original content creators, because the platforms
would now be directly liable for that content. We realize the importance of all rights holders being fairly compensated, which is why we built Content ID and a platform to pay out all types of content owners. But the unintended consequences of Article 13
will put this ecosystem at risk. We are committed to working with the industry to find a better way. This language could be finalized by the end of the year, so it's important to speak up now.
Please take a moment to
learn more about how it could affect your channel and take action immediately. Tell the world through social media (#SaveYourInternet) and your channel why the creator
economy is important and how this legislation will impact you.
A committee of MPs has claimed that the government is not taking the urgent action needed to protect democracy from fake news on Facebook and other social media.
The culture committee wants a crackdown on the manipulation of personal data, the spread
of disinformation and Russian interference in elections. Tory MP Damian Collins, who chairs the committee, says he is disappointed by the response to its latest report. Collins has accused ministers of making excuses to further delay desperately needed
announcements on the ongoing issues of harmful and misleading content being spread through social media.
When the Digital, Culture, Media and Sport Committee issued its interim report on fake news in July, it claimed that the UK faced a democratic
crisis founded on the manipulation of personal data.
The MPs called for new powers for the Electoral Commission - including bigger fines - and new regulation of social media firms. But of the 42 recommendations in its interim report, the committee
says only three have been accepted by the government, in its official response, published last week.
The committee has backed calls from the Electoral Commission to force social media advertisers to publish an imprint on political ads to show who
had paid for them, to increase transparency. Collins also criticised the government's continued insistence that there was no evidence of Russian interference in UK elections.
Collins said he would be raising this and other issues with Culture
Secretary Jeremy Wright, when he appears before the committee on Wednesday.
After the recent censorship purge of over 800 independent media outlets on Facebook, the Supreme Court is now hearing a case that could have ramifications for any future attempts at similar purges.
The United States Supreme Court has agreed to take a
case that could change free speech on the Internet. The case, Manhattan Community Access Corp. v. Halleck, No. 17-702, will decide whether the private operator of a public access network is considered a state actor.
The ruling could affect how companies like Facebook, Twitter, Instagram, Google and YouTube are governed. If the Court were to issue a far-reaching ruling, it could subject such companies to First Amendment lawsuits and force them to allow a much broader scope of
free speech from their users.
DeeDee Halleck and Jesus Melendez claimed that they were fired from Manhattan Neighborhood Network for speaking critically of the network. And, though the case does not involve the Internet giants, it could create a
ruling that expands the First Amendment beyond the government.
In a survey more about net neutrality than porn censorship, MoneySupermarket noted:
We conducted a survey of over 2,000 Brits on this and it seems that if an ISP decided to block sites, it could result in increasing
numbers of Brits switching - 64 per cent of Brits would be likely to switch ISP if they put blocks in place
In reality, this means millions could be considering a switch, as nearly six million have tried to access a site that was
blocked in the last week - nearly one in 10 across the country.
It's an issue even more pertinent for those aged 18 to 34, with nearly half (45 per cent) having tried to access a site that was blocked at some point.
While ISPs might block sites for various reasons, a quarter of Brits said they would switch ISP if they were blocked from viewing adult sites - with those living with partners the most likely to do so!
Switching ISPs isn't going to help much if the BBFC, the government-appointed porn censor, has dictated that all ISPs block porn sites. But maybe these 25% of internet users will take up alternatives such as subscribing to a VPN service.
As far as I can see, if a porn website verifies your age with personal data, it will probably also require you to tick a consent box with a whole load of small print that nobody ever reads. Now if that small print lets it forward all personal data,
coupled with porn viewing data, to the Kremlin's dirty tricks and blackmail department, then that's OK with the Government's age verification law. So for sure some porn viewers are going to get burnt because of what the government has legislated and
because of what the BBFC have implemented.
So perhaps it is not surprising that the BBFC has asked the government to pick up the tab should the BBFC be sued by people harmed by their decisions. After all it was the government who set up the unsafe
environment, not the BBFC.
Margot James, Minister of State at the Department for Digital, Culture, Media and Sport, announced in Parliament:
I am today laying a Departmental Minute to advise that the Department for Digital, Culture,
Media and Sport (DCMS) has received approval from Her Majesty's Treasury (HMT) to recognise a new Contingent Liability which will come into effect when age verification powers under Part 3 of the Digital Economy Act 2017 enter force.
The contingent liability will provide indemnity to the British Board of Film Classification (BBFC) against legal proceedings brought against the BBFC in its role as the age verification regulator for online pornography.
As you know, the Digital Economy Act introduces the requirement for commercial providers of online pornography to have robust age verification controls to protect children and young people under 18 from exposure to online pornography.
As the designated age verification regulator, the BBFC will have extensive powers to take enforcement action against non-compliant sites. The BBFC can issue civil proceedings, give notice to payment-service providers or ancillary service providers, or
direct internet service providers to block access to websites where a provider of online pornography remains non-compliant.
The BBFC expects a high level of voluntary compliance by providers of online pornography. To encourage
compliance, the BBFC has engaged with industry and charities, and undertaken a public consultation on its regulatory approach. Furthermore, the BBFC will ensure that it takes a proportionate approach to enforcement and will maintain arrangements for an
appeals process to be overseen by an independent appeals body. This will help reduce the risk of potential legal action against the BBFC.
However, despite the effective work with industry, charities and the public to promote and
encourage compliance, this is a new law and there nevertheless remains a risk that the BBFC will be exposed to legal challenge on the basis of decisions taken as the age verification regulator or on grounds of principle from those opposed to the policy.
As this is a new policy, it is not possible to quantify accurately the value of such risks. The Government estimates a realistic risk range to be between £1m and £10m in the first year, based on the likely number and scale of legal
challenges. The BBFC investigated options to procure commercial insurance but failed to do so given difficulties in accurately determining the size of potential risks. The Government therefore will ensure that the BBFC is protected against any legal
action brought against the BBFC as a result of carrying out duties as the age verification regulator.
The Contingent Liability is required to be in place for the duration of the period the BBFC remain the age verification
regulator. However, we expect the likelihood of the Contingent Liability being called upon to diminish over time as the regime settles in and relevant industries become accustomed to it. If the liability is called upon, provision for any payment will be
sought through the normal Supply procedure.
It is usual to allow a period of 14 Sitting Days prior to accepting a Contingent Liability, to provide Members of Parliament an opportunity to raise any objections.
The BBFC has made a few changes to its approach since the rather ropey document published prior to the BBFC's public consultation. In general the BBFC seems a little more
pragmatic about trying to get adult porn users to buy into the age verification way of thinking. The BBFC seems supportive of the anonymously bought porn access card from the local store, and has taken a strong stance against age verification providers
who reprehensibly want to record people's porn browsing, claiming a need to provide an audit trail.
The BBFC has also decided to offer a service to certify age verification providers in the way that they protect people's data. This is again
probably targeted at making adult porn users a bit more confident in handing over ID.
The BBFC tone is a little bit more acknowledging of people's privacy concerns, but it's the government's law being implemented by the BBFC, that allows the
recipients of the data to use it more or less how they like. Once you tick the 'take it or leave it' consent box allowing the AV provider 'to make your user experience better' then they can do what they like with your data (although GDPR does kindly let
you later withdraw that consent and see what they have got on you).
Another theme that runs through the site is a rather ironic acceptance that, for all the devastation that will befall the UK porn industry, for all the lives ruined by people
having their porn viewing outed, for all the lives ruined by fraud and identity theft, that somehow the regime is only about stopping young children 'stumbling on porn'... because the older, more determined, children will still know how to find it.
So the BBFC has laid out its stall, and it's a little more conciliatory to porn users, but I for one will never hand over any ID data to anyone connected with servicing porn websites. I suspect that many others will feel the same. If you
can't trust the biggest companies in the business with your data, what hope is there for anyone else?
There's no word yet on when all this will come into force, but the schedule seems to be 3 months after the BBFC scheme has been approved by
Parliament. This approval seems scheduled to be debated in Parliament in early November, e.g. on 5th November there will be a House of Lords session:
Implementation by the British Board of Film Classification of
age-verification to prevent children accessing pornographic websites - Baroness Benjamin - Oral questions
So the earliest it could come into force is about mid February.
The BBFC has published its Age Verification Guidance document that will underpin the implementation of internet porn censorship in the UK.
Perhaps a key section is:
5. The criteria against which the BBFC
will assess that an age-verification arrangement meets the requirement under section 14(1) to secure that pornographic material is not normally accessible by those under 18 are set out below:
a. an effective control
mechanism at the point of registration or access to pornographic content by the end-user which verifies that the user is aged 18 or over at the point of registration or access
b. use of age-verification data that cannot be
reasonably known by another person, without theft or fraudulent use of data or identification documents nor readily obtained or predicted by another person
c. a requirement that either a user age-verify each visit or access is
restricted by controls, manual or electronic, such as, but not limited to, password or personal identification numbers. A consumer must be logged out by default unless they positively opt-in for their log in information to be remembered
d. the inclusion of measures which authenticate age-verification data and measures which are effective at preventing use by non-human operators including algorithms
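Criterion (c) above boils down to a familiar cookie policy: a cookie with no expiry attributes is session-only, so the browser discards it on close ("logged out by default"), and persistence is granted only on explicit opt-in. A minimal sketch of that policy using Python's standard library (the cookie name and 30-day lifetime are invented for illustration, not BBFC requirements):

```python
from http import cookies

def build_session_cookie(session_id: str, remember_opt_in: bool) -> str:
    """Build the Set-Cookie value for an age-verified session."""
    c = cookies.SimpleCookie()
    c["av_session"] = session_id
    c["av_session"]["httponly"] = True   # keep the token away from scripts
    c["av_session"]["secure"] = True     # HTTPS only
    if remember_opt_in:
        # Persistent cookie ONLY when the user positively opts in.
        c["av_session"]["max-age"] = 30 * 24 * 3600  # illustrative 30 days
    # With no Max-Age/Expires the cookie is session-only: the browser
    # discards it when closed, i.e. "logged out by default".
    return c["av_session"].OutputString()
```

The same default-deny shape applies whether the control is a cookie, a PIN, or a server-side session store.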
It is fascinating as to why the BBFC
feels that bots need to be banned; perhaps they need to be 18 years old too, before they can access porn. I am not sure if porn sites will appreciate Google-bot being banned from their sites. I love the idea that the word 'algorithms' has been elevated
to some sort of living entity.
It all smacks of being written by people who don't know what they are talking about.
In a quick read I thought the following paragraph was important:
9. In the interests of
data minimisation and data protection, the BBFC does not require that age-verification arrangements maintain data for the purposes of providing an audit trail in order to meet the requirements of the act.
It rather suggests that the
BBFC pragmatically accepts that convenience and buy-in from porn users is more important than making life dangerous for everybody, just in case a few teenagers get hold of an access code.
The British Board of Film Classification was designated as the age-verification regulator under Part 3 of the Digital Economy Act on 21 February 2018. The BBFC launched its consultation on the
draft Guidance on Age-verification Arrangements and draft Guidance on Ancillary Service Providers on 26 March 2018. The consultation was available on the BBFC's website and asked for comments on the technical aspects on how the BBFC intends to approach
its role and functions as the age-verification regulator. The consultation ran for 4 weeks and closed on 23 April 2018, although late submissions were accepted until 8 May 2018.
There were a total of 624 responses to the
consultation. The vast majority of those (584) were submitted by individuals, with 40 submitted by organisations. 623 responses were received via email, and one was received by post. Where express consent has been given for their publication, the BBFC
has published responses in a separate document. Response summaries from key stakeholders are in part 4 of this document.
Responses from stakeholders such as children's charities, age-verification providers and internet service
providers were broadly supportive of the BBFC's approach and age-verification standards. Some responses from these groups asked for clarification to some points. The BBFC has made a number of amendments to the guidance as a result. These are outlined in
chapter 2 of this document. Responses to questions raised are covered in chapter 3 of this document.
A significant number of responses, particularly from individuals and campaign groups, raised concerns about the introduction of
age-verification, and set out objections to the legislation and regulatory regime in principle. Issues included infringement of freedom of expression, censorship, problematic enforcement powers and an unmanageable scale of operation. The government's
consultation on age-verification in 2016 addressed many of these issues of principle. More information about why age-verification has been introduced, and the considerations given to the regulatory framework and enforcement powers can be found in the
2016 consultation response by the Department for Digital, Culture, Media and Sport.
As the EU advances the new Copyright Directive towards becoming law in its 28 member-states, it's important to realise that
the EU's plan will end up censoring the Internet for everyone, not just Europeans.
A quick refresher: Under Article 13 of the new Copyright Directive, anyone who operates a (sufficiently large) platform where people can
post works that might be copyrighted (like text, pictures, videos, code, games, audio etc) will have to crowdsource a database of "copyrighted works" that users aren't allowed to post, and block anything that seems to match an entry in that database.
These blacklist databases will be open to all comers (after all, anyone can create a copyrighted work): that means that billions of people around the world will be able to submit anything to the blacklists, without
having to prove that they hold the copyright to their submissions (or, for that matter, that their submissions are copyrighted). The Directive does not specify any punishment for making false claims to a copyright, and a platform that decided to block
someone for making repeated fake claims would run the risk of being liable to the abuser if a user posts a work to which the abuser does own the rights.
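To see why unverified claims are so dangerous, it helps to look at the mechanics. A deliberately naive sketch of such a filter (real deployments would use perceptual or audio fingerprinting rather than exact hashes, and every name here is illustrative):

```python
import hashlib

# Crowdsourced blacklist of claimed "copyrighted works" (SHA-256 digests).
blacklist: set = set()

def claim_work(data: bytes) -> None:
    # Note: nothing here verifies that the claimant actually owns the
    # work -- exactly the abuse described above.
    blacklist.add(hashlib.sha256(data).hexdigest())

def allow_upload(data: bytes) -> bool:
    # Block any upload matching an entry in the blacklist.
    return hashlib.sha256(data).hexdigest() not in blacklist
```

Anyone can call `claim_work` on anyone else's content, and every future upload of that content is then silently blocked.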
The major targets of this censorship plan are the social media
platforms, and it's the "social" that should give us all pause.
That's because the currency of social media is social interaction between users . I post something, you reply, a third person chimes in, I reply
again, and so on.
Now, let's take a hypothetical Twitter discussion between three users: Alice (an American), Bob (a Bulgarian) and Carol (a Canadian).
Alice posts a picture of a political march: thousands
of protesters and counterprotesters, waving signs. As is common around the world, these signs include copyrighted images, whose use is permitted under US
"fair use" rules that permit parody. Because Twitter enables users to communicate significant amounts of user-generated content, they'll fall within the ambit of Article 13.
Bob lives in Bulgaria, an EU member-state
whose copyright law does not permit parody. He might want to reply to Alice with a quote from the Bulgarian dissident Georgi Markov, whose works were
translated into English in the late 1970s and are still in copyright.
Carol, a Canadian who met Bob and Alice through their shared love of Doctor Who, decides to post a witty meme from "The Mark of the Rani," a 1985
episode in which Colin Baker travels back to witness the Luddite protests of the 19th Century.
Alice, Bob and Carol are all expressing themselves through use of copyrighted cultural works, in ways that might not be lawful in the
EU's most speech-restrictive copyright jurisdictions. But because (under today's system) the platform typically is only required to respond to copyright complaints when a rightsholder objects to the use, everyone can see everyone else's posts and
carry on a discussion using tools and modes that have become the norm in all our modern, digital discourse.
But once Article 13 is in effect, Twitter faces an impossible conundrum. The Article 13 filter will be tripped by Alice's
lulzy protest signs, by Bob's political quotes, and by Carol's Doctor Who meme, but suppose that Twitter is only required to block Bob from seeing these infringing materials.
Should Twitter hide Alice and Carol's messages from
Bob? If Bob's quote is censored in Bulgaria, should Twitter go ahead and show it to Alice and Carol (but hide it from Bob, who posted it)? What about when Bob travels outside of the EU and looks back on his timeline? Or when Alice goes to visit Bob in
Bulgaria for a Doctor Who convention and tries to call up the thread? Bear in mind that there's no way to be certain where a user is visiting from, either.
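The conundrum reduces to a toy model: each post carries a set of jurisdictions where a copyright claim blocks it, and visibility depends on where the viewer *appears* to be, which VPNs and travel make unreliable. A hypothetical sketch (post names and country codes are illustrative):

```python
# post_id -> country codes where a copyright claim filters the post
RESTRICTED_IN = {
    "bob_markov_quote": {"BG"},  # no parody/quotation defence in Bulgaria
}

def visible(post_id: str, viewer_country: str) -> bool:
    # Visibility now depends on the viewer's apparent location --
    # the same thread renders differently for Alice, Bob and Carol.
    return viewer_country not in RESTRICTED_IN.get(post_id, set())
```

Even this toy version shows the problem: Bob's own post vanishes from his timeline the moment he appears to be in Bulgaria, and reappears when he travels.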
The dangerous but simple option is to subject all Twitter messages
to European copyright censorship, a disaster for online speech.
And it's not just Twitter, of course: any platform with EU users will have to solve this problem. Google, Facebook, Linkedin, Instagram, Tiktok, Snapchat, Flickr,
Tumblr -- every network will have to contend with this.
With Article 13, the EU would create a system where copyright complainants get a huge stick to beat the internet with, where people who abuse this power face no penalties,
and where platforms that err on the side of free speech will get that stick right in the face.
As the EU's censorship plan
works its way through the next steps on the way to becoming binding across the EU, the whole world has a stake -- but only a
handful of appointed negotiators get a say.
If you are a European, the rest of the world would be very grateful indeed if you would take a moment to contact
your MEP and urge them to protect us all in the new Copyright Directive.
The Google+ social network exposed the personal information of hundreds of thousands of people using the site between 2015 and March 2018, according to a report in the Wall Street Journal. But managers at the company chose not to go public with the
failures, because they worried that it would invite scrutiny from regulators, particularly in the wake of Facebook's security failures.
Shortly after the report was published, Google announced that it would be shutting down Google+ by August 2019. In
the announcement, Google also announced a raft of new security features for Android, Gmail and other Google platforms, adopted as a result of the privacy failures.
Google said it had discovered the issues during an internal audit called
Project Strobe. Ben Smith, Google's vice president of engineering, wrote in a blog post:
Given these challenges and the very low usage of the consumer version of Google+, we decided to sunset the consumer version of Google+.
The audit found that Google+ APIs allowed app developers to access the information of Google+ users' friends, even if that data was marked as private by the user. As many as 438 applications had access to the unauthorized
Google+ data, according to the Journal.
Now, users will be given greater control over what account data they choose to share with each app. Apps will be required to inform users what data they will have access to. Users have to provide explicit
permission in order for an app to gain access to it. Google is also limiting apps' ability to gain access to users' call log and SMS data on Android devices. Additionally, Google is limiting which apps can seek permission to access users' consumer Gmail data. Only
email clients, email backup services and productivity services will be able to access this data.
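The consent model described above amounts to a per-app, per-scope grant table with no implicit access: an app must ask for each scope, and anything not explicitly granted is denied. A hypothetical sketch (the names are illustrative, not Google's actual API):

```python
# (app_id, scope) -> whether the user explicitly granted it
GRANTED: dict = {}

def request_scope(app_id: str, scope: str, user_consents: bool) -> None:
    # The app must ask per scope; the user grants or denies explicitly.
    GRANTED[(app_id, scope)] = user_consents

def can_access(app_id: str, scope: str) -> bool:
    # Default-deny: absent or refused consent means no data,
    # closing the "friends' private data" loophole described above.
    return GRANTED.get((app_id, scope), False)
```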
Google will continue to operate Google+ as an enterprise product for companies.
The Online Forums Bill is a Private Members' Bill that was introduced in Parliament on 11th September 2018 under the Ten Minute Rule. The only details published so far are a summary:
A Bill to make administrators and
moderators of certain online forums responsible for content published on those forums; to require such administrators and moderators to remove certain content; to require platforms to publish information about such forums; and for connected purposes.
The next stage for this Bill, Second reading, is scheduled to take place on Friday 26 October 2018.
There is a small petition against the bill
Stop the Online Forums Bill 2017-18 becoming law.
Thought control by politicians, backed by the mainstream media, has led to ever more sinister intrusions into people's freedom to criticise public policy and assemble into campaign groups.
Requiring platforms to publish information about closed forums and making administrators responsible for content is Orwellian and anti-democratic.
In Canada, there have been ongoing discussions and proposals about new levies and fees to compensate creators for supposed missed revenue. There have been calls to levy a tax on mobile devices such as iPhones, for example. This week the Screen Composers
Guild of Canada took things up a notch, calling for a copyright levy on all broadband data use above 15 gigabytes per month.
A proposal from the Screen Composers Guild of Canada (SCGC), put forward during last week's Government hearings, suggests
simply adding a levy on Internet use above 15 gigabytes per month.
The music composers argue that this is warranted because composers miss out on public performance royalties. One of the reasons for this is that online streaming services are not
paying as much as terrestrial broadcasters.
The composers SCGC represents are not the big music stars. They are the people who write music for TV-shows and other broadcasts. Increasingly these are also shown on streaming services where the
compensation is, apparently, much lower. SCGC writes:
With regard to YouTube, which is owned by the advertising company Alphabet-Google, minuscule revenue distribution is being reported by our members. Royalties from
the large streaming services, like Amazon and Netflix, are 50 to 95% lower when compared to those from terrestrial broadcasters.
Statistics like this indicate that our veteran members will soon have to seek employment elsewhere
and young screen-composers will have little hope of sustaining a livelihood, the guild adds, sounding the alarm bell.
SCGC's solution to this problem is to make every Canadian pay an extra fee when they use over 15 gigabytes of data per month. This money would then be used to compensate composers and fix the so-called value gap. As a result, all Internet users who go over the cap will have to pay more, even those who don't watch any of the programs where the music is used.
However, SCGC doesn't see a problem and believes that 15 gigabytes is enough. People who want to avoid paying can still use email and share photos, they argue. Those who go over the cap are likely streaming videos for which composers are not properly compensated. SCGC notes:
An ISP subscription levy that would provide a minimum or provide a basic 15 gigabytes of data per Canadian household a month that would be unlevied. Lots of room for households to be able to do Internet transactions,
business, share photos, download a few things, emails, no problem.
[W]hen you're downloading and consuming over 15 gigabytes of data a month, you're likely streaming Spotify. You're likely streaming YouTube. You're likely
streaming Netflix. So we think because the FANG companies will not give us access to the numbers that they have, we have to apply a broad-based levy. They're forcing us to.
The last comment is telling. The composers guild believes
that a levy is the only option because Netflix, YouTube, and others are not paying their fair share. That sounds like a licensing or rights issue between these services and the authors. Dragging millions of Canadians into this dispute seems questionable,
especially when many people have absolutely nothing to do with it.
As someone who has tracked technology and human rights over the past ten years, I am convinced that digital ID, writ large, poses one of the gravest risks to human rights of any technology that we have encountered. By Brett Soloman
The recent FOSTA law in the US forces internet companies to censor anything to do with legal, adult and consensual sex work. It holds them liable for abetting sex traffickers even when they can't possibly distinguish the trafficking from the legal sex work. The only solution is therefore to ban the use of their platforms for any personal hook-ups. So indeed adult sex work websites have been duly cleansed from the US internet.
But now a woman is claiming that Facebook facilitated trafficking, when of course it's nigh-on impossible for Facebook to detect such use of its networking systems. Of course, that's no excuse under FOSTA.
According to a new lawsuit by an unnamed woman in Houston, Texas, Facebook's morally bankrupt corporate culture is to blame for permitting a sex trafficker to force her into prostitution after beating and raping her. She claims Facebook should be held responsible when a user on the social media platform sexually exploits another Facebook user. The lawsuit says that Facebook should have warned the woman, who was 15 years old at the time she was victimized, that its platform could be used by sex traffickers to recruit and groom victims, including children.
The lawsuit also names Backpage.com, which, according to a Reuters report, hosted pictures of the woman taken by the man who victimized her after he uploaded them to the site.
The classified advertising site Backpage was already shut down by federal prosecutors in April of this year.
Google's parent company Alphabet has rolled out a new tool aimed at defending against attacks on free speech around the globe.
Jigsaw announced the release of a new app, Intra, designed to protect Android users against the manipulation of DNS resolution, a practice commonly used by repressive regimes to prevent users from accessing information deemed off-limits by the state.
In Iran, for example, certain websites redirect to a government censorship page. The same is true of China's Great Firewall (GFW), which returns false and often seemingly erratic IP addresses in response to DNS queries for government-blocked domains. Hundreds of websites are likewise blocked in Pakistan.
Intra works, according to its creators, by simply encrypting the user's connection to the DNS server. By default, it points to Google's own DNS servers, but for users who prefer another (Cloudflare or IBM's Quad9, for example) those settings can be changed within the app.
According to CNET, DNS queries will be encrypted by default in an updated version of Android Pie. Reportedly, however, around 80 percent of Android users aren't using the latest version of the Android operating system. For those, Intra is now available in Google Play.
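The censorship technique Intra defends against relies on DNS queries travelling in cleartext over UDP port 53, where an on-path censor can read the requested hostname and inject a forged answer before the real one arrives. A minimal sketch of what such a query packet looks like on the wire (the hostname and transaction ID here are illustrative, not anything Intra itself uses):

```python
import struct

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query packet (per RFC 1035)."""
    # Header: transaction id, flags (recursion desired), QDCOUNT=1,
    # and zero answer/authority/additional records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE=A, QCLASS=IN.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

packet = build_dns_query("example.com")
# Over plain UDP port 53, the queried name is plainly visible in the
# packet bytes, which is what lets a censor spot and spoof the lookup.
print(b"example" in packet)
```

Intra (and encrypted DNS generally, such as DNS-over-HTTPS or DNS-over-TLS) sends these same query bytes inside an encrypted channel to a trusted resolver, so an on-path observer sees only an opaque connection, not the name being looked up, and cannot forge the response.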
New rules on audiovisual media services will apply to broadcasters, and also to video-on-demand and video-sharing platforms
MEPs voted on updated rules on audiovisual media services covering the protection of children, stricter rules on advertising, and a requirement for 30% European content in video-on-demand catalogues.
Following the final vote on this agreement, the revised legislation will apply to broadcasters, but also to video-on-demand and video-sharing platforms,
such as Netflix, YouTube or Facebook, as well as to live streaming on video-sharing platforms.
The updated rules will ensure:
Enhanced protection of minors from violence, hatred, terrorism and harmful advertising
Audiovisual media service providers should take appropriate measures to combat content inciting violence, hatred and terrorism, while gratuitous violence and pornography will be subject to the strictest rules. Video-sharing platforms
will now be responsible for reacting quickly when content is reported or flagged by users as harmful.
The legislation does not include any automatic filtering of uploaded content, but, at the request of the Parliament, platforms
need to create a transparent, easy-to-use and effective mechanism to allow users to report or flag content.
The new law includes strict rules on advertising, product placement in children's TV programmes and content available on
video-on-demand platforms. EP negotiators also secured a personal data protection mechanism for children, imposing measures to ensure that data collected by audiovisual media providers are not processed for commercial use, including for profiling and
behaviourally targeted advertising.
Redefined limits of advertising
Under the new rules, advertising can take up a maximum of 20% of the daily broadcasting period between 06:00 and 18:00, giving broadcasters the flexibility to adjust their advertising periods. A prime-time window between 18:00 and 00:00 was also set out, during which advertising may likewise take up a maximum of 20% of broadcasting time.
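In concrete terms, the 20% cap works out as follows (a quick back-of-the-envelope check; the helper function name is illustrative, not from the directive):

```python
def ad_cap_minutes(window_hours: float, cap_fraction: float = 0.20) -> float:
    """Maximum advertising minutes allowed in a broadcast window."""
    return window_hours * 60 * cap_fraction

# Daytime window 06:00-18:00 is 12 hours: up to 144 minutes of ads,
# placed flexibly anywhere within the window.
print(ad_cap_minutes(12))
# Prime-time window 18:00-00:00 is 6 hours: up to 72 minutes of ads.
print(ad_cap_minutes(6))
```

The flexibility is the notable change: rather than a fixed per-hour limit, broadcasters may concentrate their allowance within each window, so long as the window total stays under the cap.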
30% of European content on the video-on-demand platforms' catalogues
In order to support the cultural diversity of the European audiovisual sector, MEPs ensured that 30% of content in the video-on-demand platforms' catalogues should be European.
Video-on-demand platforms are
also asked to contribute to the development of European audiovisual productions, either by investing directly in content or by contributing to national funds. The level of contribution in each country should be proportional to their on-demand revenues in
that country (member states where they are established or member states where they target the audience wholly or mostly).
The legislation also includes provisions regarding accessibility, integrity of a broadcaster's signal,
strengthening regulatory authorities and promoting media competences.
The deal still needs to be formally approved by the Council of EU ministers before the revised law can enter into
force. Member States have 21 months after its entry into force to transpose the new rules into national legislation.
The text was adopted by 452 votes against 132, with 65 abstentions.
A new section regarding censorship has been added to the AVMS rules:
Member States shall take appropriate measures to ensure that audiovisual media services provided by media service providers under their jurisdiction which may impair the physical, mental or moral development of minors are only
made available in such a way as to ensure that minors will not normally hear or see them. Such measures may include selecting the time of the broadcast, age verification tools or other technical measures. They shall be proportionate to the potential harm
of the programme. The most harmful content, such as gratuitous violence and pornography, shall be subject to the strictest measures.
Personal data of minors collected or otherwise generated by media service
providers pursuant to paragraph 1 shall not be processed for commercial purposes, such as direct marketing, profiling and behaviourally targeted advertising.
Member States shall ensure that media service providers
provide sufficient information to viewers about content which may impair the physical, mental or moral development of minors. For this purpose, media service providers shall use a system describing the potentially harmful nature of the content of an
audiovisual media service. For the implementation of this paragraph, Member States shall encourage the use of co-regulation as provided for in Article 4a(1).
The Commission shall encourage media service providers
to exchange best practices on co-regulatory codes of conduct. Member States and the Commission may foster self-regulation, for the purposes of this Article, through Union codes of conduct as referred to in Article 4a(2).
Article 4a suggests possible organisation of the censors assigned to the task, e.g. state censors, state-controlled organisations such as Ofcom, or nominally state-controlled co-regulators like the defunct ATVOD.
Article 4a(3) notes that censorial countries like the UK are free to add further censorship rules of their own:
Member States shall remain free to require media service providers under their jurisdiction to comply with more detailed or stricter rules in compliance with this Directive and Union law, including where their national independent regulatory authorities or bodies conclude that any code of conduct or parts thereof have proven not to be sufficiently effective. Member States shall report such rules to the Commission without undue delay.