Although a majority are in favour of verifying age, it seems far fewer people in our survey would be happy to actually go through verification themselves. Only 19% said they'd be comfortable sharing information directly with an adult site, and
just 11% would be comfortable handing details to a third party.
The UK's mass digital surveillance regime that preceded the snooper's charter has been found to be illegal by an appeals court.
The case was brought by the Labour deputy leader, Tom Watson, in conjunction with Liberty, the human rights campaign group.
The three judges said the Data Retention and Investigatory Powers Act 2014 (Dripa), which paved the way for the snooper's charter legislation, did not restrict the accessing of confidential personal phone and web browsing records to investigations of
serious crime, and allowed police and other public bodies to authorise their own access without adequate oversight. The judges said Dripa was inconsistent with EU law because of this lack of safeguards, including the absence of prior review by a
court or independent administrative authority.
Responding to the ruling, Watson said:
This legislation was flawed from the start. It was rushed through parliament just before recess without proper parliamentary scrutiny. The government must now bring forward changes to the Investigatory Powers Act to ensure that hundreds of
thousands of people, many of whom are innocent victims or witnesses to crime, are protected by a system of independent approval for access to communications data. I'm proud to have played my part in safeguarding citizens' fundamental rights.
Martha Spurrier, the director of Liberty, said:
Yet again a UK court has ruled the government's extreme mass surveillance regime unlawful. This judgement tells ministers in crystal clear terms that they are breaching the public's human rights. She added that no politician was above the law, asking: When
will the government stop bartering with judges and start drawing up a surveillance law that upholds our democratic freedoms?
Matthew Rice of the Open Rights Group responded:
Once again, another UK court has found another piece of Government surveillance legislation to be unlawful. The Government needs to admit their legislation is flawed and make the necessary changes to the Investigatory Powers Act to protect the
public's fundamental rights.
The Investigatory Powers Act carves a gaping hole in the public's rights. Public bodies are able to access data without proper oversight, and to access that data for reasons other than fighting serious crime. These practices must stop; the courts
have now confirmed it. The ball is firmly in the Government's court to set it right.
Two broadband providers, BT and EE, have gone to the Supreme Court in London to appeal two key aspects of an earlier ruling, which forced major UK ISPs to start blocking websites that were found to sell counterfeit goods.
Previously major ISPs could only be forced, via a court order, to block websites if they were found to facilitate internet copyright infringement. But in 2014 the High Court extended this to include sites that sell counterfeit goods and thus
abuse company trademarks.
The providers initially appealed this decision, not least by stating that Cartier and Montblanc (who raised the original case) had provided no evidence that their networks were being abused to infringe trademarks, and that the UK Trade Marks Act
did not include a provision for website blocking. Not to mention the risk that such a law could be applied in an overzealous way, e.g. requiring the blocking of eBay because of one seller.
The ISPs also noted that trademark-infringing sites weren't heavily used, and thus they felt it would not be proportionate for them to bear the costs involved.
In April 2016 the case went to the Court of Appeal (London), where the ISPs lost; hence the appeal to the Supreme Court.
Firefox is working to protect users from censorship and government control of the Internet. Firefox 59 will recognize new peer-to-peer internet protocols such as Dat Project, IPFS, and Secure Scuttlebutt, allowing companies to develop extensions
which will deliver the Internet in a way governments will find difficult to control, monitor and censor.
Mozilla believes such freedom is a key ingredient of a healthy Internet, and has sponsored other projects which would offer peer-to-peer wireless internet that cuts out Internet Service Providers.
While a peer-to-peer system would never be as fast and easy as the client-server system we have at present, it does provide a baseline level of service below which governments and ISPs could not go without risking an increasing number of users defecting.
This means the mere existence of these systems helps everyone else, even if they never become widespread.
Mozilla has always been a proponent of decentralization, recognizing that it is a key ingredient of a healthy Internet. Starting with Firefox 59, several protocols that support decentralized architectures are approved for
use by extensions. The newly approved protocols include dat:// (Dat Project), ipfs:// (IPFS), and ssb:// (Secure Scuttlebutt).
Firefox itself does not implement these protocols, but having them on the approved list means the browser recognizes them as valid protocols and extensions are free to provide implementations.
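As an illustration of how an extension might supply such an implementation, the sketch below uses the `protocol_handlers` WebExtension manifest key to map one of the whitelisted schemes to an HTTP gateway. It is a hypothetical sketch: the extension name and the choice of the public ipfs.io gateway are assumptions, not anything the article specifies.

```json
{
  "manifest_version": 2,
  "name": "IPFS Gateway Redirect (illustrative sketch)",
  "version": "0.1",
  "protocol_handlers": [
    {
      "protocol": "ipfs",
      "name": "IPFS gateway handler",
      "uriTemplate": "https://ipfs.io/ipfs/%s"
    }
  ]
}
```

With a handler like this installed, an ipfs:// link would resolve through the nominated gateway; a more ambitious extension could instead talk to a local node, cutting out intermediaries entirely.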
A Republican Virginia lawmaker has revived the nonsense idea to impose a state tax charge on every device sold to enable access to adult websites.
State Representative Dave LaRock has introduced a bill misleadingly called the Human Trafficking Prevention Act, which would require Virginians to pay a $20 fee to unblock content on adult websites.
LaRock has a track record of being anti-porn and anti-gay. He once tore down advertising for an adult bookstore and railed against recognition for a local LGBTQ pride month.
Opponents point out that the proposal amounts to a tax on media content and would violate the First Amendment. The Media Coalition, which tracks legislation involving the First Amendment, sees the bill as nothing more than a tax on content, which
is unconstitutional, said executive director David Horowitz. People have a First Amendment right to access this content, and publishers have a First Amendment right to provide it.
Claire Guthrie Gastañaga, executive director of the ACLU of Virginia, said the organization just can't take the bill seriously.
China's internet censor has shut down some of the most popular sections of Weibo, a Twitter-like social media platform, saying that the website had failed in its duty to censor content.
The Beijing office of the Cyberspace Administration of China summoned a Weibo executive, complaining of its serious problems including not censoring vulgar and pornographic content. The censor said:
Sina Weibo has violated the relevant internet laws and regulations and spread illegal information. It has a serious problem in promoting 'wrong' values and has had an adverse influence on the internet environment.
It highlighted as problematic sections of the platform such as the hot topics ranking, most searched, most searched celebrities and most searched relationship topics, as well as its question-and-answer section.
Other problems on Weibo included allowing posts that discriminated against ethnic minorities and content that was not in line with what it deemed appropriate social values.
Weibo said it had since shut down a number of services, including its list of top searches, for a week.
Just a bit of background from Thailand explaining how the internet is priced for mobile phones; it rather explains how Facebook and YouTube are even more dominant there than in the west:
We give our littl'un a quid a week to top up her pay as you go mobile phone. She can, and does, spend unlimited time on YouTube, Facebook, Messenger, Skype, Line and a couple of other social media sites. It's as cheap as chips, but the rub is
that she has just a tiny bandwidth allowance to look at any sites apart from the core social media set.
On the other hand, wider internet access with enough bandwidth to watch a few videos costs about 15 quid a month (a recently reduced price; it was 30 quid a month until a few months ago).
Presumably the cheap service is actually paid for by Google and Facebook etc, with the knowledge that people are nearly totally trapped in their walled garden. It's quite useful for kids because they haven't got the bandwidth to go looking round
where they shouldn't. But the price makes it very attractive to many adults too.
Anyway, Summer Lopez from PEN America considers how this internet monopoly stitch-up makes the announced Facebook feed changes even more sensitive there than in the west.
Theresa May is creating a new national security unit to counter supposed fake news and disinformation spread by Russia and other foreign powers, Downing Street has announced.
The Prime Minister's official spokesman said the new national security communications unit would build on existing capabilities and would be tasked with combating disinformation by state actors and others. The spokesman said:
We are living in an era of fake news and competing narratives. The government will respond with more and better use of national security communications to tackle these interconnected, complex challenges.
To do this we will build on existing capabilities by creating a dedicated national security communications unit. This will be tasked with combating disinformation by state actors and others.
The new unit has already been dubbed the Ministry of Truth.
It is clear that the BBFC are set to censor porn websites, but what about the grey area of non-porn websites about porn and sex work? The BBFC falsely claim they don't know yet, as they haven't begun work on their guidelines.
A few MEPs have produced a YouTube video highlighting the corporate and state censorship that will be enabled by an EU proposal to require social media posts to be approved by an automated censorship machine before posting.
In a new campaign video, several Members of the European Parliament warn that the EU's proposed mandatory upload filters pose a threat to freedom of speech. The new filters would function as censorship machines which are "completely
disproportionate," they say. The MEPs encourage the public to speak up, while they still can.
Through a series of new proposals, the European Commission is working hard to
modernize EU copyright law. Among other things, it will require online services to do more to fight piracy.
These proposals have not been without controversy. Article 13 of the proposed Copyright Directive, for example, has been widely criticized as it would require online services to monitor and filter uploaded content.
This means that online services, which deal with large volumes of user-uploaded content, must use fingerprinting or other detection mechanisms -- similar to YouTube's Content-ID system -- to block copyright infringing files.
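To make the mechanism concrete, here is a deliberately simplified sketch of fingerprint-based upload filtering. It is not how Content-ID actually works (real systems use robust perceptual fingerprints that survive re-encoding, not exact hashes); the names and data are invented purely to illustrate the matching step the article describes.

```python
import hashlib

# Hypothetical database of fingerprints registered by rightsholders.
# A plain SHA-256 hash stands in for a real perceptual fingerprint
# only to keep the sketch short.
blocked_fingerprints = {
    hashlib.sha256(b"copyrighted-track-bytes").hexdigest(),
}

def allow_upload(data: bytes) -> bool:
    """Return False when the upload's fingerprint matches a blocked work."""
    return hashlib.sha256(data).hexdigest() not in blocked_fingerprints

print(allow_upload(b"my-home-video"))            # True: no match, upload passes
print(allow_upload(b"copyrighted-track-bytes"))  # False: match, upload blocked
```

The criticism in the article follows directly from this design: the filter sees only a match/no-match signal, with no notion of context such as quotation, parody or fair use.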
The Commission believes that more stringent control is needed to support copyright holders. However, many legal scholars, digital activists, and members of the public worry that they will violate the rights of regular Internet users.
In the European Parliament, there is fierce opposition as well. Today, six Members of Parliament (MEPs) from across the political spectrum released a new campaign video warning their fellow colleagues and the public at large.
The MEPs warn that such upload filters would act as censorship machines, something they've made clear to the Council's working group on intellectual property, where the controversial proposal was discussed today.
Imagine if every time you opened your mouth, computers controlled by big companies would check what you were about to say, and have the power to prevent you from saying it, Greens/EFA MEP Julia Reda says.
A new legal proposal would make this a reality when it comes to expressing yourself online: Every clip and every photo would have to be pre-screened by some automated 'robocop' before it could be uploaded and seen online, ALDE MEP Marietje Schaake says.
Stop censorship machines!
Schaake notes that she has dealt with the consequences of upload filters herself. When she uploaded a recording of a political speech to YouTube, the site took it down without explanation. To this day, the MEP still doesn't know on what
grounds it was removed.
These broad upload filters are completely disproportionate and a danger for freedom of speech, the MEPs warn. The automated systems make mistakes and can't properly detect whether something's fair use, for example.
Another problem is that the measures will be relatively costly for smaller companies, which puts them at a competitive disadvantage. "Only the biggest platforms can afford them -- European competitors and small businesses will
struggle," ECR MEP Dan Dalton says.
The plans can still be stopped, the MEPs say. They are currently scheduled for a vote in the Legal Affairs Committee at the end of March, and the video encourages members of the public to raise their voices.
Speak out... while you can still do so unfiltered! S&D MEP Catherine Stihler says.
Robert Hannigan, a former director of GCHQ, has joined the clamour for internet censorship by US internet monopolies.
Hannigan accused the web giants of doing too little to remove terrorist and extremist content and he threatened that the companies have a year to reform themselves or face government legislation.
Hannigan suggested tech companies were becoming more powerful than governments, and had a tendency to consider themselves above democracy. But he said he believed their window to change themselves was closing and he feared most were missing the
boat. He predicted that if firms do not take credible action by the end of 2018, governments would start to intervene with legislation.
The government publishes its guidance to the new UK porn censor about notifying websites that they are to be censored, asking payment providers and advertisers to end their services, recourse to ISP blocks, and an appeals process.
A person contravenes Part 3 of the Digital Economy Act 2017 if they make
pornographic material available on the internet on a commercial basis to
persons in the United Kingdom without ensuring that the material is not
normally accessible to persons under the age of 18. Contravention could lead
to a range of measures being taken by the age-verification regulator in
relation to that person, including blocking by internet service providers (ISPs).
Part 3 also gives the age-verification regulator powers to act where a person
makes extreme pornographic material (as defined in section 22 of the Digital
Economy Act 2017) available on the internet to persons in the United Kingdom.
This guidance has been written to provide the framework for the operation of
the age-verification regulatory regime in the following areas:
● Regulator's approach to the exercise of its powers;
● Age-verification arrangements;
● Payment-services Providers and Ancillary Service Providers;
● Internet Service Provider blocking; and
● Reporting.
This guidance balances two overarching principles in the regulator's application of its powers under sections 19, 21 and 23 - that it should apply its powers in the way which it thinks will be most effective in ensuring
compliance on a case-by-case basis and that it should take a proportionate approach.
As set out in this guidance, it is expected that the regulator, in taking a proportionate approach, will first seek to engage with the non-compliant person to encourage them to comply, before considering issuing a notice
under section 19, 21 or 23, unless there are reasons as to why the regulator does not think that is appropriate in a given case.
Regulator's approach to the exercise of its powers
The age-verification consultation Child Safety Online: Age verification for pornography identified that an extremely large number of websites contain pornographic content - circa 5 million sites or parts of sites. All
providers of online pornography, who are making available pornographic material to persons in the United Kingdom on a commercial basis, will be required to comply with the age-verification requirement.
In exercising its powers, the regulator should take a proportionate approach. Section 26(1) specifically provides that the regulator may, if it thinks fit, choose to exercise its powers principally in relation to persons
who, in the age-verification regulator's opinion:
(a) make pornographic material or extreme pornographic material available on the internet on a commercial basis to a large number of persons, or a large number of persons under the age of 18, in the United Kingdom; or
(b) generate a large amount of turnover by doing so.
In taking a proportionate approach, the regulator should have regard to the following:
a. As set out in section 19, before making a determination that a person is contravening section 14(1), the regulator must allow that person an opportunity to make representations about why the determination should not be
made. To ensure clarity and discourage evasion, the regulator should specify a prompt timeframe for compliance and, if it considers it appropriate, set out the steps that it considers that the person needs to take to comply.
b. When considering whether to exercise its powers (whether under section 19, 21 or 23), including considering what type of notice to issue, the regulator should consider, in any given case, which intervention will be most
effective in encouraging compliance, while balancing this against the need to act in a proportionate manner.
c. Before issuing a notice to require internet service providers to block access to material, the regulator must always first consider whether issuing civil proceedings or giving notice to ancillary service providers and
payment-services providers might have a sufficient effect on the non-complying person's behaviour.
To help ensure transparency, the regulator should publish on its website details of any notices under sections 19, 21 and 23.
Section 25(1) provides that the regulator must publish guidance about the types of arrangements for making pornographic material available that the regulator will treat as complying with section 14(1). This guidance is
subject to a Parliamentary procedure.
A person making pornographic material available on a commercial basis to persons in the United Kingdom must have an effective process in place to verify a user is 18 or over. There are various methods for verifying whether
someone is 18 or over (and it is expected that new age-verification technologies will develop over time). As such, the Secretary of State considers that rather than setting out a closed list of age-verification arrangements, the regulator's
guidance should specify the criteria by which it will assess, in any given case, that a person has met with this requirement. The regulator's guidance should also outline good practice in relation to age verification to encourage consumer choice
and the use of mechanisms which confirm age, rather than identity.
The regulator is not required to approve individual age-verification solutions. There are various ways to age verify online and the industry is developing at pace. Providers are innovating and providing choice to consumers.
The process of verifying age for adults should be concerned only with the need to establish that the user is aged 18 or above. The privacy of adult users of pornographic sites should be maintained and the potential for fraud
or misuse of personal data should be safeguarded. The key focus of many age-verification providers is on privacy and specifically providing verification, rather than identification of the individual.
Payment-services providers and ancillary service providers
There is no requirement in the Digital Economy Act for payment-services providers or ancillary service providers to take any action on receipt of such a notice. However, Government expects that responsible companies will
wish to withdraw services from those who are in breach of UK legislation by making pornographic material accessible online to children or by making extreme pornographic material available.
The regulator should consider on a case-by-case basis the effectiveness of notifying different ancillary service providers (and payment-services providers).
There are a wide-range of providers whose services may be used by pornography providers to enable or facilitate making pornography available online and who may therefore fall under the definition of ancillary service
provider in section 21(5)(a). Such a service is not limited to where a direct financial relationship is in place between the service and the pornography provider. Section 21(5)(b) identifies those who advertise commercially on such sites as
ancillary service providers. In addition, others include, but are not limited to:
a. Platforms which enable pornographic content or extreme pornographic material to be uploaded;
b. Search engines which facilitate access to pornographic content or extreme pornographic material;
c. Discussion fora and communities in which users post links;
d. Cyberlockers and cloud storage services on which pornographic content or extreme pornographic material may be stored;
e. Services including websites and App marketplaces that enable users to download Apps;
f. Hosting services which enable access to websites, Apps or App marketplaces;
g. Domain name registrars; and
h. Set-top boxes, mobile applications and other devices that can connect directly to streaming servers.
Internet Service Provider blocking
The regulator should only issue a notice to an internet service provider having had regard to Chapter 2 of this guidance. The regulator should take a proportionate approach and consider all actions (Chapter 2.4) before
issuing a notice to internet service providers.
In determining those ISPs that will be subject to notification, the regulator should take into consideration the number and the nature of customers, with a focus on suppliers of home and mobile broadband services. The
regulator should consider any ISP that promotes its services on the basis of pornography being accessible without age verification irrespective of other considerations.
The regulator should take into account the child safety impact that will be achieved by notifying a supplier with a small number of subscribers and ensure a proportionate approach. Additionally, it is not anticipated that
ISPs will be expected to block services to business customers, unless a specific need is identified.
In order to assist with the ongoing review of the effectiveness of the new regime and the regulator's functions, the Secretary of State considers that it would be good practice for the regulator to submit to the Secretary of
State an annual report on the exercise of its functions and their effectiveness.
The US adult trade group, Free Speech Coalition at its inaugural Leadership Conference on Thursday introduced Murray Perkins, who leads efforts for the UK's new age-verification censorship regime under the Digital Economy Act.
Perkins is the principal adviser for the BBFC, which last year signed on to assume the role of internet porn censor.
Perkins traveled to the XBIZ Show on an informational trip specifically to offer education on the Digital Economy Act's regulatory powers; he continues on to Las Vegas next week and Australia the following week to speak with online adult businesses.
The reason why I am here is to be visible, to give people an opportunity to ask questions about what is happening. I firmly believe that the only way to make this work is to work with, and not against, the adult entertainment industry.
This is a challenge; there is no template, but we will figure it out. I am reasonably optimistic [the legislation] will work.
A team of classification examiners will start screening content for potential violations starting in the spring. (In a separate discussion with XBIZ, Perkins said that his army of examiners will total 15.)
Perkins showed himself to be a bit naive, a bit insensitive, or a bit of an idiot when he spouted:
The Digital Economy Act will affect everyone in this room, one way or the other, Perkins said. However, the Digital Economy Act is not anti-porn -- it is not intended to disrupt an adult's journey or access to their
content. [...BUT... it is likely to totally devastate the UK adult industry and hand over all remaining business to the foreign internet giant Mindgeek, who will become the Facebook/Google/Amazon of porn. Not to mention the Brits served
on a platter to scammers, blackmailers and identity thieves].
The third evaluation of the EU's 'Code of Conduct' on censoring 'illegal online hate speech' carried out by NGOs and public bodies shows that IT companies removed on average 70% of posts claimed to contain 'illegal hate speech'.
However, some further challenges still remain, in particular the lack of systematic feedback to users.
Google+ announced today that they are joining the Code of Conduct, and Facebook confirmed that Instagram would also do so, thus further expanding the numbers of actors covered by it.
Vera Jourová, with the oxymoronic title of EU Commissioner for Justice, Consumers and Gender Equality, said:
The Internet must be a safe place, free from illegal hate speech, free from xenophobic and racist content. The Code of Conduct is now proving to be a valuable tool to tackle illegal content quickly and efficiently. This shows that where there is
a strong collaboration between technology companies, civil society and policy makers we can get results, and at the same time, preserve freedom of speech. I expect IT companies to show similar determination when working on other important
issues, such as the fight against terrorism, or unfavourable terms and conditions for their users.
On average, IT companies removed 70% of all the 'illegal hate speech' notified to them by the NGOs and public bodies participating in the evaluation. This rate has steadily increased from 28% in the first monitoring round in 2016 and 59% in the
second monitoring exercise in May 2017.
The Commission will continue to monitor regularly the implementation of the Code by the participating IT Companies with the help of civil society organisations and aims at widening it to further online platforms. The Commission will consider
additional measures if efforts are not pursued or slow down.
Of course there is no mention of the possibility that some of the reports of supposed 'illegal hate speech' are not actioned because they are simply wrong, and may be just the politically correct being easily offended. We seem to live in an unjust age
where the accuser is always considered right and the merits of the case count for absolutely nothing.
Google is set for its first appearance in a London court over the so-called right to be forgotten in two cases that will test the boundaries between personal privacy and public interest.
Two anonymous people, who describe themselves in court filings as businessmen, want the search engine to take down links to information about their old convictions.
One of the men had been found guilty of conspiracy to account falsely, and the other of conspiracy to intercept communications. Judge Matthew Nicklin said at a pre-trial hearing that those convictions are old and are now covered by an English law
-- designed to rehabilitate offenders -- that says they can effectively be ignored. With a few exceptions, they don't have to be disclosed to potential employers.
A Google spokeswoman said:
We work hard to comply with the right to be forgotten, but we take great care not to remove search results that are clearly in the public interest and will defend the public's right to access lawful information.
Facebook has unveiled more changes to the News Feed of its 2 billion users, announcing it will rank news organizations by credibility based on user feedback and diminish its role as an arbiter of the news people see.
In a blog post accompanying the announcement, chief executive Mark Zuckerberg wrote:
Facebook is not comfortable deciding which news sources are the most trustworthy in a world with so much division. We decided that having the community determine which sources are broadly trusted would be most objective.
The new trust rankings will emerge from surveys the company is conducting. Broadly trusted outlets that are affirmed by a significant cross-section of users may see a boost in readership, while lesser-known organizations or start-ups receiving poor
ratings could see their web traffic decline significantly on the social network.
The company's changes also include an effort to boost the content of local news outlets, which have suffered sizable subscription and readership declines as news consumption migrated online.
On Friday, Google announced it would cancel a two-month-old experiment, called Knowledge Panel, that informed its users that a news article had been disputed by independent fact-checking organizations. Conservatives had complained the feature
unfairly targeted a right-leaning outlet.
The House of Representatives cast a deeply disappointing vote today to extend NSA spying powers for the next six years by a 256-164 margin. In a related vote, the House also failed to adopt meaningful reforms on how the government sweeps up large
swaths of data that predictably include Americans' communications.
Because of these votes, broad NSA surveillance of the Internet will likely continue, and the government will still have access to Americans' emails, chat logs, and browsing history without a warrant. Because of these votes, this surveillance will
continue to operate in a dark corner, routinely violating the Fourth Amendment and other core constitutional protections.
This is a disappointment to EFF and all our supporters who, for weeks, have spoken to defend privacy. And this is a disappointment for the dozens of Congress members who have tried to rein NSA surveillance in, asking that the intelligence
community merely follow the Constitution.
Today's House vote concerned S. 139, a bill to extend Section 702 of the Foreign Intelligence Surveillance Act (FISA), a powerful surveillance authority the NSA relies on to sweep up countless Americans' electronic communications. EFF vehemently
opposed S. 139 for its failure to enact true reform of Section 702.
As passed by the House today, the bill:
Endorses nearly all warrantless searches of databases containing Americans' communications collected under Section 702.
Provides a narrow and seemingly useless warrant requirement that applies only for searches in some later-stage criminal investigations, a circumstance which the FBI itself has said almost never happens.
Allows for the restarting of "about" collection, an invasive type of surveillance that the NSA ended last year after being criticized by the Foreign Intelligence Surveillance Court for privacy violations.
Sunsets in six years, delaying Congress' best opportunity to debate the limits of NSA surveillance.
Sadly, the House's approval of S. 139 was its second failure today. The first was in the House's inability to pass an amendment--through a 183-233 vote--that would have replaced the text of S. 139 with the text of the USA Rights Act, a bill that
EFF is proud to support. You can read about that bill here.
The amendment to replace the text of S. 139 with the USA Rights Act was introduced by Reps. Justin Amash (R-MI) and Zoe Lofgren (D-CA) and included more than 40 cosponsors from both sides of the aisle. Its defeat came from both Republicans and Democrats.
S. 139 now heads to the Senate, which we expect to vote by January 19. The Senate has already considered
stronger bills to rein in NSA surveillance, and we call on the Senate to reject this terrible bill coming out of the House.
A new undercover video from a group of conservative investigative journalists appears to show Twitter staff and former employees talking about how they censor content they disagree with.
James O'Keefe, Project Veritas founder, posted a video showing an undercover reporter speaking to Abhinov Vadrevu, a former Twitter software engineer, at a San Francisco restaurant on January 3.
There, he discussed a technique referred to as shadow banning, in which a user's content is quietly blocked without them ever knowing about it. Their tweets would still appear to their followers, but they wouldn't appear in search results or anywhere else on Twitter. So posters just think that no one is engaging with their content, when in reality, no one is seeing it.
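The mechanism described above can be sketched in a few lines. This is a hypothetical illustration only, not Twitter's actual code: the flag, data and function names are invented to show how followers could still see a tweet while search silently drops it.

```python
# Hypothetical sketch of "shadow banning" as described: a shadow-banned
# user's tweets still render for followers, but are silently filtered
# out of search and discovery surfaces.

shadow_banned = {"user123"}  # invented example data

tweets = [
    {"author": "user123", "text": "hello world"},
    {"author": "user456", "text": "hello everyone"},
]

def follower_timeline(author):
    """Followers still see everything the author posts."""
    return [t for t in tweets if t["author"] == author]

def search(query):
    """Search silently drops tweets from shadow-banned authors."""
    return [
        t for t in tweets
        if query in t["text"] and t["author"] not in shadow_banned
    ]

print(follower_timeline("user123"))  # the tweet is still visible here
print(search("hello"))               # but only user456's tweet surfaces
```

The point of the sketch is the asymmetry: nothing in the author-facing path changes, so the banned user has no signal that anything is wrong.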
Olinda Hassan, a policy manager for Twitter's Trust and Safety team, was filmed talking about the development of a system for down-ranking shitty people.
Another Twitter engineer claimed that staff already have tools to censor pro-Trump or conservative content. One Twitter engineer appeared to suggest that the social network was trying to ban, like, a way of talking. Anyone found to be aggressive
or negative will just vanish.
Every single conversation is going to be rated by a machine and the machine is going to say whether or not it's a positive thing or a negative thing, Twitter software engineer Steven Pierre was filmed on December 8 saying as he discussed the
development of an automated censure system.
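An automated system of the kind Pierre describes would, at its simplest, score each message and act on the score. The sketch below uses an invented word-list classifier purely to illustrate the idea; a real system would use a trained model, and all names and thresholds here are assumptions.

```python
# Toy illustration of machine-rating messages as positive or negative.
# The word lists and threshold are invented for demonstration only.

NEGATIVE_WORDS = {"hate", "stupid", "awful"}
POSITIVE_WORDS = {"great", "thanks", "love"}

def rate(message):
    """Return a crude sentiment score: positive minus negative word hits."""
    words = message.lower().split()
    return (sum(w in POSITIVE_WORDS for w in words)
            - sum(w in NEGATIVE_WORDS for w in words))

def should_suppress(message, threshold=0):
    """Flag messages the machine rates as negative."""
    return rate(message) < threshold

print(should_suppress("I love this, thanks"))       # False
print(should_suppress("this is awful and stupid"))  # True
```

Even this toy version shows the concern raised in the video: the machine's notion of "negative" is whatever its authors encode, and everything below the threshold simply disappears.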
In the latest undercover Project Veritas video investigation, eight current and former Twitter employees are on camera explaining steps the social media giant is taking to censor political content that they don't like.
Last September, Swiss legislators approved changes to the country's gambling laws that will introduce website blocking for foreign competitors to Switzerland's own gambling industry.
This domain-blocking plan, set to take effect in 2019, met with pushback from Swiss ISPs and civil libertarians, who decided Swiss voters should have a say in this flirtation with authoritarian censorship. Swiss law allows voters a referendum on
contentious legislation provided 50k citizens sign the necessary petition within 100 days of the law's passage.
On Tuesday, Swiss media outlet Blick reported that a coalition of three political parties and the Internet Society Switzerland Chapter had so far collected around 65k signatures, of which 25k have been certified by the state. The group has until January 18 to have the additional 25k signatures needed for the referendum certified.
Andri Silberschmidt, president of the youth organization of Switzerland's Free Democratic Party, told Blick that his group was intent on combating digital isolation, mindful that once a government starts banning what its citizens can do online, even tighter restrictions are usually not far behind. Freedom for the economy and the internet has great support in Switzerland.
The local casino industry has long complained that its falling revenue is due to competition from international gambling sites. But the most recent data from the Swiss Federation of Casinos showed the nation's 21 licensed brick-and-mortar casinos posted a modest year-on-year revenue gain in 2016. Lottery and sports betting revenue enjoyed even larger gains, rising 8.3% year-on-year. So it appears that there are bluffs to call.
Twitter said that it would not ban a world leader such as Donald Trump from its platform... BUT... that it reserved the right to delete official statements by heads of state of sovereign nations as it saw fit.
Germany's justice minister fell victim to the social media rules he himself championed when one of his tweets was deleted following several complaints.
The censored tweet dated back to 2010, when Heiko Maas was not yet minister. In the tweet he had called Thilo Sarrazin, a politician who wrote a controversial book on Muslim immigrants, an idiot.
Maas told Bild on Monday that he did not receive any information from Twitter about why the tweet was deleted.
Germany meanwhile signalled on Monday it was open to amending the controversial law which combats online hate speech. Government spokesman Steffen Seibert said an evaluation would be carried out within six months to examine how well the new law was working.
Germany's NetzDG internet censorship law has been in force since the New Year and has already sparked multiple controversies. Opposition parties across the political spectrum already say it's time for change.
Senior figures in the rival Free Democratic (FDP), Green and Left parties on Sunday demanded lawmakers replace Germany's recently passed online hate speech law. The call comes after Twitter decided to delete allegedly offensive statements by
far-right politicians and suspend the account of a German satirical magazine.
The last few days have emphatically shown that private companies cannot correctly determine whether a questionable online statement is illegal, satirical or tasteless yet still democratically legitimate, the FDP's general secretary Nicola Beer told the German weekly Welt am Sonntag.
Beer said Germany needed a law similar to the one the FDP proposed before Christmas that would give an appropriately endowed authority the right to enforce the rule of law online rather than give private companies the right to determine the
illegality of flagged content.
Green Party Chairwoman Simone Peter has also called for a replacement law that would take away the right of private companies to make decisions regarding flagged content. She said:
It is not acceptable for US companies such as Twitter to influence freedom of expression or press freedoms in Germany. Last year, we proposed a clear legal alternative that would hold platforms such as Twitter accountable without making them the arbiters of free speech.
The Greens' internet policy spokesman, Konstantin von Notz, also criticized the current statute, telling the newspaper that reform of the law was overdue.
Left leader Sahra Wagenknecht added:
The law is a slap in the face of all democratic principles because, in a constitutional state, courts rather than private companies make decisions about what is lawful and what is not.
Chairman of the Federal Communications Commission (FCC), Ajit Pai, has been forced to cancel his scheduled appearance at the world's biggest tech conference, CES, after receiving death threats.
That's according to a report by Recode, which cites two agency sources familiar with the matter. This seems to be in response to the disgraceful FCC decision to scrap the US government's net neutrality rules, made in December last year.
For those not up-to-date, net neutrality is the principle that internet service providers should enable equal levels of access to all web services. The decision allows big business to assert a lot more control over the internet and lets them charge websites and customers for differing levels of service.
The Twitter account of German satirical magazine Titanic was blocked after it parodied anti-Muslim comments by AfD MP Beatrix von Storch.
She accused police of trying to appease the barbaric, Muslim, rapist hordes of men by putting out a tweet in Arabic.
On Tuesday night, the magazine published a tweet parodying von Storch, saying:
The last thing that I want is mollified barbarian, Muslim, gang-raping hordes of men.
Titanic said on Wednesday its Twitter account had been blocked over the message, presumably as a result of a new law requiring social media sites to immediately block hateful comments on threat of massive fines. The law allows no time, and companies have no economic reason, to assess the merits of censorship claims, so social media companies are simply censoring everything on demand, just in case.
China's social media giants are ramping up efforts to get their users to snitch on people circulating taboo content.
China's tech giant Tencent said it was hiring 200 content censors to form what the company is calling a penguin patrol unit, after the company's penguin mascot. The brigade, made up of 10 journalists, 70 writers who use Tencent's content platforms, and 120 regular internet users, will flag content that transgresses China's repressive censorship rules.
Reviewers will be required to make at least 300 snitch reports each month about transgressive information, including porn, sensational headlines, plagiarism, fake news, or old news. Those who complete the mission will get 30 virtual coins which
can be used to purchase items on Tencent's QQ chat app. Those who fail to meet the reporting quota three times will be booted from the unit.
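The reporting quota described above amounts to simple bookkeeping: 300 reports a month earns 30 coins, and three missed quotas means removal from the unit. A hypothetical sketch of that rule set (the constants come from the report; function and variable names are invented):

```python
# Toy model of the reported "penguin patrol" incentive rules.
MONTHLY_QUOTA = 300   # required snitch reports per month
COIN_REWARD = 30      # virtual coins for meeting the quota
MAX_FAILURES = 3      # missed quotas before removal from the unit

def month_end(reports_filed, coins, failures):
    """Apply the described rules at the end of a month.

    Returns (coins, failures, still_in_unit)."""
    if reports_filed >= MONTHLY_QUOTA:
        coins += COIN_REWARD
    else:
        failures += 1
    return coins, failures, failures < MAX_FAILURES

# A reviewer who meets quota twice, then misses three months running:
coins, failures, active = 0, 0, True
for filed in (320, 305, 10, 0, 5):
    coins, failures, active = month_end(filed, coins, failures)
print(coins, failures, active)  # 60 3 False: 60 coins earned, then booted
```

Run through by hand: two good months earn 60 coins, and the third missed quota flips `still_in_unit` to False, matching the reported three-strikes rule.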
The UK government slipped out its impact assessment of the upcoming porn censorship law during the Christmas break. The new law requires porn websites to be blocked in the UK when they don't implement age verification.
The measures are currently due to come into force in May but it seems a tight schedule as even the rules for acceptable age verification systems have not yet been published.
The report contains some interesting costings and assessment of the expected harms to be inflicted on porn viewers and British adult businesses.
The document notes the unpopularity of the age verification requirements with a public consultation finding that 54% of respondents did not support the introduction of a law to require age verification.
However, the government has forged ahead, with the aim of stopping kids accessing porn on the grounds that such content could distress them or harm their development.
The government's censorship rules will be enforced by the BBFC, in its new role as the UK porn censor, although it prefers the descriptor: age-verification regulator. The government states that the censorship job will initially be funded by the government, at an assumed cost of £4.5 million, based upon a range of estimates from £1 million to £8 million.
The government has bizarrely assumed that the BBFC will ban just 1 to 60 sites in a year. The additional work for ISPs to block these sites is estimated at £100,000 to £500,000 for each ISP. This will probably be absorbed by larger companies, but will be an expensive problem for smaller companies which do not currently implement any blocking systems.
Interestingly, the government notes that there won't be any impact on UK adult businesses, notionally because they should have already implemented age verification under ATVOD and Ofcom censorship rules. In reality it will have little impact on UK businesses because they have already been decimated by the ATVOD and Ofcom rules and have mostly closed down or moved abroad.
The key section of the document summarising expected harms is as follows.
The policy option set out above also gives rise to the following risks:
Deterring adults from consuming content as a result of privacy/ fraud concerns linked to inputting ID data into sites and apps, also some adults may not be able to prove their age online;
Development of alternative payment systems and technological work-arounds could mean porn providers do not comply with the new law, and enforcement is impossible as they are based overseas, so the policy goal would not be achieved;
The assumption that ISPs will comply with the direction of the regulator;
Reputational risks including Government censorship, over-regulation, freedom of speech and freedom of expression.
The potential for online fraud could rise significantly, as criminals adapt approaches in order to make use of false AV systems / spoof websites and access user data;
The potential ability of children, particularly older children, to bypass age verification controls is a risk. However, whilst no system will be perfect, and alternative routes such as virtual private networks and
peer-to-peer sharing of content may enable some under-18s to see this content, Ofcom research indicates that the numbers of children bypassing network level filters, for example, is very low (ca. 1%).
Adults (and some children) may be pushed towards using Tor and related systems to avoid AV, where they could be exposed to illegal and extreme material that they otherwise would never have come into contact with.
The list does not seem to include the potential for blackmail from user data sold by porn firms, or else stolen by hackers. And mischievously, politicians could be one of the groups most open to blackmail for money or favours.
Another notable omission is that the government does not seem overly concerned about mass VPN usage. I would have thought that the secret services wanting to monitor terrorists would not be pleased if a couple of million people started to use encrypted VPNs. Perhaps it shows that the likes of GCHQ can already see into what goes on behind VPNs.
The Information Commissioner's Office (ICO) has warned Facebook, Twitter and Snapchat to tighten up their age controls and kick off underage users.
The ICO stepped in after it became aware that millions of British children join the platforms before they are 13. New ICO guidelines state that social media giants must examine whether they put children at risk -- by showing minors adverts for alcohol or gambling, for example.
The guidance, which is under consultation, also calls on the firms to do a better job of kicking underage users off their platforms, and to stop or deter children from sharing their information online.
Elizabeth Denham, the Information Commissioner, threatened:
Whether designing services to provide protection for children or having a system to verify age, organisations, including social media companies, need to change the way they offer services to children.
It's also vital that we ensure children's interests and rights are protected online in the same way they are in all other aspects of life.
In November, an Ofcom report revealed that half of British 12-year-olds and more than a quarter of ten-year-olds have their own social media profiles. At the moment, all the major web giants demand that users are over 13 before they get an
account -- but they do next to nothing to enforce that rule.
Facebook, Twitter and Snapchat insist it is unrealistic to have to verify the age of users under the age of 18.
The ICO does not seem to have addressed the enormity of its demand. Facebook and social networks are the very essence of smartphones. If children aren't allowed to share things, how does any website or app feed up news and articles to anyone if it does not know what the reader likes or who is linked to that person? Typing in what you want to see is no longer practical or desirable, so the basic idea of sending people more of what they have already shown they liked is the only game in town. Of course the kids could play games all day instead, but maybe that has a downside too.
Germany starts enforcing an internet censorship law where contested content has to be taken down pronto by social media companies, who will suffer massive fines if they don't comply.
The law is supposedly targeted at obviously illegal hate speech, but surely it will be used to take down content anyone doesn't like for any reason. The threat of fines and the short time allowed simply mean that websites will opt for the easiest and most economic policy, and that is to take down anything contested.
The new law states that sites which do not remove obviously illegal posts could face fines of up to 50 million euros. The law gives the networks 24 hours to act after they have been told about law-breaking material.
Social networks and media sites with more than two million members will fall under the law's provisions. Facebook, Twitter and YouTube will be the law's main focus but it is also likely to be applied to Reddit, Tumblr and Russian social network
VK. Other sites such as Vimeo and Flickr could also be caught up in its provisions.
Facebook has reportedly recruited several hundred staff in Germany to deal with reports about content that breaks the NetzDG and to do a better job of monitoring what people post.
Update: First examples of fair free speech being censored in Germany
Sophie Passmann is an unlikely poster child for Germany's new online hate speech laws.
The 24-year-old comedian from Cologne posted a satirical message on Twitter early on New Year's Day, mocking the German far right's fear that the hundreds of thousands of immigrants that have entered the country in recent years would endanger
Germany's culture. Instead of entertaining her more than 14,000 Twitter followers, Passmann found her tweet blocked within nine hours by the American social media giant, which told users in Germany that her message had run afoul of local laws.
Germany's rightwing AfD party have been busy with political posters pointing out that they will be the likely victims of censorship under Germany's new law.
And they will certainly have a good claim. The new law will surely over-censor, and any complaint will result in a censored post, regardless of the merits of the claim. A slightly un-PC post by the AfD is likely to be blocked, and so the AfD will rightly be able to highlight the censorship.
The publicity for examples of censorship will surely chime with a significant proportion of the German population, and so will add to the general level of disaffection with the political elite.
Perhaps Germany ought to at least ensure that censorship should be based on the merits of the case, not implemented by a commercial company who only cares about the cheapest possible method of meeting the censorship requirements.
Emmanuel Macron has vowed to introduce a law to censor 'fake news' on the internet during French election campaigns. He claimed he wanted new legislation for social media platforms during election periods in order to protect democracy.
For fake news published during election seasons, an emergency legal action could allow authorities to remove that content or even block the website, Macron said. If we want to protect liberal democracies, we must be strong and have clear rules.
He said France's media censor, the CSA, would be empowered to fight against any attempt at destabilisation by TV stations controlled or influenced by foreign states. Macron said he wanted to act against what he called propaganda articulated
by thousands of social media accounts.
Macron has an axe to grind about fake news, during the election campaign in spring 2017 he filed a legal complaint after Le Pen, the Front National leader, referred to fake stories about him placing funds in an offshore account in the Bahamas.
Also a bogus website resembling the site of the Belgian newspaper Le Soir reported that Saudi Arabia was financing Macron's campaign. Le Soir totally distanced itself from the report.
Britain's security minister Ben Wallace has threatened technology firms such as Facebook, YouTube and Google with punitive taxation if they fail to cooperate with the government on fighting online extremism.
Ben Wallace said that Britain was spending hundreds of millions of pounds on human surveillance and de-radicalisation programmes because tech giants were failing to remove extremist content online quick enough.
Wallace said the companies were ruthless profiteers, despite sitting on beanbags in T-shirts, who sold on details of their users to loan companies but would fail to give the same information to the government.
Because of encryption and because of radicalisation, the cost of that is heaped on law enforcement agencies, Wallace told the Sunday Times. I have to have more human surveillance. It's costing hundreds of millions of pounds. If they [tech firms]
continue to be less than co-operative, we should look at things like tax as a way of incentivising them or compensating for their inaction.
Because content is not taken down as quickly as they could do, we're having to de-radicalise people who have been radicalised. That's costing millions. They [the firms] can't get away with that and we should look at all options, including tax.
Maybe it's a good idea to extract a significantly higher tax take from the vast sums of money being siphoned out of the UK economy straight into the hands of American big business. But it seems a little hopeful to claim that quicker blocking of terrorist-related material will 'solve' the UK's terrorism problem.
One suspects that terrorism is a little more entrenched in society, and that it will continue pretty much unabated even if the government gets its way with quicker takedowns. There might even be scope for some very expensive legal bluff-calling, should expensive censorship measures be taken and the government's conjecture about blame turn out to be provably wrong.