Poland is challenging the EU's copyright directive in the EU Court of Justice (CJEU) on grounds of its threats to freedom of speech on the internet, Foreign Minister Jacek Czaputowicz said on Friday.
The complaint especially addresses a mechanism obliging online services to run preventive checks on user content even without suspicion of copyright infringement. Czaputowicz explained at a press conference in Warsaw:
Poland has referred the copyright directive to the CJEU because, in our opinion, it creates a fundamental threat to freedom of speech on the internet. Such censorship is forbidden both by the Polish constitution and by EU law. The Charter of
Fundamental Rights (of the European Union - PAP) guarantees freedom of speech.
The directive will change the way online content is published and monitored, and EU members have two years to introduce the new regulations. Poland, the Netherlands, Italy, Finland and Luxembourg voted against the directive.
Ireland's Justice Minister Charlie Flanagan confirmed that the Irish government will consider a similar system to the UK's so-called porn block law as part of new legislation on online safety. Flanagan said:
I would be very keen that we would engage widely to ensure that Ireland could benefit from what is international best practice here and that is why we are looking at what is happening in other jurisdictions.
The Irish communications minister, Richard Bruton, said there are also issues around privacy laws that have to be dealt with carefully. He said:
It would be my view that government, through the strategy that we have published, has a cross-government committee looking at policy development to ensure online safety, and I believe that forum is where we will discuss what should be done in that area, because I think there is a genuine public concern. It hasn't been the subject of Law Reform Commission or other legislative scrutiny in this area, but it is worthy of consideration, though it does have its difficulties, as the UK indeed has recognised also.
Users Without Resources to Fight Back Are Most Affected by Unevenly-Enforced Rules
The Electronic Frontier Foundation (EFF) has launched TOSsed Out, a project to highlight the vast spectrum of people silenced by social media platforms that inconsistently and erroneously apply terms of service (TOS) rules.
TOSsed Out will track and publicize the ways in which TOS and other speech moderation rules are unevenly enforced, with little to no transparency, against a range of people for whom the Internet is an irreplaceable forum to express ideas, connect
with others, and find support.
This includes people on the margins who question authority, criticize the powerful, educate, and call attention to discrimination. The project is a continuation of work EFF began five years ago when it launched Onlinecensorship.org to collect
speech takedown reports from users.
Last week the White House launched a tool to report takedowns, following the president's repeated allegations that conservatives are being censored on social media, said Jillian York, EFF Director for International Freedom of Expression. But in
reality, commercial content moderation practices negatively affect all kinds of people with all kinds of political views. Black women get flagged for posting hate speech when they share experiences of racism. Sex educators' content is removed
because it was deemed too risqué. TOSsed Out will show that trying to censor social media at scale ends up removing far too much legal, protected speech that should be allowed on platforms.
EFF conceived TOSsed Out in late 2018 after seeing more takedowns resulting from increased public and government pressure to deal with objectionable content, as well as the rise in automated tools. While calls for censorship abound, TOSsed Out
aims to demonstrate how difficult it is for platforms to get it right. Platform rules--enforced either through automation or by human moderators--unfairly ban many people who don't deserve it and disproportionately impact those with insufficient resources to
easily move to other mediums to speak out, express their ideas, and build a community.
EFF is launching TOSsed Out with several examples of TOS enforcement gone wrong, and invites visitors to the site to submit more. In one example, a reverend couldn't initially promote a Black Lives Matter-themed concert on Facebook, eventually
discovering that using the words Black Lives Matter required additional review. Other examples include queer sex education videos being removed and automated filters on Tumblr flagging a law professor's black and white drawings of design patents
as adult content. Political speech is also impacted; one case highlights the removal of a parody account lampooning presidential candidate Beto O'Rourke.
The current debates and complaints too often center on people with huge followings getting kicked off of social media because of their political ideologies. This threatens to miss the bigger problem. TOS enforcement by corporate gatekeepers far
more often hits people without the resources and networks to fight back to regain their voice online, said EFF Policy Analyst Katharine Trendacosta. Platforms over-filter in response to pressure to weed out objectionable content, and a broad
range of people at the margins are paying the price. With TOSsed Out, we seek to put pressure on those platforms to take a closer look at who is actually being hurt by their speech moderation rules, instead of just responding to the headlines.
Age verification for porn is pushing internet users into areas of the internet that provide more privacy, security and resistance to censorship.
I'd have thought that the security services would prefer internet users to remain in the more open areas of the internet for easier snooping.
So I wonder whether protecting kids from stumbling across porn is worth the increased difficulty in monitoring terrorists and the like? Or perhaps GCHQ can already see through the encrypted internet.
RQ12: Privacy & Security for Firefox
Mozilla has an interest in potentially integrating more of Tor into Firefox, for the purposes of providing a Super Private Browsing (SPB) mode for our users.
Tor offers privacy and anonymity on the Web, features which are sorely needed in the modern era of mass surveillance, tracking and fingerprinting. However, enabling a large number of additional users to make use of the Tor network requires
solving the inefficiencies currently present in Tor so as to make the protocol practical to deploy at scale. Academic research is just getting started on alternative protocol architectures and route selection protocols,
such as Tor-over-QUIC, employing DTLS, and Walking Onions.
What alternative protocol architectures and route selection protocols would offer acceptable gains in Tor performance? And would they preserve Tor properties? Is it truly possible to deploy Tor at scale? And what would the full integration of Tor
and Firefox look like?
The internet technology known as deep packet inspection is currently illegal in Europe, but big telecom companies doing business in the European Union want to change that. They want deep packet inspection permitted as part of the new net
neutrality rules currently under negotiation in the EU, but on Wednesday, a group of 45 privacy and internet freedom advocates and groups published an open letter warning against the change:
Dear Vice-President Andrus Ansip, (and others)
We are writing to you in the context of the evaluation of Regulation (EU) 2015/2120 and the reform of the BEREC Guidelines on its implementation. Specifically, we are concerned about the increased use of Deep Packet Inspection (DPI) technology
by providers of internet access services (IAS). DPI is a technology that examines data packets transmitted in a given network beyond what would be necessary for the provision of IAS, by looking at specific content from the user-defined payload of the transmission.
IAS providers are increasingly using DPI technology for the purpose of traffic management and the differentiated pricing of specific applications or services (e.g. zero-rating) as part of their product design. DPI allows IAS providers to identify
and distinguish traffic in their networks, singling out the traffic of specific applications or services for purposes such as billing them differently, or throttling or prioritising them over other traffic.
The undersigned would like to recall the concerning practice of examining domain names or the addresses (URLs) of visited websites and other internet resources. The evaluation of these types of data can reveal sensitive information about a user,
such as preferred news publications, interest in specific health conditions, sexual preferences, or religious beliefs. URLs directly identify specific resources on the world wide web (e.g. a specific image, a specific article in an encyclopedia,
a specific segment of a video stream, etc.) and give direct information on the content of a transmission.
A mapping of differential pricing products in the EEA conducted in 2018 identified 186 such products which potentially make use of DPI technology. Several of these products, offered by mobile operators with large market shares, are confirmed
to rely on DPI because they offer providers of applications or services the option of identifying their traffic via criteria such as domain names, SNI, URLs or DNS snooping.
Currently, the BEREC Guidelines clearly state that traffic management based on the monitoring of domain names and URLs (as implied by the phrase transport protocol layer payload) is not reasonable traffic management under the Regulation.
However, this clear rule has been mostly ignored by IAS providers in their treatment of traffic.
The nature of DPI necessitates telecom expertise as well as expertise in data protection issues. Yet, we observe a lack of cooperation between national regulatory authorities for electronic communications and regulatory authorities for data
protection on this issue, both in the decisions put forward on these products as well as in cooperation on joint opinions on the question in general. For example, some regulators justify DPI based on the consent of the customer of
the IAS provider, which crucially ignores the clear ban on DPI in the BEREC Guidelines as well as the processing of the data of the other party communicating with the subscriber, who never gave consent.
Given the scale and sensitivity of the issue, we urge the Commission and BEREC to carefully consider the use of DPI technologies and their data protection impact in the ongoing reform of the net neutrality Regulation and the Guidelines. In
addition, we recommend that the Commission and BEREC explore an interpretation of the proportionality requirement included in Article 3, paragraph 3 of Regulation 2015/2120 in line with the data minimization principle established by the GDPR.
Finally, we suggest mandating the European Data Protection Board to produce guidelines on the use of DPI by IAS providers.
European Digital Rights, Europe
Electronic Frontier Foundation, International
Council of European Professional Informatics Societies, Europe
Article 19, International
Chaos Computer Club e.V., Germany
epicenter.works - for digital rights, Austria
Austrian Computer Society (OCG), Austria
Bits of Freedom, the Netherlands
La Quadrature du Net, France
ApTI, Romania
Code4Romania, Romania
IT-Pol, Denmark
Homo Digitalis, Greece
Hermes Center, Italy
X-net, Spain
Vrijschrift, the Netherlands
Dataskydd.net, Sweden
Electronic Frontier Norway (EFN), Norway
Alternatif Bilisim (Alternative Informatics Association), Turkey
Digitalcourage, Germany
Fitug e.V., Germany
Digitale Freiheit, Germany
Deutsche Vereinigung für Datenschutz e.V. (DVD), Germany
Gesellschaft für Informatik e.V. (GI), Germany
LOAD e.V. - Verein für liberale Netzpolitik, Germany
(And others)
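The payload inspection the letter objects to can be sketched in a few lines. This is a toy illustration only (the function name and logic are mine, not drawn from the letter or any operator's system): a classifier that reads past the transport header into an HTTP payload to find the Host header, the sort of look into user-defined payload that the signatories argue exceeds what is necessary to provide internet access. Real DPI boxes do the equivalent for TLS SNI, URLs or DNS queries.

```python
def classify_packet(payload: bytes, zero_rated: set) -> str:
    """Toy DPI classifier: inspect the application payload for an
    HTTP Host header and decide how to bill the traffic. Anything
    without a recognisable Host header is left unclassified."""
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            return "zero-rated" if host in zero_rated else "normal"
    return "unclassified"

# An operator zero-rating a hypothetical partner music service would
# bill this packet differently from the rest of the user's traffic.
packet = b"GET /stream HTTP/1.1\r\nHost: music.example\r\n\r\n"
```

The point of the sketch is how little machinery is needed: once the box parses payload rather than just IP headers, differential pricing, throttling and blocking all become one string comparison away.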
In March, the Russian government's internet censor Roskomnadzor contacted 10 leading VPN providers to demand they comply with local censorship laws or risk being blocked.
Roskomnadzor required them to hook up to a dedicated government system that defines the list of websites to be blocked for Russian internet users.
The VPN providers contacted were ExpressVPN, NordVPN, IPVanish, VPN Unlimited, VyprVPN, HideMyAss!, TorGuard, Hola VPN, OpenVPN, and Kaspersky Secure Connection. The deadline has now passed and the only VPN company that has agreed to comply with
the new requirements is the Russia-based Kaspersky Secure Connection.
Most other providers on the list have removed their VPN servers from Russia altogether, so as not to be at risk of being asked to hand over information about their customers to Russia.
The South African Law Reform Commission is debating widespread changes to laws pertaining to the protection of children. Much of the debate is about serious crimes of child abuse, but a significant portion is devoted to protecting children from
legal adult pornography. The commission writes:
SEXUAL OFFENCES: PORNOGRAPHY AND CHILDREN
On 16 March 2019 the Commission approved the publication of its discussion paper on sexual offences (pornography and children) for comment.
Five main topics are discussed in this paper, namely:
Access to or exposure of a child to pornography;
Creation and distribution of child sexual abuse material;
Consensual self-child sexual abuse material (sexting);
Grooming of a child and other sexual contact crimes associated with or facilitated by pornography or child sexual abuse material; and
Investigation, procedure & sentencing.
The Commission invites comment on the discussion paper and the draft Bill which accompanies it. Comment may also be made on related issues of concern which have not been raised in the discussion paper. The closing date for comment is 30 July
The methodology discussed doesn't seem to match the real world well. The authors seem to put a lot of stock in the notion that every device can contain some sort of simple porn-blocking app that renders a device unable to access porn and
hence safe for children. The proposed law suggests penalties should unprotected devices be bought, sold, or used by children. Perhaps someone should invent such an app to help out South Africa.
The United States has decided not to support the censorship call backed by 18 governments and five top American tech firms, declining to endorse the New Zealand-led censorship effort responding to the live-streamed shootings at two Christchurch mosques.
White House officials said free-speech concerns prevented them from formally signing onto the largest campaign to date targeting extremism online.
World leaders, including British Prime Minister Theresa May, Canadian Prime Minister Justin Trudeau and Jordan's King Abdullah II, signed the Christchurch Call, which was unveiled at a gathering in Paris that had been organized by French
President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern.
The governments pledged to counter online extremism, including through new regulation, and to encourage media outlets to apply ethical standards when depicting terrorist events online.
But the White House opted against endorsing the effort, and President Trump did not join the other leaders in Paris. The White House felt the document could present constitutional concerns, officials there said, potentially conflicting with the
First Amendment. Indeed, Trump has previously threatened social media companies over concerns that they are biased against conservatives.
Amazon, Facebook, Google, Microsoft and Twitter also signed on to the document, pledging to work more closely with one another and governments to make certain their sites do not become conduits for terrorism. Twitter CEO Jack Dorsey was among the
attendees at the conference.
The companies agreed to accelerate research and information sharing with governments in the wake of recent terrorist attacks. They said they'd pursue a nine-point plan of technical remedies designed to find and combat objectionable content,
including instituting more user-reporting systems, more refined automatic detection systems, improved vetting of live-streamed videos and more collective development of organized research and technologies the industry could build and share.
The companies also promised to implement appropriate checks on live-streaming, with the aim of ensuring that videos of violent attacks aren't broadcast widely, in real time, online. To that end, Facebook this week announced a new one-strike
policy, in which users who violate its rules -- such as sharing content from known terrorist groups -- could be prohibited from using its live-streaming tools.
To challenge online censorship of art featuring naked bodies or body parts, photographer Spencer Tunick, in collaboration with the National Coalition Against Censorship, will stage a nude art action in New York on June 2. The event will bring
together 100 undressed participants at an as-yet-undisclosed location, and Tunick will photograph the scene and create an installation using donated images of male nipples.
Artists Andres Serrano, Paul Mpagi Sepuya, and Tunick have given photos of their own nipples to the cause, as have Bravo TV personality Andy Cohen, Red Hot Chili Peppers drummer Chad Smith, and actor/photographer Adam Goldberg.
In addition, the National Coalition Against Censorship has launched a #WeTheNipple campaign through which Instagram and Facebook users can share their experiences with censorship and advocate for changes to the social media platforms' guidelines
related to nudity.
At the moment, when internet users want to view a page, they specify the page they want in the clear. ISPs can see the page requested and block it if the authorities don't like it. A new internet protocol has been launched that encrypts the
specification of the page requested, so that ISPs can't tell what page is being requested and so can't block it.
This new DNS over HTTPS protocol is already available in Firefox, which also provides an uncensored and encrypted DNS server. Users simply have to change the settings in about:config (being careful of the dragons, of course).
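To make concrete what is being hidden: a DNS query is a small binary message, and DNS over HTTPS (RFC 8484) carries those same bytes inside an encrypted HTTPS request instead of plaintext UDP on port 53. A minimal sketch of the wire format (the helper below is illustrative, not Firefox's implementation):

```python
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in RFC 1035 wire format. Sent over
    plain UDP, the requested name is visible to the ISP in these
    bytes; DoH sends the same bytes inside TLS, so only the chosen
    resolver ever sees them."""
    # 12-byte header: id=0, flags=0x0100 (recursion desired), 1 question
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed with its length, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    # QTYPE (1 = A record) and QCLASS (1 = IN)
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

query = build_dns_query("example.com")
# The label-encoded hostname sits in the clear inside the packet --
# this is exactly the field an ISP-level blocker matches against.
```

This is why the protocol matters for censorship: the blockable field moves from a packet any on-path box can read into the payload of an ordinary-looking HTTPS connection.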
Questions have been raised in the House of Lords about the impact on the UK's ability to censor the internet.
House of Lords, 14th May 2019, Internet Encryption Question
Baroness Thornton Shadow Spokesperson (Health) 2:53 pm, 14th May 2019
To ask Her Majesty's Government what assessment they have made of the deployment of the Internet Engineering Task Force's new "DNS over HTTPS" protocol and its implications for the blocking of content by internet service providers
and the Internet Watch Foundation; and what steps they intend to take in response.
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
My Lords, DCMS is working together with the National Cyber Security Centre to understand and resolve the implications of DNS over HTTPS, also referred to as DoH, for the blocking of content online. This involves liaising across government and
engaging with industry at all levels -- operators, internet service providers, browser providers and pan-industry organisations -- to understand rollout options and influence the way ahead. The rollout of DoH is a complex commercial and technical
issue revolving around the global nature of the internet.
Baroness Thornton Shadow Spokesperson (Health)
My Lords, I thank the Minister for that Answer, and I apologise to the House for this somewhat geeky Question. This Question concerns the danger posed to existing internet safety mechanisms by an encryption protocol that, if implemented, would
render useless the family filters in millions of homes and the ability to track down illegal content by organisations such as the Internet Watch Foundation. Does the Minister agree that there is a fundamental and very concerning lack of
accountability when obscure technical groups, peopled largely by the employees of the big internet companies, take decisions that have major public policy implications with enormous consequences for all of us and the safety of our children? What
engagement have the British Government had with the internet companies that are represented on the Internet Engineering Task Force about this matter?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
My Lords, I thank the noble Baroness for discussing this with me beforehand, which was very welcome. I agree that there may be serious consequences from DoH. The DoH protocol has been defined by the Internet Engineering Task Force. Where I do
not agree with the noble Baroness is that this is not an obscure organisation; it has been the dominant internet technical standards organisation for 30-plus years and has attendees from civil society, academia and the UK Government as well as
the industry. The proceedings are available online and are not restricted. It is important to know that DoH has not been rolled out yet and the picture is complex -- there are pros to DoH as well as cons. We will continue to be part of these
discussions; indeed, there was a meeting last week, convened by the NCSC, with DCMS and industry stakeholders present.
Lord Clement-Jones Liberal Democrat Lords Spokesperson (Digital)
My Lords, the noble Baroness has raised a very important issue, and it sounds from the Minister's Answer as though the Government are somewhat behind the curve on this. When did Ministers actually get to hear about the new encrypted DoH
protocol? Does it not risk blowing a very large hole in the Government's online safety strategy set out in the White Paper?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
As I said to the noble Baroness, the Government attend the IETF. The protocol was discussed from October 2017 to October 2018, so it was during that process. As far as the online harms White Paper is concerned, the technology will potentially
cause changes in enforcement by online companies, but of course it does not change the duty of care in any way. We will have to look at the alternatives to the most dramatic form of enforcement, which is DNS blocking.
Lord Stevenson of Balmacara Opposition Whip (Lords)
My Lords, if there is obscurity, it is probably in the use of the technology itself and the terminology that we have to use--DoH and the other protocols that have been referred to are complicated. At heart, there are two issues at stake, are
there not? The first is that the intentions of DoH, as the Minister said, are quite helpful in terms of protecting identity, and we do not want to lose that. On the other hand, it makes it difficult, as has been said, to see how the Government
can continue with their current plan. We support the Digital Economy Act approach to age-appropriate design, and we hope that that will not be affected. We also think that the soon to be legislated for--we hope--duty of care on all companies to
protect users of their services will help. I note that the Minister says in his recent letter that there is a requirement on the Secretary of State to carry out a review of the impact and effectiveness of the regulatory framework included in the
DEA within the next 12 to 18 months. Can he confirm that the issue of DoH will be included?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
Clearly, DoH is on the agenda at DCMS and will be included everywhere it is relevant. On the consideration of enforcement--as I said before, it may require changes to potential enforcement mechanisms--we are aware that there are other enforcement
mechanisms. It is not true to say that you cannot block sites; it makes it more difficult, and you have to do it in a different way.
The Countess of Mar Deputy Chairman of Committees, Deputy Speaker (Lords)
My Lords, for the uninitiated, can the noble Lord tell us what DoH means -- very briefly, please?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
It is not possible to do so very briefly. It means that, when you send a request to a server and you have to work out which server you are going to by finding out the IP address, the message is encrypted so that the intervening servers are not
able to look at what is in the message. It encrypts the message that is sent to the servers. What that means is that, whereas previously every server along the route could see what was in the message, now only the browser will have the ability to
look at it, and that will put more power in the hands of the browsers.
Lord West of Spithead Labour
My Lords, I thought I understood this subject until the Minister explained it a minute ago. This is a very serious issue. I was unclear from his answer: is this going to be addressed in the White Paper? Will the new officer who is being
appointed have the ability to look at this issue when the White Paper comes out?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
It is not something that the White Paper per se can look at, because it is not within the purview of the Government. The protocol is designed by the IETF, which is not a government body; it is a standards body, so to that extent it is not
possible. Obviously, however, when it comes to regulating and the powers that the regulator can use, the White Paper is consulting precisely on those matters, which include DNS blocking, so it can be considered in the consultation.
The German President Frank-Walter Steinmeier opened the re:publica 2019 conference in Berlin last week with a speech about internet censorship. The World Socialist Web Site reported the speech:
With cynical references to Germany's Basic Law and the right to freedom of speech contained within it, Steinmeier called for new censorship measures and appealed to the major technology firms to enforce already existing guidelines more strictly.
He stated, The upcoming 70th anniversary of the German Basic Law reminds us of a connection that pre-dates online and offline: liberty needs rules -- and new liberties need new rules. Furthermore, freedom of opinion brings with it responsibility
for opinion. He stressed that he knew there are already many rules, among which he mentioned the notorious Network Enforcement Law (NetzDG), but that it will be necessary to argue over others.
He then added, Anyone who creates space for a political discussion with a platform bears responsibility for democracy, whether they like it or not. Therefore, democratic regulations are required, he continued. Steinmeier said that he felt this is
now understood in Silicon Valley: After a lot of words and announcements, discussion forums, and photogenic appearances with politicians, it is now time for Facebook, Twitter, YouTube and Co. to finally acknowledge their responsibility for
democracy and finally put it into practice.
Watching pornography on buses is to be banned, ministers have announced. Bus conductors and the police will be given powers to tackle those who watch sexual material on mobile phones and tablets.
Ministers are also drawing up plans for a national database of claimed harassment incidents. It will record incidents at work and in public places, and is likely to cover wolf-whistling and cat-calling as well as more serious incidents.
In addition, the Government is considering whether to launch a public health campaign warning of the effects of pornography -- modelled on smoking campaigns.
As of 15 July, people in the UK who try to access porn on the internet will be required to verify their age or identity online.
The new UK Online Pornography (Commercial Basis) Regulations 2018 law does not affect the Channel Islands, but the States have not ruled out introducing their own regulations.
The UK Department for Censorship, Media and Sport said it was working closely with the Crown Dependencies to make the necessary arrangements for the extension of this legislation to the Channel Islands.
A spokeswoman for the States said they were monitoring the situation in the UK to inform our own policy development in this area.
President Trump has threatened to monitor social-media sites for their censorship of American citizens. He was responding to Facebook permanently banning figures and organizations from the political right. Trump tweeted:
I am continuing to monitor the censorship of AMERICAN CITIZENS on social media platforms. This is the United States of America -- and we have what's known as FREEDOM OF SPEECH! We are monitoring and watching, closely!!
On Thursday, Facebook announced it had permanently banned users including Louis Farrakhan, the founder of the Nation of Islam, along with far-right figures Milo Yiannopoulos, Laura Loomer and Alex Jones, the founder of Infowars. The tech giant
removed their accounts, fan pages and affiliated groups on Facebook as well as its photo-sharing service Instagram, claiming that their presence on the social networking sites had become dangerous.
For his part, President Trump repeatedly has accused popular social-networking sites of exhibiting political bias, and threatened to regulate Silicon Valley in response. In a private meeting with Twitter CEO Jack Dorsey last month, Trump
repeatedly raised his concerns that the company has removed some of his followers.
On Friday, Trump specifically tweeted that he was surprised by Facebook's decision to ban Paul Joseph Watson, a YouTube personality who has served as editor-at-large of Infowars.
Update: Texas bill would allow state to sue social media companies like Facebook and Twitter that censor free speech
A bill before the Texas Senate seeks to prevent social media platforms like Facebook and Twitter from censoring users based on their viewpoints. Supporters say it would protect the free exchange of ideas, but critics say the bill contradicts a
federal law that allows social media platforms to regulate their own content.
The measure -- Senate Bill 2373 by state Sen. Bryan Hughes -- would hold social media platforms accountable for restricting users' speech based on personal opinions. Hughes said the bill applies to social media platforms that advertise themselves
as unbiased but still censor users. The Senate State Affairs Committee unanimously approved the bill last week. The Texas Senate approved the bill on April 25 in an 18-12 vote. It now heads to the House.
Facebook will create a privacy oversight committee as part of its recent agreement with the US Federal Trade Commission (FTC), according to reports.
According to Politico, Facebook will appoint a government-approved committee to 'guide' the company on privacy matters. The committee will also include members of Facebook's board.
The plans would also see Facebook chairman and CEO Mark Zuckerberg act as a designated compliance officer, meaning that he would be personally responsible and accountable for Facebook's privacy policies.
Last week, it was reported that Facebook could be slapped with a fine of up to $5 billion over its handling of user data and privacy. The FTC launched the investigation last March, following claims that Facebook allowed organisations, such as
political consultancy Cambridge Analytica, to collect data from millions of users without their consent.
The Committee to Protect Journalists has condemned the Singapore parliament's passage of legislation that will be used to stifle reporting and the dissemination of news, and called for the punitive measure's immediate repeal.
The Protection from Online Falsehoods and Manipulation Act, which was passed yesterday, gives all government ministers broad and arbitrary powers to demand corrections, remove content, and block webpages if they are deemed to be
disseminating falsehoods against the public interest or to undermine public confidence in the government, both on public websites and within chat programs such as WhatsApp, according to news reports.
Violations of the law will be punishable with maximum 10-year jail terms and fines of up to 1 million Singapore dollars (US$735,000), according to those reports. The law was passed after a two-day debate and is expected to come into force in the
next few weeks.
Shawn Crispin, CPJ's senior Southeast Asia representative, said:
This law will give Singapore's ministers yet another tool to suppress and censor news that does not fit with the People's Action Party-dominated government's authoritarian narrative. Singapore's online media is already over-regulated and
severely censored. The law should be dropped for the sake of press freedom.
Law Minister K. Shanmugam said censorship orders would be made mainly against technology companies that hosted the objectionable content, and that they would be able to challenge the government's take-down requests.
Lawyers for Facebook and Instagram have appeared in Texas courtrooms attempting to dismiss two civil cases that accuse the social media sites of not protecting victims of sex trafficking.
The Facebook case involves a Houston woman who in October said the company's morally bankrupt corporate culture left her prey to a predatory pimp who drew her into sex trafficking as a child. The Instagram case involves a 14-year-old girl from
Spring who said she was recruited, groomed and sold in 2018 by a man she met on the social media site.
Of course Facebook is only embroiled in this case because it supported Congress in passing an anti-trafficking amendment in April 2018. The Stop Enabling Sex Traffickers Act and the Fight Online Sex Trafficking Act, collectively known as
SESTA-FOSTA, attempt to make it easier to prosecute the owners and operators of websites that facilitate sex trafficking. The acts removed the legal protection that previously meant websites couldn't be held responsible for the actions of their members.
After the Houston suit was filed, a Facebook spokesperson said human trafficking is not permitted on the site and staffers report all instances they're informed about to the National Center for Missing and Exploited Children. Of course that
simply isn't enough any more, and now they have to proactively stop their website from being used for criminal activity.
The impossibility of preventing such misuse has led to many websites pulling out of anything that may be related to people hooking up for sex, lest they be held responsible for something they couldn't possibly prevent.
But perhaps Facebook has enough money to pay for lawyers who can argue their way out of such hassles.
The Adult Performers Actors Guild is standing up for sex workers who are tired of being banned from Instagram with no explanation.
In related news, adult performers are campaigning against being arbitrarily banned from their accounts by Facebook and Instagram. It seems likely that the social media companies are summarily ejecting users detected to have any connection with
people getting together for sex.
As explained above, the social media companies are responsible for anything related to sex trafficking happening on their websites. In practice they aren't able to discern sex trafficking from consensual sex, so the only protection available to
internet companies is to ban anyone who might have a connection to sex.
This reality is clearly impacting those affected. A group of adult performers is starting to organize against Facebook and Instagram for removing their accounts without explanation. Around 200 performers and models have included their usernames
in a letter to Facebook asking the network to address the issue.
Alana Evans, president of the Adult Performers Actors Guild (APAG), a union that advocates for adult industry professionals' rights, told Vice: There are performers who are being deleted because they put up a picture of their freshly painted...
In an April 22 letter to Facebook, the Adult Performers Actors Guild's legal counsel James Felton wrote:
Over the course of the last several months, almost 200 adult performers have had their Instagram accounts terminated without explanation. In fact, every day, additional performers reach out to us with their termination stories. In the large
majority of instances, there was no nudity shown in the pictures. However, it appears that the accounts were terminated merely because of their status as adult performers.
Efforts to learn the reasons behind the terminations have been futile. Performers are asked to send pictures of their names to try to verify that the accounts are actually theirs and not put up by frauds. Emails are sent and there is no reply.
Google is set to roll out a dashboard-like function in its Chrome browser to offer users more control in fending off tracking cookies, the Wall Street Journal has reported.
While Google's new tools are not expected to significantly curtail its ability to collect data itself, they would help the company press its sizable advantage over online-advertising rivals, the newspaper said.
Google has been working on the cookies plan for at least six years, in stops and starts, but accelerated the work after news broke last year that personal data of Facebook users was improperly shared with Cambridge Analytica.
The company is mostly targeting cookies installed by profit-seeking third parties, separate from the owner of the website a user is actively visiting, the Journal said.
Apple Inc in 2017 stopped the majority of tracking cookies on its Safari browser by default, and Mozilla Corp's Firefox did the same a year later.
The EFF has written an impassioned article about the ineffectiveness of censorship of anti-vax information as a means of convincing people that vaccinations are the right thing for their kids.
But before presenting the EFF case, I read a far more interesting article suggesting that the authorities are on totally the wrong track anyway, and that no amount of blocking anti-vax claims, or bombarding people with true information, is going
to make any difference.
The article suggests that many anti-vaxers don't actually believe that vaccines cause autism anyway, so censorship of negative info and pushing of positive info won't work because people know that already. Thomas Baekdal explains:
What makes a parent decide not to vaccinate their kids? One reason might be an economic one. In the US (in particular), healthcare comes at a very high cost, and while there are ways to vaccinate kids cheaper (or even for free), people in the US
are very afraid of engaging too much with the healthcare system.
So as journalists, we need to focus on that instead, because it's highly likely that many people who make this decision do so because they worry about their financial future. In other words, not wanting to vaccinate their kids might be an excuse
that they cling to because of financial concerns.
Perhaps it is the same with denying climate change. Perhaps it is not the belief in the science of climate change that is the reason for denial. Perhaps it is more that people find the solutions unpalatable. Perhaps they don't want to support
climate change measures simply because they don't fancy being forced to become vegetarian and don't want to be priced off the road by green taxes.
Anyway the EFF has been considering the difficulties faced by social media companies trying to convince anti-vaxers by censorship and by force-feeding them 'the correct information'. The EFF writes:
With measles cases on the rise for the first time in decades and anti-vaccine (or anti-vax) memes spreading like wildfire on social media, a number of companies -- including Facebook, Pinterest, YouTube, Instagram, and GoFundMe -- recently
banned anti-vax posts.
But censorship cannot be the only answer to disinformation online. The anti-vax trend is a bigger problem than censorship can solve. And when tech companies ban an entire category of content like this, they have a history of overcorrecting and
censoring accurate, useful speech -- or, even worse, reinforcing misinformation with their policies. That's why platforms that adopt categorical bans must follow the Santa Clara Principles on Transparency and Accountability in Content Moderation
to ensure that users are notified when and about why their content has been removed, and that they have the opportunity to appeal.
Many intermediaries already act as censors of users' posts, comments, and accounts, and the rules that govern what users can and cannot say grow more complex with every year. But removing entire categories of speech from a platform does little
to solve the underlying problems.
Tech companies and online platforms have other ways to address the rapid spread of disinformation, including addressing the algorithmic megaphone at the heart of the problem and giving users control over their own feeds.
Anti-vax information is able to thrive online in part because it exists in a data void in which available information about vaccines online is limited, non-existent, or deeply problematic. Because the merit of vaccines has long been considered a
decided issue, there is little recent scientific literature or educational material to take on the current mountains of disinformation. Thus, someone searching for recent literature on vaccines will likely find more anti-vax content than
empirical medical research supporting vaccines.
Censoring anti-vax disinformation won't address this problem. Even attempts at the impossible task of wiping anti-vax disinformation from the Internet entirely will put it beyond the reach of researchers, public health professionals, and others
who need to be able to study it and understand how it spreads.
In a worst-case scenario, well-intentioned bans on anti-vax content could actually make this problem worse. Facebook, for example, has over-adjusted in the past to the detriment of legitimate educational health content: A ban on overly suggestive
or sexually provocative ads also caught the National Campaign to Prevent Teen and Unwanted Pregnancy in its net.
Platforms must address one of the root causes behind disinformation's spread online: the algorithms that decide what content users see and when. And they should start by empowering users with more individualized tools that let them understand and
control the information they see.
Algorithms like Facebook's Newsfeed or Twitter's timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary
content, despite the efforts of platforms like Facebook to tweak their algorithms to disincentivize or downrank it. Features designed to help people find content they'll like can too easily funnel them into a rabbit hole of disinformation.
That's why platforms should examine the parts of their infrastructure that are acting as a megaphone for dangerous content and address that root cause of the problem rather than censoring users.
The most important parts of the puzzle here are transparency and openness. Transparency about how a platform's algorithms work, and tools to allow users to open up and create their own feeds, are critical for wider understanding of algorithmic
curation, the kind of content it can incentivize, and the consequences it can have. Recent transparency improvements in this area from Facebook are encouraging, but don't go far enough.
Users shouldn't be held hostage to a platform's proprietary algorithm. Instead of serving everyone one algorithm to rule them all and giving users just a few opportunities to tweak it, platforms should open up their APIs to allow users to create
their own filtering rules for their own algorithms. News outlets, educational institutions, community groups, and individuals should all be able to create their own feeds, allowing users to choose who they trust to curate their information and
share their preferences with their communities.
Censorship by tech giants must be rare and well-justified. So when a company does adopt a categorical ban, we should ask: Can the company explain what makes that category exceptional? Are the rules to define its boundaries clear and predictable,
and are they backed up by consistent data? Under what conditions will other speech that challenges established consensus be removed?
Beyond the technical nuts and bolts of banning a category of speech, disinformation also poses ethical challenges to social media platforms. What responsibility does a company have to prevent the spread of disinformation on its platforms? Who
decides what does or does not qualify as misleading or inaccurate? Who is tasked with testing and validating the potential bias of those decisions?
The BBFC has reiterated that its Age Verification certification scheme does not allow personal data to be used for any other purpose beyond age verification. In particular, age verification should not be coupled with electronic wallets.
Presumably this is intended to prevent personal data identifying porn users from being dangerously stored in databases used for other purposes.
In passing, this suggests that there may be commercial issues as age verification systems for porn may not be reusable for age verification for social media usage or identity verification required for online gambling. I suspect that several AV
providers are only interested in porn as a way to get established for social media age verification.
This BBFC warning may be of particular interest to users of the porn site xHamster. The preferred AV option for that website is the electronic wallet 1Account.
The BBFC write in a press release:
The Age-verification Regulator under the UK's Digital Economy Act, the British Board of Film Classification (BBFC), has advised age-verification providers that they will not be certified under the Age-verification Certificate (AVC) if they use a
digital wallet in their solution.
The AVC is a voluntary, non-statutory scheme that has been designed specifically to ensure age-verification providers maintain high standards of privacy and data security. The AVC will ensure data minimisation, and that there is no handover of
personal information used to verify an individual is over 18 between certified age-verification providers and commercial pornography services. The only data that should be shared between a certified AV provider and an adult website is a token or
flag indicating that the consumer has either passed or failed age-verification.
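The data-minimisation principle the BBFC describes -- only a pass/fail token crosses from the AV provider to the adult site -- can be sketched in a few lines of hypothetical Python. The function names and the HMAC-signing scheme here are illustrative assumptions, not part of any BBFC or provider specification; the point is simply that the token carries no personal data:

```python
import hashlib
import hmac
import secrets


def issue_av_token(passed: bool, provider_secret: bytes) -> dict:
    """AV provider issues a minimal token: a random nonce plus a
    pass/fail flag, signed so the site can check authenticity.
    No name, date of birth or payment data is included."""
    nonce = secrets.token_hex(8)
    payload = f"{nonce}:{'pass' if passed else 'fail'}"
    signature = hmac.new(provider_secret, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def site_accepts_token(token: dict, provider_secret: bytes) -> bool:
    """The adult site verifies the signature, then reads only the flag."""
    expected = hmac.new(provider_secret, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # forged or tampered token
    return token["payload"].endswith(":pass")
```

In a real deployment the exchange would run over the provider's own API rather than a shared secret, but the shape is the same: the site learns pass or fail, and nothing else about the consumer.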
Murray Perkins, Policy Director for the BBFC, said:
A consumer should be able to consider that their engagement with an age-verification provider is something temporary.
In order to preserve consumer confidence in age-verification and the AVC, it was not considered appropriate to allow certified AV providers to offer other services to consumers, for example by way of marketing or by the creation of a digital
wallet. The AVC is necessarily robust in order to allow consumers a high level of confidence in the age-verification solutions they choose to use.
Accredited providers will be indicated by the BBFC's green AV symbol, which is what consumers should look out for. Details of the independent assessment will also be published on the BBFC's age-verification website, ageverificationregulator.com,
so consumers can make an informed choice between age-verification providers.
The Standard for the AVC imposes limits on the use of data collected for the purpose of age-verification, and sets out requirements for data minimisation.
The AVC Standard has been developed by the BBFC and NCC Group - who are experts in cyber security and data protection - in cooperation with industry, with the support of government, including the National Cyber Security Centre and Chief
Scientific Advisors, and in consultation with the Information Commissioner's Office. In order to be certified, AV Providers will undergo an on-site audit as well as a penetration test.
Further announcements will be made on AV Providers' certification under the scheme ahead of entry into force on July 15.
Verizon Communications is looking to sell Tumblr, the free blogging platform it acquired when it bought Yahoo in 2015.
Any deal is unlikely to be for a price anywhere near the $1.1 billion that Yahoo paid for Tumblr back in 2013. Yahoo wrote down the website's value by $230 million three years later, and Tumblr's popularity has faded in recent years.
Tumblr took a major hit in the first quarter of the year after it banned Not Safe For Work content from its platform. Its daily visitors dropped by 30% between December 2018 and March 2019, according to the Verge.
Following the news, Pornhub has shown an interest in acquiring Tumblr. Pornhub Vice President Corey Price said in an email to BuzzFeed News that the porn-streaming giant is extremely interested in buying Tumblr, the once uniquely horny hub for
young women and queer people that banned adult content last December to the disappointment of many of its users.
Price said that restoring Tumblr's NSFW edge would be central to their acquisition of it, were it to actually happen.
Russia took another step toward government control over the internet on Thursday, as lawmakers approved a bill that will open the door to sweeping censorship.
The legislation is designed to route web traffic through servers controlled by Roskomnadzor, the state communications censor, increasing its power to control information and block messaging or other applications.
It also provides for Russia to create its own system of domain names that would allow the internet to continue operating within the country, even if it were cut off from the global web.
The bill is expected to receive final approval before the end of the month. Once signed into law by Putin, the bulk of it will go into effect on Nov. 1.
Russian President Vladimir Putin has signed into law a measure that expands government censorship control over the Russian internet.
The law, signed Wednesday, requires ISPs to install equipment to route Russian internet traffic through servers in the country. Proponents said it is a defense measure in case the United States or other hostile powers cut Russia off from the global internet.
Based on the results of an investigation by Privacy International, one of Europe's key data protection authorities has opened an inquiry into Quantcast, a major player in the online tracking industry.
The Irish Data Protection Commission has now opened statutory inquiry into Quantcast International Limited. The organisation writes:
Since the application of the GDPR significant concerns have been raised by individuals and privacy advocates concerning the conduct of technology companies operating in the online advertising sector and their compliance with the GDPR. Arising
from a submission to the Data Protection Commission by Privacy International, a statutory inquiry pursuant to section 110 of the Data Protection Act 2018 has been commenced in respect of Quantcast International Limited. The purpose of the
inquiry is to establish whether the company's processing and aggregating of personal data for the purposes of profiling and utilising the profiles generated for targeted advertising is in compliance with the relevant provisions of the GDPR. The
GDPR principle of transparency and retention practices will also be examined.