The animated series Star Wars: The Clone Wars has made it onto the internet TV service Disney Plus, but the episode A Distant Echo is missing a bit.
A once-teased but ultimately axed scene featured some WW2-era pinup-style
graphics. It showed a leggy Senator Padme Amidala in high boots, a severe updo, a mischievously cryptic look on her face, and a gun in hand.
In the deleted scene, Anakin asks Hunter: "Hey! What's with the nose art?" Hunter responds: "That's our girl.
The Naboo Senator. We check her out on the holoscans." Wrecker adds: "Yeah! She can negotiate with me anytime!"
Disney has not given any reason for this latest cutting room floor decision. But some fans are frustrated at Disney, and those within it, who
want to sterilize everything for the modern era.
Prager University (PragerU) is a right-wing group that creates videos explaining a right-wing perspective on political issues.
YouTube didn't much care for the content and shunted the videos up a 'restricted mode' back alley.
PragerU challenged the censorship in court but has just lost its case. The First Amendment bans the state from censoring free speech, but this protection does not extend to private companies. PragerU had tried to argue that Google has become so integral to
American life that it should be treated like a state institution.
The Ninth Circuit Court of Appeals on Wednesday affirmed that YouTube, a Google subsidiary, is a private platform and thus not subject to the First Amendment. In making that
determination, the Court also rejected a plea from a conservative content maker that sued YouTube in hopes that the courts would force it to behave like a public utility.
Headed by conservative radio host Dennis Prager, PragerU alleged in its suit
against YouTube that the video hosting platform violated PragerU's right to free speech when it placed a portion of the nonprofit's clips on Restricted Mode, an optional setting that approximately 1.5 percent of YouTube users select so as not to see
content with mature themes.
Writing for the appeals court, Circuit Judge Margaret McKeown said YouTube was a private forum despite its ubiquity and public accessibility, and hosting videos did not make it a state actor for purposes of the First Amendment.
Firefox has begun the rollout of encrypted DNS over HTTPS (DoH) by default for US-based users. The rollout will continue over the next few weeks to confirm no major issues are discovered as this new protocol is enabled for Firefox's US-based users.
A little over two years ago, we began work to help update and secure one of the oldest parts of the internet, the Domain Name System (DNS). To put this change into context, we need to briefly describe how the system worked before DoH.
DNS is a database that links a human-friendly name, such as www.mozilla.org, to a computer-friendly series of numbers, called an IP address (e.g. 192.0.2.1). By performing a lookup in this database, your web browser is able to find websites on your
behalf. Because of how DNS was originally designed decades ago, browsers doing DNS lookups for websites -- even encrypted https:// sites -- had to perform these lookups without encryption. We described the impact of insecure DNS on our privacy:
Because there is no encryption, other devices along the way might collect (or even block or change) this data too. DNS lookups are sent to servers that can spy on your website browsing history without either informing you or
publishing a policy about what they do with that information.
At the creation of the internet, these kinds of threats to people's privacy and security were known, but not being exploited yet. Today, we know that unencrypted DNS is
not only vulnerable to spying but is being exploited, and so we are helping the internet to make the shift to more secure alternatives. We do this by performing DNS lookups in an encrypted HTTPS connection. This helps hide your browsing history from
attackers on the network, helps prevent data collection by third parties on the network that ties your computer to websites you visit.
We're enabling DoH by default only in the US. If you're outside of the US and would like to
enable DoH, you're welcome to do so by going to Settings, then General, then scrolling down to Network Settings and clicking the Settings button on the right. Here you can enable DNS over HTTPS by ticking the checkbox. By default, this
change will send your encrypted DNS requests to Cloudflare. Users have the option to choose between two providers -- Cloudflare and NextDNS -- both of which are trusted resolvers.
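To make the mechanism concrete, here is a minimal sketch of what a DoH lookup looks like in practice, using Cloudflare's documented JSON API (the endpoint and the application/dns-json content type are Cloudflare's; the helper function names are illustrative, not part of any browser's implementation):

```python
# Minimal sketch of a DNS-over-HTTPS lookup via Cloudflare's JSON API.
# The endpoint and "application/dns-json" content type are Cloudflare's
# documented interface; the function names here are illustrative only.
import json
import urllib.parse
import urllib.request

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def doh_query_url(name: str, record_type: str = "A") -> str:
    """Build the DoH JSON API URL for a hostname and record type."""
    params = urllib.parse.urlencode({"name": name, "type": record_type})
    return f"{DOH_ENDPOINT}?{params}"

def parse_answers(payload: dict) -> list:
    """Extract the answer data (e.g. IP addresses) from a DoH JSON response."""
    return [ans["data"] for ans in payload.get("Answer", [])]

def resolve(name: str, record_type: str = "A") -> list:
    """Perform the lookup over an encrypted HTTPS connection."""
    req = urllib.request.Request(
        doh_query_url(name, record_type),
        headers={"accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_answers(json.load(resp))

# Example (requires network access):
#   resolve("www.mozilla.org")
```

Because the lookup travels inside a TLS connection to the resolver, on-path observers see only an HTTPS connection to Cloudflare, not which hostname is being resolved.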
A Times article has popped up under the headline Boris Johnson set to water down curbs on tech giants.
It had all the hallmarks of an insider briefing, opening with the following: The
prime minister is preparing to soften plans for sanctions on social media companies amid concerns about a backlash from tech giants.
There is a very pro-tech lobby in No 10, a well-placed source said. They got spooked by some of
the coverage around online harms and raised concerns about the reaction of the technology companies. There is a real nervousness about it.
The European Commission has told its staff to start using Signal, an end-to-end-encrypted messaging app, in a push to increase the security of its communications.
The instruction appeared on internal messaging boards in early February, notifying
employees that Signal has been selected as the recommended application for public instant messaging.
The app is favored by privacy activists because of its end-to-end encryption and open-source technology. Bart Preneel, cryptography expert at the
University of Leuven explained:
It's like Facebook's WhatsApp and Apple's iMessage but it's based on an encryption protocol that's very innovative. Because it's open-source, you can check what's happening under the hood.
Promoting the app, however, could antagonize the law enforcement community, and it will underline the hypocrisy of officials in Brussels, Washington and other capitals, who have been putting strong pressure on Facebook and Apple to
allow government agencies access to encrypted messages; if the companies refuse, legal requirements could be introduced that force them to do just that.
American, British and Australian officials published an open letter to Facebook CEO Mark
Zuckerberg in October, asking that he call off plans to encrypt the company's messaging service. Dutch Minister for Justice and Security Ferd Grapperhaus told POLITICO last April that the EU needs to look into legislation allowing governments to access encrypted messages.
A Mississippi legislator has introduced two bills that would ban all online porn.
The bills, authored by Republican Representative Tracy Arnold, would not only ban porn in Mississippi but also create a coalition of Southern states where
the porn ban would apply. Other states would need to pass matching legislation; among those Arnold's bill targets are Georgia, Arkansas, Louisiana, Alabama, Kentucky, Tennessee, West Virginia and Oklahoma.
The states would join to create
what one of the bills, HB 1116, calls an Area of Moral Decency.
Arnold's companion bill, HB 1120, would bar social media platforms from carrying advertisements for obscene and pornographic content.
US internet giants including Microsoft have raised the spectre of geo-blocking New Zealand if the Government proceeds with a bill for classifying streamed content.
A law requiring film ratings to apply to streaming services like Netflix and Lightbox
has raised hackles from some Silicon Valley firms. The bill mandates that certain commercial video-on-demand (CVoD) providers follow the process that broadcasting and film companies follow in classifying content or submit themselves to a
self-classification system to be developed by the Chief Censor and the Office of Film and Literature Classification (OFLC).
But even this self classification option would require reclassifying vast back catalogs of content, some CVoD providers
say, and it might be easier for them to pull some content out of New Zealand altogether. There are also worries that streaming services might choose to leave the country rather than deal with a potentially onerous regulatory regime.
Submissions to the Governance and Administration select committee raised concerns about enforceability and whether companies might just pull out. NZME's submission said news sites with paid subscriptions that aired video footage could fall under the classification regime.
In its submission to the select committee, Microsoft warned that content it has yet to classify could be geo-blocked. Microsoft observes, however, that while the majority of content it makes available through the Microsoft Movies & TV
platform is or may be rated by the studio producing the content, where a small independent studio or filmmaker makes content available on the platform, that content may not have a rating assigned. In that situation, a provider like Microsoft is unlikely
to apply to rate the content itself (or itself develop a rating system) as it isn't in the business of reviewing the film's content in order to apply for the correct label. In the result, unobjectionable yet unrated / unlabelled content may, in some
cases, not be made available to New Zealand audiences, due to the regulatory threshold associated with rating and labelling.
The Deputy Chief Censor also did not think that providers would skip the New Zealand market because of the proposed
changes. The OFLC, in its select committee submission , said that given that this framework is low cost and simple for providers to implement, this would be unlikely to impact services provided to NZ public. We have not seen providers withdraw from other
jurisdictions due to regulation. This light-handed regulatory approach will not require providers to make significant investment to supply our relatively small market.
Facebook has blocked Singapore-based users' access to the page of the States Times Review (STR) on the orders of the Singapore government.
STR has been accused on multiple occasions by the government of publishing fake news and misinformation. The latest
correction notice was served after STR posted an article containing claims about the coronavirus (Covid-19) situation that the government deemed entirely untrue.
After STR failed to heed the notice, the government resorted to
ordering Facebook to block Singapore users from accessing STR's page.
Facebook complied as it said it was legally compelled to carry out the order. However, the social network told Channel NewsAsia it believed orders like this are disproportionate
and contradict the Singapore government's claim that POFMA would not be used as a censorship tool. A Facebook spokesman said:
We've repeatedly highlighted this law's potential for overreach and we're deeply concerned
about the precedent this sets for the stifling of freedom of expression in Singapore.
Facebook's white paper poses four questions which go to the heart of the debate about regulating content online:
How can content regulation best achieve the goal of reducing harmful speech while preserving free expression? By requiring systems such as user-friendly channels for reporting content or external oversight of policies or
enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies' efforts.
How can regulations enhance the accountability of internet platforms? Regulators could consider certain requirements for companies, such as publishing their content standards, consulting with stakeholders when making
significant changes to standards, or creating a channel for users to appeal a company's content removal or non-removal decision.
Should regulation require internet companies to meet certain performance targets? Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.
Should regulation define which "harmful content" should be prohibited on the internet? Laws restricting speech are generally implemented by law enforcement officials and the courts. Internet content
moderation is fundamentally different. Governments should create rules to address this complexity -- that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context.
Guidelines for Future Regulation
The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms. The following
principles are based on lessons we've learned from our work in combating harmful content and our discussions with others.
Incentives. Ensuring accountability in companies' content moderation systems and procedures will be the best way to create the incentives for companies to responsibly balance values like safety, privacy, and freedom of expression.
The global nature of the internet. Any national regulatory approach to addressing harmful content should respect the global scale of the internet and the value of cross-border communications. It should
aim to increase interoperability among regulators and regulations.
Freedom of expression. In addition to complying with Article 19 of the ICCPR (and related guidance), regulators should consider the impacts of their
decisions on freedom of expression.
Technology. Regulators should develop an understanding of the capabilities and limitations of technology in content moderation and allow internet companies the flexibility to
innovate. An approach that works for one particular platform or type of content may be less effective (or even counterproductive) when applied elsewhere.
Proportionality and necessity. Regulators should take into
account the severity and prevalence of the harmful content in question, its status in law, and the efforts already underway to address the content.
If designed well, new frameworks for regulating harmful content can contribute to the internet's continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together.
Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation.
We hope today's white paper helps to stimulate further conversation around the regulation
of content online. It builds on a paper we published last September on data portability , and we plan on publishing similar papers on elections
and privacy in the coming months.
Oliver Dowden was appointed Secretary of State for Digital, Culture, Media and Sport on 13 February 2020.
He was previously Paymaster General and Minister for the Cabinet Office, and before that, Parliamentary Secretary at the Cabinet Office. He was
elected Conservative MP for Hertsmere in May 2015.
The previous Culture Secretary Nicky Morgan will now be spending more time with her family.
There's been no suggestion that Dowden will diverge from the government path on setting out a
new internet censorship regime as outlined in its Online Harms white paper.
Perhaps another parliamentary appointment that may be relevant is that Julian Knight has taken over the Chair of the DCMS Select Committee, the Parliamentary scrutiny body
overseeing the DCMS.
Knight seems quite keen on the internet censorship idea and will surely be spurring on the DCMS.
And finally, one more censorship appointment was announced by the Government.
Matt Warman, The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport announced:
We also yesterday appointed Ofcom to regulate video-sharing platforms under the
audiovisual media services directive, which aims to reduce harmful content on these sites. That will provide quicker protection for some harms and activities and will act as a stepping stone to the full online harms regulatory framework.
In fact this censorship process is set to start in September 2020, and Ofcom has already produced its solution, which shadows the age verification requirements of the Digital Economy Act but may now need rethinking, as some of the enforcement
mechanisms, such as ISP blocking, are no longer on the table. The mechanism also only applies to British-based online adult companies providing online video, of which there are hardly any left after previously being destroyed by the ATVOD regime.
The Pakistan government should immediately roll back a set of social media censorship measures that were passed in secret, the Committee to Protect Journalists has said.
On January 28, the federal cabinet approved the Citizens Protection (Against
Online Harm) Rules, 2020, a set of regulations on social media content, without any public consultation.
A copy of the regulations, which was leaked online, shows that the rules empower the government to fine or ban
social media platforms over their users' content. The regulations provide for a National Coordinator to be appointed within the Ministry of Information and Telecommunications responsible for enforcing the rules.
Steven Butler, CPJ's Asia program coordinator, said:
These stringent but vague rules approved by Pakistan's federal cabinet threaten the ability of journalists to report the news and communicate with their sources. The cabinet should immediately
reverse course and seek broad consultations with legislators and civil society, including the media, on how to proceed with any such regulations.
Social media companies are required to remove content deemed objectionable by the National
Coordinator within 24 hours, and to provide to the regulator decrypted content and any other information about users on demand. The companies are also made responsible for preventing the live streaming of any content related to terrorism, extremism, hate
speech, defamation, fake news, incitement to violence and national security.
If a service does not comply, the National Coordinator is granted the power to block services and levy fines of up to 500 million rupees ($3.24 million).
The Government has signalled its approach to introducing internet censorship in a government response to consultation contributions about the Online Harms white paper. A more detailed paper will follow in the spring.
The Government has outlined
onerous, vague and expensive censorship requirements on any British website that lets its users post content including speech. Any website that takes down its forums and comment sections etc will escape the nastiness of the new law.
The idea seems
to be to force all speech onto a few US and Chinese social media websites that can handle the extensive censorship requirements of the British Government. No doubt this will give a market opportunity for the US and Chinese internet giants to start
charging for forcibly moderated and censored interaction.
The Government has more or less committed to appointing Ofcom as the state internet censor who will be able to impose massive fines on companies and their fall guy directors who allow
speech that the government doesn't like.
On a slightly more positive note, the government seems to have narrowed its censorship scope from any conceivable thing that could be considered a harm to someone somewhere into a more manageable set that
can be defined as harms to children.
The introductory sections of the document read:
1. The Online Harms White Paper set out the intention to improve protections for users
online through the introduction of a new duty of care on companies and an independent regulator responsible for overseeing this framework. The White Paper proposed that this regulation follow a proportionate and risk-based approach, and that the duty of
care be designed to ensure that all companies have appropriate systems and processes in place to react to concerns over harmful content and improve the safety of their users - from effective complaint mechanisms to transparent decision-making over
actions taken in response to reports of harm.
2. The consultation ran from 8 April 2019 to 1 July 2019. It received over 2,400 responses ranging from companies in the technology industry including large tech giants and small and
medium sized enterprises, academics, think tanks, children's charities, rights groups, publishers, governmental organisations and individuals. In parallel to the consultation process, we have undertaken extensive engagement over the last 12 months with
representatives from industry, civil society and others. This engagement is reflected in the response.
3. This initial government response provides an overview of the consultation responses and wider engagement on the proposals in
the White Paper. It includes an in-depth breakdown of the responses to each of the 18 consultation questions asked in relation to the White Paper proposals, and an overview of the feedback in response to our engagement with stakeholders. This document
forms an iterative part of the policy development process. We are committed to taking a deliberative and open approach to ensure that we get the detail of this complex and novel policy right. While it does not provide a detailed update on all policy
proposals, it does give an indication of our direction of travel in a number of key areas raised as overarching concerns across some responses.
4. In particular, while the risk-based and proportionate approach proposed by the White
Paper was positively received by those we consulted with, written responses and our engagement highlighted questions over a number of areas, including freedom of expression and the businesses in scope of the duty of care. Having carefully considered the
information gained during this process, we have made a number of developments to our policies. These are clarified in the 'Our Response' section below.
5. This consultation has been a critical part of the development of this
policy and we are grateful to those who took part. This feedback is being factored into the development of this policy, and we will continue to engage with users, industry and civil society as we continue to refine our policies ahead of publication of
the full policy response. We believe that an agile and proportionate approach to regulation, developed in collaboration with stakeholders, will strengthen a free and open internet by providing a framework that builds public trust, while encouraging
innovation and providing confidence to investors.
Our response Freedom of expression
1. The consultation responses indicated that some respondents were concerned that the proposals could impact
freedom of expression online. We recognise the critical importance of freedom of expression, both as a fundamental right in itself and as an essential enabler of the full range of other human rights protected by UK and international law. As a result, the
overarching principle of the regulation of online harms is to protect users' rights online, including the rights of children and freedom of expression. Safeguards for freedom of expression have been built in throughout the framework. Rather than
requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.
2. To ensure protections for freedom of expression, regulation will establish differentiated expectations on companies for illegal content and activity, versus conduct that is not illegal but has the potential to cause harm. Regulation will therefore not
force companies to remove specific pieces of legal content. The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this
consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content.
3. Services in scope of the
regulation will need to ensure that illegal content is removed expeditiously and that the risk of it appearing is minimised by effective systems. Reflecting the threat to national security and the physical safety of children, companies will be required
to take particularly robust action to tackle terrorist content and online child sexual exploitation and abuse.
4. Recognising concerns about freedom of expression, the regulator will not investigate or adjudicate on individual
complaints. Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and
conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users' ability to challenge removal
of content where this occurs.
5. Companies will be required to have effective and proportionate user redress mechanisms which will enable users to report harmful content and to challenge content takedown where necessary. This will
give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression. These processes will need to be transparent, in line with terms and conditions, and consistently applied.
Ensuring clarity for businesses
6. We recognise the need for businesses to have certainty, and will ensure that guidance is provided to help businesses understand potential
risks arising from different types of service, and the actions that businesses would need to take to comply with the duty of care as a result. We will ensure that the regulator consults with relevant stakeholders to ensure the guidance is clear and practicable.
Businesses in scope
7. The legislation will only apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or
user interactions, for example through comments, forums or video sharing. Our assessment is that only a very small proportion of UK businesses (estimated at less than 5%) fit within that definition. To ensure clarity, guidance will be provided
by the regulator to help businesses understand whether or not the services they provide or functionality contained on their website would fall into the scope of the regulation.
8. A business will not be brought in scope of regulation simply because it has a social media page. Equally, a business would not be brought in scope purely by providing referral or discount codes on its website to be shared with other potential customers on social media. It would be the social media
platform hosting the content that is in scope, not the business using its services to advertise or promote their company. To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content,
or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome
changes to their service to be compliant with the proposed regulation.
9. Regulation must be proportionate and based on evidence of risk of harm and what can feasibly be expected of companies. We anticipate that the regulator
would assess the business impacts of any new requirements it introduces. Final policy positions on proportionality will, therefore, align with the evidence of risk of harm and impact to business. Business-to-business services have very limited
opportunities to prevent harm occurring to individuals and as such will be out of scope of regulation.
Identity of the regulator
11. We are minded to make Ofcom the new regulator, in preference to
giving this function to a new body or to another existing organisation. This preference is based on its organisational experience, robustness, and experience of delivering challenging, high-profile remits across a range of sectors. Ofcom is a
well-established and experienced regulator, recently assuming high profile roles such as regulation of the BBC. Ofcom's focus on the communications sector means it already has relationships with many of the major players in the online arena, and its
spectrum licensing duties mean that it is practised at dealing with large numbers of small businesses.
12. We judge that such a role is best served by an existing regulator with a proven track record of experience, expertise and
credibility. We think that the best fit for this role is Ofcom, both in terms of policy alignment and organisational experience - for instance, in their existing work, Ofcom already takes the risk-based approach that we expect the online harms regulator
will need to employ.
13. Effective transparency reporting will help ensure that content removal is well-founded and freedom of expression is protected. In particular, increasing
transparency around the reasons behind, and prevalence of, content removal may address concerns about some companies' existing processes for removing content. Companies' existing processes have in some cases been criticised for being opaque and hard to understand.
14. The government is committed to ensuring that conversations about this policy are ongoing, and that stakeholders are being engaged to mitigate concerns. In order to achieve this, we have recently established a
multi-stakeholder Transparency Working Group chaired by the Minister for Digital and Broadband which includes representation from all sides of the debate, including from industry and civil society. This group will feed into the government's transparency
report, which was announced in the Online Harms White Paper and which we intend to publish in the coming months.
15. Some stakeholders expressed concerns about a potential 'one size fits all' approach to transparency, and the
material costs for companies associated with reporting. In line with the overarching principles of the regulatory framework, the reporting requirements that a company may have to comply with will also vary in proportion with the type of service that is
being provided, and the risk factors involved. To maintain a proportionate and risk-based approach, the regulator will apply minimum thresholds in determining the level of detail that an in-scope business would need to provide in its transparency
reporting, or whether it would need to produce reports at all.
Ensuring that the regulator acts proportionately
16. The consideration of freedom of expression is at the heart of our policy
development, and we will ensure that appropriate safeguards are included throughout the legislation. By taking action to address harmful online behaviours, we are confident that our approach will support more people to enjoy their right to freedom of
expression and participate in online discussions.
17. At the same time, we also remain confident that proposals will not place an undue burden on business. Companies will be expected to take reasonable and proportionate steps to
protect users. This will vary according, first and foremost, to the organisation's size and the resources available to it, as well as to the risk associated with the service provided. To ensure clarity about how the duty of care could be
fulfilled, we will ensure there is sufficient clarity in the regulation and codes of practice about the applicable expectations on business, including where businesses are exempt from certain requirements due to their size or risk.
18. This will help companies to comply with the legislation, and to feel confident that they have done so appropriately.
19. We recognise the importance of the
regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action
when they fail to do so. We are considering the responses to the consultation on senior management liability and business disruption measures and will set out our final policy position in the Spring.
Protection of children
20. Under our proposals we expect companies to use a proportionate range of tools, including age assurance and age verification technologies, to prevent children from accessing age-inappropriate content and to protect them from
other harms. This would achieve our objective of protecting children from online pornography, and would also fulfil the aims of the Digital Economy Act.
After five months of complete internet shutdown in the federally-administered Indian union territory of Jammu and Kashmir, only partial internet access has been restored, following the intervention of the Indian Supreme Court on January 10, which called the shutdown unconstitutional. Freedom of internet access is a fundamental right, said Justice N. V. Ramana, who was part of the bench that delivered the verdict.
This shutdown marks the longest ever internet shutdown in any
democracy around the world, and is viewed by experts as a potential signal of the rise of the Great Firewall of India. The term great firewall refers to the set of legislative and technical tools deployed by the Chinese government to control
information online, including by blocking access to foreign services and preventing politically sensitive content from entering the domestic network.
While the Chinese firewall has evolved as a very sophisticated internet
censorship infrastructure, the Indian one is yet to be organised into a large-scale, complex structure. India's tactics to control information online include banning entire websites and services, shutting down networks and pressuring social media companies to remove content on vague grounds.
301 websites whitelisted
According to internetshutdowns.in , a project that is tracking internet
shutdowns in India and created by legal nonprofit Software Freedom Law Centre , the shutdown that was imposed on August 4, 2019, has been the longest in the country and was only partially lifted in Kargil on December 27, 2019, while the rest of the state
was still under the shutdown.
Landlines and mobile communications services were also blocked in addition to regular internet services. Although verified users in the Kashmir valley saw 2G services restored on January 25, 2020, with access to only 301 whitelisted websites (initially 153, later expanded to 301), social media, Virtual Private Networks (VPNs) and many other sites remain banned.
The administration of J&K passed an order on January 25 ordering the restoration of 2G internet for around 300 whitelisted websites.
The Logical Indian reported on January 30, 2020, that broadband services in Kashmir will be restored only after the creation of a so-called social media firewall. It is currently unclear whether these restrictions will be imposed only in Kashmir or in other areas of India as well.
Nazir Ahmad Joo, General Manager of Bharat Sanchar Nigam Limited (BSNL), a public mobile
and broadband carrier, told the digital news platform that his company is working on developing a firewall:
We have called a team of technical experts from Noida and Bangalore who are working over creating a firewall
to thwart any attempt by the consumers to reach to the social media applications[..]
In an order dated January 13, 2020, the government asked Internet Service Providers, including mobile internet carriers, to install the necessary firewalls and whitelist the allowed websites.
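The whitelist approach described in the orders amounts to default-deny filtering: a request is allowed only if its hostname appears on the approved list, and everything else is rejected. A minimal sketch of the idea (the hostnames are illustrative placeholders, not taken from the actual government order):

```python
# Default-deny whitelist filtering: everything is blocked unless the
# requested hostname appears on the approved list. Hostnames here are
# illustrative placeholders, not from the actual J&K order.
from urllib.parse import urlparse

WHITELIST = {"example-bank.in", "example-news.in", "example-gov.in"}

def is_allowed(url: str) -> bool:
    """Return True only if the URL's hostname is on the whitelist."""
    host = urlparse(url).hostname or ""
    # Match the host exactly, or as a subdomain of a whitelisted domain.
    return any(host == d or host.endswith("." + d) for d in WHITELIST)

# Everything not explicitly listed -- social media, VPN endpoints,
# the rest of the web -- is rejected by default.
```

Under this model, is_allowed("https://example-bank.in/login") passes while any unlisted site is refused, which is why users reported that most of the web simply disappeared.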
In the meantime, the partial shutdown continues in Kashmir despite the Supreme Court's verdict of January 10. Ironically, the order from the Jammu and
Kashmir home department mentioned above was imposed a day after the Court ruling.
Facebook is moving ahead with plans to implement end-to-end encryption on Facebook Messenger and Instagram to protect users from snoopers, censors, spammers, scammers and thieves.
But children's campaign groups are opposing these safety measures on the grounds that the encryption will also protect those illegally distributing child abuse material.
About 100 organisations, led by the NSPCC, have signed an open letter warning the plans will undermine efforts to catch abusers.
Home Secretary Priti
Patel said she fully supported the move, presumably also thinking of the state's wider remit to snoop on people's communications.
End-to-end encryption, already used on Facebook-owned WhatsApp, means no-one, including the company that owns the
platform, can see the content of sent messages. The technology will make it significantly less likely that hackers will be able to intercept messages, going a long way to protect users from phishing and cyber-stalking. And of course child internet users
will also benefit from these protections.
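The guarantee that not even the platform owner can read messages comes from the endpoints agreeing on a key that the intermediary never sees. A toy sketch of the idea, using a textbook Diffie-Hellman exchange with small demo parameters and a hash-based stream cipher (educational only; real messengers use audited protocols such as the Signal protocol, never code like this):

```python
# Toy illustration of why end-to-end encryption hides content from the
# platform: both endpoints derive a shared secret the server never sees.
# Educational only -- tiny parameters, no authentication; real messengers
# use audited protocols (e.g. the Signal protocol), never code like this.
import hashlib
import secrets

P = 2**127 - 1   # small Mersenne prime -- fine for a demo, insecure in practice
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)       # (private, public)

def shared_key(priv, other_pub):
    # Hash the Diffie-Hellman shared secret down to a 32-byte symmetric key.
    s = pow(other_pub, priv, P)
    return hashlib.sha256(str(s).encode()).digest()

def xor_stream(key, data):
    # Hash-counter keystream XORed with the data (toy stream cipher).
    out = bytearray()
    for i, b in enumerate(data):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.append(b ^ block[0])
    return bytes(out)

a_priv, a_pub = keypair()   # Alice's keys
b_priv, b_pub = keypair()   # Bob's keys
# Only the PUBLIC values cross the server; it cannot derive the key.
k = shared_key(a_priv, b_pub)
assert k == shared_key(b_priv, a_pub)   # both endpoints agree on the key
ct = xor_stream(k, b"hello")            # the server only ever sees ciphertext
assert xor_stream(k, ct) == b"hello"    # the recipient decrypts with the same key
```

The server relays only public values and ciphertext, so it has nothing it can hand over to a snooper; that is precisely the property campaigners object to.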
The campaign groups oppose such protection, arguing:
We urge you to recognise and accept that an increased risk of child abuse being facilitated on or by Facebook is not a
reasonable trade-off to make.
A spokesman for Facebook said protecting the wellbeing of children on its platform was critically important to it. He said:
We have led the industry in safeguarding
children from exploitation and we are bringing this same commitment and leadership to our work on encryption.
We are working closely with child-safety experts, including NCMEC [the US National Center for Missing and Exploited
Children], law enforcement, governments and other technology companies, to help keep children safe online.
In 2018, Facebook made 16.8 million reports of child sexual exploitation and abuse content to the NCMEC. The National Crime Agency
said this had led to more than 2,500 arrests and 3,000 children made safe.
Netflix has reported on the movies and TV shows that it has banned at the request of governments. Netflix writes:
We offer creators the ability to reach audiences all around the world. However, our catalog varies from country to country, including
for rights reasons (i.e., we don't have the rights to show everything in every country where we operate). In some cases we've also been forced to remove specific titles or episodes of titles in specific countries due to government takedown demands.
Below are the titles we've removed to date, as of February 2020 -- just nine in total since we launched. Beginning next year, we will report these takedowns annually.
In 2015, Netflix complied with the New Zealand Film and Video Labeling Body to remove The Bridge. The film is classified as "objectionable" in the country.
In 2017, Netflix complied with the Vietnamese Authority of Broadcasting and Electronic Information (ABEI) to remove Full Metal Jacket.
In 2017, Netflix complied with the German Commission for Youth Protection (KJM) to remove Night of the Living Dead. A version of the film is also banned in the country. There's a discussion of exactly which version is banned in a German-language article from schnittberichte.com.
Netflix complied with the Singapore Infocomm Media Development Authority (IMDA) to remove Cooking on High (a TV series about cooking with cannabis), The Legend of 420 (a comedy documentary about cannabis), and Disjointed (a TV series about cannabis) from the service in Singapore only.
In 2019, Netflix complied with the Saudi Communication and Information Technology Commission to remove one episode -- "Saudi Arabia" -- from Patriot Act with
Hasan Minhaj (a comedy news talk show).
In 2019, Netflix complied with the Singapore Infocomm Media Development Authority (IMDA) to remove The Last Temptation of Christ.
In 2020, Netflix complied with the
Singapore Infocomm Media Development Authority (IMDA) to remove The Last Hangover . This is the Brazilian TV comedy about a gay Christ that proved controversial in Brazil.
Thanks to the adoption of a disastrous new Copyright Directive, the European Union is about to require its member states to pass laws requiring online service providers to ensure the unavailability of copyright-protected works. This will likely result in
the use of copyright filters that automatically assess user-submitted audio, text, video and still images for potential infringement. The Directive does include certain safeguards to prevent the restriction of fundamental free expression rights, but
national governments will need some way to evaluate whether the steps tech companies take to comply meet those standards. That evaluation must be both objective and balanced to protect the rights of users and copyright holders alike.
Quick background for those who missed this development: Last March, the European Parliament narrowly approved the new set of copyright rules , squeaking it through by a mere five votes (afterwards, ten MEPs admitted they'd been
confused by the process and had pressed the wrong button).
By far the most controversial measure in the new rules was a mandate requiring online services to use preventive measures to block their users from posting text, photos,
videos, or audio that have been claimed as copyrighted works by anyone in the world. In most cases, the only conceivable preventive measure that satisfies this requirement is an upload filter. Such a filter would likely fall afoul of the ban on general
monitoring anchored in the 2000 E-Commerce Directive (which is currently under reform) and mirrored in Article 17 of the Copyright Directive.
There are grave problems with this mandate, most notably that it does not provide for
penalties for fraudulently or negligently misrepresenting yourself as being the proprietor of a copyrighted work. Absent these kinds of deterrents, the Directive paves the way for the kinds of economic warfare , extortion and censorship against creators
that these filters are routinely used for today.
But the problems with filters are not limited to abuse: Even when working as intended, filters pose a serious challenge for both artistic expression and the everyday discourse of
Internet users, who use online services for a laundry list of everyday activities that are totally disconnected from the entertainment industry, such as dating, taking care of their health, staying in touch with their families, doing their jobs, getting
an education, and participating in civic and political life.
The EU recognized the risk to free expression and other fundamental freedoms posed by a system of remorseless, blunt-edged automatic copyright filters, and they added
language to the final draft of the Directive to balance the rights of creators with the rights of the public. Article 17(9) requires online service providers to create effective and expeditious complaint and redress mechanisms for users who have had
their material removed or their access disabled.
Far more important than these after-the-fact remedies, though, are the provisions in Article 17(7), which requires that Member States shall ensure that users...are able to rely on
limitations and exceptions to copyright, notably quotation, criticism, review and use for the purpose of caricature, parody or pastiche. These free expression protections have special status and will inform the high industry standards of professional
diligence required for obtaining licenses and establishing preventive measures (Art 17(4)).
This is a seismic development in European copyright law. European states have historically operated tangled legal frameworks for copyright
limitations and exceptions that diverged from country to country. The 2001 Information Society Directive didn't improve the situation: Rather than establishing a set of region-wide limitations and exceptions, the EU offered member states a menu of
copyright exceptions and allowed each country to pick some, none, or all of these exceptions for their own laws.
With the passage of the new Copyright Directive, member states are now obliged to establish two broad categories of
copyright exceptions: quotation, criticism and review; and caricature, parody or pastiche. To comply with the Directive, member states must protect those who make parodies or excerpt works for the purpose of review or criticism. Equally
importantly, a parody that's legal in, say, France, must also be legal in Germany and Greece and Spain.
Under Article 17(7), users should be able to rely on these exceptions. The protective measures of the Directive--including
copyright filters--should not stop users from posting material that doesn't infringe copyright, including works that are legal because they make use of these mandatory parody/criticism exceptions. For avoidance of doubt, Article 17(9) confirms that
filters shall in no way affect legitimate uses, such as uses under exceptions or limitations provided for in Union law, and Recital 70 calls on member states to ensure that their filter laws do not interfere with exceptions and limitations, in particular
those that guarantee the freedom of expression of users.
As EU member states move to transpose the Directive by turning it into national laws, they will need to evaluate claims from tech companies who have developed their own
internal filters (such as YouTube's Content ID filter) or who are hoping to sell filters to online services that will help them comply with the Directive's two requirements:
1. To block copyright infringement; and
2. To not block user-submitted materials that do not infringe copyright, including materials that take advantage of the mandatory exceptions in 17(7), as well as additional exceptions that each member state's laws have encoded
under the Information Society Directive (for example, Dutch copyright law permits copying without permission for "scientific treatises," but does not include copying for "the demonstration or repair of equipment," which is permitted
in Portugal and elsewhere).
Evaluating the performance of these filters will present a major technical challenge, but it's not an unprecedented one.
Law and regulation are no stranger to
technical performance standards. Regulators routinely create standardized test suites to evaluate manufacturers' compliance with regulation, and these test suites are maintained and updated based on changes to rules and in response to industry conduct.
(In)famously, EU regulators maintained a test suite for evaluating compliance with emissions standards for diesel vehicles, then had to undertake a top-to-bottom overhaul of these standards in the wake of widespread cheating by auto manufacturers.
Test suites are the standard way for evaluating and benchmarking technical systems, and they provide assurances to consumers that the systems they entrust will perform as advertised. Reviewers maintain standard suites for testing the
performance of code libraries, computers and subcomponents (such as mass-storage devices and video-cards) and protocols and products, such as 3D graphics rendering programs.
We believe that the EU's guidance to member states on
Article 17 implementations should include a recommendation to create and maintain test suites if member states decide to establish copyright filters. These suites should evaluate the filters' ability to correctly identify both infringing materials and non-infringing uses. The filters could also be tested for their ability to correctly identify works that may be freely shared, such as works in the public domain and works licensed under permissive regimes such as the Creative Commons licenses.
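One way such a test suite could be structured is sketched below: a corpus of labeled samples (infringing copies, parody and quotation uses, public-domain works) is run through a candidate filter, and its false-positive and false-negative rates are measured. The filter_fn interface, the toy string-matching filter, and the sample labels are assumptions for illustration; a real suite would run actual media files through a real fingerprinting system.

```python
# Sketch of a test harness for evaluating an upload filter against a
# labeled corpus. `filter_fn` is a hypothetical interface: it takes a
# work and returns True if it would block the upload.
from dataclasses import dataclass

@dataclass
class Sample:
    work: str            # stand-in for the media payload
    should_block: bool   # ground truth: is this actually infringing?

def evaluate(filter_fn, corpus):
    """Return (false_positive_rate, false_negative_rate) for a filter."""
    fp = fn = pos = neg = 0
    for s in corpus:
        blocked = filter_fn(s.work)
        if s.should_block:
            pos += 1
            fn += (not blocked)      # infringing work let through
        else:
            neg += 1
            fp += blocked            # lawful parody/quotation blocked
    return fp / max(neg, 1), fn / max(pos, 1)

# Toy corpus: an exact copy should be blocked; a short quotation,
# a parody, and a public-domain work should pass (Art. 17(7)).
corpus = [
    Sample("verbatim copy of song", True),
    Sample("8-second excerpt quoted in a review", False),
    Sample("parody re-recording", False),
    Sample("public-domain symphony", False),
]

# A crude match-anything filter blocks the lawful quotation too.
naive_filter = lambda work: "song" in work or "excerpt" in work
fpr, fnr = evaluate(naive_filter, corpus)
# fpr is 1/3, fnr is 0.0: the filter catches all infringement but
# over-blocks lawful uses -- the failure mode a member-state suite must catch.
```

Scoring both rates matters because a filter tuned only to minimise missed infringement will trivially over-block, which is exactly the outcome Article 17(7) forbids.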
EFF previously sketched out a suite to evaluate filters' ability to comply with US fair use. Though fair use and EU exceptions and limitations are very different concepts, this test suite does reveal some of the challenges of complying with Article 17's requirement that EU residents should be able to rely upon the parody and criticism exceptions it defines.
Notably, these exceptions require that the filter make determinations about the character of a
work under consideration: to be able to distinguish excerpting a work to critique it (a protected use) versus excerpting a work to celebrate it (a potentially prohibited use).
For example, a creator might sample a musician's
recording in order to criticize the musician's stance on the song's subject matter (one of the seminal music sampling cases turned on this very question ). This new sound file should pass through a filter, even if it detects a match with the original
recording, after the filter determines that the creator of the new file intended to criticize the original artist, and that they sampled only those parts of the original recording as were necessary to make the critical point.
However, if another artist sampled the original recording for a composition that celebrated the original artist's musical talent, the filter should detect and block this use, as enthusiastic tribute is not among the limitations and exceptions permitted under the Infosoc Directive , nor those mandated by the Copyright Directive.
This is clearly a difficult programming challenge. Computers are very bad at divining intent and even worse at making subjective determinations about whether the intent was successfully conveyed in a finished work.
However, filters should not be approved for use unless they can meet this challenge. In the decades since the Acuff-Rose sampling decision came down in 1994, musicians around the world have treated its contours as a best practice in
their own sampling. A large corpus of music has since emerged that fits this pattern. The musicians who created (and will create) music that hews to the standard--whose contours are markedly similar to those mandated in the criticism/parody language of
Article 17--would have their fundamental expression rights as well as their rights to profit from their creative labors compromised if they had to queue up to argue their case through a human review process every time they attempted to upload their work.
Existing case-law among EU member states makes it clear that these kinds of subjective determinations are key to evaluating whether a work is entitled to make use of a limitation or exception in copyright law. For example, the
landmark Germania 3 case demands that courts consider a balancing of relevant interests when determining whether a quotation is permissible.
Parody cases require even more subjective determination, with EU case law holding that a work can only qualify as a parody if it evokes an existing work, while being noticeably different, and constitutes an expression of humour or mockery (Deckmyn v. Vandersteen, C-201/13, 2014).
Article 17 was passed amidst an
unprecedented controversy over the consequences for the fundamental right to free expression once electronic discourse was subjected to judgments handed down by automated systems. The changes made in the run-up to the final vote were intended
to ensure a high level of protection for the fundamental rights of European Internet users.
The final Article 17 text offers two different assurances to European Internet users: first, the right to a mechanism for effective and
expeditious complaint and redress, and second, Article 17(7) and (4)'s assurance that Europeans are able to rely on their right to undertake quotation, criticism, review and use for the purpose of caricature, parody or pastiche ... in accordance with
high industry standards of professional diligence.
The Copyright Directive passed amid unprecedented controversy, and its final drafters promised that Article 17 had been redesigned to protect the innocent as well as to punish the guilty, this being the foundational premise of all fair systems of law. National governments have a duty to ensure that it's no harder to publish legal material than it is to remove illegal material. Streamlining the copyright enforcement system to allow
anyone to block the publication of anything, forever, without evidence or oversight presents an obvious risk for those whose own work might be blocked through malice or carelessness, and it is not enough to send those people to argue their case before a
tech company's copyright tribunal. If Europeans are to be able to rely upon copyright limitations and exceptions, then they should be assured that their work will be no harder to publish than anyone else's.
The U.K. government has hinted at its thoughts on its internet censorship plans and has also been giving clues about the schedule.
A first announcement seems to be due this month. It seems that the government is planning a summer bill and implementation
within about 18 months.
The plans are set to be discussed in Cabinet on Thursday and are due to be launched to coincide with Safer Internet Day next Tuesday when Baroness Morgan will also publish results of a consultation on last year's White Paper
on online harms.
The unelected Nicky Morgan proposes that the new regime should mirror regulation in the financial sector, known as senior management liability, where firms have to appoint a fall-guy director to take personal responsibility for ensuring
they meet their legal duties. They face fines and criminal prosecution for breaches.
Ofcom will advise on potential sanctions against the directors ranging from enforcement notices, professional disqualification, fines and criminal prosecution. Under
the plans, Ofcom will also draw up legally enforceable codes of practice setting out what the social media firms will be expected to do to protect users from loosely defined online harms that may not even be illegal.
Other legal harms to be
covered by codes are expected to include disinformation that causes public harm such as anti-vaccine propaganda, self-harm, harassment, cyberbullying, violence and pornography where there will be tougher rules on age verification to bar children.
Tellingly, proposals to include real and actual financial harms such as fraud in the codes have been dropped.
Ministers have yet to decide whether to give the internet censor the power to block website access for UK internet users, but this option seems out of favour, maybe because it results in massive numbers of people moving to the encrypted internet, which makes it harder for the authorities to snoop on people's internet activity.
The Centre for Data Ethics and Innovation is part of the Department for Digital, Culture, Media & Sport. It is tasked by the Government to connect policymakers, industry, civil society, and the public to develop the 'right' governance regime for data-driven technologies.
The group has just published its final report into the control of social media and their 'algorithms' in time for their suggestions to be incorporated into the government's upcoming internet censorship bill.
The term 'algorithm' has been used to imply some sort of manipulative menace that secretly drives social media. In fact the algorithm isn't likely to be far away from: give them more of what they like, and maybe also try them with what their mates like.
No doubt the government would prefer something more like: Give them more of what the government likes.
Anyway the press release reads:
The CDEI publishes recommendations to make online platforms more accountable,
increase transparency, and empower users to take control of how they are targeted. These include:
New systemic regulation of the online targeting systems that promote and recommend content like posts, videos and adverts.
Powers to require platforms to allow independent researchers secure access to
their data to build an evidence base on issues of public concern - from the potential links between social media use and declining mental health, to its role in incentivising the spread of misinformation
Platforms to host
publicly accessible online archives for 'high-risk' adverts, including politics, 'opportunities' (e.g. jobs, housing, credit) and age-restricted products.
Steps to encourage long-term wholesale reform of online targeting to
give individuals greater control over how their online experiences are personalised.
The CDEI recommendations come as the government develops proposals for online harms regulation.
The Centre for Data Ethics and Innovation (CDEI), the UK's independent advisory body on the ethical use of
AI and data-driven technology, has warned that people are being left in the dark about the way that major platforms target information at their users, in its first report to the government.
The CDEI's year long review of online
targeting systems - which use personal information about users to decide which posts, videos and adverts to show them - has found that existing regulation is out of step with the public's expectations.
A major new analysis of
public attitudes towards online targeting, conducted with Ipsos MORI, finds that people welcome the convenience of targeting systems, but are concerned that platforms are unaccountable for the way their systems could cause harm to individuals and
society, such as by increasing discrimination and harming the vulnerable. The research highlighted most concern was related to social media platforms.
The analysis found that only 28% of people trust platforms to target them in a
responsible way, and when they try to change settings, only one-third (33%) of people trust these companies to do what they ask. 61% of people favoured greater regulatory oversight of online targeting, compared with 17% who support self-regulation.
The CDEI's recommendations to the government would increase the accountability of platforms, improve transparency and give users more meaningful control of their online experience.
The recommendations strike a balance by protecting users from the potential harms of online targeting, without inhibiting the kind of personalisation of the online experience that the public find useful. Clear governance will support the development and
take-up of socially beneficial applications of online targeting, including by the public sector.
The report calls for internet regulation to be developed in a way that promotes human rights-based international norms, and
recommends that the online harms regulator should have a statutory duty to protect and respect freedom of expression and privacy.
And from the report:
The government's new online harms regulator should be required to provide regulatory oversight of targeting:
The regulator should take a "systemic" approach, with a code of practice to set standards, and require online platforms to assess and explain the impacts of their systems.
To assess compliance, the regulator needs information-gathering powers. This should include the power to give independent experts secure access to platform data to undertake audits.
The regulator's duties should explicitly include
protecting rights to freedom of expression and privacy.
Regulation of online targeting should encompass all types of content, including advertising.
The regulatory landscape should be coherent and
efficient. The online harms regulator, ICO, and CMA should develop formal coordination mechanisms.
The government should develop a code for public sector use of online targeting to promote safe, trustworthy innovation in the delivery of personalised advice and support.
The regulator should have the power to require platforms to give independent researchers secure access to their data where this is needed for research of significant potential importance to public policy.
Platforms should be required to host publicly accessible archives for online political advertising, "opportunity" advertising (jobs, credit and housing), and adverts for age-restricted products.
The government should consider formal mechanisms for collaboration to tackle "coordinated inauthentic behaviour" on online platforms.
Regulation should encourage platforms to provide people with more information and control:
We support the CMA's proposed "Fairness by Design" duty on online platforms.
The government's plans for labels on online electoral adverts should make paid-for content easy to identify, and
give users some basic information to show that the content they are seeing has been targeted at them.
Regulators should increase coordination of their digital literacy campaigns. The emergence of "data
intermediaries" could improve data governance and rebalance power towards users. Government and regulatory policy should support their development.
The Rajya Sabha is the upper house of the Indian parliament. Its Ethics Committee has just published an extensive list of internet censorship measures in the name of curbing online child sexual abuse material (CSAM).
The Committee has recommended that
law enforcement agencies be permitted to break end-to-end encryption, and that ISPs provide parents with website blocking services.
The ad hoc Committee, headed by Jairam Ramesh, made 40 recommendations in its report published on January 25.
Amend the Information Technology Act, 2000:
Make intermediaries responsible for proactively identifying and removing CSAM, and for reporting it to Indian and foreign authorities, and for reporting, to the designated authority, the IP address/identities of people who search for or access
child porn and CSAM
Make gateway ISPs liable for detecting and blocking CSAM websites.
Prescribe punitive measures for those who give pornographic access to children and those who access, produce or transmit CSAM.
Allow Central Government through "its designated authority" to block and/or prohibit all websites/intermediaries that carry CSAM . The designated authority has not been specified.
Allow law enforcement to break end-to-end encryption to trace distributors of child pornography.
Mandate CSAM detection for all social media companies through minimum essential technologies to detect CSAM besides reporting
it to law enforcement agencies.
Separate adult content section on streaming platforms like Netflix and social media platforms such as Twitter and Facebook where children are not allowed.
Age verification and gating mechanisms
on social media to restrict access to "objectionable/obscene material".
Manage children's access to the internet: To do that, make apps that monitor children's access to porn mandatory on all devices in India, and make such
apps/solutions freely available to ISPs, companies, schools and parents. Also, ISPs should provide family-friendly filters to parents to regulate children's access to internet.
Use blockchain to trace buyers of child porn: MeitY should
coordinate with blockchain analysis companies to trace users who use cryptocurrencies to purchase child porn online.
Ban all payments to porn websites: Online payment portals and credit cards should be prohibited from processing payments for any porn website.
Amend the Prevention of Children from Sexual Offences (POCSO) Act, 2012:
Prescribe a Code of Conduct for intermediaries (online platforms) to maintain child safety online, ensure age appropriate content, and curb use of children for pornographic purposes.
Make "advocating or counseling" sexual
activities with a minor through written material, visual media, audio recording, or any other means, an offence under the Act.
Make school management responsible for safety of children within schools, transportation services and any other
programmes with which the school is associated.
Make the National Cyber Crime Reporting Portal the national portal for all reports related to electronic material.
Make National Commission for Protection of Child Rights (NCPCR) the nodal agency to deal with the issue. It should have "necessary" technological, cyber policing and prosecution capabilities. Each state and UT should also have a
Commission for the Protection of Child Rights that mirrors the NCPCR.
Appoint e-safety commissioners at state level to ensure implementation of social media and website guidelines.
National Crime Record Bureau (NCRB) must record
and report cases of child pornography of all kinds annually. Readers should note that the last annual report from NCRB was for 2017 and was released in October 2019.
Set up a National Tipline Number where citizens can report child
sexual abuse and the distribution of CSAM.
Awareness campaigns by Ministries of Women and Child Development, and Information and Broadcasting on recognising signs of child abuse, online risks and improving online safety. Schools should also
conduct training programmes for parents at least twice a year.
Mark Zuckerberg has declared that Facebook is going to stand up for free expression in spite of the fact it will piss off a lot of people.
He made the claim during a fiery appearance at the Silicon Slopes Tech Summit in Utah on Friday. Zuckerberg told
the audience that Facebook had previously tried to resist hosting content that would be branded as too offensive, but said he now believes he is being asked to partake in excessive censorship:
Increasingly we're getting called
to censor a lot of different kinds of content that makes me really uncomfortable, he claimed. We're going to take down the content that's really harmful, but the line needs to be held at some point.
It kind of feels like the list
of things that you're not allowed to say socially keeps on growing, and I'm not really okay with that.
This is the new approach [free expression], and I think it's going to piss off a lot of people. But frankly the old approach
was pissing off a lot of people too, so let's try something different.
Sex workers have been waiting for our day in court for over 100 years, announced Kaytlin Bailey, Director of Communications for the group Decriminalize Sex Work, and finally, we're going to get it.
On January 24, sex workers and their allies won an
important victory in our ongoing constitutional challenge to SESTA/FOSTA, a federal law that attempts to erase the oldest profession from the Internet.
The sex workers are arguing that the government's censorship of internet websites used by sex
workers has led to more of them being endangered by having to seek trade in unsafe spaces, such as walking the streets.
Now the U.S. Court of Appeals for the D.C. Circuit ruled that their case can proceed to trial, where a federal judge will decide
whether SESTA/FOSTA interferes with the constitutional rights of website operators and their users.