The Government's Online Harms bill will require foreign social media companies to appoint a token fall guy in Britain who will be jailed should the company fail in its duty of care. I wonder what the salary will be?
The government is pushing forward with an internet censorship bill which will punish people and companies for getting it wrong, without the expense and trouble of trying to dictate rules on what is allowed.
In an interesting development the Times is
reporting that the government wants to introduce a "senior management liability", under which executives could be held personally responsible for breaches of standards. US tech giants would be required to appoint a British-based director, who
would be accountable for any breaches of the censorship rules.
It seems a little unjust to prosecute a token fall guy who is likely to have absolutely no say in the day-to-day decisions made by a foreign company. Still, it should be a very well-paid job which hopefully includes lots of coverage for legal bills and a zero notice period allowing instant resignation at the first hint of trouble.
The BBC has posted an interesting review of internet and general control freakery in China. One notable idea was the use of numbers as hashtag rallying calls, as numbers can be pretty hard to censor by text filtering. The BBC explains:
Late last year, the term 996 cropped up on a number of social media microblogs and forums, originally used by workers in China's tech industry as a subtle way to vent their frustrations at the excessive amount of work they were
expected to do.
The Chinese censors struggle to censor number sequences, given that they can often be innocuous. Consequently, Weibo users were able to use the term 996 to complain openly that their employer was violating China's
labour laws by making them work some 72 hours a week: from 9am to 9pm, six days a week.
But the phrase has now seen expanded usage beyond the tech industry, especially among China's young, who complained that overtime has become
Britain's first police unit for tackling supposed online hate crime has brought charges in fewer than 1% of the cases it has investigated.
Scotland Yard's online hate crime hub has logged 1,851 incidents since its launch in April 2017 and 17
cases, or 0.92%, resulted in charges. Of those, seven have led to prosecutions, Freedom of Information figures show. There are three more cases pending a charging decision from the Crown Prosecution Service (CPS).
The £1.7million scheme, launched by London mayor Sadiq Khan, has however resulted in 59 people being given youth referrals or harassment warnings, or being noted as having apologised.
The Metropolitan Police said the £326,344 needed for the pilot year of the hub was funded by the Mayor's Office
for Policing and Crime (MOPAC). Following the trial, a unit of five officers led by a detective inspector was given a £323,829 budget for 2018/19 and £363,000 in 2019/20 by the police force.
Scotland Yard said the unit now deals with both online and
offline cases, reviewing every hate crime reported to the Met on a daily basis.
The low number of charges is thought to be due to the high CPS charging threshold for online hate, and the difficulties investigators face in obtaining information from
social media companies.
The French government has come up with an innovative way of financing a program of mass social media surveillance: using it to detect tax fraud.
The self-financing surveillance scheme has now been given the go-ahead by the constitutional court. Customs and
tax officials will be allowed to review users' profiles, posts and pictures for evidence of undisclosed income.
In its ruling, the court acknowledged that users' privacy and freedom of expression could be compromised, but it applied caveats to
the legislation. It said authorities would have to ensure that password-protected content was off limits and that they would only be able to use public information pertaining to the person divulging it online. However, the wording suggests that non-public data is available and can be used for other, more covert purposes.
The mass collection of data is part of a three-year online monitoring experiment by the French government and greatly increases the state's online surveillance powers.
Having learnt nothing from legislating for age verification without thinking, a few lords want to rush through internet censorship because it will take the government a year to work through the difficult issues
A few unelected members of the House of Lords are introducing their own internet censorship law because they think it is unreasonable to wait a year for the government to work through the issues.
Tom McNally, previously involved in TV censorship law, has challenged the Government to back his proposed new law. This is set to be introduced in the House of Lords on January 14.
The bill gives Ofcom censorship powers, requiring that internet companies accept a duty of care, with the provisions to be enforced by Ofcom.
McNally told The Daily Telegraph:
We are in danger of losing a whole year on this. The Government's commitment to develop safer internet legislation in the Queen's Speech, though welcome, did not go
The Government has yet to reveal the findings from its consultation on its White Paper, which was published in the summer. The results had been expected before the end of this year but have been delayed by the general election.
McNally is drafting the bill with the Carnegie Trust, who campaign for internet censorship in the name of thinking of the children. Lord Puttnam and Baroness Kidron, the film director and children's internet rights campaigner respectively, are being canvassed as sponsors of the bill.
YouTube has been censoring cryptocurrency-related content with a new wave of rule enforcement, according to several hosts. Since 23rd December, the site has been deleting individual videos from cryptocurrency channels. Some hosts have also been given
warnings and strikes, which temporarily prevent them from uploading content.
YouTube has not publicly stated that crypto videos are against its rules, meaning that users must read between the lines to deduce what is being targeted.
YouTube creator Chris Dunn has noted that his own videos were removed on the grounds that they were responsible for the sale of regulated goods and contained harmful and dangerous content.
Many YouTube hosts are now considering moving to
decentralized and uncensorable video platforms, such as PeerTube, LBRY, BitChute, and DTube. Incidentally, Twitter is also planning to create a decentralized media platform.
Update: YouTube says its removal of hundreds of videos was an 'error'
YouTube said today that its
removal of hundreds of crypto-related videos earlier this week was an 'error'. YouTube told Decrypt that the videos have since been put back online. However, a quick check today indicated that none had yet been restored. YouTube spouted:
With the massive volume of videos on our site, sometimes we make the wrong call. When it's brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it.
Offsite Update: After the dust has settled YouTube re-censors the crypto channels
Russia has said that it has successfully tested its sovereign internet, a country-wide alternative to the global internet.
RuNet, as the internet service is known, was tested on Monday to ensure the security of its internet infrastructure in case
the country would like to cut itself off from the global internet.
Deputy communications minister Alexei Sokolov said the results of the tests would be presented to President Vladimir Putin, and added that the drills would continue in the future.
Four telecoms operators took part, with 18 different scenarios tested.
Internet rights activists have noted that the measures could tighten censorship and lead to online isolation. Russian authorities also tried to ensure that it was
possible to intercept mobile phone traffic and text messages, Sokolov said.
New Zealand's chief censor David Shanks has commented on a legislative amendment requiring the likes of Netflix to use New Zealand censorship ratings and rules for content targeted at New Zealand viewers.
Bringing our media regulation framework up to date will take some significant work, and earlier this year Minister Tracey Martin announced a broad media regulation review, with work to commence on this substantively next year. That is
a good idea, but in the interim we thought that there was a relatively simple change that could make things better, clearer and more consistent for NZ consumers right now. That is to require Commercial Video on Demand (CVoD) services including
subscription services like Lightbox and Netflix and rental services like iTunes to use New Zealand classifications, and apply a New Zealand framework to new content.
This is the thinking behind the Films, Videos, and Publications
Classification (Commercial Video on-Demand) Amendment Bill introduced to the House on the 17th of December. Where a film or series has been classified in NZ, digital providers will need to use that classification. And where they are making a new film or
series available to Kiwis, these providers will need to apply a NZ framework, and provide age ratings and information that is consistent with what we expect.
Poland has become the latest country to propose a national age verification law for porn.
Prime Minister Mateusz Morawiecki, of the country's center-right Law and Justice Party, claimed that 60% of Polish boys between the ages of 13 and 16 had been exposed to pornography.
Morawiecki made the remarks to a meeting of the Family Council, a group of parliamentarians, policy experts and leaders of non-governmental organizations whose mission is to support, initiate and promote actions that will benefit
Morawiecki did not specify what method might be used to check the ages of Polish people attempting to view online porn.
The video sharing platform Vimeo has now initiated censorship policy changes announced last June. The website has banned negative or critical content about vaccines. But there is also a rather unprecedented ratcheting-up of censorship, with a new clause explaining that you can have your account banned for criticizing vaccines, even when using off-site services.
Vimeo explained the change via general counsel Michael Cheah, who said:
Our rules and processes are
designed to be applied fairly, consistently and transparently. As always, context matters. When prohibited content appears in the context of a news story or a narrative device in a dramatic work, we are likely to leave it up. If, however, the overall
driving message of the work is to perpetuate a viewpoint that we have specifically banned, we will remove it. We will also consider a user's speech outside of Vimeo (such as social media platforms, blogs, or anywhere else their personal views are clearly
represented) in making calls about intent and good faith.
The Government has reiterated its plans as outlined in the Online Harms white paper. It seems that the measures have been advanced somewhat as previous references to pre-legislative scrutiny have been deleted.
The Queen's Speech briefing paper details
the government's legislative plan for the next two years:
“My ministers will develop legislation to improve internet safety for all.”
Britain is leading the world in developing a comprehensive regulatory regime to keep people safe online, protect children and other vulnerable users and ensure that there are no safe spaces for terrorists online.
The April 2019 Online Harms White Paper set out the Government’s plan for world-leading legislation to make the UK the safest place in the world to be online. The Government will continue work to develop this legislation, alongside
ensuring that the UK remains one of the best places in the world for technology companies to operate.
The proposals, as set out in the White Paper, were:
○ A new duty of care on companies towards their users, with an independent regulator to oversee this framework.
○ The Government wants to keep people safe online, but wants to do this in a proportionate way, ensuring that freedom of expression is upheld and promoted online, and that the value of a free and independent press is preserved.
○ The Government is seeking to do this by ensuring that companies have the right processes and systems in place to fulfil their obligations, rather than penalising them for individual instances of unacceptable content.
The public consultation on this has closed and the Government is analysing the responses and considering the issues raised. The Government is working closely with a variety of
stakeholders, including technology companies and civil society groups, to understand their views.
The Government will prepare legislation to implement the final policy in response to the consultation.
Ahead of this legislation, the Government will publish interim codes of practice on tackling the use of the internet by terrorists and those engaged in child sexual abuse and exploitation. This will ensure companies take action
now to tackle content that threatens our national security and the physical safety of children.
The Government will publish a media literacy strategy to empower users to stay safe online.
The Government will help start-ups and businesses to embed safety from the earliest stages of developing or updating their products and services, by publishing a Safety by Design framework.
The Government will carry out a review
of the Gambling Act, with a particular focus on tackling issues around online loot boxes and credit card misuse.
What the Government has done so far:
The joint DCMS-Home Office Online Harms White Paper was published in April 2019. The Government also published the Social Media Code of Practice, setting out the actions that social media platforms should take to prevent bullying,
insulting, intimidating and humiliating behaviours on their sites.
In November 2018 the Government established a new UK Council for Internet Safety. This expanded the scope of the UK Council for Child Internet Safety, and was
guided by the Government's Internet Safety Strategy.
The UK has been championing international action on online safety. The Prime Minister used his speech at the United Nations General Assembly to champion the UK's work on
Instagram has launched a new censorship feature that uses AI to recognize potentially offensive language and warn you that you're about to post something that might be deemed 'problematic'.
The feature uses a machine learning algorithm that Instagram
developed and tested to recognize different forms of bullying and provide a warning if and when a caption crosses that line.
The warning reads:
This caption looks similar to others that have been reported. From there, you can choose to either Edit the Caption, Learn More, or Share Anyway. If the AI makes a mistake, you can report it by clicking Learn More.
The feature joins another AI-powered pop-up, released earlier this year, which warns users when
their comments may be considered offensive.
We've found that these types of nudges can encourage people to reconsider their words when given a chance. Additionally, Instagram hopes that the feature
will be informative, helping educate people on what is and is not allowed.
The warning will roll out around the world in the next few months.
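Instagram has not published how its model works, but the core idea, flagging a caption because it resembles previously reported ones, can be sketched with a naive string-similarity check. The reported captions and the 0.8 threshold below are purely illustrative assumptions, not Instagram's real data or cut-off, and plain string matching stands in here for their machine learning model:

```python
# Naive sketch of "this caption looks similar to others that have been
# reported" -- a hypothetical stand-in for Instagram's unpublished model,
# using plain string similarity rather than machine learning.
import difflib

REPORTED = ["you are a total idiot", "nobody likes you, loser"]  # made-up examples
THRESHOLD = 0.8  # assumed cut-off, not Instagram's real value

def looks_reported(caption):
    """Warn if the caption closely matches any previously reported one."""
    return any(
        difflib.SequenceMatcher(None, caption.lower(), bad).ratio() >= THRESHOLD
        for bad in REPORTED
    )

print(looks_reported("You are a total idiot!"))   # flagged: near-match to a reported caption
print(looks_reported("Lovely sunset tonight"))    # allowed: no close match
```

A real system would compare learned embeddings rather than raw characters, but the warn-before-posting flow is the same: score the draft caption against known-bad examples and nudge the user if it crosses a threshold.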
Albania's parliament has passed an internet censorship package criticised by journalists and the Council of Europe as an attempt to muzzle the media.
The parliament amended existing laws to empower the Albanian Media Authority (AMA) to censor news
websites. The package was initially targeted at 700 to 800 online news sites but the scope was broadened to include TV stations.
Prime Minister Edi Rama claimed the move intended to stop fake news or slander from causing loss of life or pressing
businesses for bribes by shaming the quality of their products.
Dunja Mijatovic, the Council of Europe Commissioner for Human Rights, said the laws were in urgent need of improvement:
The powers given to AMA, the
possibility of excessive fines and the blocking of media websites without a court order may deal a strong blow to freedom of expression and media.
Several provisions are indeed not compatible with international and European human
rights standards which protect freedom of expression and freedom of the media
Mozilla has announced that NextDNS would be joining Cloudflare as the second DNS-over-HTTPS (DoH) provider inside Firefox.
The browser maker says NextDNS passed the conditions imposed by its Trusted Recursive Resolver (TRR) program. These conditions include:
limiting the data NextDNS collects from the DoH server used by Firefox users;
being transparent about the data they collect; and
promising not to censor, filter, or block DNS traffic unless specifically requested by law enforcement.
The new option will appear some time next year.
DNS-over-HTTPS, or DoH, is a new feature that was added to Firefox last year. When enabled, it encrypts DNS traffic coming in and out of the browser. DNS traffic is not only encrypted but also moved
from port 53 (for DNS traffic) to port 443 (for HTTPS traffic), effectively hiding DNS queries and replies inside the browser's normal stream of HTTPS content.
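The mechanics can be sketched with Cloudflare's JSON flavour of DoH (Firefox itself uses the binary wire format from RFC 8484, so this is a simplified illustration, and the sample response below is canned rather than fetched):

```python
# Sketch: how a DoH client wraps a DNS question in an ordinary HTTPS request.
# Uses Cloudflare's JSON API endpoint for readability; real resolvers speak
# the binary RFC 8484 format. The sample response is canned (no network).
import json
import urllib.parse

def build_doh_url(name, rtype="A", endpoint="https://cloudflare-dns.com/dns-query"):
    """Return the HTTPS URL carrying the DNS question. Sent to port 443 with
    an 'accept: application/dns-json' header, it looks like any other HTTPS
    request on the wire."""
    query = urllib.parse.urlencode({"name": name, "type": rtype})
    return f"{endpoint}?{query}"

def parse_doh_answer(body):
    """Extract the resolved addresses from a JSON DoH response body."""
    reply = json.loads(body)
    return [ans["data"] for ans in reply.get("Answer", [])]

# Canned response of the kind the resolver would return:
sample = '{"Status":0,"Answer":[{"name":"example.com","type":1,"TTL":300,"data":"93.184.216.34"}]}'
print(build_doh_url("example.com"))
print(parse_doh_answer(sample))
```

The point of the exercise: because the query and reply ride inside TLS on port 443, a network observer sees only HTTPS traffic to the resolver, not which hostnames were looked up.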
Google announced on 20th November 2019 that electoral advertisements can no longer be targeted to specific groups. Political advertisements in Google Ads can only be segmented based on the general categories of age, sex and general location (postal code).
This new regulation will enter into force on 6th January 2020 in the United States.
Brad Parscale, the director of the Trump 2020 campaign, criticized Google's new advertising policy, considering that it was specially designed to prevent the re-election of the president. He told Fox News:
2016 freaked them out because I used a whole bunch of liberal platforms to do it. I guarantee you, this decision came from another room full of people going,
'Oh my God, we've got to stop them.'
The new rules imply that campaign advisers will not be able to target voters based on their political affiliations, even if they have previously stated that they would like to be contacted.
Critical voices about Google's new policy were also heard from the Democratic side. In an article published on Medium, a group of Democratic digital operatives and strategists stated that the measure has a strong impact on the Democratic voting base, which uses digital media relatively more.
Nitish Kumar, the Chief Minister of the Indian state of Bihar is blaming rising incidents of sexual crime against women in the state on porn. He has written to the Indian Prime Minister Narendra Modi urging him to ban all porn sites and inappropriate
content available online. He wrote:
It will be my request to take appropriate action to ban all porn sites and inappropriate content available on the internet immediately after giving due consideration to the serious issue.
The incidents (of gang rape and crime against women) take place in some cases because of the impact of these sites.
People make videos of heinous acts (rape) against girls and women and get them
uploaded on social media such as WhatsApp, Facebook etc. Such content, which seriously affects the minds of children and youths, has been found to be a factor responsible for crimes (against women).
Long-term use of such content negatively affects the minds of some people, which gives rise to social problems and increases the number of cases of crime against women.
A civil court in Rome has ruled that Facebook must immediately reactivate the account of the Italian neo-fascist party CasaPound Italia and pay the group 800 euros for each day the account has been closed.
Facebook shut the party's account, which had
240,000 followers, along with its Instagram page in early September. A Facebook spokesperson told the Ansa news agency at the time: Persons or organisations that spread hatred or attack others on the basis of who they are will not have a place on
Facebook and Instagram.
Facebook must also pay 15,000 euro in legal costs. The judge reportedly ruled that without Facebook, the party was excluded (or extremely limited) from the Italian political debate.
A Facebook spokesperson said the company was aware of the court's decision and is reviewing it carefully.
A radical overhaul of Australia's censorship and classification laws alongside reforms to the Privacy Act have been revealed.
On Wednesday and Thursday, Communications Minister Paul Fletcher confirmed that Australia's eSafety
Commissioner will be handed significant censorship powers.
The new government regime includes the development of a uniform classification framework across all media platforms that would replace the current system of Refused Classification, X, R and MA15+ ratings.
The legal basis was introduced via the hasty introduction of the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 in the wake of the Christchurch Mosque attacks. The law compelled ISPs, content service providers and
hosting service providers to block such content if called upon to do so by the Australian Federal Police.
And now it seems that this will provide the basis for the eSafety Commissioner to coordinate internet censorship in Australia. A key problem to date has been that people haven't been sure who they can complain to and who enforces action. Censorable content has been divided into two categories.
Class 1, seriously harmful content, will be able to be reported directly to the eSafety Commissioner. The
Commissioner will investigate the content and will be able to issue a takedown notice for seriously harmful content, regardless of where it is hosted, and refer it to law enforcement and international networks if it is sufficiently serious, the
government's fact sheet says. Where takedown notices are not effective, the ancillary service provider notice scheme will be able to be used to request the delisting or de-ranking of material or services.
Class 2 content will be defined as content that
would otherwise be classified as RC, X18+, R18+ and MA15+ under the National Classification Code. This includes high impact material, like sexually explicit or realistically simulated violent content, through to content that is unlikely to
disturb most adults but is still not suitable for children, like coarse language, or less explicit violence. The most appropriate response to this kind of content will depend on its nature. eSafety would have graduated sanctions available to address
breaches of industry codes under the online content scheme, including warnings, notices, undertakings, remedial directions and civil penalties, the government fact sheet says.
After being heavily fined for child privacy issues about personalised advertising on YouTube, Google is trying to get its house in order. It will soon be rolling out new rules that prevent the profiling of younger viewers for advertising purposes.
The restrictions on personalised advertising will negatively affect the livelihoods of many YouTube creators. It is pretty clear that Peppa Pig videos will be deemed off limits for personalised adverts, but a more difficult question is what about more
general content that appeals to adults and children alike?
YouTube is demanding clearer guidelines about this situation from the government internet privacy censors of the Federal Trade Commission (FTC). The law underpinning the requirements is known
as COPPA [the Children's Online Privacy Protection Act]. YouTube wrote to the FTC asking:
We believe there needs to be more clarity about when content should be considered primarily child-directed
Creators are also writing to the FTC out of fear that the changes and vague guidance could destroy their channels.
The FTC has responded by initiating a public consultation.
In comments filed with the FTC on Monday, YouTube invoked arguments raised by
creators, writing that adult users also engage with videos that could traditionally be considered child-directed, like crafting videos and content focused on collecting old toys:
Sometimes, content that isn't intentionally
targeting kids can involve a traditional kids activity, such as DIY, gaming and art videos. Are these videos 'made for kids,' even if they don't intend to target kids? This lack of clarity creates uncertainty for creators.
By way of comparison, the British advert censors at the ASA have a basic rule that if the proportion of kids watching is greater than 25% of the total audience then child protection rules kick in. Presumably the figure of 25% is about what one would expect for content that appeals to all ages equally.
Yesterday the US Senate Judiciary Committee held a hearing on encryption and lawful access. That's the fanciful idea that encryption providers can somehow allow law enforcement access to users' encrypted data while otherwise preventing the bad guys
from accessing this very same data.
But the hearing was not inspired by some new engineering breakthrough that might make it possible for Apple or Facebook to build a secure law enforcement backdoor into their encrypted devices
and messaging applications. Instead, it followed speeches, open letters, and other public pressure by law enforcement officials in the U.S. and elsewhere to prevent Facebook from encrypting its messaging applications, and more generally to portray
encryption as a tool used in serious crimes, including child exploitation. Facebook has signaled it won't bow to that pressure. And more than 100 organizations including EFF have called on these law enforcement officials to reverse course and avoid
gutting one of the most powerful privacy and security tools available to users in an increasingly insecure world.
Many of the committee members seemed to arrive at the hearing convinced that they could legislate secure backdoors.
Among others, Senators Graham and Feinstein told representatives from Apple and Facebook that they had a responsibility to find a solution to enable government access to encrypted data. Senator Graham commented:
My advice to you is to get on with it, because this time next year, if we haven't found a way that you can live with, we will impose our will on you.
But when it came to questioning witnesses, the senators had trouble
establishing the need for or the feasibility of blanket law enforcement access to encrypted data. As all of the witnesses pointed out, even a basic discussion of encryption requires differentiating between encrypting data on a smartphone, also called
encryption at rest, and end-to-end encryption of private chats, for example.
As a result, the committee's questioning actually revealed several points that undercut the apocalyptic vision painted by law enforcement officials in
recent months. Here are some of our takeaways:
There's No Such Thing As an Unhackable Phone
The first witness was Manhattan District Attorney Cyrus Vance, Jr., who has called for Apple and Google to
roll back encryption in their mobile operating systems. Yet by his own statistics, the DA's office is able to access the contents of a majority of devices it encounters in its investigations each year. Even for those phones that are locked and encrypted,
Vance reported that half could be accessed using in-house forensic tools or services from outside vendors. Although he stressed both the high cost and the uncertainty of these tools, the fact remains that device encryption is far from an insurmountable
barrier to law enforcement.
As we saw when the FBI dramatically lowered its own estimate of unhackable phones in 2017, the level of security of these devices is not static. Even as Apple and Google patch vulnerabilities that might
allow access, vendors like Cellebrite and Grayshift discover new means of bypassing security features in mobile operating systems. Of course, no investigative technique will be completely effective, which is why law enforcement has always worked every
angle it can. The cost of forensic tools may be a concern, but they are clearly part of a variety of tools law enforcement use to successfully pursue investigations in a world with widespread encryption.
Lawful Access to
Encrypted Phones Would Take Us Back to the Bad Old Days
Meanwhile, even as Vance focused on the cost of forensic tools to access encrypted phones, he repeatedly ignored why companies like Apple began fully encrypting their
devices in the first place. In a colloquy with Senator Mike Lee, Apple's manager of user privacy Erik Neuenschwander explained that the company's introduction of full disk encryption in iOS in 2014 was a response to threats from hackers and criminals
who could otherwise access a wealth of sensitive, unencrypted data on users' phones. On this point, Neuenschwander explained that Vance was simply misinformed: Apple has never held a key capable of decrypting encrypted data on users' phones.
Neuenschwander explained that he could think of only two approaches to accomplishing Vance's call for lawful access, both of which would dramatically increase the risks to consumers. Either Apple could simply roll back encryption on
its devices, leaving users exposed to increasingly sophisticated threats from bad actors, or it could attempt to engineer a system where it did hold a master key to every iPhone in the world. Regarding the second approach, Neuenschwander said as a
technologist, I am extremely fearful of the security properties of such a system. His fear is well-founded; years of research by technologists and cryptographers confirm that key escrow and related systems are highly insecure at the scale and complexity
of Apple's mobile ecosystem.
End-to-End Encryption Is Here to Stay
Finally, despite the heated rhetoric directed by Attorney General Barr and others at end-to-end encryption in messaging
applications, the committee found little consensus. Both Vance and Professor Matt Tait suggested that they did not believe that Congress should mandate backdoors in end-to-end encrypted messaging platforms. Meanwhile, Senators Coons, Cornyn, and others
expressed concerns that doing so would simply push bad actors to applications hosted outside of the United States, and also aid authoritarian states who want to spy on Facebook users within their own borders. Facebook's director for messaging privacy Jay
Sullivan discussed ways that the company will root out abuse on its platforms while removing its own ability to read users' messages. As we've written before, an encrypted Facebook Messenger is a good thing, but the proof will be in the pudding.
Ultimately, while the Senate Judiciary Committee hearing offered worrying posturing on the necessity of backdoors, we're hopeful that Congress will recognize what a dangerous idea legislation would be in this area.
Comment: Open Rights Group joins international outcry over UK government calls to access private messages
Open Rights Group has joined dozens of other organizations in signing an open letter to the UK government to express significant concerns about its recent statements against encryption.
The UK Home Secretary, Priti Patel,
has joined her US counterparts in demanding weaker encryption and asking internet companies to design digital back doors into their messaging services. The UK government suggests stronger capabilities to monitor private messages will aid in fighting terrorism and child abuse. ORG disagrees, arguing that alternative approaches must be used as the proposed measures will weaken the security of every internet user.
ORG is concerned that this attack on encryption forms a pattern
of attacks on digital privacy and security by the UK government. Only last week leaked documents showed that the UK wants to give the US access to NHS records and other personal information, in a free flow of data between the two countries.
The open letter was also addressed to US and Australian authorities, and was coordinated by the US-based Open Technology Institute and was signed, among others, by Amnesty International, Article 19, Index on Censorship, Privacy
International and Reporters Without Borders.
Javier Ruiz Diaz, Policy Director for Open Rights Group, said:
The Home Secretary wants to be able to access our private messages in WhatsApp and
similar apps, demanding that companies remove the technical protections that keep out fraudsters and other criminals. This is wrong and will make the internet less safe. Surveillance measures should be targeted and not built into the apps used by
millions of people to talk to their friends and family.
Comment: Facebook has also responded to UK/US/Australian government calls for back doors
As the Heads of WhatsApp and Messenger, we are writing in response to your public letter addressing our plans to strengthen private messaging for our customers. You have raised important issues that could impact the future of free societies in the
digital age and we are grateful for the opportunity to explain our view.
We all want people to have the ability to communicate privately and safely, without harm or abuse from hackers, criminals or repressive regimes. Every day,
billions of people around the world use encrypted messages to stay in touch with their family and friends, run their small businesses, and advocate for important causes. In these messages they share private information that they only want the person they
message to see. And it is the fact that these messages are encrypted that forms the first line of defense, as it keeps them safe from cyber attacks and protected from falling into the hands of criminals. The core principle behind end-to-end encryption is
that only the sender and recipient of a message have the keys to unlock and read what is sent. No one can intercept and read these messages - not us, not governments, not hackers or criminals.
We believe that people have a right
to expect this level of security, wherever they live. As a company that supports 2.7 billion users around the world, it is our responsibility to use the very best technology available to protect their privacy. Encrypted messaging is the leading form of
online communication and the vast majority of the billions of online messages that are sent daily, including on WhatsApp, iMessage, and Signal, are already protected with end-to-end encryption.
Cybersecurity experts have
repeatedly proven that when you weaken any part of an encrypted system, you weaken it for everyone, everywhere. The backdoor access you are demanding for law enforcement would be a gift to criminals, hackers and repressive regimes, creating a way for
them to enter our systems and leaving every person on our platforms more vulnerable to real-life harm. It is simply impossible to create such a backdoor for one purpose and not expect others to try and open it. People's private messages would be less
secure and the real winners would be anyone seeking to take advantage of that weakened security. That is not something we are prepared to do.
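The core principle the letter describes -- that only the sender and recipient ever hold the keys -- rests on key agreement protocols such as Diffie-Hellman. The sketch below illustrates the idea with deliberately tiny, insecure numbers; real messengers like WhatsApp use the Signal protocol over elliptic-curve groups, so this is an illustration of the mathematics, not of any actual implementation:

```python
# Toy Diffie-Hellman key agreement: both parties derive the same
# shared secret without it ever crossing the wire. The parameters
# here are illustrative only -- far too small to be secure.
import secrets

P = 4294967291  # a small prime modulus (largest prime below 2**32)
G = 5           # public generator

def keypair():
    """Return (private, public) where public = G**private mod P."""
    private = secrets.randbelow(P - 2) + 1
    return private, pow(G, private, P)

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)

assert alice_secret == bob_secret  # identical key at both ends
# An eavesdropper sees only G, P, alice_pub and bob_pub; recovering
# the shared secret from those is the discrete logarithm problem.
```

Because the shared secret is never transmitted, there is nothing for an intermediary -- the platform included -- to hand over, which is exactly why a "backdoor" requires weakening the scheme itself.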
The Global Expression Report 2018-19 shows that global freedom of expression is at its lowest for a decade. Gains that were made between 2008 -- 2013 have been eroded over the last five years. Repressive responses to street protests are contributing to the
decline in freedom of expression around the world. A rise in digital authoritarianism sees governments taking control of internet infrastructure, increasing online surveillance and controlling content. The numbers of journalists, communicators and human
rights defenders being imprisoned, attacked and killed continues to increase. 66 countries -- with a combined population of more than 5.5 billion people -- saw a decline in their overall freedom of expression environment last decade.
Comment from Thomas Hughes, Executive Director of ARTICLE 19:
"Almost ten years ago, the Arab Spring offered hope to people across the world that repressive governments would not be able
to retain power when faced with protestors, empowered as never before with access to information and digital tools for organising.
"Today, protests continue to take place around the world but our report shows that global
freedom of expression remains at a ten-year low and that many of the gains made in the earlier part of the decade have been lost.
"Some of these threats are not new: governments are still using state violence and judicial
harassment to close down protests. Journalists, communicators and human rights defenders are still being imprisoned, attacked and killed with impunity. But we are also seeing a rise in digital authoritarianism where governments are using digital
technology to surveil their citizens, restrict content and shut down communications."
"Governments need to take action to reverse this trend and uphold their citizens' right to freedom of expression."
The UK ISP BT has become the first of the major broadband providers to trial their own DNS over HTTPS resolver, which encrypts Domain Name System (DNS) requests.
This is a response to Firefox offering its own choice of encrypted DNS resolver, which would effectively evade BT's current unencrypted DNS resolver. That unencrypted resolver allows the UK government to monitor and log people's internet use; block websites that are considered 'harmful'; snitch people up to the police for politically incorrect comments; and snitch people up to copyright trolls over dodgy file sharing.
However BT's new service will allow people to continue using website blocking for parental control whilst being a lot safer from third-party snoopers on their networks.
BT has made the following statement about its experimental new service:
BT are currently investigating roadmap options to uplift our broadband DNS platform to support improvements in DNS security -- DNSSEC, DNS over TLS (DoT) and DNS over
HTTPS (DoH). To aid this activity, and in particular to gain operational deployment insights, we have enabled an experimental DoH trial capability.
We are initially experimenting with an open resolver, but our plan is to move to a closed resolver only available to BT customers.
The BT DoH trial recursive resolver can be reached at https://doh.bt.com/dns-query/
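For the curious, a DoH GET request simply carries an ordinary binary DNS query, base64url-encoded without padding, in a `dns` query parameter, as specified by RFC 8484. The sketch below builds such a request in Python against BT's trial endpoint quoted above; whether that resolver is still reachable is not guaranteed, and the example only constructs the URL rather than fetching it:

```python
# Build a minimal wire-format DNS query (RFC 1035) and the matching
# RFC 8484 DoH GET URL for BT's trial resolver.
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Return a wire-format DNS query for hostname (qtype 1 = A record)."""
    # Header: id=0 (RFC 8484 recommends 0 to aid HTTP caching),
    # flags=0x0100 (recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"                               # root label ends the name
    question += struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
# base64url with '=' padding stripped, as RFC 8484 requires for GET
dns_param = base64.urlsafe_b64encode(packet).rstrip(b"=").decode()
url = f"https://doh.bt.com/dns-query/?dns={dns_param}"
```

Fetching that URL with an HTTPS client sending an `Accept: application/dns-message` header should return a binary DNS answer, assuming the trial service is running.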
Twitter's updated terms commencing from 1st January 2020 will formalise throttling and shadow banning of content
The idea of a shadow ban is that someone is banned but doesn't know it: they keep posting, but no one else sees their posts.
Twitter's new terms of service state that the company may limit distribution or visibility of any Content on the service.
This new line of text suggests that Twitter will legally be able to start throttling (intentionally
suppressing or hiding content) or shadow banning (intentionally suppressing or hiding a person's content without their knowledge) posts on the platform from the start of 2020. The full updated sentence now reads:
may also remove or refuse to distribute any Content on the Services, limit distribution or visibility of any Content on the service, suspend or terminate users, and reclaim usernames without liability to you.
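The mechanics of a shadow ban can be sketched as a feed filter that hides a flagged author's posts from everyone except the author themselves. This is a hypothetical illustration of the concept, not Twitter's actual implementation:

```python
# Hypothetical sketch of shadow banning: flagged authors still see
# their own posts, but those posts are filtered out of everyone
# else's feed -- so the ban is invisible to the banned user.
shadow_banned = {"spammer42"}

posts = [
    {"author": "alice", "text": "hello world"},
    {"author": "spammer42", "text": "buy my stuff"},
]

def visible_feed(viewer: str) -> list:
    """Return the posts that `viewer` is allowed to see."""
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

# The banned author sees both posts and suspects nothing...
assert len(visible_feed("spammer42")) == 2
# ...while every other viewer sees only alice's post.
assert [p["author"] for p in visible_feed("bob")] == ["alice"]
```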
Twitter's current terms of service contain no reference to content throttling or shadow banning.
The former head of the Australian Cyber Security Centre (ACSC) and former 'eSafety Commissioner', Alastair MacGibbon, has told the House of Representatives Standing Committee On Social Policy And Legal Affairs looking into age verification for online wagering
and online pornography , that any form of online age verification would require a biometric component. He said:
I think biometrics -- with all of the problems associated with biometrics, and they are not a silver bullet --
is the only way you could really have an online system.
A scenario relying solely on Home Affairs' Face and Document Verification Services to provide proof of age would not work on its own, due to the ability of children to take, for instance, a driver's licence and verify it with the system.
What will be harder for the child is to get my face in front of the camera and use it for the purposes of proof of age, he said on Friday.
I'm not advocating for it to be used as such ...BUT... it could be used as a way of saying, 'This face that's now in front of this camera is attached to a driver's licence and a passport in Australia, and that person is
over the age of 18'.
He was not very sympathetic to porn viewers who may end up being victims of hackers, fraud, identity crime, or blackmail. He added:
Australians need to accept that there is no
such thing as a completely secure connected device, that there will be failures, and everything in life is about balancing value and risk.
You do run the risk that Australians who have a privacy concern will be forced into darker
parts of the web to avoid online verification, and that will be an unintended consequence of any such scheme.
Well with an 'eSafety Commissioner' like that, I think Australian internet users should be getting a little bit nervous.
Amazon has come under fire for selling T-shirts glorifying the death flights of Chile's military dictatorship in which leftwing opponents of the regime were dropped from helicopters in an attempt to hide their murders.
More than 3,000 people were
killed or forcibly disappeared during Pinochet's 1973-1990 dictatorship. It was revealed that at least 120 of them were later thrown to their deaths from helicopters into Chile's ocean, lakes and rivers.
T-shirts making light of such atrocities have
become popular among the US far right, and were openly on sale on Amazon with slogans such as Free Helicopter Rides.
Chilean author Diamela Eltit told the Guardian:
It is unbearable for people like me who had to
endure that time when people were thrown alive into the sea from helicopters. This is not only hurtful, it is also of incomprehensible cruelty. It shows how the worst part of humanity can be absorbed by the market and transformed into an object of ...
Most of the garments have now disappeared from the website.
The head of TikTok is reportedly planning a trip to Washington, D.C., next week to meet with lawmakers who have harshly criticized the app over its purported ties to the Chinese government and concerns over censorship and privacy.
This appears to be
the first time that the TikTok chief, Alex Zhu, has been called to account for the short video-sharing platform. TikTok has become an oft-discussed target among those in the US government, who recently opened a national security investigation and have
questioned how close the relationship is between the platform and its China-based parent company, ByteDance.
TikTok has been downloaded more than 1.5 billion times globally, an indicator of its rapid rise as a platform -- especially loved by teens
-- for creating and sharing short videos and launching the latest viral memes across the internet.
TikTok has faced increasing scrutiny over ties to its parent company ByteDance, a $75 billion firm based in China. TikTok has
consistently defended itself by asserting that none of its moderators are based in China, and that no foreign government asks the platform to censor content. However when pro-democracy protests broke out in Hong Kong earlier this year, TikTok was
curiously devoid of any hints of unrest, and videos instead painted a prettier picture.
Pinterest and The Knot, the two most popular websites for wedding inspiration and planning, have now decided to do away with plantation weddings. These seem to be the US equivalent of weddings at stately homes in the UK. Except of course that many of the
US homes, especially in the south, have a historic connection to slavery.
Under pressure from a campaign group known as Color of Change, both Pinterest and The Knot have started cracking down on plantation wedding venues.
The Knot hasn't banned any plantation venues from the platform, but has restricted how they can describe themselves by introducing new guidelines. The chief marketing officer of the wedding planning website, Dhanusha Sivajee, said
that plantations can no longer use language that glorifies, celebrates, or romanticizes Southern plantation history.
Pinterest has taken things further by restricting all content around plantation weddings, and is also said to be working on de-indexing its plantation wedding content from Google searches. A Pinterest spokesperson said:
Weddings should be a symbol of love and unity. Plantations represent
none of those things. We are working to limit the distribution of this content and accounts across our platform, and continue to not accept advertisements for them.
Instagram is actively considering bringing in gambling app-style full identity verification in the name of preventing underage children from joining.
Vishal Shah, Instagram's head of product, said the social media site would not take asking new
users to submit proof of age off the table as it looked at ways to tighten up how it verifies users' ages.
His comments come as Instagram announced it would now start asking all new members to give their date of birth when signing up. The social
network also said it would soon start using the date of birth users had given on Facebook to verify ages on Instagram.
Currently, Instagram asks if new users are over or under 18, and then only asks for a date of birth from those who say they are 17 or under.
Parent company Facebook said:
We understand not everyone will share their actual age. How best to collect and verify the age of people who use online services is something that the whole industry is exploring
and we are committed to continuing to work with industry and governments to find the best solutions. Nobody will have their date of birth publicly displayed on their Instagram profile.
It seems that Netflix has been stealing a march on the Irish Film Classification Office (IFCO) by using a joint BBFC/Netflix rating system for Netflix users in Ireland.
Back in March 2019 the BBFC agreed a rating system with Netflix under which Netflix determines age ratings for programmes and films using the BBFC guidelines. The BBFC retains just a quality control role, to ensure that Netflix is following the guidelines.
It is reported that these age ratings are now being reused for
Netflix users in Ireland. And Newstalk has been inquiring whether the IFCO is happy with this arrangement.
The IFCO responded saying it has no legal remit over non-physical product in Ireland. However Ger Connolly, the director of film classification, said:
I do intend to engage with Apple TV and other providers to examine if there is a mechanism to cooperate for the benefit of Republic of Ireland residents.
YouTube has announced a change to how it treats violence in video games, explaining: We know there's a difference between real-world violence and scripted or simulated violence -- such as what you see in movies, TV shows, or video games -- so we want to make sure we're enforcing our
violent or graphic content policies consistently.
Starting on 2nd December, scripted or simulated violent content found in video games will be
treated the same as other types of scripted content.
What does this mean for Gaming Creators?
Future gaming uploads that include scripted or simulated violence may be approved instead of being age-restricted.
There will be fewer restrictions for violence in gaming, but this policy will still maintain our high bar to protect audiences
from real-world violence.
We may still age-restrict content if violent or gory imagery is the sole focus of the video. For instance, if the video focuses entirely on the most graphically violent part of a video game.
In October last year, an Indian court ordered the government to reinstate its earlier ban on 827 porn websites including PornHub and xVideos. Porn companies initially put up a fight, launching mirror URLs such as pornhub.net after pornhub.com became inaccessible. But a few months in, major internet service providers Bharti Airtel and Reliance Jio also started blocking the mirror URLs too.
However Indians haven't been taking the censorship lying down. Mobile downloads of virtual
private network (VPN) apps in India grew 405% to 57 million in the 12 months starting October 2018, as analysed by London-based Top10VPN, a website that reviews VPNs.
The vast majority of users in India are using free VPN services, which are in effect not free -- they often fund operations by selling user data. But the use of paid VPN services remains limited in India.
But not all Indian users have caught on to VPNs. Nearly half of the visitors of the banned websites have merely shifted to
other adult content sites that aren't blocked in the country, such as RedPorn and SexVid, according to research from the analytics firm SimilarWeb.
I always wonder if this response is one of the reasons why age verification for porn was cancelled
by the British Government. The security services surely didn't want vast numbers of people to start using VPNs. They needed the AV services to be easy and safe enough for porn users to be willing to use. And in the end most of the methods on offer were ...
Singapore's new law designed to counter fake news is now fully in effect. It allows the country's government to issue corrections of information that it deems to be false, and fine those publishing it up to an equivalent of $730,000 and send them to
prison for up to ten years.
Singapore is now attempting to apply the new legislation globally, by ordering Facebook to correct a post made by a user in Australia. This is one of the points the critics of the legislation have been making ever since it
was passed in May -- that it will likely be used to stifle freedom of expression not only in Singapore but also beyond its borders.
The law, officially the Protection from Online Falsehoods and Manipulation Act, is described as one of the toughest
in the world -- while the order dispatched to Facebook marks the first time Singapore has attempted to directly influence a social media platform and content hosted on it.
The supposed 'fake news' in the first invocation of the law involved
unprovable claims in an argument between the government and a Singaporean critic of the government, now based in Australia. It seems unlikely that Facebook can substantiate or arbitrate the actual truth of the claims.
In this case, Facebook has added a
correction notice to the disputed post saying:
Facebook is legally required to tell you that the Singapore government says this post has false information.