Facebook is moving ahead with plans to implement end-to-end encryption on Facebook Messenger and Instagram to protect users from snoopers, censors, spammers, scammers and thieves.
But children's campaign groups are opposing these safety measures on
the grounds that the encryption will also protect those illegally distributing child abuse material.
About 100 organisations, led by the NSPCC, have signed an open letter warning the plans will undermine efforts to catch abusers.
Home Secretary Priti
Patel said she fully supported the campaigners, presumably also thinking of the state's wider remit to snoop on people's communications.
End-to-end encryption, already used on Facebook-owned WhatsApp, means no-one, including the company that owns the
platform, can see the content of sent messages. The technology will make it significantly less likely that hackers will be able to intercept messages, going a long way to protect users from phishing and cyber-stalking. And of course child internet users
will also benefit from these protections.
The campaign groups oppose such protection, arguing:
We urge you to recognise and accept that an increased risk of child abuse being facilitated on or by Facebook is not a
reasonable trade-off to make.
A spokesman for Facebook said protecting the wellbeing of children on its platform was critically important to it. He said:
We have led the industry in safeguarding
children from exploitation and we are bringing this same commitment and leadership to our work on encryption.
We are working closely with child-safety experts, including NCMEC [the US National Center for Missing and Exploited
Children], law enforcement, governments and other technology companies, to help keep children safe online.
In 2018, Facebook made 16.8 million reports of child sexual exploitation and abuse content to the NCMEC. The National Crime Agency
said this had led to more than 2,500 arrests and 3,000 children made safe.
Recent attacks on encryption have diverged. On the one hand, we've seen Attorney General William Barr call for "lawful access" to encrypted communications, using arguments that have barely changed since the 1990s. But we've also seen
suggestions from a different set of actors for more purportedly "reasonable" interventions, particularly the use of client-side scanning to stop the transmission of contraband files, most often child exploitation imagery (CEI).
Sometimes called "endpoint filtering" or "local processing," this privacy-invasive proposal works like this: every time you send a message, software that comes with your messaging app first checks it against a
database of "hashes," or unique digital fingerprints, usually of images or videos. If it finds a match, it may refuse to send your message, notify the recipient, or even forward it to a third party, possibly without your knowledge.
On their face, proposals to do client-side scanning seem to give us the best of all worlds: they preserve encryption, while also combating the spread of illegal and morally objectionable content.
Unfortunately, it's not that simple. While it may technically maintain some properties of end-to-end encryption, client-side scanning would render the user privacy and security guarantees of encryption hollow. Most important, it's impossible to build a
client-side scanning system that can only be used for CEI. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger's encryption itself and open the door to broader abuses. This post is a technical
deep dive into why that is.
A client-side scanning system cannot be limited to CEI through technical means
Imagine we want to add client-side scanning to WhatsApp. Before encrypting and sending an
image, the system will need to somehow check it against a known list of CEI images.
The simplest possible way to implement this: local hash matching. In this situation, there's a full CEI hash database inside every client device.
The image that's about to be sent is hashed using the same algorithm that hashed the known CEI images, then the client checks to see if that hash is inside this database. If the hash is in the database, the client will refuse to send the message (or
forward it to law enforcement authorities).
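The local-matching scheme above can be sketched in a few lines of Python. This is a minimal illustration, assuming a plain SHA-256 blocklist with made-up entries; deployed systems use perceptual hashes (e.g. PhotoDNA) so that resized or re-encoded images still match, but the control flow is the same:

```python
import hashlib

# Hypothetical blocklist: SHA-256 fingerprints of banned images. In the
# local-matching design, this database ships inside every client.
BLOCKED_HASHES = {
    # sha256(b"foo"), standing in for a banned image
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def scan_before_send(image_bytes: bytes) -> bool:
    """Return True if the image may be sent, False if it matches the blocklist."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint not in BLOCKED_HASHES

print(scan_before_send(b"foo"))        # False: refuse to send (or report)
print(scan_before_send(b"cat photo"))  # True: encrypt and send as normal
```

Note that the client never needs to know what a blocked image actually is, only its hash, which is exactly why nothing in the code can constrain what the list contains.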
At this point, this system contains a complete mechanism to block any image content. Now, anyone with the ability to add an item to the hash database can require the client to block any
image of their choice. Since the database contains only hashes, and the hashes of CEI are indistinguishable from hashes of other images, code that was written for a CEI-scanning system cannot be limited to only CEI by technical means.
Furthermore, it will be difficult for users to audit whether the system has been expanded from its original CEI-scanning purpose to limit other images as well, even if the hash database is downloaded locally to client devices. Given
that CEI is illegal to possess, the hashes in the database would not be reversible.
This means that a user cannot determine the contents of the database just by inspecting it, only by individually hashing every potential image to
test for its inclusion--a prohibitively large task for most people. As a result, the contents of the database are effectively unauditable to journalists, academics, politicians, civil society, and anyone without access to the full set of images in the database.
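To make the auditing problem concrete: the only check available to someone holding the database is to hash images they already possess and probe for membership. A hypothetical sketch, with stand-in byte strings in place of real images:

```python
import hashlib

# The database as an auditor sees it: opaque fingerprints, nothing more.
database = {hashlib.sha256(img).hexdigest() for img in [b"entry-1", b"entry-2"]}

def audit_by_guessing(candidate_images):
    """Find which of the images you already hold are in the database.
    Entries for images you cannot guess stay invisible."""
    return [img for img in candidate_images
            if hashlib.sha256(img).hexdigest() in database]

# An auditor who happens to possess one listed image will find it...
print(audit_by_guessing([b"entry-1", b"holiday photo"]))  # [b'entry-1']
# ...but nothing in the hashes reveals what the other entry is.
```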
Client-side scanning breaks the promises of end-to-end encryption
Client-side scanning mechanisms will break the fundamental promise that encrypted messengers make to their users: the
promise that no one but you and your intended recipients can read your messages or otherwise analyze their contents to infer what you are talking about. Let's say that when the client-side scan finds a hash match, it sends a message off to the server to
report that the user was trying to send a blocked image. But as we've already discussed, the server has the ability to put any hash in the database that it wants.
Given that online content is known to follow long-tail
distributions, a relatively small set of images comprises the bulk of images sent and received. So, with a comparatively small hash database, an external party could identify the images being sent in a comparatively large percentage of messages.
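Concretely, the reverse lookup the server can mount might look like the following hypothetical sketch, with byte strings standing in for popular images. Because of the long tail, even a modest index covers much of real traffic:

```python
import hashlib

# Server-side reverse index over the most widely shared images (stand-ins).
popular_images = [b"meme-1", b"meme-2", b"news-photo"]
reverse_index = {hashlib.sha256(img).hexdigest(): img for img in popular_images}

def identify(reported_hash):
    """Turn a supposedly content-free hash report back into an image."""
    return reverse_index.get(reported_hash)

leaked = hashlib.sha256(b"meme-2").hexdigest()  # a hash the client reported
print(identify(leaked))  # b'meme-2'
```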
As a reminder, an end-to-end encrypted system is a system where the server cannot know the contents of a message, despite the client's messages passing through it. When that same server has direct access to effectively decrypt a
significant portion of messages, that's not end-to-end encryption.
In practice, an automated reporting system is not the only way to break this encryption promise. Specifically, we've been loosely assuming thus far that the
hash database would be loaded locally onto the device. But in reality, due to technical and policy constraints, the hash database would probably not be downloaded to the client at all. Instead, it would reside on the server. This
means that at some point, the hash of each image the client wants to send will be known by the server. Whether each hash is sent individually or a Bloom filter is applied, anything short of an ORAM-based system will have a privacy leakage directly to the
server at this stage, even in systems that attempt to block, and not also report, images. In other words, barring state-of-the-art privacy-preserving remote image access techniques that have a provably high (and therefore impractical) efficiency cost,
the server will learn the hashes of every image that the client tries to send.
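Nor does the Bloom-filter variant help, because the bit positions an image probes are a deterministic function of the image. In this illustrative sketch (filter size and hash count chosen arbitrarily), a server that precomputes the positions for candidate images can match a client's query without ever seeing the image itself:

```python
import hashlib

M = 1 << 20  # Bloom filter size in bits (illustrative)
K = 4        # number of hash functions (illustrative)

def bloom_positions(image_bytes: bytes):
    """The bit positions an image would probe in the filter: fully
    determined by the image, so they identify it just as a hash would."""
    return tuple(
        int.from_bytes(hashlib.sha256(bytes([i]) + image_bytes).digest()[:8], "big") % M
        for i in range(K)
    )

# The client sends only positions, never the image -- but the server can
# precompute positions for any candidate image and match them up.
candidates = {bloom_positions(img): img for img in [b"meme-1", b"meme-2"]}
query = bloom_positions(b"meme-2")
print(candidates.get(query))  # b'meme-2'
```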
Further arguments against client-side scanning
If this argument about image decryption isn't sufficiently
compelling, consider an analogous argument applied to the text of messages rather than attached images. A nearly identical system could be used to fully decrypt the text of messages. Why not check the hash of a particular message to see if it's a chain
letter, or misinformation? The setup is exactly the same, with the only change being that the input is text rather than an image. Now our general-purpose censorship and reporting system can detect people spreading misinformation... or literally any text
that the system chooses to check against. Why not put the whole dictionary in there, and therefore be able to decrypt any word that users type (in a similar way to this 2015 paper)? If a client-side scanning system were applied to the text of
messages, users would be similarly unable to tell that their messages were being secretly decrypted.
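The dictionary attack takes only a few lines. A hypothetical sketch with a five-word dictionary; scale the lookup table up to a full dictionary and every word a user types becomes recoverable:

```python
import hashlib

def h(word: str) -> str:
    return hashlib.sha256(word.encode()).hexdigest()

# Server-side: hash every dictionary word once (tiny sample here).
dictionary = ["meet", "me", "at", "noon", "protest"]
lookup = {h(w): w for w in dictionary}

# Client-side scanner reports per-word hashes instead of plaintext...
reported = [h(w) for w in "meet me at noon".split()]

# ...which the server inverts, word by word, with a simple lookup.
print([lookup[x] for x in reported])  # ['meet', 'me', 'at', 'noon']
```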
Regardless of what it's scanning for, this entire mechanism is circumventable by using an alternative client to the officially
distributed one, or by changing images and messages to escape the hash matching algorithm, which will no longer be secret once it's performed locally on the client's device.
These are just the tip of the iceberg of technical
critiques, to say nothing of the policy reasons, why we shouldn't build a censorship mechanism into a private, secure messenger.
UK police will be able to force US-based social media platforms to hand over users' messages, including those that are end-to-end encrypted, under a treaty that is set to be signed next month.
According to a report in The Times, investigations into
certain 'serious' criminal offenses will be covered under the agreement between the two countries.
The UK government has been imploring Facebook to create backdoors which would enable intelligence agencies to gain access to messaging platforms
for matters of national security.
The news of the agreement between the US and UK is sure to ramp up discussion of the effectiveness of end-to-end encryption when implemented by large corporations. If this report is confirmed and Facebook or the police
can indeed listen in on 'end-to-end encrypted' messages, then such implementations of encryption are worthless.
No, The New Agreement To Share Data Between US And UK Law Enforcement Does Not Require Encryption Backdoors
It's no secret many in the UK government want backdoored encryption. The UK wing of the Five
Eyes surveillance conglomerate says the only thing that should be absolute is the government's access to communications. The long-gestating Snooper's Charter frequently contained language mandating lawful access, the government's preferred nomenclature
for encryption backdoors. And officials have, at various times, made unsupported statements about how no one really needs encryption, so maybe companies should just stop offering it.
What the UK government has in the works now
won't mandate backdoors, but it appears to be a way to get its foot in the (back)door with the assistance of the US government. An agreement between the UK and the US -- possibly an offshoot of the Cloud Act -- would mandate the sharing of encrypted
communications with UK law enforcement, as Bloomberg reports.
Sharing information is fine. Social media companies have plenty of information. What they don't have is access to users' encrypted communications, at least in most
cases. Signing an accord won't change that. There might be increased sharing of encrypted communications but it doesn't appear this agreement actually requires companies to decrypt communications or create backdoors.
Two new encryption algorithms developed by the US NSA have been rejected by an international standards body amid accusations of threatening behavior.
The Simon and Speck cryptographic tools were designed for encryption of the Internet of Things and
were intended to become a global standard.
But the pair of techniques were formally rejected earlier this week by the International Organization for Standardization (ISO) amid concerns that they contained a backdoor that would allow US spies to break the
encryption. The process was also marred by complaints from encryption experts of threatening behavior from American snoops.
When some of the design choices made by the NSA were questioned by experts, the US response was to personally attack the
questioners. While no one has directly accused the NSA of inserting backdoors into the new standards, that was the clear suspicion, particularly when it refused to give what experts say was a normal level of technical detail. It took 3 years for the ISO
to extract technical details about the encryption. But by then the trust had been undermined and the vote went against the standards at a meeting in the US late last year.
Signal, an encrypted messaging app for mobile devices, had its service blocked in Egypt and the UAE.
Now Signal has responded by making a new release available in those territories that should make the censors think twice before reaching for the block.
The new Signal release uses a technique known as domain fronting. Many popular services and CDNs, such as Google, Amazon Cloudfront, Amazon S3, Azure, CloudFlare, Fastly, and Akamai can be used to access Signal in ways that look
indistinguishable from other uncensored traffic. The idea is that to block the target traffic, the censor would also have to block those entire services. With enough large scale services acting as domain fronts, disabling Signal starts to look like
disabling the internet. When users in the two countries send a Signal message, it will look like a normal HTTPS request to www.google.com. To block Signal messages, these countries would also have to block all of google.com.
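The trick can be illustrated without any network code. In this hypothetical sketch (the Signal endpoint name is invented), the server name sent in the clear during the TLS handshake and the Host header hidden inside the encrypted tunnel simply name different hosts:

```python
def build_fronted_request(front_domain: str, real_host: str, path: str):
    """Return (what the censor observes, the HTTP request hidden inside TLS)."""
    visible_sni = front_domain  # sent in cleartext in the TLS handshake
    inner_request = f"GET {path} HTTP/1.1\r\nHost: {real_host}\r\n\r\n"
    return visible_sni, inner_request

sni, request = build_fronted_request(
    "www.google.com", "signal.example.org", "/v1/messages")
print(sni)                              # www.google.com -- all a censor can block on
print("signal.example.org" in request)  # True: the real destination is encrypted
```

The front's edge servers terminate TLS and route on the inner Host header, which is why blocking the hidden service means blocking the front domain wholesale.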
Signal, the
messaging app that prides itself on circumventing government censorship, has a few new places where its flagship feature works. Last week it was Egypt, and now users in Cuba and Oman can send messages without fear of them being intercepted and altered by government censors.
The Hungarian ruling party wants to ban all working crypto. The parliamentary vice-president from Fidesz has asked parliament to:
Ban communication devices that [law enforcement agencies] are not able to surveil despite
having the legal authority to do so.
Since any working cryptographic system is one that has no known vulnerabilities, whose key length is sufficient to make brute force guessing impractical within the lifespan of the universe, this
amounts to a ban on all file-level encryption and end-to-end communications encryption, as well as most kinds of transport encryption (for example, if your browser makes an SSL connection to a server that the Hungarian government can't subpoena, it would
have no means of surveilling your communication).
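That "lifespan of the universe" claim is easy to sanity-check. A back-of-the-envelope sketch for a 128-bit key, generously assuming an attacker who can test a trillion keys per second:

```python
# Expected brute-force time for a 128-bit key: on average, half the
# keyspace must be searched before the right key is found.
keyspace = 2 ** 128
guesses_per_second = 10 ** 12          # a generously fast attacker
seconds_per_year = 60 * 60 * 24 * 365

expected_years = (keyspace / 2) / guesses_per_second / seconds_per_year
print(f"{expected_years:.2e} years")   # ~5.4e+18 years; the universe is ~1.4e10
```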
A draft copy of a US law to criminalize strong encryption has been leaked online. And the internet is losing its shit.
The proposed legislation hasn't been formally published yet: the document is still being hammered out by the Senate intelligence
select committee. The proposal reads:
The underlying goal is simple: when there's a court order to render technical assistance to law enforcement or provide decrypted information, that court order is carried out. No
individual or company is above the law. We're still in the process of soliciting input from stakeholders and hope to have final language ready soon.
The draft legislation, first leaked to Washington DC insider blog The Hill, is named
the Compliance with Court Orders Act of 2016, and would require anyone who makes or programs a communications product in the US to provide law enforcement with any data they request in an intelligible format, when presented with a court order.
The bill stems from Apple's refusal to help the FBI break into the San Bernardino shooter's iPhone, but goes well beyond that case. The bill would require companies to either build a backdoor into their encryption systems or use an
encryption method that can be broken by a third party.
One example of the tech community's response came from computer forensics expert Jonathan Zdziarski, who said:
The absurdity of this bill is beyond words. Due to
the technical ineptitude of its authors, combined with a hunger for unconstitutional governmental powers, the end result is a very dangerous document that will weaken the security of America's technology infrastructure.
At least two other
countries--Pakistan and Turkey--already have versions of such laws on the books. The Pakistan Telecommunications Authority has previously instructed the country's internet service providers to ban encrypted communication, though it's largely VPN use,
which can be used to circumvent location-based internet censorship, that has been actively restricted there, and WhatsApp is still popular. Turkey takes the anti-encryption law on its books more seriously, and used it to initially charge Vice journalists
arrested in southeastern Turkey in September 2015.
Meanwhile, France's National Assembly passed a bill in May to update its Penal Code to fine companies that don't find a way to undo their own encryption when served with a warrant in a terrorism
investigation. The French Senate version of this bill excludes this provision, and seven members from each house will now negotiate a compromise.
Thanks to the attention
brought to the importance of encryption via Apple vs FBI by Fight for the Future and other strong voices, the Compliance with Court Orders Act of 2016 - one of the worst national security bills ever drafted - is stalled.
Messaging app WhatsApp has announced that it has added encryption for all voice calls and file transfers for all users.
It renders messages generally unreadable if they are intercepted, for example by criminals or law enforcement. No doubt if the
security services throw all their computing might at a message then they may be able to decrypt it by brute force.
The Facebook-owned company said protecting private communication of its one billion users worldwide was one of its core beliefs. WhatsApp said:
The idea is simple: when you send a message, the only person who can read it is the person or group chat that you send that message to. No one can see inside that message. Not cybercriminals. Not
hackers. Not oppressive regimes. Not even us.
Users with the latest version of the app were notified about the change when sending messages on Tuesday. The setting is enabled by default.
Users should be aware that snoopers can
still see a whole host of non-content data about the communication, such as who was using the app, who was being called, and for how long.
Amnesty International called the move a huge victory for free speech:
WhatsApp's rollout of the Signal Protocol, providing end-to-end encryption for its one billion users worldwide, is a major boost for people's ability to express themselves and communicate without fear.
It is a huge victory for privacy and free speech, especially for activists and journalists who depend on strong and trustworthy communications to carry out their work without putting their lives at greater risk.
An open letter to the leaders of the world's governments, signed by organizations, companies, and individuals:
We encourage you to support the safety and security of users, companies, and
governments by strengthening the integrity of communications and systems. In doing so, governments should reject laws, policies, or other mandates or practices, including secret agreements with companies, that limit access to or undermine encryption and
other secure communications tools and technologies.
Governments should not ban or otherwise limit user access to encryption in any form or otherwise prohibit the implementation or use of encryption by grade or type;
Governments should not
mandate the design or implementation of "backdoors" or vulnerabilities into tools, technologies, or services;
Governments should not require that tools, technologies, or services are designed or developed
to allow for third-party access to unencrypted data or encryption keys;
Governments should not seek to weaken or undermine encryption standards or intentionally influence the establishment of encryption standards
except to promote a higher level of information security. No government should mandate insecure encryption algorithms, standards, tools, or technologies; and
Governments should not, either by private or public
agreement, compel or pressure an entity to engage in activity that is inconsistent with the above tenets.
Access Now, ACI-Participa, Advocacy for Principled Action in Government, Alternative Informatics Association, Alternatives, Alternatives Canada, Alternatives International,
American Civil Liberties Union, American Library Association, Amnesty International, ARTICLE 19, La Asociación Colombiana de Usuarios de Internet, Asociación por los Derechos Civiles, Asociatia pentru Tehnologie si Internet (ApTI), Association for
Progressive Communications (APC), Association for Proper Internet Governance, Australian Lawyers for Human Rights, Australian Privacy Foundation, Benetech, Bill of Rights Defense Committee, Bits of Freedom, Blueprint for Free Speech, Bolo Bhi, the Centre
for Communication Governance at National Law University Delhi, Center for Democracy and Technology, Center for Digital Democracy, Center for Financial Privacy and Human Rights, the Center for Internet and Society (CIS), Center for Media, Data and Society
at the School of Public Policy of Central European University, Center for Technology and Society at FGV Rio Law School, Chaos Computer Club, CivSource, Committee to Protect Journalists, Constitutional Alliance, Constitutional Communications, Consumer
Action, Consumer Federation of America, Consumer Watchdog, ContingenteMX, Courage Foundation, Críptica, Datapanik.org, Defending Dissent Foundation, Digitalcourage, Digitale Gesellschaft, Digital Empowerment Foundation, Digital Rights Foundation, DSS216,
Electronic Frontier Finland, Electronic Frontier Foundation, Electronic Frontiers Australia, Electronic Privacy Information Center, Engine, Enjambre Digital, Eticas Research and Consulting, European Digital Rights, Fight for the Future, Föreningen för
digitala fri- och rättigheter (DFRI), Foundation for Internet and Civic Culture (Thai Netizen Network), Freedom House, Freedom of the Press Foundation, Freedom to Read Foundation, Free Press, Free Press Unlimited, Free Software Foundation, Fundacion
Acceso, Future of Privacy Forum, Future Wise, Globe International Center, The Global Network Initiative (GNI), Global Voices Advox, Government Accountability Project, Hiperderecho, Hivos, Human Rights Foundation, Human Rights Watch, Institute for
Technology and Society of Rio (ITS Rio), Instituto Demos, the International Modern Media Institute (IMMI), International Press Institute (IPI), Internet Democracy Project, IPDANDETEC, IT for Change , IT-Political Association of Denmark, Jonction, Jordan
Open Source Association, Just Net Coalition (JNC), Karisma Foundation, Keyboard Frontline, Korean Progressive Network Jinbonet, Localization Lab, Media Alliance, Modern Poland Foundation, Movimento Mega, Myanmar ICT for Development Organization (MIDO),
Net Users' Rights Protection Association (NURPA), New America's Open Technology Institute, Niskanen Center, One World Platform Foundation, OpenMedia, Open Net Korea, Open Rights Group, Panoptykon Foundation, Paradigm Initiative Nigeria, Patient Privacy
Rights, PEN American Center, PEN International, Pirate Parties International, Point of View, Privacy International, Privacy Rights Clearinghouse, Privacy Times, Protection International, La Quadrature du Net, R3D (Red en Defensa de los Derechos
Digitales), R Street Institute, Reinst8, Restore the Fourth, RootsAction.org, Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC), Security First, SFLC.in, Share Foundation, Simply Secure, Social Media Exchange (SMEX),
SonTusDatos (Artículo 12, A.C.), Student Net Alliance, Sursiendo; Comunicación y Cultura Digital, Swiss Open Systems User Group /ch/open, TechFreedom, The Tor Project, Tully Center for Free Speech at Syracuse University, Usuarios Digitales, Viet Tan,
Vrijschrift, WITNESS, World Privacy Forum, X-Lab, Xnet, Zimbabwe Human Rights Forum