Online freedom in Russia continues to deteriorate, and the latest laws under consideration by the Russian Parliament look set to make matters worse.
MPs are currently pushing through legislation that would force all computers and mobile devices sold in Russia to come with a series of pre-installed applications, posing a massive threat to users' online security and privacy.
Now the Russian Duma (the lower house of parliament) is considering a law which it claims would protect Russian technology against competition from overseas tech companies.
But protectionism is the least of the Russian people's concerns if this law makes it onto the statute books. Compelling devices to come pre-installed with domestic apps offers the Putin
regime a wonderful opportunity to spy on every single Russian internet user and punish those who deviate from its exacting regulations.
Why not cut to the end game and ban it? If clear and informed consent is required, then very few will sign up; profiling has nothing positive to offer people, only negatives. See article from marketingtechnews.net
Hundreds of porn stars and sex workers had their Instagram accounts deleted this year, and many say that they're being held to a different standard than mainstream celebrities.
I should be able to model my Instagram account on
Sharon Stone or any other verified profile, but the reality is that doing that would get me deleted, says Alana Evans, president of the Adult Performers Actors Guild and one of the leading voices in the battle that adult stars are waging to stay on the platform.
Ms Evans' group has collected a list of more than 1,300 performers who claim that their accounts have been deleted by Instagram's content moderators for violations of the site's community standards, despite not showing
any nudity or sex.
They discriminate against us because they don't like what we do for a living, Ms Evans says.
Just last month WhatsApp sued an Israeli surveillance company, the NSO Group, in a US court. The case alleges that the messaging platform was compromised by NSO technology, specifically to insert its signature product -- spyware known as Pegasus -- on to at least 1,400 devices, which enabled government surveillance (an allegation that NSO Group rejects). With Pegasus in their hands, governments have access to the seemingly endless amount of personal data in our pockets. The University of Toronto's Citizen Lab has found the Pegasus spyware in use in 45 countries.
The global surveillance industry -- in which the NSO Group is just one of many dozens, if not hundreds, of companies -- appears to be out of control, unaccountable and unconstrained in providing governments with relatively low-cost access to the sorts of spying tools that only the most advanced state intelligence services were previously able to use.
The industry and its defenders will say this
is a price to pay for confronting terrorism. We must sacrifice some liberty to protect our people from another 9/11, they argue. As one well-placed person claimed to me, such surveillance is mandatory; and, what's more, it is complicated to protect privacy and human rights.
All I can say is, give me a break. The companies hardly seem to be trying -- and, more importantly, neither are the governments that could do something about it. In fact, governments have been happy to
have these companies help them carry out this dirty work. This isn't a question of governments using tools for lawful purposes and incidentally or inadvertently sweeping up some illegitimate targets: this is using spyware technology to target vulnerable
yet vital people whom healthy democracies need to protect.
The European Commission is struggling to agree how to extend internet censorship and control to US messaging apps such as Facebook's WhatsApp and Microsoft's Skype.
These services are run from the US and it is not as easy for European police to obtain, say, tracking or user information from them as it is from more traditional telecoms services.
The Commission has been angling towards applying the rules controlling national telecoms companies to these US 'OTT' messaging services. Extended ePrivacy regulation
was the chosen vehicle for new censorship laws.
But now it is reported that the EU countries have yet to find agreement on such issues as tracking users' online activities, provisions on detecting and deleting child pornography and of course how
to further the EU's silly game of trying to see how many times a day EU internet users are willing to click consent boxes without reading reams of terms and conditions.
EU ambassadors meeting in Brussels on Friday again reached an impasse, EU
officials said. Tech companies and some EU countries have criticized the ePrivacy proposal for being too restrictive, putting them at loggerheads with privacy activists who back the plan.
No doubt the censorship plans will be resuming soon.
Nigerian lawmakers have proposed legislation that would hit Internet users with steep fines or jail time for spreading what authorities decide is 'fake news'.
Under what is known as the social media bill, which the Nigerian Senate advanced last week,
police could arrest people whose posts are thought to threaten national security, sway elections or diminish public confidence in the government, according to the draft text.
Authorities could also cut the Internet access of those who violate the law.
Nigerian social media users are widely condemning the new internet censorship proposal.
Last year, the inventor of the web, Sir Tim Berners-Lee, called for governments, companies and citizens from across the world to take action to protect the web as a force for good.
Today, we stand together to launch the result of
that call: a new Contract for the Web.
Experts and citizens have come together -- bringing a diverse range of experiences and perspectives -- to build a global plan of action to make our online world safe and empowering for everyone.
Launching the Contract, Sir Tim said: The power of the web to transform people's lives, enrich society and reduce inequality is one of the defining opportunities of our time. But if we don't act now -- and act together
-- to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering that potential.
At this pivotal moment for the web, we have a shared responsibility to fight for the web we
want. Many of the most vocal campaigners on this issue have already recognised that this collaborative approach is critical.
Brett Solomon of Access Now said:
Only through real commitment and
concrete action from all members of the internet community -- especially governments and companies -- will we make the necessary reforms to put people and rights back at the center of the internet.
The Contract gives
us a roadmap -- embodied in 76 clauses -- to do that. For governments, the Contract requires them to ensure all their citizens can connect to the internet all of the time.
We have seen the damaging effect of internet shutdowns
around the world. The Contract makes clear that no one should be denied their right to full access to the web.
For companies, the Contract says they must make connectivity affordable and accessible to everyone, and to protect and
respect the rights and freedoms of people online.
To restore trust in the web and its power for good, people must be in control of their lives online, and crucially they must be empowered with clear and meaningful choices around
their data and privacy.
The Contract sets out policies and proposals to ensure companies place these considerations front of mind, and that none of their users are excluded from using and shaping the web.
And crucially, we all have a responsibility as web users to create the web that we want. The Contract calls on all citizens to build strong communities that respect civil discourse and human dignity.
Roya Mahboob, NewNow Leader and CEO of Digital Citizen Fund, said:
The Contract gives us concrete actions to build a web that works for future generations, especially girls and women. Women face
a disproportionate set of barriers in accessing education, setting up businesses or working outside the home across the globe. We need to see the web as a pathway to unleash their power. That is why The NewNow has taken part in the core group of
organisations developing the contract.
For the first time, we have a shared vision for the web we want and a roadmap for the policies and actions we need to get there. And we have a powerful new tool to hold companies
and governments to account -- to ensure they're living up to the commitments they make.
At launch, the Contract for the Web -- led by Berners-Lee's World Wide Web Foundation -- has the backing of over 160 organisations, including
Microsoft, Google, Electronic Frontier Foundation, DuckDuckGo, CIPESA, Access Now, Reddit, Facebook, Reporters Without Borders and Ranking Digital Rights. Thousands of individuals, hundreds of organisations and the governments of Germany, France and
Ghana all signed up to the Contract's founding principles.
The launch of the Contract is just the beginning of our fight for the web we want. But it is a critical milestone. In an era of fear about technology and the future, we
must celebrate vehicles for change and a hopeful future.
Thanks to the determination, dedication and drive of all those involved, we now have a Contract for the Web that can drive real change.
A Galway TD is to bring forward a bill in the Irish Parliament to prevent children from accessing pornography on phones.
Fianna Fáil spokesperson on Youth Affairs, Anne Rabbitte, is hoping to bring a bill before the Dáil in January.
The legislation would mean under-18s using pre-pay mobile phones would have to prove their age when accessing certain content. She says the bill means companies would have to apply an automatic adult filter that would require age verification before being removed.
The professional body for UK directors has released its first set of guidelines for directing nudity and simulated sex in TV and film.
Directors UK has advised a ban on full nudity in any audition or call-back and no semi-nudity in first auditions, and has instead suggested performers wear a bikini or trunks and bring a chaperone.
The group also suggested that if a recall requires semi-nudity, the performer and their agent must have 48 hours' notice and the full script.
And that the production must also obtain explicit written consent from the performer prior to them being filmed or photographed nude or semi-nude.
The release of guidelines follows the #MeToo movement, and the revelation that some in the industry demanded sexual favours for work.
It all seems reasonable enough, but a feminist columnist in the Guardian is rather hoping that the rules
will lead to the end of the nude scene. Barbara Ellen writes in an article from theguardian.com :
All of which is commendable, but shouldn't audiences also change their attitudes? As it is, certain men weirdly seem to presume that they have a right to see women naked. Guys, calm down -- you bought a television
subscription or a cinema ticket, not a VIP seat at a lap-dancing show.
Let's face it, most nude scenes are gratuitous -- even when integral to the story, nudity could usually be suggested without anyone actually being naked. Yet
here we are, two years since #MeToo, and actresses are still not only having to strip but being denounced for hating doing it. While on-screen nudity is a choice, and some are fine about it, too many others feel uncomfortable and obliged.
Perhaps the new guidelines will help people such as Clarke in the simplest, most effective way possible -- making it a damn sight more difficult to justify asking them to get undressed in the first place.
Access to the internet is gradually being restored in Iran after an unprecedented five-day shutdown that cut its population off from the rest of the world and suppressed news of the deadliest unrest since the country's 1979 revolution.
The blackout that commenced last Friday is part of a growing trend of governments interfering with the internet to curb violent unrest, but also legitimate dissent.
The internet-freedom group Access Now recorded 75 internet outages in 2016, which more
than doubled to 196 last year.
But Iran's restriction of the internet this week was something more sophisticated and alarming, researchers say. Iranians were cut off from the global internet, but internally, networks appeared to be functioning
relatively normally. The Islamic Republic managed to successfully wall its citizens off from the world, without taking down the internet entirely.
Iran, Russia and of course China have all been taking action to design a local internet that can continue to operate when the plug to the outside world is pulled. This has taken years of preparation to ensure there are local services to replace the core US-based essentials of Google, Facebook, PayPal and co that are absolutely irreplaceable in most countries around the world.
And of course the effectiveness of the shutdown in Iran will surely spur on other oppressive regimes that like what they saw.
Russia has passed a law banning the sale of devices, including smartphones, computers and smart TVs, that are not pre-installed with Russian software. The law will come into force in July 2020.
Proponents of the legislation say it is aimed at
promoting Russian technology and making it easier for people in the country to use the gadgets they buy. But of course the move also enables better surveillance and internet control for the authorities.
Foreign apps will still be allowed for the
moment though as long as there are Russian alternatives installed too.
The legislation was passed by Russia's lower house of parliament on Thursday. A complete list of the gadgets affected and the Russian-made software that needs to be
pre-installed will be determined by the government.
The German Foreign Office has warned travellers to Turkey that they could face legal repercussions if they are caught using a VPN in the country.
It is the first time that a formal warning has been made about using VPNs in the country, but it comes
from the highest level and is one that travellers from all countries should be aware of.
Under the dictatorial leadership of President Recep Tayyip Erdogan, Turkey's slide towards authoritarianism has been remarkably swift. In the government's
drive to control the internet and restrict its political opponents, Turkey has sought to block VPNs , banned the use of encrypted messaging services , and routinely blocked social media sites and instigated total internet shutdowns at politically
sensitive times. Hundreds of thousands of websites are now inaccessible in Turkey, which has ironically driven more and more Turkish citizens and ex-pats onto VPNs in order to enjoy free access to the internet.
Windows will improve user privacy with DNS over HTTPS
Here in Windows Core Networking, we're interested in keeping your traffic as private as possible, as well as fast and reliable. While there are many ways we can and do approach
user privacy on the wire, today we'd like to talk about encrypted DNS. Why? Basically, because supporting encrypted DNS queries in Windows will close one of the last remaining plain-text domain name transmissions in common web traffic.
Providing encrypted DNS support without breaking existing Windows device admin configuration won't be easy. However, at Microsoft we believe that
"we have to treat privacy as a human right. We have to have end-to-end cybersecurity built into technology."
We also believe Windows adoption of encrypted DNS will help make the overall Internet ecosystem healthier.
There is an assumption by many that DNS encryption requires DNS centralization. This is only true if encrypted DNS adoption isn't universal. To keep the DNS decentralized, it will be important for client operating systems (such as Windows) and Internet
service providers alike to widely adopt encrypted DNS.
With the decision made to build support for encrypted DNS, the next step is to figure out what kind of DNS encryption Windows will support and how it will be configured. Here are our team's guiding principles on making those decisions:
Windows DNS needs to be as private and functional as possible by default without the need for user or admin configuration because Windows DNS traffic represents a snapshot of the user's browsing history. To Windows users,
this means their experience will be made as private as possible by Windows out of the box. For Microsoft, this means we will look for opportunities to encrypt Windows DNS traffic without changing the configured DNS resolvers set by users and system administrators.
Privacy-minded Windows users and administrators need to be guided to DNS settings even if they don't know what DNS is yet. Many users are interested in controlling their privacy and go looking for privacy-centric settings such as app permissions for camera and location, but may not be aware of DNS settings, understand why they matter, or think to look for them in the device settings.
Windows users and
administrators need to be able to improve their DNS configuration with as few simple actions as possible. We must ensure we don't require specialized knowledge or effort on the part of Windows users to benefit from encrypted DNS. Enterprise policies
and UI actions alike should be something you only have to do once rather than need to maintain.
Windows users and administrators need to explicitly allow fallback from encrypted DNS once configured. Once Windows has
been configured to use encrypted DNS, if it gets no other instructions from Windows users or administrators, it should assume falling back to unencrypted DNS is forbidden.
Based on these principles, we are making plans to adopt DNS over HTTPS (or DoH) in the Windows DNS client. As a platform, Windows Core Networking seeks
to enable users to use whatever protocols they need, so we're open to having other options such as DNS over TLS (DoT) in the future. For now, we're prioritizing DoH support as the most likely to provide immediate value to everyone. For example, DoH
allows us to reuse our existing HTTPS infrastructure.
Why announce our intentions in advance of DoH being available to Windows Insiders? With encrypted DNS gaining more attention, we felt it was
important to make our intentions clear as early as possible. We don't want our customers wondering if their trusted platform will adopt modern privacy standards or not.
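The DoH mechanism described above can be illustrated with a short sketch. This builds an RFC 8484-style GET URL by hand: a minimal DNS query in wire format, base64url-encoded with the padding stripped, passed in the `dns` parameter. Cloudflare's public resolver endpoint is used here only as a well-known example; this is a sketch of the protocol itself, not of Microsoft's implementation.

```python
import base64
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (RFC 1035); qtype 1 = A record."""
    # Header: ID=0 (RFC 8484 suggests 0 to aid HTTP caching), flags=0x0100
    # (recursion desired), 1 question, 0 answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: each label prefixed with its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(hostname: str,
                resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Form an RFC 8484 DoH GET URL: the wire-format query goes in the
    'dns' parameter, base64url-encoded with padding stripped."""
    query = build_dns_query(hostname)
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={encoded}"

print(doh_get_url("example.com"))
```

Because the request travels as ordinary HTTPS to port 443, it is indistinguishable on the wire from other web traffic, which is exactly why it closes the plain-text DNS gap the post describes.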
In what sounds like a profound change to the commercial profiling of people's website browsing history, Google has announced that it will withhold data from advertisers that categorises web pages.
In response to the misuse of medical-related browsing data, Google has announced that from February 2020 it will cease to inform advertisers about the content of webpages where advertising space is up for auction. Presumably this is something along the lines of Google having an available advert slot on worldwidepharmacy.com but not telling the advertiser that John Doe is browsing an STD diagnosis page, although the advertiser will still be informed of the URL.
Chetna Bindra, senior product manager of trust and privacy at Google wrote:
While we already prohibit advertisers from using our services to build user profiles around sensitive categories, this change will help avoid the risk that any participant in our auctions is able to associate individual ad
identifiers with Google's contextual content categories.
Google also plans to update its EU User Consent Policy audit program for publishers and advertisers, as well as its audits for the Authorized Buyers program, and to continue to engage with data protection authorities, including the Irish Data Protection Commission as they continue their investigation into data protection practices in the context of Authorized Buyers.
Although this sounds like very good news for people wishing to keep their sensitive data private, it may not be so good for advertisers, who will see costs rise, and publishers, who will see incomes fall.
And of course Google itself will still know that John Doe has been browsing STD diagnosis pages. There
could be other consequences such as advertisers sending their own bots out to categorise likely advertising slots.
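The mechanism can be sketched in a few lines. Note that the field names here are hypothetical and purely illustrative, not Google's actual bid request schema: the point is simply that the exchange strips contextual content labels before the request reaches bidders, while the page URL itself still passes through.

```python
def scrub_bid_request(bid_request: dict) -> dict:
    """Return a copy of the bid request without content-category labels."""
    scrubbed = dict(bid_request)
    scrubbed.pop("content_categories", None)  # drop the contextual labels
    return scrubbed

# The advertiser still sees the URL, but no 'medical' label for the page.
request = {
    "url": "https://worldwidepharmacy.com/diagnosis",
    "slot": "banner-top",
    "content_categories": ["medical"],
}
print(scrub_bid_request(request))
```

This also makes clear why advertisers might respond by crawling and categorising slots themselves: everything they need to rebuild the label is still in the URL.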
The launch of the Disney+ streaming service has featured in censorship news a lot this week, but another incident is now being reported.
One of the biggest selling points of Disney+ has to be the entire back catalogue of The Simpsons episodes that are available -- thanks to the recent acquisition of 20th Century Fox by Disney. However, fans have noticed that there is a notable absence in the earlier seasons.
The Season 3 opener, Stark Raving Dad, has been omitted due to its featuring Michael Jackson. The missing episode saw Homer Simpson being sent to a mental institution after going to work in a pink shirt. Whilst committed, he meets fellow
patient, Leon Kompowsky -- a man who believes himself to be Michael Jackson, voiced by the singer himself, credited as John Jay Smith.
In fact the censorship seems wider than Disney+, the episode has been banned from TV and it is reported that it
will be missing from any future disk releases too.
US moralist campaigners of the Parents TV Council wrote that Disney has created a safe platform compared with other streaming services, but that it could go the extra mile and add more parental controls. PTC President Tim Winter said:
Disney+ is an 80% streaming solution for families, and we applaud the company for its focus on making family-friendly content. So far, the biggest challenge we see with Disney+ is that it does not include parental
controls or content filtering. While the company has promised not to include R-rated content, by its own admission , Disney+ was not designed exclusively for children.
Research from PTC indicates that PG and PG-13 movies might not
be appropriate for children. After all, the MPAA allows up to two F-words for PG-13 movies.
Even titles from Marvel and Star Wars franchises contain higher levels of violence, and some PG-13 titles may include harsher language or
profanity, sexual innuendo or suggestive dialogue. To be an even more ideal streaming platform for families, Disney+ must give families the ability to allow filtering, Winter said.
The US internet giants have got a little too effective at censoring user-uploaded videos, so now the world is looking for less well policed alternatives.
Some Chinese mainlanders found a temporary alternative in Pornhub. A few Chinese nationals created a
channel called the Chinese Communist Youth League. They then posted videos boosting the agenda of authorities in Beijing and criticising the Hong Kong protesters.
One shocking video calls rioters cockroaches, a term Hong Kong police have used, and
shows a man being set on fire after arguing with protesters. Nearly a dozen of the videos appeared in total which had about 9,000 views and gained 32 subscribers.
A rep for PornHub told The NY Post on Thursday that the firm has taken down the videos in question.
A TV ad for PopJam, a social media app designed for 7 to 12 year olds, seen in July 2019 on CITV. An on-screen image of a phone showed an illustrative scroll of a PopJam news feed which displayed various users' PopJam virtual artwork. Large text on
the right of the image stated LIKES with a heart emoji and with an increasing figure. The next clip showed an image of a phone with a different virtual drawing on its screen. Large text to the left stated FOLLOWERS with an image of a number rising
quickly from 96 to 10,000. A star emoji was seen increasing in size as the figures increased. A female voice-over stated, Get likes and followers to level up.
A complainant, who was concerned that the ad's encouragement to get
likes and followers to level up could be detrimental to children's mental health and affect their self-esteem, challenged whether the ad could cause harm to those under 18 years of age and was irresponsible.
The ASA understood that PopJam was an app designed for 7- to 12-year-old children and that the ad was seen on a children's TV channel. The ad featured the claim get likes and followers to level up, which we
considered explicitly encouraged children to seek likes and followers in order to progress through the app. We understood that there were other ways of advancing through the app, but that was not explained in the ad. We considered that the suggestion
that the acquisition of likes and followers was the only means of progression was likely to give children the impression that popularity on social media was something that should be pursued because it was desirable in its own right. We were therefore
concerned that the ad's encouragement to gain likes and followers could cause children to develop an unhealthy perception that popularity on social media was inherently valuable which was likely to be detrimental to their mental health and self-esteem.
As such, we concluded that the ad was likely to cause harm to those under 18 and was irresponsible.
The ad must not be broadcast again in its current form. We told SuperAwesome Trading Ltd t/a PopJam not to use the claim get likes
and followers to level up in future and to ensure that they did not suggest that gaining popularity and the acquisition of likes and followers were desirable things in their own right.
Last October, Nepal's government blocked 25,000 porn sites, but a new report shows that the effort was inevitably futile.
A year ago the government introduced stiff fines of approximately $4,200 on ISPs that failed to adequately block porn sites.
But now a new report by the Nepalese news site Annapurna Express shows that little has changed. Nepalese porn surfers have actually been watching even more porn than a year ago, Annapurna Express reported, based on data provided to it by xHamster. In
fact, according to research by the Nepalese news site, internet users based in Nepal visit porn sites more often than they visit any of the country's news portals.
In another unsurprising finding, the site found that the porn ban has done nothing
to curb rising levels of sexual violence in Nepal. In the year since the ban, reported rape cases in Kathmandu have climbed from 145 to 225.
Imagine if ITV had to offer an option to let viewers opt out of adverts whilst continuing to watch for free. There would soon be no ITV. Yet the EU cloud cuckoolanders are trying to force the internet to offer that same option. See article from theguardian.com
It's a perennial silly story that gets repeated around the world: Net Nanny-type software reports how many attempts to access porn are made by government ministers, or their staff, or whoever.
Journalists are quick to jump to the conclusion that
people are trying to watch Pornhub whilst at work.
In the latest example New Zealand's prime minister has ticked off public servants after it was revealed that staff at several ministries had their access to explicit material blocked hundreds of
times. Documents showed, among staff from other ministries, Department of Conservation staff have been blocked from accessing pornography websites 148 times since January 29.
In reality 148 times is hardly any: 15 times a month across the whole staff. And of course there is an easy explanation for those 148 times. Sites like Melon Farmers are often classed as porn by internet filters as a reason for blocking them from children. Fair enough, Melon Farmers frequently references porn and may indeed not be suitable for children...but it is not a porn website. Those 148 access attempts could easily be explained by blocked access to sites like Melon Farmers.
In fact I would argue that 148 blocked access attempts in 10 months rather proves that the
staff in question are NOT spending their time watching porn.
Australia's internet censor will block gambling websites hosted offshore under new powers now in effect. Gamblers have been warned by The Australian Communications and Media Authority (ACMA) to withdraw their funds now from any unlicensed overseas
gambling sites before they are blocked.
Internet gambling sites such as Emu Casino and FairGo Casino which are run from Curacao in the Caribbean will be among the first to be blocked, the Sydney Morning Herald reported.
ACMA said on Monday it
will ask ISPs to block websites in breach of the Interactive Gambling Act 2001 using new internet censorship powers now in effect. ACMA chair Nerida O'Loughlin said
In many cases these sites refuse to pay significant winnings, or pay only a small portion. Customers had also experienced illegal operators continuing to withdraw funds from their bank accounts without authorisation. There is little to no recourse for consumers engaging with these unscrupulous operators. If
you have funds deposited with an illegal gambling site, you should withdraw those funds now.
ACMA publishes a list of licensed gambling services on its website, where people can check whether online gambling websites are licensed in Australia.
BritBox, the new internet TV joint venture from the BBC and ITV, will not include classic homegrown series that are deemed inappropriate for fragile modern audiences.
The new £5.99-a-month service, which will also offer shows from
Channel 4 and Channel 5, is aiming to compete with Netflix and Amazon Prime Video.
However, bosses have said a range of classic shows, such as the BBC's Till Death Us Do Part and ITV's Love Thy Neighbour , will not appear on the
service because of content deemed racist or otherwise unacceptable.
Reemah Sakaan, the senior ITV executive responsible for launching the service, confirmed that Till Death Us Do Part, Love Thy Neighbour and It Ain't Half Hot Mum will all be excluded.
There are also numerous individual episodes of shows that will appear on BritBox, eg Only Fools and Horses and Fawlty Towers, that could be deemed inappropriate for modern viewing. However, it is understood that no Fawlty Towers episodes will be cut from the service, although they will run with warnings about offensive language (and presumably censor cuts).
The US authorities came down heavily on Google for YouTube's violations of the 1998 US children's data privacy law called COPPA. This ended up with Google handing over $170 million in settlement of claims from the US FTC (Federal Trade Commission).
COPPA restricts operators of websites and online services from collecting the personal information of under-13 users without parental permission. The definition of personal information includes personal identifiers used in cookies to profile internet
users for targeted advertising purposes.
So now YouTube has announced new procedures starting 1st January 2020. All content creators will have to designate whether or not each of their videos is directed to children (aka kid-directed aka
child-directed) by checking a box during the upload process. Checking that box will prevent the video from running personalized ads. This rule applies retrospectively so all videos will have to be reviewed and flagged accordingly.
It is probably
quite straightforward to identify children's videos, but creators are worried about more general videos for people of all ages that also appeal to kids.
And of course there are massive concerns for all those creators affected about revenues decreasing
as adverts switch from personalised to general untargeted ads.
tubefilter.com ran a small experiment suggesting that revenues will drop by between 60% and 90% for videos denied targeted advertising.
And of course this will have a knock-on effect on the viability of producing videos for a young audience. No doubt the small creators will be hit hardest, leaving the market more open to those who can make up the shortfall by working at scale.
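The scale of that hit can be put in rough numbers. A minimal sketch, using the 60-90% drop from the tubefilter.com experiment; the $1,000 starting figure is an arbitrary example, not a reported statistic:

```python
def remaining_revenue_band(revenue: float,
                           drop_low: float = 0.60,
                           drop_high: float = 0.90) -> tuple:
    """Range of ad revenue left once a video loses personalised ads,
    given a reported drop of between drop_low and drop_high."""
    return (revenue * (1.0 - drop_high), revenue * (1.0 - drop_low))

# Of every $1,000 a video earned with targeted ads, what survives?
low, high = remaining_revenue_band(1000.0)
print(f"roughly ${low:.0f} to ${high:.0f} of every $1,000 remains")
```

Losing 60-90% of revenue per child-directed video is the kind of cut a large studio can absorb across a catalogue, but a solo creator often cannot, which is the squeeze described above.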
Governments around the world are increasingly using social media to manipulate elections and monitor their citizens, tilting the technology toward digital authoritarianism. As a result of these trends, global internet freedom declined for the ninth
consecutive year, according to Freedom on the Net 2019 , the latest edition of the annual country-by-country
assessment of internet freedom, released by Freedom House.
Adding to the problem of meddling by foreign regimes, a new menace to democracy has risen from within, as populist leaders and their armies of online supporters seek to
distort politics at home. Domestic election interference marred the online landscape in 26 of the 30 countries studied that held national votes over the past year. Disinformation was the most commonly used tactic. Authorities in some countries blocked
websites or cut off access to the internet in a desperate bid to cling to power.
Mike Abramowitz, president of Freedom House, said:
"Many governments are finding that on social media,
propaganda works better than censorship. Authoritarians and populists around the globe are exploiting both human nature and computer algorithms to conquer the ballot box, running roughshod over rules designed to ensure free and fair elections."
Governments from across the democratic spectrum are indiscriminately monitoring citizens' online behavior to identify perceived threats--and in some cases to silence opposition. Freedom House has found evidence of
advanced social media surveillance programs in at least 40 of the 65 countries analyzed.
Adrian Shahbaz, Freedom House's research director for technology and democracy, said:
"Once reserved for the world's most powerful intelligence agencies, big-data spying tools are making their way around the world. Advances in AI are driving a booming, unregulated market for social media surveillance. Even in countries with considerable safeguards for
fundamental freedoms, there are already reports of abuse."
The proliferation of sophisticated monitoring tools has reduced people's ability to freely express themselves and be civically active online. Of the 65
countries assessed in this report, a record 47 featured arrests of users for political, social, or religious speech.
"The future of internet freedom rests on our ability to
fix social media. Since these are mainly American platforms, the United States must be a leader in promoting transparency and accountability in the digital age. This is the only way to stop the internet from becoming a Trojan horse for tyranny and
Declines outnumber gains for the ninth consecutive year. Since June 2018, 33 of the 65 countries assessed in Freedom on the Net experienced a deterioration in internet freedom. The biggest score declines took place
in Sudan and Kazakhstan, followed by Brazil, Bangladesh, and Zimbabwe. Improvements were measured in 16 countries, with Ethiopia recording the largest gains.
Internet freedom declines in the United States. US law
enforcement and immigration agencies increasingly monitored social media and conducted warrantless searches of travelers' electronic devices, with little oversight or transparency. In a number of troubling cases, the monitoring targeted constitutionally
protected activities such as peaceful protests and newsgathering. Disinformation was again prevalent around major political events, spread increasingly by domestic actors.
China is the world's worst abuser of internet
freedom for the fourth consecutive year. Censorship reached unprecedented extremes in China as the government enhanced its information controls ahead of the 30th anniversary of the Tiananmen Square massacre and in the face of persistent
antigovernment protests in Hong Kong.
Digital platforms are the new battleground for democracy. Domestic state and partisan actors used propaganda and disinformation to distort the online landscape during elections in
at least 24 countries over the past year, making it by far the most popular tactic for digital election interference. Often working in tandem with government-friendly media personalities and business magnates, semiautonomous online mobs transmitted
conspiracy theories, inflammatory views, and misleading memes from marginal echo chambers to the political mainstream.
Governments harness big data for social media surveillance. In at least 40 out of 65 countries,
authorities have instituted advanced social media monitoring programs. These sophisticated mass surveillance systems can quickly map users' relationships; assign a meaning to their social media posts; and infer their past, present, or future locations.
Machine learning enables the programs to find patterns that may be invisible to humans, and even to identify whole new categories of patterns for further investigation.
Free expression is under assault. A record high
of 47 out of 65 countries featured arrests of users for political, social, or religious speech. Individuals endured physical violence in retribution for their online activities in at least 31 countries.
Authorities normalize blanket shutdowns as a policy tool. Social media and communication applications were blocked in at least 20 countries, and telecommunications networks were suspended in 17 countries, often in the lead-up to elections or during protests and periods of unrest.
More governments enlist bots and fake accounts to manipulate social media. Political leaders employed individuals to surreptitiously shape online opinions and harass opponents in 38 of the 65 countries
covered in this report--another new high.
Freedom House is an independent watchdog organization that supports democratic change, monitors the status of freedom around the world, and advocates for democracy and human rights.
Oliver Dowden, Minister for the Cabinet Office, Priti Patel, Home Secretary, and Nicky Morgan MP have called on social media bosses to protect election candidates from abuse. They wrote:
To: Mark Zuckerberg, Facebook CEO; Jack Dorsey, Twitter CEO; Sundar Pichai, Google CEO
The UK General Election campaign starts tomorrow. We must ensure robust debate during the campaign does not mutate into intimidation, harassment and abuse.
Freedom of speech is a fundamental tenet of British democracy, and this includes the freedom to speak without being threatened or abused. We must tackle this worrying trend of abuse of people in public life and ensure that certain groups are not deterred from standing or speaking freely because they fear for their safety.
It is important to distinguish between strongly felt political debate on one hand, and unacceptable acts of abuse, hatred, intimidation and violence.
Chief Constables continue to contact candidates in their force area to re-emphasise the importance of reporting any criminal offences, safety concerns or threats to the police. We know that you have also been working to tackle abusive behaviour on your platforms, including through delivering training on online safety and creating dedicated reporting channels. We welcome these measures - it is right that processes are in place to deal with cases of abuse or intimidation in an appropriate and timely manner.
As we enter this election period, we are conscious that there are a large number of new candidates who will be unfamiliar with how to seek help if they believe they are being subjected to abuse, and in some cases, illegal activity
online. You will be aware that a number of MPs have also identified the online abuse and threats they receive as a particular concern as we approach another electoral event. Therefore we would encourage you to:
Work together to provide a "one stop shop" of advice for candidates, which will include what content breaches your terms and conditions, where to report content they believe may breach these, and what response they can expect from you.
Work with officials and the Political Parties to ensure that safety and reporting guidance reach the widest possible audience of candidates and electoral staff as soon as possible.
Have regular dialogue between you during the campaign to ensure that where content or users are breaching your terms and conditions, this information is shared between you to reduce lag time in action as abusive material or users migrate between platforms.
Continue to have an open and regular dialogue with the security, policing and electoral authorities. We will ask officials to liaise with you on the best way to take this forward.
Protecting our democracy and ensuring this election is fought fairly and safely is a responsibility we all share. We trust that you are taking the necessary steps to ensure this is the case during the forthcoming election period, and look forward to you providing an update on this.
The UK Parliament's Joint Committee on Human Rights has reported on serious grounds for concern about the nature of the "consent" people provide when giving over an extraordinary range of information about themselves, to
be used for commercial gain by private companies:
Privacy policies are too complicated for the vast majority of people to understand: while individuals may understand they are consenting to data collection from a given site in exchange for "free" access to content,
they may not understand that information is being compiled, without their knowledge, across sites to create a profile. The Committee heard alarming evidence about eye tracking software being used to make assumptions about people's sexual orientation,
whether they have a mental illness, are drunk or have taken drugs: all then added to their profile.
Too often the use of a service or website is conditional on consent being given, raising questions about whether it is freely given.
People cannot find out what they have consented to: it is difficult, if not nearly impossible, for people - even tech experts - to find out who their data has been shared with, to stop it being
shared or to delete inaccurate information about themselves.
The consent model relies on individuals knowing about the risks associated with using web-based services, when the system should instead provide adequate protection from those risks by default.
It is completely inappropriate to use consent when processing children's data: children aged 13 and older are, under the current legal framework, considered old enough to consent to
their data being used, even though many adults struggle to understand what they are consenting to.
Key conclusions and recommendations
The Committee points out that there is a real risk of discrimination against some groups and individuals through the way this data is used: it heard deeply
troubling evidence about some companies using personal data to ensure that only people of a certain age or race, for example, see a particular job opportunity or housing advertisement.
There are also long-established
concerns about the use of such data to discriminate in provision of insurance or credit products.
Unlike traditional print advertising where such blatant discrimination would be obvious and potentially illegal
personalisation of content means people have no way of knowing how what they see online compares to anyone else.
Short of whistleblowers or work by investigative journalists, there currently appears to be no mechanism for uncovering such privacy breaches or discrimination in what amounts to an online "Wild West".
The Committee calls on the Government to ensure there is robust regulation over how our data can be collected
and used and it calls for better enforcement of that regulation.
The Committee says:
The "consent model is broken" and should not be used as a blanket basis for processing. It is impossible for people to know what they are consenting to when making a non-negotiable, take it-or-leave-it
"choice" about joining services like Facebook, Snapchat and YouTube based on lengthy, complex T&Cs, subject to future changes to terms.
This model puts too much onus on the individual, but the responsibility of knowing about the risks of using web-based services cannot rest on the individual. The Government should strengthen regulation to guarantee safe passage on the internet. It is completely inadequate to use consent when it comes to processing children's data. If adults struggle to understand complex consent agreements, how do we expect our children to give informed consent? The Committee says setting the digital age of consent at 13 years old should be revisited.
The Government should be regulating to keep us safe online in the same way as they do in the real world - not by expecting us to become technical experts who can judge whether our
data is being used appropriately but by having strictly enforced standards that protect our right to privacy and freedom from discrimination.
It should be made much simpler for individuals to see what data has been
shared about them, and with whom, and to prevent some or all of their data being shared.
The Government should look at creating a single online registry that would allow people to see, in real time, all the companies
that hold personal data on them, and what data they hold.
The report is worth a read and contains many important points criticising the consent model as dictated by GDPR and enforced by the ICO. Here are a few passages from the report's summary:
The evidence we heard during this inquiry,
however, has convinced us that the consent model is broken. The information providing the details of what we are consenting to is too complicated for the vast majority of people to understand. Far too often, the use of a service or website is conditional
on consent being given: the choice is between full consent or not being able to use the website or service. This raises questions over how meaningful this consent can ever really be.
Whilst most of us are probably unaware of who
we have consented to share our information with and what we have agreed that they can do with it, this is undoubtedly doubly true for children. The law allows children aged 13 and over to give their own consent. If adults struggle to understand complex consent agreements, how do we expect our children to give informed consent? Parents have no say over, or knowledge of, what data their children are sharing and with whom. There is no effective mechanism for a company to determine the age of a person providing consent. In reality a child of any age can click a consent button.
The bogus reliance on consent is in clear conflict with our right to privacy. The consent model relies on us, as individuals, to understand, take decisions, and be
responsible for how our data is used. But we heard that it is difficult, if not nearly impossible, for people to find out whom their data has been shared with, to stop it being shared or to delete inaccurate information about themselves. Even when
consent is given, all too often the limit of that consent is not respected. We believe companies must make it much easier for us to understand how our data is used and shared. They must make it easier for us to opt out of some or all of our data being
used. More fundamentally, however, the onus should not be on us to ensure our data is used appropriately - the system should be designed so that we are protected without requiring us to understand and to police whether our freedoms are being protected.
As one witness to our inquiry said, when we enter a building we expect it to be safe. We are not expected to examine and understand all the paperwork and then tick a box that lets the companies involved off the hook. It is
the job of the law, the regulatory system and of regulators to ensure that the appropriate standards have been met to keep us from harm and ensure our safe passage. We do not believe the internet should be any different. The Government must ensure that
there is robust regulation over how our data can be collected and used, and that regulation must be stringently enforced.
Internet companies argue that we benefit from our data being collected and shared. It means the content we
see online - from recommended TV shows to product advertisements - is more likely to be relevant to us. But there is a darker side to personalisation. The ability to target advertisements and other content at specific groups of people makes it possible
to ensure that only people of a certain age or race, for example, see a particular job opportunity or housing advertisement. Unlike traditional print advertising, where such blatant discrimination would be obvious, personalisation of content means people
have no way of knowing how what they see online compares to anyone else. Short of a whistle-blower within the company or work by an investigative journalist, there does not currently seem to be a mechanism for uncovering these cases and protecting people from such discrimination.
We also heard how the data being used (often by computer programmes rather than people) to make potentially life-changing decisions about the services and information available to us is not even necessarily
accurate, but based on inferences made from the data they do hold. We were told of one case, for example, where eye-tracking software was being used to make assumptions about people's sexual orientation, whether they have a mental illness, are drunk or
have taken drugs. These inferences may be entirely untrue, but the individual has no way of finding out what judgements have been made about them.
We were left with the impression that the internet, at times, is like the Wild
West, when it comes to the lack of effective regulation and enforcement.
That is why we are deeply frustrated that the Government's recently published Online Harms White Paper explicitly excludes the protection of people's
personal data. The Government is intending to create a new statutory duty of care to make internet companies take more responsibility for the safety of their users, and an independent regulator to enforce it. This could be an ideal vehicle for
requiring companies to take people's right to privacy, and freedom from discrimination, more seriously and we would strongly urge the Government to reconsider its decision to exclude data protection from the scope of their new regulatory framework. In
particular, we consider that the enforcement of data protection rules - including the risks of discrimination through the use of algorithms - should be within scope of this work.
TikTok has surged in popularity over the past year, becoming not just a place for music mashups, but also short memes in the spirit of Vine. However, the rise of TikTok has also piqued the interest of US federal officials, who are worried that the
China-owned social media network could be storing user data improperly or censoring content.
The Committee on Foreign Investment in the United States (CFIUS), which reviews buyouts from foreign companies for national security risks, is said to be investigating unpublished concerns.
US senators are also worried about TikTok's collection of user data, and whether the service censors content in the US.
Recent attacks on encryption have diverged. On the one hand, we've seen Attorney General William Barr call for "lawful access" to encrypted communications, using arguments that have barely changed since the 1990s. But we've also seen
suggestions from a different set of actors for more purportedly "reasonable" interventions , particularly the use of client-side scanning to stop the transmission of contraband files, most often child exploitation imagery (CEI).
Sometimes called "endpoint filtering" or "local processing," this privacy-invasive proposal works like this: every time you send a message, software that comes with your messaging app first checks it against a
database of "hashes," or unique digital fingerprints, usually of images or videos. If it finds a match, it may refuse to send your message, notify the recipient, or even forward it to a third party, possibly without your knowledge.
On their face, proposals to do client-side scanning seem to give us the best of all worlds: they preserve encryption, while also combating the spread of illegal and morally objectionable content.
Unfortunately it's not that simple. While it may technically maintain some properties of end-to-end encryption, client-side scanning would render the user privacy and security guarantees of encryption hollow . Most important, it's impossible to build a
client-side scanning system that can only be used for CEI. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger's encryption itself and open the door to broader abuses. This post is a technical
deep dive into why that is.
A client-side scanning system cannot be limited to CEI through technical means
Imagine we want to add client-side scanning to WhatsApp. Before encrypting and sending an
image, the system will need to somehow check it against a known list of CEI images.
The simplest possible way to implement this: local hash matching. In this situation, there's a full CEI hash database inside every client device.
The image that's about to be sent is hashed using the same algorithm that hashed the known CEI images, then the client checks to see if that hash is inside this database. If the hash is in the database, the client will refuse to send the message (or
forward it to law enforcement authorities).
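The local matching step described above can be sketched in a few lines of Python. This is a hypothetical illustration: real deployments would use perceptual hashes (such as PhotoDNA) rather than plain SHA-256, and the database and function names here are invented for the example.

```python
import hashlib

# Hypothetical local database of blocked-image fingerprints (hex digests).
# For illustration it contains the SHA-256 hash of the bytes b"test".
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(image_bytes: bytes) -> str:
    """Fingerprint the outgoing image with the same algorithm used to build the database."""
    return hashlib.sha256(image_bytes).hexdigest()

def may_send(image_bytes: bytes) -> bool:
    """Client-side check run before encryption: refuse to send on a match."""
    return hash_image(image_bytes) not in BLOCKED_HASHES

print(may_send(b"holiday photo"))  # True: not in the database
print(may_send(b"test"))           # False: its hash is in the database
```

Note that nothing in this check is specific to CEI: whoever controls `BLOCKED_HASHES` controls what the client refuses to send.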
At this point, this system contains a complete mechanism to block any image content. Now, anyone with the ability to add an item to the hash database can require the client to block any
image of their choice. Since the database contains only hashes, and the hashes of CEI are indistinguishable from hashes of other images, code that was written for a CEI-scanning system cannot be limited to only CEI by technical means.
Furthermore, it will be difficult for users to audit whether the system has been expanded from its original CEI-scanning purpose to limit other images as well, even if the hash database is downloaded locally to client devices. Given
that CEI is illegal to possess, the hashes in the database would not be reversible.
This means that a user cannot determine the contents of the database just by inspecting it, only by individually hashing every potential image to test for its inclusion--a prohibitively large task for most people. As a result, the contents of the database are effectively unauditable to journalists, academics, politicians, civil society, and anyone without access to the full set of images in the database.
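The auditing problem can be made concrete with a short sketch (hypothetical names; same simplifying SHA-256 assumption as before). An auditor can only confirm entries they can already guess; everything else in the database stays opaque:

```python
import hashlib

# The database holds only one-way hashes of the blocked content.
DATABASE = {hashlib.sha256(b"secret-blocked-content").hexdigest()}

def audit(candidates):
    """An auditor's only option: hash each guessed candidate and test membership.
    Entries for content outside the guess list remain invisible."""
    return [c for c in candidates if hashlib.sha256(c).hexdigest() in DATABASE]

print(audit([b"news article", b"protest image"]))  # [] -- learns nothing
print(audit([b"secret-blocked-content"]))          # [b'secret-blocked-content']
```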
Client-side scanning breaks the promises of end-to-end encryption
Client-side scanning mechanisms will break the fundamental promise that encrypted messengers make to their users: the
promise that no one but you and your intended recipients can read your messages or otherwise analyze their contents to infer what you are talking about . Let's say that when the client-side scan finds a hash match, it sends a message off to the server to
report that the user was trying to send a blocked image. But as we've already discussed, the server has the ability to put any hash in the database that it wants.
Given that online content is known to follow long-tail
distributions , a relatively small set of images comprises the bulk of images sent and received. So, with a comparatively small hash database, an external party could identify the images being sent in a comparatively large percentage of messages.
As a reminder, an end-to-end encrypted system is a system where the server cannot know the contents of a message, despite the client's messages passing through it. When that same server has direct access to effectively decrypt a
significant portion of messages, that's not end-to-end encryption.
In practice, an automated reporting system is not the only way to break this encryption promise. Specifically, we've been loosely assuming thus far that the
hash database would be loaded locally onto the device. But in reality, due to technical and policy constraints, the hash database would probably not be downloaded to the client at all . Instead, it would reside on the server.
This means that at some point, the hash of each image the client wants to send will be known by the server. Whether each hash is sent individually or a Bloom filter is applied, anything short of an ORAM-based system will have a privacy leakage directly to the
server at this stage, even in systems that attempt to block, and not also report, images. In other words, barring state-of-the-art privacy-preserving remote image access techniques that have a provably high (and therefore impractical) efficiency cost,
the server will learn the hashes of every image that the client tries to send.
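Combined with the long-tail observation above, this leakage is enough to effectively decrypt common images. A toy sketch of the server's side (hypothetical names; a precomputed hash-to-image table stands in for the server's knowledge of popular content):

```python
import hashlib

# Server-side precomputed table covering the "head" of the image distribution.
COMMON_IMAGES = [b"popular meme 1", b"popular meme 2", b"protest flyer"]
LOOKUP = {hashlib.sha256(img).hexdigest(): img for img in COMMON_IMAGES}

def server_receives(query_hash: str):
    """The client 'only' sends a hash, but for any sufficiently common image
    the server can invert it by table lookup, despite end-to-end encryption."""
    return LOOKUP.get(query_hash)

leaked = server_receives(hashlib.sha256(b"protest flyer").hexdigest())
print(leaked)  # b'protest flyer' -- the server recovered the image identity
```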
Further arguments against client-side scanning
If this argument about image decryption isn't sufficiently
compelling, consider an analogous argument applied to the text of messages rather than attached images. A nearly identical system could be used to fully decrypt the text of messages. Why not check the hash of a particular message to see if it's a chain
letter, or misinformation ? The setup is exactly the same, with the only change being that the input is text rather than an image. Now our general-purpose censorship and reporting system can detect people spreading misinformation... or literally any text
that the system chooses to check against. Why not put the whole dictionary in there, and therefore be able to decrypt any word that users type (in a similar way to this 2015 paper )? If a client-side scanning system were applied to the text of
messages, users would be similarly unable to tell that their messages were being secretly decrypted.
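The dictionary attack the paragraph describes is the same table-lookup idea applied per word. A toy sketch, assuming a word-level scanning client that reports per-word hashes (all names hypothetical):

```python
import hashlib

# Server-side table built from an entire dictionary of word hashes.
DICTIONARY = ["the", "protest", "meet", "at", "noon"]
WORD_TABLE = {hashlib.sha256(w.encode()).hexdigest(): w for w in DICTIONARY}

def recover(word_hashes):
    """Given the per-word hashes a scanning client reports, rebuild the text."""
    return " ".join(WORD_TABLE.get(h, "?") for h in word_hashes)

# What the client leaks for the 'encrypted' message "meet at noon":
hashes = [hashlib.sha256(w.encode()).hexdigest() for w in "meet at noon".split()]
print(recover(hashes))  # meet at noon
```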
Regardless of what it's scanning for, this entire mechanism is circumventable by using an alternative client to the officially
distributed one, or by changing images and messages to escape the hash matching algorithm, which will no longer be secret once it's performed locally on the client's device.
These are just the tip of the iceberg of technical critiques, not to mention policy reasons, why we shouldn't build a censorship mechanism into a private, secure messenger.
Thailand's Digital Economy and Society (DES) Minister, Buddhipongse Punnakanta, has launched the government's 'anti-fake-news' centre at the head office of the country's state telecoms company TOT.
Buddhipongse said that any challenged information
will be verified within two hours by the centre. The verification process is said to include both human and artificial intelligence. He added:
Some 200 organisations in our network will each send two people to serve as
contact persons within 24 hours who have to receive cases and help verify whether their obtained information is true or false.
The centre will look at the top 10-20 most-shared news items or messages on social media platforms,
including Facebook, Google, YouTube and Twitter.
People are also allowed to send information they find suspicious to the centre so it can be checked and verified with relevant organisations. The verified information will be shared
through online channels.
Any information deemed an infringement will be forwarded to the Royal Thai Police for investigation.
The center will employ about 30 checkers who will target news about government
policies and content that broadly affects peace and order, good morals, and national security.
Hong Kong has been dealt its first court ruling that censors the internet after a court ordered the banning of certain online messages related to protests.
On 31st October Hong Kong's High Court issued an interim injunction banning people from
disseminating, circulating, publishing, or re-publishing on any internet-based platform or medium any information that promotes, encourages, or incites the use or threat of violence.
Two platforms were named in the government press release announcing the order: the local Reddit-like forum LIHKG and the messaging app Telegram.
The government of pro-Beijing leader Carrie Lam stated that these platforms and mediums have been abused to incite protesters to participate in unlawful activities, such as
damaging targeted properties.
The injunction was issued at the request of Hong Kong's Secretary of Justice and the ban will be effective until 16th November when a full court judgement will be announced.