A California law that prevented the Internet Movie Database (IMDb) from publishing the ages of movie stars has been struck down on First Amendment grounds. A federal judge declared it not only unconstitutional, but also a bad solution to the wrong problem.
The law went into effect in 2017 after being signed by California Gov. Jerry Brown. The goal was to mitigate age discrimination in a youth-obsessed Hollywood by requiring IMDb to remove age-related information upon the request of a subscriber.
The judge explained:
Even if California had shown that the law was passed after targeted efforts to eliminate discrimination in the entertainment industry had failed, the law is not narrowly tailored. For one, the law is underinclusive, in that it bans only one kind
of speaker from disseminating age-related information, leaving all other sources of that information untouched. Even looking just at IMDb.com, the law requires IMDb to take down some age-related information -- that of the members of its
subscription service who request its removal -- but not the age-related information of those who don't subscribe to IMDbPro, or who don't ask IMDb.com to take their information down.
The judge adds that the law is also overinclusive:
For instance, it requires IMDb not to publish the age-related information of all those who request that their information not be published, not just of those protected by existing age discrimination laws, states the opinion (read below). If the
state is concerned about discriminatory conduct affecting those not covered by current laws, namely those under 40, it certainly has a more direct means of addressing those concerns than imposing restrictions on IMDb's speech.
California officials said the state will appeal the ruling to the Ninth Circuit Court of Appeals.
Freedom House has published its annual survey of freedom around the world. Its key findings are somewhat grim:
Democracy faced its most serious crisis in decades in 2017 as its basic tenets--including guarantees of free and fair elections, the rights of minorities, freedom of the press, and the rule of law--came under attack
around the world.
Seventy-one countries suffered net declines in political rights and civil liberties, with only 35 registering gains. This marked the 12th consecutive year of decline in global freedom.
The United States retreated from its traditional role as both a champion and an exemplar of democracy amid an accelerating decline in American political rights and civil liberties.
Over the period since the 12-year global slide began in 2006, 113 countries have seen a net decline, and only 62 have experienced a net improvement.
Google has tweaked its image search to make it slightly more difficult to view images in full size before downloading them. Google has also added a more prominent copyright warning.
Google acted as part of a peace deal with photo library Getty Images. In 2017, Getty Images complained to the European Commission, accusing Google of anti-competitive practices.
Google said it had removed some features from image search, including the view image button. Images can still be viewed in full size from the right-click menu, at least on my Windows version of Firefox. Google also removed the search by image button, which was an easy way of finding larger copies of photographs. Perhaps the tweaks are more about restricting the finding of high-resolution versions of images than worrying about standard-sized images.
Getty Images is a photo library that sells the work of photographers and illustrators to businesses, newspapers and broadcasters. It complained that Google's image search made it easy for people to find Getty Images pictures and take them, without
the appropriate permission or licence.
In a statement, Getty Images said:
We are pleased to announce that after working cooperatively with Google over the past months, our concerns are being recognised and we have withdrawn our complaint.
In a ruling of particular interest to those working in the adult entertainment biz, a German court has ruled that Facebook's real name policy is illegal and that users must be allowed to sign up for the service under pseudonyms.
The opinion comes from the Berlin Regional Court and was disseminated by the Federation of German Consumer Organizations, which filed the suit against Facebook. The Berlin court found that Facebook's real name policy was a covert way of obtaining users' consent to share their names, which are one of many pieces of information the court said Facebook did not properly obtain users' permission for.
The court also said that Facebook didn't provide a clear-cut choice to users for other default settings, such as to share their location in chats. It also ruled against clauses that allowed the social media giant to use information such as profile
pictures for commercial, sponsored or related content.
Facebook told Reuters it will appeal the ruling, but also that it will make changes to comply with European Union privacy laws coming into effect in June.
Facebook has been ordered to stop tracking people without consent, by a court in Belgium. The company has been told to delete all the data it had gathered on people who did not use Facebook. The court ruled the data was gathered illegally.
Belgium's privacy watchdog said the website had broken privacy laws by placing tracking code on third-party websites.
Facebook said it would appeal against the ruling.
The social network faces fines of 250,000 euros a day if it does not comply.
The ruling is the latest in a long-running dispute between the social network and the Belgian commission for the protection of privacy (CPP). In 2015, the CPP complained that Facebook tracked people when they visited pages on the site or clicked
like or share, even if they were not members.
The United Kingdom's reputation for online freedom has suffered significantly in recent years, in no small part due to the draconian Investigatory Powers Act, which came into force last year and created what many people have described as the worst surveillance state in the free world.
But despite this, the widely held perception is that the UK still allows relatively free access to the internet, even if it does insist on keeping records of which sites you are visiting. But how true is this perception?
There is undeniably more online censorship in the UK than many people would like to admit to. But is this just the tip of the iceberg? The censorship of one YouTube video suggests that it might just be. The video in question contains footage
filmed by a trucker of refugees trying to break into his vehicle in order to get across the English Channel and into the UK. This is a topic which has been widely reported in British media in the past, but in the wake of the Brexit vote and the
removal of the so-called 'Jungle Refugee Camp', there has been strangely little coverage.
Yet, if you try to access this video in the UK, you will find that it is blocked. It remains accessible to users elsewhere in the world, albeit with content warnings in place.
And it is not alone. It doesn't take too much research to uncover several similar videos which are also censored in the UK. The scale of the issue likely requires further research. But it is safe to say that such censorship is both unnecessary and potentially illegal, as it is undeniably denying British citizens access to content which would feed an informed debate on some crucial issues.
Al Arabiya News is an Arabic language news and current affairs channel licensed by Ofcom.
Mr Husain Abdulla complained to Ofcom on behalf of Mr Hassan Mashaima about unfair treatment and unwarranted infringement of privacy in connection with the obtaining of material included in the programme and the programme as broadcast on Al
Arabiya News on 27 February 2016.
The programme reported on an attempt made in February and March 2011, by a number of people including the complainant, Mr Hassan Mashaima, to change the governing regime in Bahrain from a Kingdom to a Republic. It included an interview with Mr Mashaima, filmed while he was in prison awaiting a retrial, as he explained the circumstances which had led to his arrest and conviction.
The interview included Mr Mashaima making confessions as to his participation in certain activities. Only approximately three months prior to the date on which Al Arabiya News said the footage was filmed, an official Bahraini Commission of Inquiry
had found that similar such confessions had been obtained from individuals, including Mr Mashaima, under torture. During Mr Mashaima's subsequent retrial and appeal, he maintained that his conviction should be overturned, as confessions had been
obtained from him under torture.
Ofcom's Decision is that the appropriate sanction should be a financial penalty of £120,000 and that the Licensee should be directed to broadcast a statement of Ofcom's findings, on a date to be determined by Ofcom, and that it should be directed
to refrain from broadcasting the material found in breach again.
Al Arabiya News Channel has surrendered with immediate effect its licence with the UK broadcasting censor Ofcom, which had received a complaint over the channel's involvement in covering the hacking of the Qatar News Agency (QNA).
QNA had hired British law firm Carter-Ruck to submit a complaint to Ofcom against Al Arabiya and Sky News Arabia for broadcasting fabricated and false statements attributed to Emir Sheikh Tamim bin Hamad Al-Thani after QNA's website was hacked on May 24, 2017. The four countries of Saudi Arabia, UAE, Bahrain and Egypt used this event to justify the siege that they have been imposing on Qatar since June 5, 2017.
The surrendering of the licence by Al Arabiya, a Dubai-based satellite broadcaster owned by Saudi businessmen, was to avoid an Ofcom investigation.
QNA says Al Arabiya's decision was dictated by the inquiry but the channel says business reasons also influenced the move.
The UK government has unveiled a tool it says can accurately detect jihadist content and block it from being viewed.
Home Secretary Amber Rudd told the BBC she would not rule out forcing technology companies to use it by law. Rudd is visiting the US to meet tech companies to discuss the idea, as well as other efforts to tackle extremism.
The government provided £600,000 of public funds towards the creation of the tool by an artificial intelligence company based in London.
Thousands of hours of content posted by the Islamic State group was run past the tool, in order to train it to automatically spot extremist material.
ASI Data Science said the software can be configured to detect 94% of IS video uploads. Anything the software identifies as potential IS material would be flagged up for a human decision to be taken.
The company said it typically flagged 0.005% of non-IS video uploads. But this figure is meaningless without an indication of how many uploads overall contain any content with any connection to jihadis.
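The arithmetic behind that objection can be made concrete. As a rough sketch, using the 94% detection rate and 0.005% false-positive rate quoted above, but with an assumed upload volume and assumed prevalence figures (neither of which comes from the report), the proportion of flagged videos that are actually IS material collapses as genuine IS uploads become rarer:

```python
# Hypothetical illustration of why a raw false-positive rate is not enough.
# The 94% detection rate and 0.005% false-positive rate are from the article;
# the upload volume and prevalence figures below are assumptions.

def flag_stats(total_uploads, prevalence, detection_rate=0.94, fp_rate=0.00005):
    """Return (true_flags, false_flags, precision) for a batch of uploads."""
    is_videos = total_uploads * prevalence
    benign_videos = total_uploads - is_videos
    true_flags = is_videos * detection_rate       # IS videos correctly caught
    false_flags = benign_videos * fp_rate         # innocent videos flagged
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

# Assume 1,000,000 uploads a day; try a few prevalence levels for IS content.
for prevalence in (0.001, 0.0001, 0.00001):
    tf, ff, p = flag_stats(1_000_000, prevalence)
    print(f"prevalence {prevalence:.5%}: {tf:.0f} true flags, "
          f"{ff:.0f} false flags, precision {p:.1%}")
```

Under these assumed numbers, when only one upload in 100,000 is IS material, most of what the tool flags for human review is innocent, which is exactly why the headline percentages say so little on their own.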
In London, reporters were given an off-the-record briefing detailing how ASI's software worked, but were asked not to share its precise methodology. However, in simple terms, it is an algorithm that draws on characteristics typical of IS and its output.
It sounds like the tool is more about analysing data about the uploading account -- geographical origin, time of day, name of poster and so on -- rather than analysing the video itself.
Comment: Even extremist takedowns require accountability
Can extremist material be identified at 99.99% certainty as Amber Rudd claims today? And how does she intend to ensure that there is legal accountability for content removal?
The Government is very keen to ensure that extremist material is removed from private platforms, like Facebook, Twitter and Youtube. It has urged use of machine learning and algorithmic identification by the companies, and threatened fines for
failing to remove content swiftly.
Today Amber Rudd claims to have developed a tool to identify extremist content, based on a database of known material. Such tools can have a role to play in identifying unwanted material, but we need to understand that there are some important
caveats to what these tools are doing, with implications about how they are used, particularly around accountability. We list these below.
Before we proceed, we should also recognise that this is often about computers (bots) posting vast volumes of material to a very small audience. Amber Rudd's new machine may then potentially clean some of it up. It is in many ways a propaganda battle: extremists claim to be internet savvy and exaggerate their impact, while our own government claims that it is going to clean up the internet. Both sides benefit from the apparent conflict.
The real world impact of all this activity may not be as great as is being claimed. We should be given much more information about what exactly is being posted and removed. For instance, the UK police remove over 100,000 pieces of extremist content by notice to companies: we currently get only this headline figure. We know nothing more about these takedowns. The material might never have been viewed, except by the police, or it might have been very influential.
The result of the government's campaign to remove extremist material may be to push extremists towards more private or censor-proof platforms. That may impact the ability of the authorities to surveil criminals and to remove material in the future. We may regret chasing extremists off major platforms, where their activities are in full view and easily used to identify activity and actors.
Whatever the wisdom of proceeding down this path, we need to be worried about the unwanted consequences of machine takedowns. Firstly, we are pushing companies to be the judges of legal and illegal. Secondly, all systems make mistakes and require
accountability for them; mistakes need to be minimised, but also rectified.
Here is our list of questions that need to be resolved.
1 What really is the accuracy of this system?
Small error rates translate into very large numbers of errors at scale. We see this with more general internet filters in the UK, where our blocked.org.uk project regularly uncovers and reports errors.
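To put "small error rates at scale" in numbers (the daily volume and error rate below are illustrative assumptions, not figures from this article):

```python
# Illustrative only: a tiny per-item error rate still yields a large
# absolute number of wrongly blocked items once volumes are large.

def wrongly_blocked(daily_items, error_rate):
    """Expected number of legitimate items misclassified per day."""
    return round(daily_items * error_rate)

# Assume a platform filters 10 million items a day with a 0.1% error rate.
print(wrongly_blocked(10_000_000, 0.001))  # -> 10000 mistaken blocks per day
```

A rate that sounds negligible in a press release can still mean thousands of wrongful removals every day, each of which in principle needs a route to review and correction.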
How are the accuracy rates determined? Is there any external review of its decisions?
The government appears to recognise the technology has limitations. In order to claim a high accuracy rate, they say at least 6% of extremist video content has to be missed. On large platforms that would be a great deal of material needing human
review. The government's own tool shows the limitations of their prior demands that technology "solve" this problem.
Islamic extremists are operating rather like spammers when they post their material. Just like spammers, their techniques change to avoid filtering. The system will need constant updating to keep a given level of accuracy.
2 Machines are not determining meaning
Machines can only attempt to pattern match, with the assumption that content and form imply purpose and meaning. This explains how errors can occur, particularly in missing new material.
3 Context is everything
The same content can, in different circumstances, be legal or illegal. The law defines extremist material as promoting or glorifying terrorism. This is a vague concept. The same underlying material, with small changes, can become news, satire or
commentary. Machines cannot easily determine the difference.
4 The learning is only as good as the underlying material
The underlying database is used to train machines to pattern match. Therefore the quality of the initial database is very important. It is unclear how the material in the database has been deemed illegal, but it is likely that these are police
determinations rather than legal ones, meaning that inaccuracies or biases in police assumptions will be repeated in any machine learning.
5 Machines are making no legal judgment
The machines are not making a legal determination. This means a company's decision to act on what the machine says is made without clear knowledge that the material is actually illegal. At the very least, if material is "machine determined" to be illegal, the poster, and users who attempt to see the material, need to be told that a machine determination has been made.
6 Humans and courts need to be able to review complaints
Anyone who posts material must be able to get human review, and recourse to courts if necessary.
7 Whose decision is this exactly?
The government wants small companies to use the database to identify and remove material. If material is incorrectly removed, perhaps appealed, who is responsible for reviewing any mistake?
It may be too complicated for the small company. Since it is the database product making the mistake, the designers need to act to correct it so that it is less likely to be repeated elsewhere.
If the government wants people to use its tool, there is a strong case that the government should review mistakes and ensure that there is an independent appeals process.
8 How do we know about errors?
Any takedown system tends towards overzealous takedowns. We hope the identification system is built for accuracy and prefers to miss material rather than remove the wrong things; however, errors will often go unreported. There are strong incentives for legitimate posters of news, commentary, or satire to simply accept the removal of their content. To complain about a takedown would take serious nerve, given that you risk being flagged as a terrorist sympathiser, or perhaps having to enter formal legal proceedings.
We need a much stronger conversation about the accountability of these systems. So far, in every context, this is a question the government has ignored. If this is a fight for the rule of law and against tyranny, then we must not create
arbitrary, unaccountable, extra-legal censorship systems.
The new German law that compels social media companies to remove hate speech and other illegal content can lead to unaccountable, overbroad censorship and should be promptly reversed, Human Rights Watch said today. The law sets a dangerous
precedent for other governments looking to restrict speech online by forcing companies to censor on the government's behalf. Wenzel Michalski, Germany director at Human Rights Watch said:
Governments and the public have valid concerns about the proliferation of illegal or abusive content online, but the new German law is fundamentally flawed. It is vague, overbroad, and turns private companies into overzealous censors to avoid
steep fines, leaving users with no judicial oversight or right to appeal.
Parliament approved the Network Enforcement Act, commonly known as NetzDG, on June 30, 2017, and it took full effect on January 1, 2018. The law requires large social media platforms, such as Facebook, Instagram, Twitter, and YouTube, to promptly remove "illegal content," as defined in 22 provisions of the criminal code, ranging widely from insult of public office to actual threats of violence. Faced with fines up to 50 million euros, companies are already removing content to comply with the law.
At least three countries -- Russia, Singapore, and the Philippines -- have directly cited the German law as a positive example as they contemplate or propose legislation to remove "illegal" content online. The Russian draft law,
currently before the Duma, could apply to larger social media platforms as well as online messaging services.
Two key aspects of the law violate Germany's obligation to respect free speech, Human Rights Watch said. First, the law places the burden on companies that host third-party content to make difficult determinations of when user speech violates the
law, under conditions that encourage suppression of arguably lawful speech. Even courts can find these determinations challenging, as they require a nuanced understanding of context, culture, and law. Faced with short review periods and the risk
of steep fines, companies have little incentive to err on the side of free expression.
Second, the law fails to provide either judicial oversight or a judicial remedy should a cautious corporate decision violate a person's right to speak or access information. In this way, the largest platforms for online expression become "no
accountability" zones, where government pressure to censor evades judicial scrutiny.
At the same time, social media companies operating in Germany and elsewhere have human rights responsibilities toward their users, and they should act to protect them from abuse by others, Human Rights Watch said. This includes stating in user
agreements what content the company will prohibit, providing a mechanism to report objectionable content, investing adequate resources to conduct reviews with relevant regional and language expertise, and offering an appeals process for users who
believe their content was improperly blocked or removed. Threats of violence, invasions of privacy, and severe harassment are often directed against women and minorities and can drive people off the internet or lead to physical attacks.
Thailand asks developers to speed up its 'Foreigner Database' that will record the entries and exits of all foreigners, and require them to report to local police every time they change hotel or address
Thailand's Immigration Bureau and the Interior Ministry have been instructed to speed up the implementation of a
single-platform online database of foreigners entering and leaving the kingdom. The two agencies were told to have the new system fully functioning in six months.
The order was given by the Deputy leader of Thailand's military government, General Prawit Wongsuwan.
The single platform database would enable the government to keep tabs on all foreigners so that they can be easily located by the police.
As part of the new system, the Immigration Bureau will cancel the use of the Immigration 6 form and instead use e-passport data. A spokesman said each immigration checkpoint would be equipped with identity-checking equipment, such as fingerprint
readers and passport scanners, to enter information into the database.
At the same time, the Interior Ministry's Provincial Administration Department must ensure that all hotels, apartments, guesthouses and other accommodation services keep and report records of foreigners using their services by informing the
nearest immigration office or police station, which will in turn feed the data to the database. Foreigners also now have to report to the local police or immigration every time they change hotel or where they stay whilst in Thailand.
Tumblr is an image sharing website. It has just announced that it will change the way that its safe mode works. In an email to users, Tumblr writes:
Last year we introduced Safe Mode, which filters sensitive content in your dashboard and search results so you have control over what you see and what you don't. And now that it's been out for a while, we want to make sure everyone has the chance
to try it out.
Over the next couple weeks, you might see some things in your dashboard getting filtered. If you like it that way, that's great. If you don't, no problem. You can go back by turning off Safe Mode any time.
Update: Are the safe mode changes related to impending UK porn censorship?
Tumblr has long been one of the freest spaces on the internet for porn and sex-positive content, thanks to lax guidelines compared to Facebook or Instagram. Porn creators, fetish community artists, and more were able to share work with little
trouble. Tumblr made a major change last year with the introduction of a Safe Mode that initially filtered NSFW content if users chose to enable it. Now though, Tumblr is making Safe Mode the default setting for users.
The Safe Mode feature hides sensitive images -- for example nude images, even, as Tumblr's guidelines note, artistic or educational nudity like classic art or anatomy. As Motherboard reports, it's a function that claims to give users more control over what they see and what they don't, updating the Safe Search option that the platform introduced back in 2012, which removed sensitive material from the site's search results.
Rolling out the default setting means users will have to go out of their way to switch back and see unfiltered content. An email sent to Tumblr users last week states that they want to make sure everyone has the chance to try it out.
Many adult content creators are concerned this will affect their work and space on the platform. Tumblr user, freelance artist, and adult comic-maker Kayla-Na told Dazed of her frustrations: I understand wanting to make Tumblr a safer environment
for younger audiences, but Tumblr has to remember that the adult community is still part of the website as a whole, and shouldn't be suppressed into oblivion.
Perhaps the Tumblr safe mode has also been introduced as a step towards the UK's porn censorship by age verification. The next step may be for safe mode to be mandatorily imposed on internet viewers in Britain, such that it can only be turned off when they subject themselves to age verification.
Irish book censors have not banned a single magazine and have blocked just one book in the last ten years. Now a member of
the Irish Parliament has called for the Censorship of Publications Board to be shut down.
Fianna Fail Arts and Culture Spokesperson Niamh Smyth said: This is one quango that should be whacked. She was referring to the political campaign slogan whack a quango, a call to shut down quangos. Smyth added:
The ongoing existence of a Censorship Board that doesn't censor anything is bringing the concept of censorship into disrepute at a time where we need it more than ever.
The only time the board has been heard of in ten years was the ludicrous submission of Alan Shatter's novel Laura over something to do with abortion.
Some users have reported seeing pop-ups in Instagram (IG) informing them that, from now on, Instagram will be flagging when you record or take a screenshot of other people's IG stories, informing the originator that you have snapped or recorded the post.
According to a report by TechCrunch, those who have been selected to participate in the IG trial can see exactly who has been creeping and snapping their stories. Those who have screenshotted an image or recorded a video will have a little camera shutter logo next to their usernames, much like Snapchat.
Of course, users have already found a nifty workaround to avoid social media stalking exposure. So here's the deal: turning your phone to airplane mode after you've loaded the story and then taking your screenshot means that users won't be notified of any impropriety (though it sounds easy for Instagram to fix this by saving the event until the next time the app communicates with the Instagram server). You could also download the stories from Instagram's website or use an app like Story Reposter. Maybe PC users just need another small window on the desktop, then move the mouse pointer to the small window before snapping the display.
Clearly, there are concerns on Instagram's part about users' content being shared without their permission, but if a post is shared with someone for viewing, it is pretty tough to stop them from grabbing a copy for themselves as they view it.
The US-based global tech giant Apple Inc. is set to hand over the operation of its iCloud data center in mainland China to a local corporation called Guizhou-Cloud Big Data (GCBD) by February 28, 2018. When this transition happens, the local company will become responsible for handling the legal and financial relationship between Apple and China's iCloud users. After the transition takes place, the role of Apple will be restricted to an investment of one billion US dollars for the construction of a data center in Guiyang, and to providing technical support to the center, in the interest of preserving data security.
GCBD was established in November 2014 with a RMB 235 million yuan [approximately US$ 37.5 million] registered capital investment. It is a state enterprise solely owned by Guizhou Big Data Development and Management Bureau. The company is also
supervised by Guizhou Board of Supervisors of State-owned Enterprises.
What will happen to Apple's Chinese customers once iCloud services are handed over to GCBD? In public statements, Apple has avoided
acknowledging the political implications of the move:
This will allow us to continue to improve the speed and reliability of iCloud in China and comply with Chinese regulations.
Apple Inc. has not explained the real issue, which is that a state-owned big data company controlled by the Chinese government will have access to all the data of its iCloud service users in China. This will allow the capricious state apparatus to
jump into the cloud and look into the data of Apple's Chinese users.
Over the next few weeks, iCloud users in China will receive a notification from Apple, seeking their endorsement of the new service terms. These "iCloud (operated by GCBD) terms and conditions" have a newly added paragraph, which reads:
If you understand and agree, Apple and GCBD have the right to access your data stored on its servers. This includes permission sharing, exchange, and disclosure of all user data (including content) according to the application of the law.
In other words, once the agreement is signed, GCBD -- a company solely owned by the state -- would get a key that can access all iCloud user data in China, legally.
Apple's double standard
Why would a company that built its reputation on data security surrender to the Chinese government so easily?
I still remember how in February 2016, after the attack in San Bernardino, Apple CEO Tim Cook withstood pressure from the US Department of Justice to build an iPhone operating system that could circumvent security features and install it in the
iPhone of the shooter. Cook even issued an open letter
to defend the company's decision.
Apple's insistence on protecting user data won broad public support. At the same time, it was criticized by the Department of Justice, which retorted that the open letter "appears to be based on its concern for its business model and public brand marketing strategy."
This comment has proven true today, because it is clear that the company is operating on a double standard in its Chinese business. We could even say that it is bullying the good actor while being terrified by the bad one.
Apple Inc. and Tim Cook, who once stood firm against the US government, have suddenly become soft in front of the Chinese government. Faced with the unreasonable demands put forward by the Chinese authorities, Apple has not demonstrated a will to resist. On the contrary, it is giving people the impression that it will do whatever is needed to please the authorities.
Near the end of 2017, Apple Inc. admitted it had removed 674 VPN apps from the Chinese App Store. These apps are often used by netizens for circumventing the Great Firewall (the blocking of overseas websites and content). Skype was also removed from the Chinese App Store. Apple's submission to the Chinese authorities' requests generated a feeling of "betrayal" among Chinese users.
Some of my friends from mainland China have even decided to give up using Apple mobile phones and shifted to other mainland Chinese brands. Their decision, in addition to the price, is mainly in reaction to Apple's decision to take down VPN apps
from the Chinese Apple store.
Some of these VPN apps can still be downloaded on mobile phones that use the Android system. This indicates that Apple is not "forced" to comply. People suspect that it is proactively performing an "obedient" role.
The handover of China iCloud to GCBD is unquestionably a performance of submission and kowtow. Online, several people have quipped: "the Chinese government is asking for 50 cents, Apple gives her a dollar."
Selling the iPhone in China
Apple says the handover is due to new regulations requiring that cloud servers be operated by local corporations. But this is unconvincing. China's Cybersecurity Law, which took effect on June 1, 2017, does demand that user information and data collected in mainland China be stored within the country's borders. But it does not require that the data center be operated by a local corporation.
In other words, even according to Article 37 of the Cybersecurity Law, Apple does not need to hand over the operation of iCloud services to a local corporation, to say nothing of the fact that the operator is solely owned by the state. Though
Apple may have to follow the "Chinese logic" or "unspoken rules", the decision looks more like a strategic act, intended to insulate Apple from financial, legal and moral responsibility to its Chinese users, as stated in the new customer terms and conditions on the handover of operation. It only wants to continue making a profit by selling iPhones in China.
Many people have encountered similar difficulties when doing business in China -- they have to follow the authorities' demands. Some even think that it is inevitable and therefore reasonable. For example, Baidu's CEO Robin Li said in
a recent interview with Time Magazine, "That's our way of doing business here".
I can see where Apple is coming from. China is now the third largest market
for the iPhone. Facing fierce competition from local brands, the iPhone's future growth in China is under threat. And unlike in the US, if Apple does not submit to China and comply with the Cybersecurity Law, the Chinese authorities can invoke other laws and regulations, such as the Encryption Law of the People's Republic of China and the Measures for Security Assessment of Cross-border Data Transfer (both still in draft), to force Apple to yield.
However, as the world's biggest corporation by market value, with so many loyal fans, Apple's performance in China is still disappointing. It has not even tried to resist. On the contrary, it has proactively assisted the Chinese authorities in selling out its users' private data.
Assisting in the making of a 'Cloud Dictatorship'
This is perhaps the best result that China's party-state apparatus could hope for. In recent years, China has come to see big data as a strategic resource for its diplomacy and for maintaining domestic stability. Big data is as important as
military strength and ideological control. There is even a new political term "Data-in-Party-control" coming into use.
As an Apple fan, I lament the fact that Apple has become a key multinational corporation offering its support to the Chinese Communist Party's engineering of a "Cloud Dictatorship". It sets a very bad example: now that Apple has kowtowed to the CCP, how long will other tech companies like Facebook, Google and Amazon be able to resist the pressure?
The UK's digital and culture secretary, Matt Hancock, has ruled out creating a new internet censor targeting social media
such as Facebook and Twitter.
In an interview on the BBC's Media Show, Hancock said he was not inclined in that direction and instead wanted to ensure existing regulation is fit for purpose. He said:
If you tried to bring in a new regulator you'd end up having to regulate everything. But that doesn't mean that we don't need to make sure that the regulations ensure that markets work properly and people are protected.
Meanwhile the Electoral Commission and the Department for Digital, Culture, Media and Sport select committee are now investigating whether Russian groups used the platforms to interfere in the Brexit referendum in 2016. The DCMS select committee
is in the US this week to grill tech executives about their role in spreading fake news. In a committee hearing in Washington yesterday, YouTube's policy chief said the site had found no evidence of Russian-linked accounts purchasing ads to
interfere in the Brexit referendum.
The Australian censor's reasoning behind its ban of Omega Labyrinth Z has come to light. The censors write:
The game features a variety of female characters with their cleavages emphasised by their overtly provocative clothing, which often reveals the sides or undersides of their breasts and their obscured genital regions. Multiple female characters are also
depicted fully nude, with genitals obscured by objects and streams of light throughout the game. Although of indeterminate age, most of these characters are adult-like, with voluptuous bosoms and large cleavages that are flaunted with a variety
of skimpy outfits.
One character, Urara Rurikawa, is clearly depicted as child-like in comparison with the other female characters. She is flat-chested, physically underdeveloped (particularly visible in her hip region) and is significantly shorter than other
characters in the game. She also has a child-like voice, wears a school uniform-esque outfit and appears naive in her outlook on life.
At one point in the game, Urara Rurikawa and a friend are referred to as "the younger girls" by one of the game's main characters. In the Board's opinion, the character of Urara Rurikawa is a depiction of a person who is, or appears to
be, a child under 18 years.
In some gameplay modes, including the "awakening" mode, the player is able to touch the breasts, buttocks, mouths and genital regions of each character, including Urara Rurikawa, while they are in sexualised poses, receiving positive
verbal feedback for interactions which are implied to be pleasurable for the characters and negative verbal feedback, including lines of dialogue such as "I-It doesn't feel good..." and "Hyah? Don't touch there!," for
interactions which are implied to be unpleasurable, implying a potential lack of consent.
The aim of these sections is, implicitly, to sexually arouse these characters to the point that a "shame break" is activated, in which some of the character's clothing is removed - with genital regions obscured by light and various
objects - and the background changes colour as they implicitly orgasm.
In one "awakening" mode scenario, the player interacts with Urara Rurikawa, who is depicted lying down, clutching a teddy bear, with lines of dialogue such as "I'm turning sleepy...", "I'm so sleepy now..." and
"I might wake up..." implying that she is drifting in and out of sleep.
The player interacts with this child-like character in the same manner as they interact with adult characters, clicking her breasts, buttocks, mouth and genital regions until the "shame break" mode is activated. During this section of
the game, with mis-clicks, dialogue can be triggered, in which Urara Rurikawa says, "Stop tickling...", "Stop poking..." and "Th-that feels strange...", implying a lack of consent.
In the Board's opinion, the ability to interact with the character Urara Rurikawa in the manner described above constituted a simulation of sexual stimulation of a child.
This week, Senators Hatch, Graham, Coons, and Whitehouse introduced a bill that diminishes the data privacy of people around the world.
The Clarifying Lawful Overseas Use of Data (CLOUD) Act expands American and foreign law enforcement's ability to target and access people's data across international borders in two ways. First, the bill creates an explicit provision for U.S. law enforcement (from a local police department to
federal agents in Immigration and Customs Enforcement) to access "the contents of a wire or electronic communication and any record or other information" about a person regardless of where they live or where that information is located
on the globe. In other words, U.S. police could compel a service provider--like Google, Facebook, or Snapchat--to hand over a user's content and metadata, even if it is stored in a foreign country, without following that foreign country's privacy laws.
Second, the bill would allow the President to enter into "executive agreements" with foreign governments that would allow each government to acquire users' data stored in the other country, without following each other's privacy laws.
For example, because U.S.-based companies host and carry much of the world's Internet traffic, a foreign country that enters one of these executive agreements with the U.S. could potentially wiretap people located anywhere on the globe (so long
as the target of the wiretap is not a U.S. person or located in the United States) without the procedural safeguards of U.S. law typically given to data stored in the United States, such as a warrant, or even notice to the U.S. government. This is
an enormous erosion of current data privacy laws.
This bill would also moot legal proceedings now before the U.S. Supreme Court. In the spring, the Court will decide whether or not current U.S. data privacy laws allow U.S. law enforcement to serve warrants for information stored outside the
United States. The case, United States v. Microsoft
(often called "Microsoft Ireland"), also calls into question principles of international law, such as respect for other countries' territorial boundaries and their rule of law.
Notably, this bill would expand law enforcement access to private email and other online content, yet the Email Privacy Act
, which would create a warrant-for-content requirement, has still not passed the Senate, even though it has enjoyed support in the House for the past two years.
The CLOUD Act and the US-UK Agreement
The CLOUD Act's proposed language is not new. In 2016, the Department of Justice first proposed
legislation that would enable the executive branch to enter into bilateral agreements with foreign governments to allow those foreign governments direct access to U.S. companies and U.S.-stored data. Ellen Nakashima at the Washington Post broke the story that these agreements (the first iteration has already been negotiated with the United Kingdom) would enable foreign governments to wiretap any communication in the United States, so long as the target is not a U.S. person. In 2017, the Justice Department re-submitted the bill for Congressional review, but added a few changes: this time including broad language to allow the extraterritorial application of U.S. warrants outside the boundaries of the United States.
In September 2017, EFF, with a coalition of 20 other privacy advocates, sent a letter
to Congress opposing the Justice Department's revamped bill.
The executive agreement language in the CLOUD Act is nearly identical to the language in the DOJ's 2017 bill. None of EFF's concerns
have been addressed. The legislation still:
Includes a weak standard for review that does not rise to the protections of the warrant requirement under the Fourth Amendment.
Fails to require foreign law enforcement to seek individualized and prior judicial review.
Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.
Fails to place adequate limits on the category and severity of crimes for this type of agreement.
Fails to require notice on any level -- to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill
allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)
The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations.
But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation. This denial of privacy rights is unlike other U.S. privacy laws. For instance, the
Stored Communications Act
protects all members of the "public" from the unlawful disclosure of their personal communications.
An Expansion of U.S. Law Enforcement Capabilities
The CLOUD Act would give unlimited jurisdiction to U.S. law enforcement over any data controlled by a service provider, regardless of where the data is stored and who created it. This applies to content, metadata, and subscriber information --
meaning private messages and account details could be up for grabs. The breadth of such unilateral extraterritorial access creates a dangerous precedent for other countries who may want to access information stored outside their own borders,
including data stored in the United States.
EFF argued on this basis (among others) against unilateral U.S. law enforcement access to cross-border data in our Supreme Court brief in the Microsoft Ireland case.
When data crosses international borders, U.S. technology companies can find themselves caught in the middle between the conflicting data laws of different nations: one nation might use its criminal investigation laws to demand data located beyond
its borders, yet that same disclosure might violate the data privacy laws of the nation that hosts that data. Thus, U.S. technology companies lobbied for and received provisions in the CLOUD Act allowing them to move to quash or modify U.S. law
enforcement orders for extraterritorial data. The tech companies can quash a U.S. order when the order does not target a U.S. person and might conflict with a foreign government's laws. To do so, the company must object within 14 days, and undergo
a complex "comity" analysis -- a procedure where a U.S. court must balance the competing interests of the U.S. and foreign governments.
Failure to Support Mutual Assistance
Of course, there is another way to protect technology companies from this dilemma, which would also protect the privacy of technology users around the world: strengthen the existing international system of Mutual Legal Assistance Treaties (MLATs).
This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. The MLAT system encourages international cooperation.
It also advances data privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment's warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the
data privacy rules where the data is stored, which may include important " necessary and proportionate
" standards. Technology users are most protected when police, in the pursuit of cross-border data, must satisfy the privacy standards of both countries.
While there are concerns from law enforcement that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining.
The CLOUD Act raises dire implications for the international community, especially as the Council of Europe is beginning a process to review the MLAT system that has been supported for the last two decades by the Budapest Convention. Although Senator Hatch has in the past introduced legislation that would support the MLAT system, this new bill fails to include any provisions that would increase resources for the U.S. Department of Justice to tackle its backlog of MLAT requests, or otherwise improve the MLAT system.
A growing chorus of privacy groups in the United States opposes the CLOUD Act's broad expansion of U.S. and foreign law enforcement's unilateral powers over cross-border data. For example, Sharon Bradford Franklin (the former executive director of the U.S. Privacy and Civil Liberties Oversight Board) objects that the CLOUD Act will move law enforcement access capabilities "in the wrong direction, by sacrificing digital rights." Access Now also opposes the bill.
Sadly, some major U.S. technology companies
and legal scholars support the legislation. But, to set the record straight, the CLOUD Act is not a "good" bill. Nor does it do a "remarkable job of balancing these interests in ways that promise long-term gains in both privacy and security." Rather, the legislation reduces protections for the personal privacy of technology users in an attempt to mollify tensions between law enforcement and U.S. technology companies.
Legislation to protect the privacy of technology users from government snooping has long been overdue in the United States. But the CLOUD Act does the opposite, and privileges law enforcement at the expense of people's privacy. EFF strongly
opposes the bill. Now is the time to strengthen the MLAT system, not undermine it.
Fake celebrity porn has been treated as a bit of fun, but what about the wider issue of being able to easily fake videos? Perhaps 'evidence' supporting #MeToo accusations, or a bit of fun with Donald Trump in Moscow.
US network TV is very strict about strong language, and the basic cable channels have generally followed suit. However, some of the later-night programming on basic cable has started to care less and less about tiptoeing around language.
In fact, SyFy and USA, both networks owned by NBC Universal, are now throwing caution to the wind and have been letting fly with 'fuck' since earlier this year.
Previously, swearing on SyFy and USA stuck to the guidelines laid out by the Federal Communications Commission, even though, as basic cable channels, their Standards and Practices divisions were not actually bound by those rules. In fact, the only thing holding basic cable networks back from more vulgar language is their advertisers, who traditionally don't like it.
To keep things clean, networks usually dip the audio of either the 'f' or the 'k' whenever fuck is said in an episode. But according to Buzzfeed, USA and SyFy have worked that all out: their stance now is that when language, 'fuck' specifically, is deemed important to the style or plot of a show, it is allowed. Such language results in a TV-MA rating so audiences know it's intended for mature audiences only.
However, basic cable channels have started to push the envelope. The word shit has been thrown around a lot more on networks like FX, AMC, and Comedy Central. The latter was even the first to bring uncensored usage of fuck to basic cable by
creating their late night programming block called The Secret Stash, which began with the airing of the R-rated film adaptation South Park: Bigger, Longer & Uncut. They don't have that block anymore, but their late night programming still airs
the uncensored versions of movies and stand-up specials.
Fans of The Magicians on SyFy might have already noticed this change. Ever since the third season premiered on SyFy back in January, they've been dropping f-bombs uncensored.
No doubt the US moralist campaigners will be reaching for their megaphones.
Illegal content and terrorist propaganda are still spreading rapidly online in the European Union -- just not on mainstream platforms, new analysis shows.
Twitter, Google and Facebook all play by EU rules when it comes to illegal content, namely hate speech and terrorist propaganda, policing their sites voluntarily.
But with increased scrutiny on mainstream sites, alt-right and terrorist sympathizers are flocking to niche platforms where illegal content is shared freely, security experts and anti-extremism activists say.
Smartphones rule our lives. Having information at our fingertips is the height of convenience. They tell us all
sorts of things, but the information we see and receive on our smartphones is just a fraction of the data they generate. By tracking and monitoring our behaviour and activities, smartphones build a digital profile of shockingly intimate
information about our personal lives.
These records aren’t just a log of our activities. The digital profiles they create are
traded between companies
and used to make inferences and decisions that affect the opportunities open to us and our lives. What’s more, this typically happens without our knowledge, consent or control.
New and sophisticated methods built into smartphones make it easy to track and monitor our behaviour. A vast amount of information can be collected from our smartphones, both when being actively used and while running in the background. This
information can include our location, internet search history, communications, social media activity, finances and biometric data such as fingerprints or facial features. It can also include metadata – information about the data – such as the time
and recipient of a text message.
Each type of data can reveal something about our interests and preferences, views, hobbies and social interactions. For example, a study conducted by MIT demonstrated how
email metadata can be used to map our lives
, showing the changing dynamics of our professional and personal networks. This data can be used to infer personal information including a person’s background, religion or beliefs, political views, sexual orientation and gender identity, social
connections, or health. For example, it is possible to deduce our specific health conditions
simply by connecting the dots between a series of phone calls.
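The MIT finding above can be illustrated with a toy example: even a handful of (sender, recipient, timestamp) records, with no message content at all, begins to map someone's relationships. All names and records below are invented for illustration.

```python
# Toy example: bare communication metadata -- (sender, recipient, timestamp)
# tuples with no message content -- already sketches a social graph.
from collections import Counter
from datetime import datetime

metadata = [
    ("alice", "bob",   "2018-01-03 09:14"),
    ("alice", "bob",   "2018-01-03 22:41"),
    ("alice", "carol", "2018-01-04 11:02"),
    ("alice", "bob",   "2018-01-05 23:58"),
    ("alice", "dan",   "2018-01-05 10:30"),
]

# Who does Alice contact most often?
contacts = Counter(recipient for _, recipient, _ in metadata)
print(contacts.most_common(1))   # -> [('bob', 3)]

# *When* she makes contact matters too: late-night messages alone
# hint at the closeness of a relationship.
late_night = [
    recipient for _, recipient, ts in metadata
    if datetime.strptime(ts, "%Y-%m-%d %H:%M").hour >= 22
]
print(late_night)   # -> ['bob', 'bob']
```

Scaled up to months of records, this kind of counting is exactly what turns "just metadata" into a detailed picture of professional and personal networks.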
Different types of data can be consolidated and linked to build a comprehensive profile of us. Companies that buy and sell data – data brokers – already do this. They collect and combine billions of data elements about people to make inferences about them. These inferences may seem innocuous but can
reveal sensitive information
such as ethnicity, income levels, educational attainment, marital status, and family composition.
A recent study found that seven in ten smartphone apps share data
with third-party tracking companies like Google Analytics. Data from numerous apps can be linked within a smartphone to build this more detailed picture of us, even if permissions for individual apps are granted separately. Effectively,
smartphones can be converted into surveillance devices.
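The cross-app linking described above amounts to a simple join on a shared identifier, such as a device's advertising ID, visible to any tracker embedded in multiple apps. Everything in this sketch (the apps, field names and IDs) is hypothetical:

```python
# Sketch of how records from separate apps can be linked into one profile
# via a shared device identifier. The data and field names are invented.
fitness_app = [
    {"ad_id": "dev-42", "home_area": "Greenfield", "avg_run_km": 5.2},
]
shopping_app = [
    {"ad_id": "dev-42", "recent_search": "payday loan"},
    {"ad_id": "dev-77", "recent_search": "garden furniture"},
]

# A tracker embedded in both apps sees both streams and can join them,
# even though each app was granted its permissions separately.
profiles = {}
for record in fitness_app + shopping_app:
    profiles.setdefault(record["ad_id"], {}).update(
        {k: v for k, v in record.items() if k != "ad_id"}
    )

print(profiles["dev-42"])
# -> {'home_area': 'Greenfield', 'avg_run_km': 5.2, 'recent_search': 'payday loan'}
```

The point is that no single app needs to know everything: the linkage happens downstream, at the tracker that sits inside many apps at once.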
The result is the creation and amalgamation of digital footprints that provide in-depth knowledge about your life. The most obvious reason for companies collecting information about individuals is for profit, to deliver targeted advertising and
personalised services. Some targeted ads, while perhaps creepy, aren’t necessarily a problem, such as an ad for the new trainers you have been eyeing up.
But targeted advertising based on our smartphone data can have real impacts on livelihoods and well-being, beyond influencing purchasing habits. For example, people in financial difficulty might be
targeted for ads for payday loans
. They might use these loans to pay for unexpected expenses
, such as medical bills, car maintenance or court fees, but could also rely on them for
recurring living costs
such as rent and utility bills. People in financially vulnerable situations can then become trapped in a cycle of debt as they struggle to repay loans due to the high cost of credit.
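The "high cost of credit" is easy to see with a little arithmetic. The sketch below uses a hypothetical 0.8%-per-day charge on a £200 loan; the figures are illustrative, not taken from the article (amounts are in pence to keep the arithmetic exact):

```python
# Illustration: why short-term, high-cost credit mounts up fast.
# Figures are hypothetical, loosely modelled on a 0.8%-per-day charge.
principal_pence = 20_000                      # a £200 loan
daily_charge = principal_pence * 8 // 1000    # 0.8% of principal per day

def charges_after(days):
    """Total (non-compounding) charges after `days` days, in pence."""
    return daily_charge * days

for days in (30, 60, 90):
    print(f"{days} days: £{charges_after(days) / 100:.2f} in charges")
# 30 days: £48.00 -- already 24% of the amount borrowed
# 90 days: £144.00 -- 72% of the amount borrowed
```

A borrower who keeps rolling the loan over to cover rent quickly owes a large fraction of the original sum in charges alone, which is the trap the paragraph describes.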
Targeted advertising can also enable companies to discriminate against people and deny them an equal chance of accessing basic human rights, such as housing and employment. Race is not explicitly included in Facebook’s basic profile information,
but a user’s “ethnic affinity” can be worked out based on pages they have liked or engaged with. Investigative journalists from ProPublica found that it is possible to exclude those who match certain ethnic affinities
from housing ads
, and certain age groups from job ads.
This is different to traditional advertising in print and broadcast media, which although targeted is not exclusive. Anyone can still buy a copy of a newspaper, even if they are not the typical reader. Targeted online advertising can completely
exclude some people from information without them ever knowing. This is a particular problem because the internet, and social media especially, is now such a common source of information.
Social media data can also be used to calculate creditworthiness
, despite its dubious relevance. Indicators such as the level of sophistication in a user’s language on social media, and their friends’ loan repayment histories can now be used for credit checks. This can have a direct impact on the fees and
interest rates charged on loans, the ability to buy a house, and even employment prospects.
There’s a similar risk with payment and shopping apps. In China, the government has announced plans
to combine data about personal expenditure with official records, such as tax returns and driving offences. This initiative, which is being led by both the government and companies, is
currently in the pilot stage
. When fully operational, it will produce a social credit score
that rates an individual citizen’s trustworthiness. These ratings can then be used to issue rewards or penalties, such as privileges in loan applications or limits on career progression.
These possibilities are not distant or hypothetical – they exist now. Smartphones are
effectively surveillance devices
, and everyone who uses them is exposed to these risks. What’s more, it is impossible to anticipate and detect the full range of ways smartphone data is collected and used, and to demonstrate the full scale of its impact. What we know could be
just the beginning.
Government outlines next steps to make the UK the safest place to be online
The Prime Minister has announced plans to review laws and make sure that what is illegal offline is illegal online as the Government marks Safer Internet Day.
The Law Commission will launch a review of current legislation on offensive online communications to ensure that laws are up to date with technology.
As set out in the Internet Safety Strategy Green Paper
, the Government is clear that abusive and threatening behaviour online is totally unacceptable. This work will determine whether laws are effective enough in ensuring parity between the treatment of offensive behaviour that happens offline and online.
The Prime Minister has also announced:
That the Government will introduce a comprehensive new social media code of practice this year, setting out clearly the minimum expectations on social media companies
The introduction of an annual internet safety transparency report - providing UK data on offensive online content and what action is being taken to remove it.
Other announcements made today by Secretary of State for Digital, Culture, Media and Sport (DCMS) Matt Hancock include:
A new online safety guide
for those working with children, including school leaders and teachers, to prepare young people for digital life
A commitment from major online platforms including Google, Facebook and Twitter to put in place specific support during election campaigns to ensure abusive content can be dealt with quickly -- and that they will provide advice and guidance to
Parliamentary candidates on how to remain safe and secure online
DCMS Secretary of State Matt Hancock said:
We want to make the UK the safest place in the world to be online and having listened to the views of parents, communities and industry, we are delivering on the ambitions set out in our Internet Safety Strategy.
Not only are we seeing if the law needs updating to better tackle online harms, we are moving forward with our plans for online platforms to have tailored protections in place - giving the UK public standards of internet safety unparalleled
anywhere else in the world.
Law Commissioner Professor David Ormerod QC said:
There are laws in place to stop abuse but we've moved on from the age of green ink and poison pens. The digital world throws up new questions and we need to make sure that the law is robust and flexible enough to answer them.
If we are to be safe both on and off line, the criminal law must offer appropriate protection in both spaces. By studying the law and identifying any problems we can give government the full picture as it works to make the UK the safest place to be online.
The latest announcements follow the publication of the Government's Internet Safety Strategy Green Paper
last year which outlined plans for a social media code of practice. The aim is to prevent abusive behaviour online, introduce more effective reporting mechanisms to tackle bullying or harmful content, and give better guidance for users to identify
and report illegal content. The Government will be outlining further steps on the strategy, including more detail on the code of practice and transparency reports, in the spring.
To support this work, people working with children including teachers and school leaders will be given a new guide for online safety, to help educate young people in safe internet use. Developed by the UK Council for Child Internet Safety (UKCCIS), the toolkit describes the knowledge and skills for staying safe online that children and young people should have at different stages of their lives.
Major online platforms including Google, Facebook and Twitter have also agreed to take forward a recommendation from the Committee on Standards in Public Life (CSPL) to provide specific support for Parliamentary candidates so that they can remain
safe and secure while on these sites during election campaigns. These are important steps in safeguarding the free and open elections which are a key part of our democracy.
Included in the Law Commission's scope for their review will be the Malicious Communications Act and the Communications Act. It will consider whether difficult concepts need to be reconsidered in the light of technological change - for example,
whether the definition of who a 'sender' is needs to be updated.
The Government will bring forward an Annual Internet Safety Transparency report, as proposed in our Internet Safety Strategy green paper. The reporting will show:
the amount of harmful content reported to companies
the volume and proportion of this material that is taken down
how social media companies are handling and responding to complaints
how each online platform moderates harmful and abusive behaviour and the policies they have in place to tackle it.
Annual reporting will help to set baselines against which to benchmark companies' progress, and encourage the sharing of best practice between companies.
The new social media code of practice will outline standards and norms expected from online platforms. It will cover:
The development, enforcement and review of robust community guidelines for the content uploaded by users and their conduct online
The prevention of abusive behaviour online and the misuse of social media platforms -- including action to identify and stop users who are persistently abusing services
The reporting mechanisms that companies have in place for inappropriate, bullying and harmful content, and ensuring they have clear policies and performance metrics for taking this content down
The guidance social media companies offer to help users identify illegal content and contact online, and advise them on how to report it to the authorities, to ensure this is as clear as possible
The policies and practices companies apply around privacy issues.
The UK Prime Minister's proposals for possible new laws to stop intimidation against politicians have the potential to prevent legal
protests and free speech that are at the core of our democracy, says Index on Censorship. One hundred years after the suffragette demonstrations won the right for women to have the vote for the first time, a law that potentially silences angry
voices calling for change would be a retrograde step.
No one should be threatened with violence, or subjected to violence, for doing their job, said Index chief executive Jodie Ginsberg. However, the UK already has a host of laws dealing with harassment of individuals both off and online that cover
the kind of abuse politicians receive on social media and elsewhere. A loosely defined offence of 'intimidation' could cover a raft of perfectly legitimate criticism of political candidates and politicians -- including public protest.
Germany is looking into imposing
restrictions on loot boxes in videogames, according to Welt. A study by the University of Hamburg has found that elements of gambling are becoming increasingly common in videogames. It's an important part of the game industry's business model, but
the chairman of the Youth Protection Commission of the State Media Authorities warned that it may violate laws against promoting gambling to children and adolescents.
The Youth Protection Commission will render its decision on loot boxes in March.
In Sweden, Ardalan Shekarabi, the minister of civil affairs, is concerned about making sure Swedish consumer protection laws apply across the board when it comes to gaming. Shekarabi admits that loot boxes are like gambling, but has asked Swedish authorities to consider whether that is what they should actually be classified as. The idea is to have legislation ready by January of next year to ensure Swedish gamers don't have to worry about a transaction falling outside the nation's consumer protection laws in the event something goes south.
Greater transparency for users around news broadcasters
Today we will start rolling out notices below videos uploaded by news broadcasters that receive some level of government or public funding.
Our goal is to equip users with additional information to help them better understand the sources of news content that they choose to watch on YouTube.
We're rolling out this feature to viewers in the U.S. for now, and we don't expect it to be perfect. Users and publishers can give us feedback through the send feedback form. We plan to improve and expand the feature over time.
The notice will appear below the video, but above the video's title, and include a link to Wikipedia so viewers can learn more about the news broadcaster.