US singer Katy Perry has become the latest artist to be banned from China.
The indefinite ban is apparently due to her wearing a sunflower dress at her 2015 concert in the Taiwanese capital, Taipei. The sunflower has become a symbol of the anti-China movement in Taiwan. At the same concert, the singer also draped herself in a Taiwan flag.
The singer wore the same dress when performing a little later in Shanghai, and so has ended up on China's never-again list.
Google is escalating its campaign of internet censorship, announcing that it will expand its workforce of human censors to over 10,000. The
censors' primary focus will be videos and other content on YouTube, but they will also work across Google to censor content and to train its automated systems, which remove videos four times faster than human reviewers.
Human censors have already reviewed over 2 million videos since June. YouTube has already removed over 150,000 videos, 50 percent of which were removed within two hours of upload. The company is working to accelerate the rate of takedown through
machine learning trained on this manual censorship.
YouTube CEO Susan Wojcicki explained the move in an official blog post:
Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly
2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases
shutting down comments altogether. In the last few weeks we've used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with NCMEC, the IWF,
and other child safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies.
We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.
At the same time, we are expanding the network of academics, industry groups and subject matter experts who we can learn from and support to help us better understand emerging issues.
We will use our cutting-edge machine learning more widely to allow us to quickly and efficiently remove content that violates our guidelines. In June we deployed this technology to flag violent extremist content for human review and we've seen some positive results:
Since June we have removed over 150,000 videos for violent extremism.
Machine learning is helping our human reviewers remove nearly five times as many videos as they were previously.
Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms.
Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly half of it in two hours and we continue to accelerate that speed.
Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.
The European Commission has joined the list of organisations calling on the likes of Google, Facebook and Twitter to
do more to remove extremist content - or face further legislation.
EU home affairs commissioner Dimitris Avramopoulos warned that the internet is the real battlefield of 21st-century terrorism. He said most of the recent terrorist attackers had never travelled to Syria or Iraq, but most of them had been influenced, groomed
and recruited to terrorism on the internet.
Avramopoulos said he believed it was feasible to reduce the time it takes to remove content to a few hours. There is a lot of room for improvement, for this cooperation to produce even better results.
Avramopoulos also said he thought it was worthwhile to harness artificial intelligence to complete the task. You know... like Facebook censoring Robin Redbreast Christmas cards because the word 'breast' appeared in filenames.
The Commission said it would make a decision by May next year on whether additional measures -- including legislation -- are required in order to better address the problem of illegal content on the internet.
Charlie Pearce has been convicted of attempted murder. He was obsessed with sexually violent images when he raped and bludgeoned his victim on his
17th birthday, leaving her for dead.
Feminists have used the case to call for an extension of Britain's porn censorship laws, about violent porn in particular, and of course for a wider ban on porn. Sarah Green, co-director of the End Violence Against Women Coalition, said:
This case is extremely disturbing and the age of the offender should alarm us all. The evidence about his searches for online porn before the attack tells us that we urgently need public discussion about the contents of contemporary online
pornography, its accessibility and what is known about the way it influences those who use it.
It is currently a criminal offence in England and Wales to possess pornographic material which is grossly offensive, disgusting or otherwise obscene and explicitly and realistically depicts life threatening and serious injury.
However pornographic material that is obviously scripted and not realistic is legal. Feminists claim the vast majority of images depicting rape are therefore lawful to possess.
Back in March, Australia shelved plans to extend its copyright safe harbour provisions to services such as Google and
Facebook. Now, following consultations with the entertainment industries, the government has revealed it will indeed exclude such platforms from safe harbour provisions.
Services such as Google, Facebook and YouTube now face massive legal uncertainty as they themselves can be held responsible for copyright infringing posts by users. The logical result would be that the companies will have to check every post
before upload. The vast quantity of posts to check would make this an economically unviable option.
Proposed amendments to the Copyright Act earlier this year would have seen enhanced safe harbour protections for such platforms, but they were withdrawn at the eleventh hour due to lobbying by media companies. Such companies accuse platforms like
YouTube of exploiting safe harbour provisions in the US and Europe, which force copyright holders into an expensive battle to have infringing content taken down.
Communications Minister Mitch Fifield has confirmed the exclusions, so now it is up to Google and Facebook to consider how they can operate under this law.
Iran's telecommunications minister says that his ministry wants to customize Internet blocking based on users' occupation, age, and other factors.
The attorney general's office has conditionally agreed with this plan, Minister Mohammad Javad Azari Jahromi announced on December 4.
Without providing any details, he said his ministry had reviewed suggestions made by the attorney general and prepared appropriate technical responses. He expressed hope that the office would give its final approval for the implementation of the plan.
Despite the regime's extensive efforts to censor the Internet, Iranian users currently get around the restrictions by using anti-filtering programs or virtual private networks.
The 15:17 to Paris is a 2018 US drama by Clint Eastwood.
Starring Jenna Fischer, Judy Greer and Jaleel White.
American soldiers discover a terrorist plot on a Paris-bound train.
The Warner Bros film was submitted to the MPAA in December 2017 and was rated R for a sequence of violence and bloody images. The distributors are now appealing to the CARA Appeals Board, presumably seeking a PG-13 rating.
Egyptian singer Shyma has been arrested on suspicion of incitement to debauchery over her new video for song Andy Zoroof (I Have Problems), which authorities considered to be too daring and suggestive.
If convicted, the singer faces a one-year prison sentence, and in the meantime she is being held in custody.
At a court hearing where the singer's detention was extended by a further seven days, the singer stated she didn't know her video would cause such controversy and was acting according to the video director's requests.
Additionally, the Music Syndicate has decided to withdraw the singer's annual licence, leaving her unable to perform and earn a living as a singer. The union also claimed that her video was pornographic and harmed community values.
The video, which sparked outrage in the country, features the singer in a classroom in front of male students, licking an apple, slowly unpeeling a banana, eating it and pouring milk on it, and, worst of all, pulling her bra strap off her shoulder.
Google makes its internal processes difficult to track by design, but the author of a report by Karlaplan states that these changes are fairly
recent, suspected to have been implemented on the 30th of August, with the changes only being discovered in late October.
However, until the publication of this document, little other than anecdotal evidence had been presented alongside complaints from YouTube content creators.
Through extensive analysis of the YouTube Data API and other sources, Karlaplan found that YouTube tags demonetized videos according to both severity and type of sensitive content -- neither of which is transparent to the uploader.
The report also notes that videos are more likely to be hidden from viewers if their likely viewership is low, perhaps because higher-viewership videos are more likely to be appealed, or more likely to be spotted as examples of censorship and hence
generate bad publicity for Google.
Google have published an information page that is quite useful in detailing which videos get censored. Google outlines two levels of sensitivity that advertisers can select when not wanting to be associated with sensitive content. Google explains:
While the Standard content filter excludes the most inappropriate content, it doesn't exclude everything that a particular advertiser may find objectionable. The Sensitive content categories allow you to opt out of additional content that many
advertisers find inappropriate. For example:
Tragedy and conflict
Standard: Excludes graphic footage of combat or war
Sensitive: Excludes the above plus footage of soldiers marching with weapons
Sensitive social issues
Standard: Excludes videos intended to elicit a response about controversial issues
Sensitive: Excludes the above plus news commentary about controversial issues
Sexually suggestive content
Standard: Excludes videos about sex or sexual products
Sensitive: Excludes the above plus music videos with suggestive themes
Sensational and shocking
Standard: Excludes videos of disasters or accidents that show casualties or death
Sensitive: Excludes the above plus videos of moderate disasters or accidents that show minimal casualties or harm
Profanity and rough language
Standard: Excludes videos with frequent use of profanity
Sensitive: Excludes the above plus videos with profanity that has been bleeped out
Cloudflare's decision to ban the Daily Stormer has led to an increase in censorship requests. Since August, Cloudflare has received more than 7,000 requests from across the political spectrum for removal of content.
Senior police officers are to lose the power to self-authorise access to personal phone and web browsing records under a series of late changes
to the snooper's charter law proposed by ministers in an attempt to comply with a European court ruling on Britain's mass surveillance powers.
A Home Office consultation paper published on Thursday also makes clear that the 250,000 requests each year for access to personal communications data by the police and other public bodies will in future be excluded for investigations into minor
crimes that carry a prison sentence of less than six months.
But the government says the 2016 European court of justice (ECJ) ruling in a case brought by Labour's deputy leader, Tom Watson , initially with David Davis, now the Brexit secretary, does not apply to the retention or acquisition of personal
phone, email, web history or other communications data by national security organisations such as GCHQ, MI6 or MI5, claiming that national security is outside the scope of EU law.
The Open Rights Group has been campaigning hard on issues of liberty and privacy and writes:
This is a major victory for ORG, although one with dangers. The government has conceded that independent authorisation is necessary for communications data requests, but refused to budge on retained data and is pushing ahead with the Request
Filter, to enable rapid interrogation and analysis of the stored communications data.
Adding independent authorisation for communications data requests will make the police more effective, as corruption and abuse will be harder. It will improve operational effectiveness even if less data is used during investigations, and trust in
the police should improve.
Nevertheless the government has disregarded many key elements of the judgment:
It isn't going to reduce the amount of data retained
It won't notify people whose data is used during investigations
It won't keep data within the EU; instead it will continue to transfer it, presumably to the USA
The Home Office has opted for a six month sentence definition of serious crime rather than the Lords' definition of crimes capable of sentences of at least one year.
These are clear evasions and abrogations of the judgment. The mission of the Home Office is to uphold the rule of law. By failing to do what the courts tell them, the Home Office is undermining the very essence of the rule of law.
If the Home Office won't do what the highest courts tell it to do, why should anybody else? By picking and choosing the laws they are willing to care about, they are playing with fire.
There was one final surprise. The Code of Practice covers the operation of the Request Filter. Yet again we are told that this police search engine is a privacy safeguard. We will now run through the code in fine detail to see if any such
safeguards are there. On a first glance, there are not.
If the Home Office genuinely believes the Request Filter is a benign tool, it must rewrite this section to make abundantly clear that it is not a mini version of X-Keyscore (the NSA/GCHQ tool to trawl their databases of people linked to
their email and web visits) and does not operate as a facility to link and search the vast quantities of retained and collected communications data.
Hawaii State Representative Sean Quinlan has advocated for self-regulation of loot boxes by the video game industry whilst also
suggesting that such games should carry a 21+ age rating.
He said that ultimately, it's best for the industry to self-police. The ideal solution would be for the game industry to stop having gambling or gambling-like mechanics in games that are marketed to kids... but he believes games
makers should be held accountable. The ESRB would need to enforce higher-grade ratings and other labels to distinguish games that rely on predatory monetization. As an example, he said that the ESRB could say that if a game has loot crates, it
gets a 21-plus rating.
The Entertainment Software Association is proving resistant, however. Its response ran along the same lines as many publishers', asserting that loot boxes are a voluntary feature and that gamers make their own decisions about purchases.
The Russian government is currently discussing plans to build its own independent internet infrastructure that will be used
by BRICS member states: Brazil, Russia, India, China, and South Africa.
The Russian Security Council has today formally asked the country's government to start the building of a global DNS system that Russia and fellow BRICS member states could use to take control of the internet as used within the BRICS countries.
Russia and fellow BRICS nations would have the option to flip a switch and move Internet traffic from today's main DNS system to their own private system. The states will then have absolute and direct control of sites to be blocked. Furthermore,
the alternative DNS system also allows oppressive regimes to deanonymize Tor traffic and hunt for dissidents, via an attack called DefecTor.
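At the protocol level there is nothing exotic about such a switch: a DNS lookup is just a small packet sent to whichever resolver a device is configured to trust, and a state-run system simply answers those packets itself. As a rough illustration (the hostname and transaction id below are arbitrary examples, not anything from the Russian proposal), here is how a minimal DNS query packet is built:

```python
import struct

def build_dns_query(hostname: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query packet (RFC 1035 wire format)."""
    # Header: transaction id, flags (recursion desired), QDCOUNT=1,
    # and zero answer/authority/additional records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question section: the name as length-prefixed labels, terminated
    # by a zero byte, followed by QTYPE=A (1) and QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

# The packet itself says nothing about who answers it; send it to a
# different resolver and you may get a different internet.
packet = build_dns_query("example.com")
```

Whichever server receives this packet decides what address, if anything, comes back; control of the resolver infrastructure is therefore control over which sites effectively exist for its users.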
Russia, China, and many other countries have criticized the US for hoarding control over the domain naming system (DNS), a position they claim has allowed the US to intercept and tap global Internet traffic. Last year, the US handed over control
over the DNS system to ICANN , an independent organization. While Russia and China welcomed the move, they actually wanted the DNS system to be controlled by the United Nations' International Telecommunication Union. This is because the two
countries have more power in UN matters than control over an NGO, like ICANN.
New Zealand was clearly a little embarrassed over the banning of a book for young people, Ted Dawe's award-winning novel Into the River, when campaigners called for a review of the book's age classification.
When an interim restriction order was issued in 2015 an anomaly in the law meant there were only two options - leave it unrestricted or ban it entirely until the board of review met. The book was banned for six weeks until the interim order was
reviewed and the restriction was lifted.
A new bill has now been passed by the New Zealand parliament that gives the censor board the ability to issue interim orders based on age or specified classes of persons.
National MP Chris Bishop drafted the bill, and in the case of Into the River it would have meant the book could have reverted to its R14 status rather than being banned outright, Bishop said after his bill had been passed unanimously. He added:
It is clear that Into the River should not have been banned - this small but useful change will help ensure such a situation doesn't happen again.
54% of 12- to 15-year-olds use social media platforms such as Facebook and Twitter to access online news, making it the second most popular
source of news after television (62%).
The news that children read via social media is provided by third-party websites. While some of these may be reputable news organisations, others may not.
73% of online tweens are aware of the concept of 'fake news', and four in ten (39%) say they have seen a fake news story online or on social media.
The findings are from Ofcom's Children and Parents Media Use and Attitudes Report 2017. This year, the report examines for the first time how children aged 12 to 15 consume news and online content.
Filtering fake news
The vast majority of 12-15s who follow news on social media are questioning the content they see. Almost nine in ten (86%) say they would make at least one practical attempt to check whether a social media news story is true or false.
The main approaches older children say they would take include:
seeing if the news story appears elsewhere (48% of children who follow news on social media would do this);
reading comments after the news report in a bid to verify its authenticity (39%);
checking whether the organisation behind it is one they trust (26%); and
assessing the professional quality of the article (20%).
Some 63% of 12- to 15-year-olds who are aware of fake news are prepared to do something about it, with 35% saying they would tell their parents or other family member. Meanwhile, 18% would leave a comment saying they thought the news story was
fake; and 14% would report the content to the social media website directly.
Children's online lives
More children are using the internet than ever before. Nine in ten (92% of 5- to 15-year-olds) are online in 2017 -- up from 87% last year.
More than half of pre-schoolers (53% of 3-4s) and 79% of 5-7s are online -- a year-on-year increase of 12 percentage points for both these age groups.
Much of this growth is driven by the increased use of tablets: 65% of 3-4s, and 75% of 5-7s now use these devices at home -- up from 55% and 67% respectively in 2016.
Children's social media preferences have also shifted over recent years. In 2014, 69% of 12-15s had a social media profile, and most of these (66%) said their main profile was on Facebook. The number of 12-15s with a profile now stands at 74%,
while the number of these who say Facebook is their main profile has dropped to 40%.
Though most social media platforms require users to be 13 or over, they are very popular with younger children. More than a quarter (28%) of 10-year-olds have a social media profile, rising to around half of children aged 11 or 12 (46% and 51% respectively).
Negative online experiences
Half of children (49%) aged 12 to 15 who use the internet say they 'never' see hateful content online.
But the proportion of children who have seen hateful content has increased this year, from 34% in 2016 to 45% in 2017.
More than a third (37%) of children who saw this type of content took some action. The most common response was to report it to the website in question (17%). Other steps included adding a counter-comment to say they thought it was wrong (13%),
and blocking the person who shared or made the hateful comments (12%).
ISP Website blocking
Use of network level filters increased again this year. Nearly two in five parents of 3-4s and 5-15s who have home broadband and whose child goes online use home network-level content filters, and this has increased for both groups since 2016,
part of a continuing upward trend.
Use of parental control software (software set up on a particular device, e.g. Net Nanny, McAfee Family Protection) has also increased among parents of 3-4s and 5-15s, to around three in ten.
More than nine in ten parents of 5-15s who use either of these tools consider them useful, and around three-quarters say they block the right amount of content.
One in five parents of 5-15s who use network-level filters say they think their child would be able to unset, bypass or override them, a higher proportion than in 2016. This is similar to the number of 12-15s who say they know how to do this, although fewer say they have ever done it (6%).
Twitter announced yesterday that it would begin removing verification badges for famous tweeters that it does not
approve of. Not for what is tweeted, but for offline behaviour Twitter does not like.
The key phrase in Twitter's policy update is this one: Reasons for removal may reflect behaviors on and off Twitter. Before yesterday, the rules explicitly applied only to behavior on Twitter. From now on, holders of verified badges will be held
accountable for their behavior in the real world as well. Twitter has promised further information about the new censorship policy in due course.
Many questions remain unanswered. What will the company's review consist of? How will it examine users' offline behavior? Will it simply respond to reports, or will it actively look for violations? Will it handle the work with its existing team,
or will it expand its trust and safety team?
Twitter has immediately rescinded blue tick verification from accounts belonging to far-right activists, including Jason Kessler, a US white supremacist, and Tommy Robinson, founder of the English Defence League.
Offsite Comment: Twitter has turned its back on free speech
The platform plans to exercise ideological control over its users.
A study by Princeton researchers came to light earlier this month, revealing that over 400 of the world's most popular websites
use the equivalent of hacking tools to spy on you without your knowledge or consent.
Using session replay scripts from third-party companies, websites are recording your every act, from mouse moves to clicks, to keylogging what you type, and extracting your personal info off the page. If you accidentally paste something into a
text field from your clipboard, like an address or password you didn't want to type out, the scripts can record, transmit, and store that, too.
What these sites are doing with this information, and how much they anonymize or secure it, is a crapshoot.
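No exploit is involved: an embedded third-party script simply subscribes to keyboard, mouse and input events in the page and transmits them to the replay provider. One way to spot such embeds is to scan a page's HTML for known replay-service script hosts. A minimal sketch in Python follows; the host list is illustrative, drawn from providers commonly named in this research, and is not the study's full list:

```python
import re

# A few third-party session-replay providers (illustrative, not exhaustive)
REPLAY_HOSTS = [
    "fullstory.com", "hotjar.com", "mc.yandex.ru",
    "sessioncam.com", "smartlook.com", "clicktale.net",
]

def find_replay_scripts(html: str) -> list:
    """Return the replay providers whose script hosts appear in the HTML."""
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.I)
    return sorted({host for host in REPLAY_HOSTS
                   for src in srcs if host in src})

page = '<script src="https://cdn.fullstory.com/s/fs.js"></script>'
print(find_replay_scripts(page))
```

A string match on script sources is crude (it misses first-party proxying and dynamically injected scripts, which the Princeton crawl handled more thoroughly), but it is enough to confirm the basic mechanism: the recording code is just another script tag on the page.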
Among top retail offenders recording your every move and mistake are Costco, Gap.com, Crate and Barrel, Old Navy, Toys R Us, Fandango, Adidas, Boots, Neiman Marcus, Nintendo, Nest, the Disney Store, and Petco.
Tech and security websites spying on users include HP.com, Norton, Lenovo, Intel, Autodesk, Windows, Kaspersky, Redhat.com, ESET.com, WP Engine, Logitech, Crunchbase, HPE.com (Hewlett Packard Enterprise), Akamai, Symantec, Comodo.com, and MongoDB.
Other sites you might recognize that are also using active session recording are RT.com, Xfinity, T-Mobile, Comcast, Sputnik News, iStockphoto, IHG (InterContinental Hotels), British Airways, NatWest, Western Union, FlyFrontier.com, Spreadshirt,
Deseret News, Bose, and Chevrolet.com.