Israel's Ministerial Committee for Legislation has unanimously approved a bill forcing Israeli ISPs to censor pornography by default.
Under the terms of the bill, users who want to opt in for adult content would be required to notify their service providers either in writing, by phone, or via the ISP's website.
The bill will now head to the Knesset, Israel's parliament, to start the process of legislative approval.
Critics say that aside from limiting freedom of information, the attempt to censor inappropriate content would likely sweep up similar but unrelated content, such as information on breast cancer and other educational material. In addition, critics said,
the need for users to notify providers in order to gain access to pornography is arguably a violation of privacy.
The bill was sponsored by Jewish Home MK Shuli Moalem-Refaeli, who noted that similar default blocking of adult content has been introduced in other Western countries, notably Britain.
Moalem said that ensuring non-pornographic sites are not filtered by accident would be a challenge to overcome as the bill is fine-tuned before approval by the Knesset. She noted that sites which contain both adult-oriented and family-suitable
material also present difficulties to censoring systems.
The current wording of the Digital Economy Bill punishes foreign adult websites that do not implement age verification by cutting them off from payments and advertising. It does not, at the moment, provide for such websites to be blocked by ISPs.
Open Rights Group reports on a clamour by censorial MPs to table an amendment giving powers to block non-complying websites. I suspect that in reality the security services would not be very appreciative, as mass adoption of VPNs and the like
would hinder surveillance of criminals and terrorists. The Open Rights Group reports:
Now we want censorship: porn controls in the Digital Economy Bill are running out of control
The government's proposal for age verification to access pornography is running out of control. MPs have worked out that attempts to verify adults' ages won't stop children from accessing other pornographic websites: so their proposed answer is
to start censoring these websites.
This only serves to illustrate the problems with the AV proposal. Age verification was always likely to be accompanied by calls to block "non-compliant" overseas websites, and also to be extended to more and more categories of legal content.
We have to draw a line. Child protection is very important, but let's try to place this policy in some context:
Take up of ISP filters is around 10-30% depending on ISP, so roughly in line with expectations and already restricting content in the majority of households with children (other measures may be restricting access in other cases).
3% of children aged 9-12 are believed to have accessed inappropriate material
Pornography can and will be circulated by young people by email, portable media and private messaging systems
The most effective protective measures are likely to be to help young people understand and regulate their own behaviour through education, which the government refuses to make compulsory
MPs have to ask whether infringing on the right of the entire UK population to receive and impart legal material is a proportionate and effective response to the challenges they wish to address.
Censorship is an extreme response that should be reserved for the very worst, most harmful kinds of unlawful material: it impacts not just the publisher, but the reader. Yet this is supposed to be a punishment targeted at the publishers, in
order to persuade the sites to "comply".
If website blocking was to be rolled out to enforce AV compliance, then the regulator would be forced to consider whether to block a handful of websites, and fail to "resolve" the accessibility of pornography, or else to try to censor
thousands of websites, with the attendant administrative burden and increasing likelihood of errors.
You may ask: how likely is this to become law? Right now, Labour seem to be considering this approach as quite reasonable. If Labour did support these motions in a vote, together with a number of Conservative rebels, this amendment could easily
be added to the Bill.
Another area where the Digital Economy Bill is running out of control is the measures targeting services that "help" pornography publishers. The Bill tries to give duties to "ancillary services" such as card payment providers
or advertising networks, to stop the services from making money from UK customers. However, the term is vague. They are defined as someone who:
provide[s], in the course of a business, services which enable or facilitate the making available of pornographic material or prohibited material on the internet by the [publisher]
Ancillary services could include website hosts, search engines, DNS services, web designers, hosted script libraries, furniture suppliers ... this needs restriction just for the sake of some basic legal certainty.
Further problems are arising for services such as Twitter, which operate on the assumption that adults can use them to circulate whatever they like, including pornography. It is unclear if or when they might be caught by the provisions. They are
also potentially "ancillary providers" who could be forced to stop "supplying" their service to pornographers for UK customers. They might therefore be forced to block adult content accounts to UK adults, with or without age verification.
The underlying problem starts with the strategy to control access to widely used and legal content through legislative measures. This is not a sane way to proceed. It has led, and will lead, to further calls for control and censorship as the first
steps fail. More calls to "fix" the holes follow, and the UK ends up on a ratchet of increasing control. Nothing quite works, so more fixes are needed. The measures get increasingly disproportionate.
Website blocking needs to be opposed, and kept out of the Bill.
Proposed amendments to the UK's Digital Economy Bill have revealed a desire by some MPs to force search engines to tackle piracy. A new clause would require search engines to come to a voluntary arrangement with rightsholders, or face being
forced into one by the government.
Content owners regularly accuse companies such as Google and Bing of including infringing content in their search results, often on the initial pages following a search where exposure to the public is greatest.
In addition to having these pirate results demoted or removed entirely, content providers believe that results featuring genuine content should receive priority, to ensure that the legitimate market thrives.
At least in part, Google has complied with industry requests. Sites which receive the most takedown notices are demoted in results, while some legitimate content has been appearing higher. But of course, entertainment industry companies want more
-- and they might just get it.
Currently under discussion in Parliament is the Digital Economy Bill. The Bill appears to be broadening in scope and the role of search engines is now on the agenda, something which the BPI hinted at last week in comments to TorrentFreak. A new
clause titled Power to provide for a code of practice related to copyright infringement envisions a situation whereby search engines come to a voluntary agreement with rightsholders on how best to tackle piracy, or have one forced upon them.
The Secretary of State may by regulations make provision for a search engine to be required to adopt a code of practice concerning copyright infringement that complies with criteria specified in the regulations, the proposed clause reads.
The regulations may provide that if a search engine fails to adopt such a code of practice, any code of practice that is approved for the purposes of that search engine by the Secretary of State, or by a person designated by the Secretary of
State, has effect as a code of practice adopted by the search engine.
If the clause was adopted, the Secretary of State would also be granted powers to investigate disputes surrounding a search engine's compliance with any code, appoint a regulator, and/or impose financial penalties or other sanctions if
companies like Google fall short.
The Pirate Party in Iceland continues its shakeup of the local political arena. According to the latest polls the party now has a serious shot at taking part in the next Government coalition, with roughly 20% of all votes one week before the elections.
The first Pirate Party was founded in Sweden in 2006 by Rick Falkvinge, and the movement has scored some significant victories over the years, including a continuing presence in the European Parliament.
Iceland's Pirates have a great track record already, with three members in the national Parliament. However, more may join in the future as the party has added many new supporters in recent months. The Pirates have been
leading the polls for
most of the year and are currently neck-and-neck with the Social Democratic Alliance to become the largest party in the country.
This puts the Pirates in the unusual position of having to start thinking, for the first time in their history, about possible partners to form a coalition Government.
TorrentFreak spoke with Ásta Helgadóttir, Member of Parliament for the Icelandic Pirate Party, who says that the party is ready to bring the change many citizens are longing for. Despite the Pirate name, copyright issues are not central to their
plans. That said, they have spoken out against recent web-blocking efforts.
Iceland's ISPs have been ordered to
block access to 'infringing' sites such as The Pirate Bay, which the party sees as a step in the wrong direction. The party fears that these censorship efforts will lead to more stringent measures. Helgadóttir said:
These measures are not a solution and only exacerbate the problem. There needs to be a review of copyright law and how creators are compensated for their work.
In 2013 the Pirate Party came along. The freedom of information aspect attracted me--I'm very much against censorship.
One idea being mooted at the time was the blocking of porn sites in Iceland, which set alarm bells ringing for Ásta. "According to Icelandic law, pornography is illegal. It's a law from the 19th century, and it hasn't been enforced for fifteen
years now. Then the idea of building a pornography shield around Iceland came up. And I thought, No, you can't do that! It's censorship! And they were like, No, it's not censorship, we're thinking about the children!"
The Pirate Party is trying to infiltrate the system and change these 'heritage laws', because when you read a law, you have to understand the root of that law--when was it written, what was the context, and the culture. And now we're in the 21st
century, with the internet, which changes everything.
The parliamentary elections will take place next week, October 29.
The UK government has introduced an amendment to the Investigatory Powers Bill currently going through Parliament, to ensure that data retention orders cannot require ISPs to collect and retain third party data. The Home Office had
previously said that it didn't need powers to force ISPs to collect third party data, but until now had refused to provide guarantees in law.
Third party data is defined as communications data (sender, receiver, date, time etc.) for messages sent within a website, as opposed to messages sent by more direct methods such as email. It is obviously tricky for ISPs to try to decode
what is going on within websites, as messaging data formats are generally proprietary and, in the general case, simply not decipherable by ISPs.
The Government will therefore snoop on messages sent, for example via Facebook, by demanding the communication details from Facebook themselves.
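To make the distinction the Bill relies on concrete, here is a minimal sketch of "communications data" (metadata) versus message content. The field names and record layout are illustrative assumptions, not any statutory schema:

```python
# Illustrative only: field names are assumptions, not a statutory schema.
# "Communications data" is the metadata of a message; the body is content.

message = {
    "sender": "alice@example.com",
    "receiver": "bob@example.com",
    "date": "2016-10-24",
    "time": "14:03:22",
    "body": "Meet at the usual place.",  # content: NOT communications data
}

def communications_data(msg):
    """Strip the content, keeping only the metadata a retention order covers."""
    return {k: v for k, v in msg.items() if k != "body"}

record = communications_data(message)
print(record)  # sender, receiver, date and time only -- no body
```

For a message sent inside a website over HTTPS, the ISP sees only an encrypted stream, so even this metadata layer has to come from the platform itself, which is why the demand goes to Facebook rather than the ISP.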
Facebook's VPs Joel Kaplan and Justin Osofsky wrote in a blog:
In recent weeks, we have gotten continued feedback from our community and partners about our Community Standards and the kinds of images and stories permitted on Facebook. We are grateful for the input, and want to share an update on our approach.
Observing global standards for our community is complex. Whether an image is newsworthy or historically significant is highly subjective. Images of nudity or violence that are acceptable in one part of the world may be offensive -- or even
illegal -- in another. Respecting local norms and upholding global practices often come into conflict. And people often disagree about what standards should be in place to ensure a community that is both safe and open to expression.
In the weeks ahead, we're going to begin allowing more items that people find newsworthy, significant, or important to the public interest -- even if they might otherwise violate our standards. We will work with our community and partners to
explore exactly how to do this, both through new tools and approaches to enforcement. Our intent is to allow more images and stories without posing safety risks or showing graphic images to minors and others who do not want to see them.
As always, our goal is to channel our community's values, and to make sure our policies reflect our community's interests. We're looking forward to working closely with experts, publishers, journalists, photographers, law enforcement officials
and safety advocates about how to do better when it comes to the kinds of items we allow. And we're grateful for the counsel of so many people who are helping us try to get this right.
Facebook is notoriously terrible when it comes to censorship of the naked human body, especially parts of the female anatomy. So it's not surprising that a non-profit's breast cancer awareness video was taken down because it
featured stylised female nipples.
So the Swedish Cancer Society countered with a replacement ad, which featured square breasts instead of round ones. The organization posted the video earlier this week, but it was removed because, as Facebook explained, the:
Ad can not market sex products or services nor adults products or services.
The organization wrote up an open letter to Facebook, in which it introduced the shape-based compromise:
We understand that you have to have rules about the content published on your platform. But you must also understand that one of our main tasks is to disseminate important information about cancer -- in this case breast cancer.
After trying to meet your requirements for several days without success, we have now come up with a solution that will hopefully make you happy: Two pink squares! This can not possibly offend you, or anyone. Now we can continue to spread our
important breast school without upsetting you.
Facebook later apologized for its crap censorship rules being found out:
We apologize for the error and have let the advertiser know we are approving their ads.
China has proposed new restrictions for online gaming companies to implement. Major tech companies with significant presence in the region could have to undergo substantial operational changes, reports Dow Jones Business News.
The draft rules, posted online by the Chinese government on Sept. 30, would require online-game operators to lock out users under the age of 18 between the hours of midnight and 8 a.m. The rules would apply to all smart devices.
The regulation is vague as to whether companies would have to use Beijing-approved software. The country says it will support the development of web-filtering software to keep children safe online and will determine whether preexisting products
comply with the new requirements.
Along with the internet curfew would be a requirement for a number of websites to post warnings for content unsuitable for minors.
Censorship on the internet is rampant, with more than 60 countries engaging in state censorship. A Cambridge University research project is aiming to uncover the scale of this censorship, and how it affects users and publishers of information.
Germanpulse has published an interesting piece about German politicians expecting social media websites to pre-censor posts that the government doesn't like:
We have reported on the German government's war against social media giants Facebook, Twitter and Google many times over the last year as the country tries to rid the popular sites of any signs of hate speech. While the companies have made
attempts to appease government officials with stricter enforcement, each move is said to still not be enough. The question is: is Germany taking the fight too far?
Volker Kauder, a member of the CDU, spoke with Der Spiegel this week to say that the time for roundtables is over and that he has run out of patience, arguing that Facebook, Twitter and Google have failed and should pay 50,000 euro ($54,865) fines
for not providing a strict level of censorship.
All major social media sites do provide tools to report hate speech offenders, but Kauder isn't the only one to argue that the tool is ineffective.
Justice Minister Heiko Maas made a statement that only 46 percent of the comments were erased by Facebook, while a mere one percent were taken care of by Twitter.
Maas' solution is not much different from Kauder's, as he told Handelsblatt that the companies should face legal consequences.
Der Spiegel has also published an opinion piece showing a little exasperation with trying to get comments censored by Facebook.
In June, the national body made up of justice ministers from the 16 federal states in Germany launched a legislative initiative to introduce a law which, if passed, would require operators of Internet platforms to immediately disclose the
identity of users whose online actions are the subject of criminal proceedings. The law explicitly covers companies that are not based in Germany, but in fact do business here.
Justice Minister Maas must now introduce the draft law to Chancellor Merkel's cabinet, but he's hesitant out of fear of a backlash among a net community that still views Facebook as a symbol of Internet freedom. So far, he has done little that
goes beyond appeals. If he wanted to, however, Maas could push for a further tightening of the country's telecommunications law. All that would be needed is a clause stipulating that every Internet company doing business in Germany must
name one person within the firm, resident in the country, who could be held liable under German law.
Social media users who encourage flame wars or retweet the doxing (revealing identifying information with malicious intent) of others are set to be punished more severely by British prosecutors.
The Crown Prosecution Service (CPS)'s latest Guidelines on prosecuting cases involving communications sent via social media target doxing, online mobs, fake social media profiles and other social media misbehaviour.
Also included in the latest version of the guidance is a specific encouragement to prosecutors to charge those who egg on others to break social media speech laws. Those who encourage others to commit a communications offence may be charged
with encouraging an offence under the Serious Crime Act 2007, warns the guidance.
In a Kafka-esque twist, the guidance also includes this chilling line, discussing how prosecutors can prove the criminal offence of sending a grossly offensive message, under section 127 of the Communications Act 2003:
The offence is committed by sending the message. There is no requirement that any person sees the message or be offended by it.
Another nasty touch is that the CPS will allow victims to decide whether crimes are deemed to be 'hate crimes' and therefore attract more severe penalties. The CPS policy consultation defines race/religion hate crimes as follows:
Crimes involving hostility on the basis of race or religion
The reporting and prosecution of hate crime are shaped by two definitions; one is subjective and is based on the perception of the victim and the other is objective and relies on supporting evidence.
Both the subjective and objective definitions refer to hostility, not hatred. There is no statutory definition of hostility and the everyday or dictionary definition is applied, encompassing a broad spectrum of behaviour.
We have an agreed definition with the police for identifying and flagging cases involving hostility on the basis of race or religion. The joint definition is:
Any criminal offence which is perceived by the victim or any other person, to be motivated by a hostility or prejudice based on a person's race or religion or perceived race or religion.
The equivalent paragraph on disability hate crime adds an explanation of how the CPS has waved its hands and extended the scope:
This definition is wider than the statutory definition, to ensure we capture all relevant cases.
The guidance also encourages prosecutors to treat social media crimes committed against persons serving the public more seriously than nasty words directed against their fellow members of the public. Similarly, coordinated attacks by
different people should also attract greater prosecutorial attention.
Prosecution in all cases is said to be less likely if swift and effective action has been taken by the suspect and/or others, for example service providers, to remove the communication.
Australia's advert censor has upheld a complaint against Sony Pictures Australia over online advertising for animated comedy movie Sausage Party that a couple of viewers found 'offensive'.
The Advertising Standards Board (ASB) released the case reports of two separate complaints about the advertising, one appearing on Facebook and the other appearing on news.com.au.
The advertisement shows characters from the movie with dialogue including "fuck you up", "move your fucking ass", and "shit". The dialogue is not only spoken, but the words appear written on the screen in large letters.
The complainants told the ASB the advertisement was a pop-up, which they did not choose to open, and involved no warning of inappropriate language.
In a response to the complainants and the ASB, Sony claimed the advertisement was purchased programmatically and was not intended for viewing by people under the age of 15. In response to the complainant who saw the video on Facebook, Sony said,
Facebook requires everyone to be at least 13 years old before they can create an account. Sony claimed that due to the programmatic purchasing of the online advertisement, which is intended to limit the age groups that can view the ad, the
content did not breach the code.
However the ASB disagreed, believing the advertisement's use of the word fuck infringed the code of ethics, and upheld the complaints. The ASB also determined the ad's placement on Facebook would reach people under the age of 15, as the
website allows users to register once they turn 13.
A legal case against Facebook, which will involve a 14-year-old taking the company to court over naked images published on the social network, could open the floodgates for other civil claims, according to lawyers who work with victims of revenge porn.
Facebook's forthcoming trial centres on the claim that the company is liable for the publication of a naked picture of the girl, posted repeatedly on a shame page as an act of revenge.
The girl's lawyers say the photograph, which the girl's parents say was extracted from her through blackmail, was removed by Facebook several times after being reported, but it had not been permanently blocked.
A lawyer for Facebook argued the claim for damages should be dismissed, saying the company always took down the picture when it was notified. They pointed to a European directive that they claimed provided protection from having to monitor a vast
amount of online material.
Facebook currently actively scans every image uploaded on to the site, and uses PhotoDNA to block known child abuse images. Other potentially problematic images, such as those in revenge pornography cases, have to be reported and reviewed before they are taken down. But critics argue that if it has the technology to catch other photographs that cause distress, it should do more to protect users from repeated harassment.
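Hash-based blocking of known images works by comparing a robust fingerprint of each upload against a blocklist of fingerprints of previously identified images. PhotoDNA itself is proprietary, so the sketch below substitutes a toy "average hash" over an 8x8 grayscale grid; the function names, the grid size, and the matching threshold are illustrative assumptions, not Facebook's implementation:

```python
# Toy stand-in for robust image hashing (PhotoDNA is proprietary).
# An "average hash": one bit per pixel of a downscaled 8x8 grayscale image,
# set when the pixel is brighter than the image's mean brightness.

def average_hash(pixels):
    """pixels: 64 grayscale values (0-255) from a downscaled 8x8 image."""
    assert len(pixels) == 64
    mean = sum(pixels) / 64
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_blocked(pixels, blocklist, threshold=5):
    """Match against known-bad hashes, tolerating small distortions."""
    h = average_hash(pixels)
    return any(hamming(h, bad) <= threshold for bad in blocklist)

# A slightly brightened copy of a known image still hashes close enough
# to match, which exact byte-for-byte comparison would miss.
original = [10] * 32 + [200] * 32
variant  = [12] * 32 + [203] * 32
blocklist = {average_hash(original)}
print(is_blocked(variant, blocklist))  # → True
```

The design point the critics are making is that once such a blocklist exists, adding the hash of a reported revenge-porn image is technically straightforward; the obstacles Facebook cites are legal and procedural rather than computational.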
But there may be another reason Facebook is not removing images before they are reported. Under current EU law, social media sites are immune from liability for content as long as they react quickly to complaints, under a notice and takedown regime.
One reason they could be reluctant to proactively search for all potentially abusive images is that, ironically, by assuming some level of editorial responsibility, in theory they could be held liable for the abuse they miss. Censorship
campaigner John Carr noted:
It's all a mess which is why we need a specific law saying that if companies try and prevent bad content, they won't lose their immunity if they don't always get it right.
Google is now reported to be redirecting the searches of would-be ISIS recruits to anti-ISIS websites.
That means that if you search for keywords like the ISIS slogan baqiya wa tatamaddad (remaining and expanding), the deferential term al dawla al islamiya (the Islamic State, as its supporters call it), or ISIS media sources like Al-Furqan and
Al-I'tisam, you'll end up seeing videos on why ISIS is bad.
All very commendable, but no doubt this redirection capability will be eyed for less shining causes. How long before searches for your local chippie get redirected to government dietary websites, or searches for escorts get
redirected to vintage car auctions?
Microsoft is now removing apps from the Windows Store because they haven't been rated under the new International Age Rating Coalition (IARC) rating system.
Microsoft informed developers months ago that their public and private apps needed to be updated to meet the new age rating. Those that didn't comply would face having their apps kicked off the Windows Store on September 30.
The IARC provides a global, unified platform for games and apps so that customers from around the world know the age requirements for the software. The IARC was established in 2013, and makes it easy for developers to obtain age ratings from
different regions. All developers need to do is answer a five-minute questionnaire about the app's content and interactive elements.
Previously, Microsoft's Windows Store provided five ratings for apps and games: 3+ (suitable for young children), 7+ (suitable for ages 7 and older), 12+ (suitable for ages 12 and older), 16+ (suitable for ages 16 and older), and 18+ (suitable for ages 18 and older).