Sadly, Internet censorship is rife in many countries. Consequently, hundreds of millions of people worldwide are daily denied their right to knowledge by governmental controls on sites such as YouTube and Facebook.
It is imperative we recognise and challenge the powers that restrict not only the public's access to the simple joy of funny cat content, but to information as a whole.
Freedom is knowledge. Knowledge is power. Cats know everything.
From the very heart of the Internet we raise our banner with #ThePussycatRiot: a new protest movement to unite the cats of the world and their owners in opposition to cyber censorship. We aim to raise awareness of the oppressive regimes preventing people
from freely enjoying the boundless wealth of mankind's innovation and creativity... And cat videos.
A research article has appeared in the journal Science, titled Reverse-engineering censorship in China: Randomized experimentation and participant observation, by Gary King, Jennifer Pan and Margaret E. Roberts.
The abstract reveals that the censorship of people's social media posts is more about preventing organised protests than about censoring personal opinions:
Chinese censorship of individual social media posts occurs at two levels:
(i) Many tens of thousands of censors, working inside Chinese social media firms and government at several levels, read individual social media posts, and decide which ones to take down.
(ii) They also read social media submissions that are prevented from being posted by automated keyword filters, and decide which ones to publish.
To study the first level, we devised an observational study to download published Chinese social media posts before the government could censor them, and to revisit each from a worldwide network of computers to see which was censored. To study the second
level, we conducted the first large scale experimental study of censorship by creating accounts on numerous social media sites throughout China, submitting texts with different randomly assigned content to each, and detecting from a worldwide network of
computers which ones were censored.
To find out the details of how the system works, we supplemented the typical current approach (conducting uncertain and potentially unsafe confidential interviews with insiders) with a participant observation study, in which we set up our own social
media site in China. While also attempting not to alter the system we were studying, we purchased a URL, rented server space, contracted with Chinese firms to acquire the same software as used by existing social media sites, and---with direct access to
their software, documentation, and even customer service help desk support---reverse engineered how it all works.
Criticisms of the state, its leaders, and their policies are routinely published, whereas posts with collective action potential are much more likely to be censored---regardless of whether they are for or against the state (two concepts not previously
distinguished in the literature). Chinese people can write the most vitriolic blog posts about even the top Chinese leaders without fear of censorship, but if they write in support of or opposition to an ongoing protest---or even about a rally in favor
of a popular policy or leader---they will be censored.
We clarify the internal mechanisms of the Chinese censorship apparatus and show how changes in censorship behavior reveal government intent by presaging their action on the ground. That is, it appears that criticism on the web, which was thought to be
censored, is used by Chinese leaders to determine which officials are not doing their job of mollifying the people and need to be replaced.
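The detection protocol the study describes (capture posts quickly, then revisit them later from distributed vantage points to see which have vanished) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' actual tooling: the removal-marker strings and function names are assumptions.

```python
import urllib.request
import urllib.error

# Phrases a platform's "post removed" page might contain. These markers
# are illustrative: a real study must calibrate them per site and per
# language by inspecting known-deleted posts.
REMOVAL_MARKERS = ("has been deleted", "does not exist")

def classify(status, body):
    """Classify the fate of a previously captured post from one HTTP response."""
    if status is None:
        return "unreachable"   # network failure: retry later
    if status == 404:
        return "removed"       # page gone entirely
    if any(marker in body for marker in REMOVAL_MARKERS):
        return "removed"       # platform shows a deletion notice
    return "still_up"

def check_post(url, timeout=10):
    """Revisit a captured post URL and classify whether it survived."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status, resp.read().decode("utf-8", "replace"))
    except urllib.error.HTTPError as e:
        return classify(e.code, "")
    except urllib.error.URLError:
        return classify(None, "")
```

Running the same check from vantage points in several countries is what separates deletion (gone everywhere) from blocking (gone only inside the censoring country).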
In the latest blow for free speech, the government of the southern Indian state of Karnataka has passed legislation that makes it illegal to upload, share, or like content with a view to hurt religious sentiments knowingly or unknowingly.
Back in June, Karnataka police warned citizens about the type of things that were covered by the Information Technology Act:
Citizens are warned not to upload, modify, resend (forward) and like (share) malicious or misleading images, videos and messages through any medium with a view to hurt religious sentiments knowingly or unknowingly. Citizens are encouraged to inform the
Police Control Room at...
New legislation, the lengthily named Karnataka Prevention of Dangerous Activities of Bootleggers, Drug-offenders, Gamblers, Goondas, Immoral Traffic Offenders, Slum-Grabbers and Video or Audio Pirates (Amendment) Bill, 2014, means that citizens can now
actually be arrested merely for committing an offence under the Information Technology Act.
Fark is a website described as an older, weirder precursor to Reddit. It has now decided to censor bad-taste jokes with a misogynistic theme. The website has made the following announcement:
Adding misogyny to Fark moderator guidelines.
We've actually been tightening up moderation style along these lines for awhile now, but as of today, the FArQ will be updated with new rules reminding you all that we don't want to be the He Man Woman Hater's Club. This represents
enough of a departure from pretty much how every other large internet community operates that I figure an announcement is necessary.
There are lots of examples of highly misogynistic language in pop culture, and Fark has used those plenty over the years. From SNL's Jane, you ignorant slut to Blazing Saddles' multiple casual references to rape, there are a
lot of instances where views are made extreme to parody them. On Fark, we have a tendency to use pop culture references as a type of referential shorthand with one another.
On SNL and in a comedy movie, though, the context is clear. On the Internet, it's impossible to know the difference between a person with hateful views and a person lampooning hateful views to make a point. The mods try to be
reasonable, and context often matters. We will try and determine what you meant, but that's not always a pass. If your post can be taken one of two ways, and one of those ways can be interpreted as misogynistic, the mods may delete it -- even if that
wasn't your intent.
Things that aren't acceptable:
Calling women as a group whores or sluts or similar demeaning terminology
Jokes suggesting that a woman who suffered a crime was somehow asking for it
Obviously, these are just a few examples and shouldn't be taken as the full gospel.
We're trying to make the Fark community a better place, and hopefully this will be a few steps in the right direction.
Google is planning to offer accounts to children under 13 years old for the first time.
Accounts on Google services such as Gmail and YouTube are currently not officially offered to children, though there is little to stop them from logging on anonymously or posing as adults to sign up for accounts.
Now Google is trying to establish a new system that lets parents set up accounts for their kids, control how they use Google services and what information is collected about their offspring. Google is also developing a child version of its online video
site YouTube that would let parents control content.
Google and most other Internet companies tread carefully because of the Children's Online Privacy Protection Act, or COPPA. The law imposes strict limits on how information about children under 13 is collected; it requires parents' consent and tightly
controls how that data can be used for advertising.
Under the Kremlin's Internet surveillance program known as SORM-2, Russian Internet service providers are obligated to purchase and install special equipment that allows the Federal Security Service (FSB) to track specific words (like bomb or government) in online writing and conversation. If officials request additional information about a particular user, the ISP must comply.
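As described, the first SORM-2 layer amounts to keyword matching over intercepted messages. A minimal sketch of that kind of flagging follows; the watch list and function name are hypothetical, since the real FSB lists are not public.

```python
import re

# Hypothetical watch list for illustration only.
WATCHWORDS = {"bomb", "government"}

def flag_message(text, watchwords=WATCHWORDS):
    """Return the watchwords that occur in a message (case-insensitive,
    whole-word match), or an empty set if none are present."""
    tokens = set(re.findall(r"\w+", text.lower()))
    return tokens & {w.lower() for w in watchwords}
```

A message that triggers no watchwords passes silently; any hit would be logged against the user for a possible follow-up request to the ISP.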
Until recently, SORM-2 applied only to ISPs. Last week, Russian Prime Minister Dmitry Medvedev signed a decree that will expand SORM-2's reach to online social networks and all websites that allow people to message one another. Sites like Facebook and
Google are now obligated to install surveillance gadgetry, sometimes referred to as backdoors, that will allow the FSB to monitor Internet users independently. It's impossible to say exactly how this will work, as Medvedev's order prohibits
websites from disclosing the technical details of the government's surveillance operations.
Decree N743 is intended to amend the controversial Law on Bloggers, which created a government registry for bloggers who have more than 3,000 daily readers. Registered bloggers are subject to media-focused regulations that can make them more
vulnerable to fines and lawsuits than their less popular counterparts. Registered bloggers also are banned from using obscene language and required to fact-check any information they publish. Critics say the law places serious curbs on Internet freedom.
Medvedev's decision to extend Internet surveillance mechanisms to social networks surprised Russia's Internet companies. A PR officer from Yandex, the country's largest search engine, said the company received no advance notice of the change.
Once again, it's unclear what we're supposed to do, what the actual requirements are, and how much all this will cost, said Anton Malginov, legal head of Mail.ru, which owns Odnoklassniki.ru, one of Russia's most popular social networks.
Businesses are still awaiting clarification from Russia's Communications Ministry.
If the government chooses to enforce every letter of Medvedev's decree, Russia's social networks will join ISPs in buying and installing equipment that allows the FSB to spy on users. Thus SORM-2 would have its 2.0.
At first glance, SORM 2.0 seems redundant, as social network traffic already passes through the wiretaps now installed at the ISP level. In order to obtain detailed information about individual users, however, the FSB must file formal requests, which can
be a burdensome process. Installing surveillance instruments at the source of the data, however, will grant authorities the power to conduct targeted realtime surveillance. The procedure will be faster and simpler than dealing with ISPs.
Before August 1, websites were under no obligation to record and store users' data. The Law on Bloggers changed that. Since August 1, even before Medvedev interpreted the blogger law to be an extension of SORM-2, social networks have been required to
keep certain information on file for six months. The costs of this storage will undoubtedly fall on businesses and, in turn, consumers. Websites that cannot attract additional advertising revenue might erect paywalls or even be forced to close down.
These massive data stores can also be vulnerable to malicious hacking by third party actors.
And the degree to which extending SORM-2 controls to social networks will help authorities catch criminals remains largely unknown.
How should bloggers respond to these developments? Most Russian Internet users don't have to worry about anything. As Anton Nossik, one of the founding fathers of the RuNet, put it almost a year ago, the government's actions against bloggers are
politically driven. For the most part, Russia's new laws don't threaten Internet users who steer clear of politics. Those who do speak out about sociopolitical issues, however, might attract the FSB's sudden attention, though there are only enough
federal police to keep a close eye on the country's leading dissidents.
Of course, that may be little solace in a world where Big Brother never sleeps.
The Guardian seems to be the only source I have spotted that actually tries to explain what will be going on:
Music videos will go through the same classification system as films and other video content. The voluntary pilot will involve the big three music labels in the UK, Sony, Universal and Warner Music, as well as the BBFC, YouTube and music video platform
Vevo. The pilot will run for three months, kicking off in October.
It is presumably related that music videos sold or distributed on disc or other physical form and deemed to include 12-rated-plus material will have to go through the age-classification process, also starting in October, under amendments to the Video Recordings Act. The music labels will submit music videos that they consider could contain content that should be classified as for age 12 or over, using BBFC guidelines. The BBFC will then rate the videos as it does with other content, for which the
labels will pay a fee to cover the cost of rating in the same way that the film industry currently does. The rating process should take around 24 hours, according to the BBFC. A rating of 12, 15 or 18 will be assigned to the music video and passed on to
the label. Videos deemed not to include unsuitable content for children under 12 will not be classified.
The pilot scheme announced by Cameron will only cover music videos and will not be expanded to cover other video content on sites such as YouTube.
The music labels will tag the video with the age rating from the BBFC when uploading the video to hosting services. YouTube and Vevo are part of the pilot study, and will be supporting the ratings, placing a visible age rating alongside the video title on their sites.
The visible rating will probably take the form of the BBFC's age certification logos, although that is not yet set in stone, and is intended to give parents more information about the videos their children are watching.
YouTube has a similar system for displaying BBFC ratings on films, and requires users to be at least 13 years old to have an account, although most videos are viewable without an account.
The three-month pilot is intended to finalise a system that works for rating the videos and having the data tagged to them when uploaded to say they are classified. For the initial trial it will simply be a notification of an age rating on the video.
After the three-month trial it is expected that YouTube and Vevo, as well as other video hosting services, will look at developing parental control filters that screen out videos marked as inappropriate for children of specific age ranges.
Only new videos submitted by the music labels will be rated during the pilot, although there will be a decision at the end of the pilot as to whether videos that are already available should be retroactively classified.
The big three labels will conduct the pilot, but the BPI, which represents Sony, Universal and Warner Music and more than 300 independent music companies, expects that all music labels will adopt the system once finalised.
During the pilot the ratings will be there for information purposes only, to help parents make an informed decision. Parental controls on YouTube and others could be used to screen out videos via ratings, but their effectiveness will be determined by how
difficult it is to get around age verification.
YouTube, like most other online services, does not verify a user's age beyond the date of birth given by the user at the point of signing up for an account. Age verification issues are beyond the scope of this initial pilot scheme.
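The only check most services actually perform is arithmetic on the self-reported date of birth. A sketch of that check follows; the 13-year threshold matches YouTube's stated minimum, while the function names are illustrative.

```python
from datetime import date

def age_on(dob, today):
    """Whole years elapsed between a date of birth and a given day."""
    years = today.year - dob.year
    # Subtract one if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def meets_minimum_age(dob, today=None, minimum=13):
    """True if a user with this self-reported DOB clears the sign-up age."""
    today = today or date.today()
    return age_on(dob, today) >= minimum
```

The obvious weakness, as the article notes, is that the DOB itself is unverified: the arithmetic is sound but the input is whatever the user typed.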
Newsbeat spoke to Gennaro Castaldo from the BPI and asked whether the pilot will have any impact if music videos by American artists, known for being racier, aren't certified.
Yes it's true that a lot of music video content comes from outside the UK, but also a huge amount of music that sells well around the world does come from Britain and from British artists.
So I think, what we do in this country is followed by other territories. So I'm sure they'll be following our pilot with interest and in due course I think they'll then decide how they want to act on that.
I think this is a really good place to start, we have to start somewhere and if we can begin here in the UK, for other territories to follow, then I think that would be a really good example too.
Frankie and Friends was a British porn website offering both photo sets and videos to paying members.
ATVOD had targeted the site with a view to making it pay expensive fees and to subjecting it to onerous and impractical age verification requirements.
In May 2013 ATVOD had informed the website that it must sign up for Video on Demand censorship. But Frankie and Friends appealed the decision to the senior TV and internet censor, Ofcom. (It hardly seems much of an independent appeal process to appeal to
the same censor that is in ultimate charge and who delegated the task to a junior censor).
Frankie and Friends based the appeal on the videos being shorter than typical TV programmes and pointing out that there were more photograph galleries than video galleries. [The Law requires that a website should have a primary purpose of being Video on
Demand before being subjected to ATVOD censorship].
Ofcom have dismissed the relevance of short-form videos several times now, noting, for instance, that Television X broadcasts plenty of short videos on a UK-licensed linear TV service.
Ofcom dismissed the argument about photographs being the primary purpose:
Ofcom's overall view was that characteristics of the material available on the Appellant's website and the manner in which it was provided support the finding ATVOD made, that the site constituted a service a principal purpose of which was providing
audiovisual material. Whilst the large volume of non-TV like material available demonstrated that the Service sought to make use of still images as well as video in providing its service, Ofcom nevertheless considered that the catalogue of a significant
amount of audiovisual material available which did not require accompanying information to be fully appreciated did amount to a service the principal purpose of which was to provide an ODPS. The strong thematic connection between the two bodies of
content on the site supports the conclusion that the website as a whole had a principal purpose of providing an ODPS in relation to adult content.
So Ofcom ruled that Frankie and Friends was in fact subject to ATVOD censorship.
And as ATVOD's onerous and impractical rules make it almost impossible to continue in business, the website is now closed.
Police in Washington state are asking the public to stop tweeting during shootings and manhunts to avoid accidentally telling the bad guys what officers are doing.
The TweetSmart campaign began in late July and aims to raise awareness about social media's potential impact on law enforcement.
A social media 'expert' at the International Association of Chiefs of Police said she's unaware of similar campaigns elsewhere but the problem that prompted the outreach is growing. Nancy Korb, who oversees the group's Center for Social Media said:
All members of the public may not understand the implications of tweeting out a picture of SWAT team activity.
It's not that they don't want the public to share information... [but] it's the timing of it.
DCMS formally informs the European Commission of a draft UK regulation to incorporate ATVOD's impractical age verification rules into UK law. (And then ludicrously claims that this will not have an impact on international trade).
On the 7th July 2014, the UK Government Department of Culture, Media, Sport and Censorship notified the European Commission of its draft regulation to incorporate ATVOD's impractical age verification rules for accessing hardcore porn on the internet into UK law.
The DCMS document states:
The Audiovisual Media Services Regulations 2014
Part 4A of the Communications Act 2003 (inserted by the Audiovisual Media Services Regulations 2009 and 2010) transpose the requirements of Directive 2010/13/EU in relation to on-demand programme services. Section 368E(2) provides
that on-demand material that might seriously impair the physical, mental or moral development of persons under the age of eighteen must only be made available in a manner which secures that such persons will not normally see or hear it. This draft
instrument amends section 368E in two ways. First, it provides that any material that the British Board of Film Classification (BBFC) has issued a R18 classification certificate in respect of (or any material that would have been issued such a
certificate) (hard-core pornography) must not be included in an on-demand service unless it is behind effective access controls which verify that the user is aged eighteen or over. Secondly, it provides that any material that the BBFC has refused to give
a classification certificate in respect of (or any material that would have been refused such a certificate) must not be included in an on-demand service at all.
Brief Statement of Grounds
In 2010 the Department wrote to Ofcom raising concerns about whether section 368E would in practice provide sufficient safeguards to protect children from sexually explicit material. Ofcom's report in 2011 recommended that the
Government introduce new legislation to prohibit R18 material from being included in on-demand services unless mandatory restrictions are in place and prohibit altogether material whose content the BBFC would refuse to classify. The co-regulators, Ofcom
and the Authority for Television On Demand (ATVOD), were concerned that the evidence for children being caused harm by exposure to R18 material is inconclusive and the legislative protections currently in place were not sufficiently clear to provide
certainty in this area. In the interim period pending legislative changes the co-regulators, adopting a precautionary approach, interpreted section 368E(2) as requiring R18 material to be behind access controls. This instrument has the effect of removing
any uncertainty from the regulatory framework providing clarity to consumers and providers of on-demand services. It also provides the same level of protection that exists on the high street in relation to the sale of hard-copy DVDs to the provision of
on-demand services. In a converging media world these provisions must be coherent. The BBFC classification regime established under the Video Recordings Act 1984 is a tried and tested system of what content is regarded as harmful for minors. This Act was
notified as a technical standard - Notification No. 2009/495/UK.
References of the Basic Texts: Part 4A of the Communications Act 2003
ATVOD Rules and Guidance and research report
Video Recordings Act 1984
Ofcom Report: Sexually Explicit Material and Video On Demand Services, 2011
No - The draft has no significant impact on international trade
Online music videos will carry an age classification from October as part of a pilot scheme by YouTube, music video service Vevo and the BBFC in the name of protecting children from graphic content, David Cameron has announced.
In a speech to the Relationships Alliance on Monday, the prime minister said the rules for online videos should be brought into line with content bought offline. Cameron said:
From October, we're going to help parents protect their children from some of the graphic content in online music videos by working with the British Board of Film Classification, Vevo and YouTube to pilot the age rating of these videos.
We shouldn't cede the internet as some sort of lawless space where the normal rules of life shouldn't apply. So, in as far as it is possible, we should try to make sure that the rules that exist offline exist online. So if you want to go and buy a music
video offline there are age restrictions on it. We should try and recreate that system on the internet.
Last week, Iran's Ministry of Culture and Islamic Guidance announced that all news websites that do not obtain government-issued licenses will be blocked nationwide.
Hassan Mehrabi, the Ministry's director of local press regulations declared that all news websites in the future must obtain licenses from the Ministry's press supervisory board. Further details about the new policy appeared in a report covered by the
Iranian Student News Agency (ISNA).
Prior to the new regulation, most websites registered within Iran would abide by self-censorship in order to avoid being filtered. Targets of filtering have often been reformist websites, such as those associated with the Green Movement and its leaders.
This news comes three months after moderate President Hassan Rouhani's conference on information and communications technologies, where he announced:
The right of citizens to have access to international networks of information is something we formally recognize. Why are we so nervous? Why don't we trust our youth?
With the RuNet already plagued by Roskomnadzor blacklists, blogger registration, and the blocking of Twitter accounts with no discernible justification, Russia now wants to introduce an automated real-time filtering system that will block websites that
contain harmful content.
The proposed plan would add a second layer of censorship to Russia's already-pervasive website blacklist system, under which ISPs are required to block all websites containing calls to riots, extremist activities, the incitement of ethnic and (or)
sectarian hatred, terrorist activity, or participation in public events held in breach of appropriate procedures.
According to an ITAR-TASS report, Russia would require ISPs to install smart filters that would screen and block harmful content, which would presumably be identified based on a pre-determined list of keywords. The smart filtering idea and its technical details have been proposed by the Safe Internet League, a Kremlin-loyal NGO partnering with several large Russian ISPs.
Safe Internet League executive director Denis Davydov explains that existing blacklists are not great at filtering out dangerous content, and says their system, once installed at the level of ISPs, could analyze web content in real time and easily block it:
We suggest introducing preemptive Internet filtering, which allows us to automatically determine the content of the page queried by the user in real time. The system evaluates the content on the page and determines the category which the information
belongs to. In case the category is forbidden, the system blocks the webpage automatically.
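Davydov's description amounts to a two-step decision: score the fetched page against per-category term lists, then block if the best-matching category is forbidden. A rough sketch under that reading follows; the categories and term lists are invented for illustration, since the League has disclosed no technical details.

```python
# Invented category lexicons. A real deployment would use far larger
# lists, or a trained classifier, per the League's description.
CATEGORY_TERMS = {
    "riots":     {"riot", "unrest", "rally"},
    "extremism": {"extremist", "radical"},
    "sports":    {"football", "match", "goal"},
}
FORBIDDEN = {"riots", "extremism"}

def categorize(page_text):
    """Pick the category whose terms occur most often in the page,
    or None if no term matches at all."""
    words = page_text.lower().split()
    scores = {
        cat: sum(words.count(term) for term in terms)
        for cat, terms in CATEGORY_TERMS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def should_block(page_text):
    """Block the request iff the page's best category is forbidden."""
    return categorize(page_text) in FORBIDDEN
```

Even this toy version shows why such filters overblock: a news report about a riot scores exactly like a call to join one.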
The typically snarky personalities of the RuNet thought the League's new initiative would do nothing to create a safer online environment -- instead, the added layer of algorithmic bureaucracy would only contribute to the existing limits already imposed
on netizens in Russia, and would make the users work even harder to access their preferred content.
Earlier this summer, Duma deputy Yelena Mizulina had already proposed an automated Internet filtering system in an attempt to protect the minds of Russia's youngsters. Mizulina demanded that Internet service providers block adult Web content by default in an effort to create a Clean Internet. Consumers would be allowed to opt out of the filtration system, but only by making a special request to their ISP.
Davydov says developers at the Safe Internet League have already tested their two-step filtration model in Kostroma and Omsk regions, as well as the Komi Republic, and have found it works quite well (or so he says). Should the system go into broader use,
it will generate a significant escalation of state attempts to control the Russian Internet. Users have found multiple ways of getting around blocks generated by blacklists, using VPNs and other circumvention tools to view their favorite blacklisted
websites. If the smart filtering system is indeed implemented, one can only guess how quickly Russian netizens will learn to work around the new, ever-pervasive Internet controls.
The Guardian published an article about a report written by Charles Leadbeater, a former Labour policy adviser. The report was commissioned by the Nominet Trust to promote technology for social good and to highlight projects that use the internet as the
basis for social and civic improvement.
Leadbeater claimed that pervasive online misogyny is the most visible reason why the internet is failing to live up to its potential to improve people's lives. He cited the internet insults suffered by Mary Beard as an example showing that the internet has lost the promise it held in the mid-2000s as a route to collaboration for the better. Speaking to the Guardian, Leadbeater said:
I'd love to create something like the Mary Beard Prize for women online, to support people who are supporting women to be able to use the internet safely. The kind of abuse [suffered by] the classicist Mary Beard, the gymnast Beth Tweddle and campaigner
Caroline Criado-Perez, would not be tolerated in a public place and there is no reason why it should be online.
It's outrageous that we've got an internet where women are regularly abused simply for appearing on television or appearing on Twitter. If that were to happen in a public space it would cause outrage.
He cites research that the most important signifier of a safe and vibrant public space is the presence of women and families -- when they felt comfortable it was a sign that the space was good for everyone.
The article must have caused more than a little colourful debate as the Guardian published a follow up article discussing the online comments received. The article was headlined:
The readers' editor on... the online abuse that follows any article on women's issues.
Perhaps it is time to assess whether online anonymity should be an option rather than the default position
The Guardian then alludes to the robust comments received on such politically correct articles:
I'd love to create something like the 'Mary Beard Prize for women online' to support people who are supporting women to be able to use the internet safely, Charles Leadbeater said in the article , which was published on 8 August.
A great idea, and one that would win support from many editors at the Guardian, who see how much of the moderators' time is spent weeding out either off-topic or offensive comments in threads attached to any article loosely related to feminism or women's issues.
As one moderator told me: There seems to be a huge backlash against the Guardian's increasing coverage of feminist issues, from more frivolous pieces (body hair, sunbathing topless, anything to do with Beyonce') to pieces on domestic violence, FGM
etc. WATM (what about the men) is now something we look out for on any piece about women as standard.
Alex Needham, acting network editor, raised the issue at the Guardian's morning conference following an article by Hadley Freeman on 5 August about the arguments for and against women shaving their body hair.
He told me in an email: On any article by Laura Bates or Jessica Valenti, or most recently this piece by Hadley, the first 15 or 20 comments always say 'not this again, Guardian, where are the men? We face this kind of problem, so cover that instead.'
Because the comments are off-topic they're then removed, which leads to cries of censorship and the claim that the Guardian is sexist -- that the problems of white working-class males (who these commenters say are the real victims in society) are being ignored.
The Guardian goes on to discuss how to censor the opposition to its political correctness by mandating real identities for commenters. Of course at no stage is it considered that perhaps the Guardian could tone down its one-sided, men-belittling, politically correct bullying pieces and offer a little more balance for the other side.
Chinese users of instant messaging apps will have to register their real names, and seek approval before publishing political news, under new censorship rules.
Public users of popular services such as WeChat will also have to sign agreements promising to uphold the socialist system, state media say. The State Internet Information Office (SIIO) announced the rules, which come into immediate effect. The new rules state:
Instant messaging services should require users to verify their real-name identities before registering an account.
Where users break the rules, the providers will, as appropriate, issue warnings, restrict their posts... or even close their accounts, while retaining the relevant records so they can fulfil their reporting obligations to the authorities.
Meanwhile, South Korean officials said the Chinese authorities had told them that access to foreign messaging apps including KakaoTalk and Line - both owned by South Korean firms - had been blocked.
The foundation which operates Wikipedia has criticised the right to be forgotten ruling, describing it as unforgivable censorship.
Speaking at the announcement of the Wikimedia Foundation's first-ever transparency report in London, Wikipedia founder Jimmy Wales said the public had the right to remember:
Wikipedia is founded on the belief that everyone, everywhere should be able to have access to the sum of all knowledge. However, this is only possible if people can contribute and participate in those projects without reservation.
This means the right to create content, including controversial content, should be protected. People should feel secure that their curiosity and contributions are not subject to unreasonable Government requests for their account histories. They should
feel confident that the knowledge they are receiving is complete, truthful and uncensored.
The Foundation's chief executive Lila Tretikov called the ruling from the European Court of Justice a direct threat to our mission:
Our Transparency Report explains how we fight and defend against that. We oppose censorship. Recently, however, a new threat has emerged - the removal of links from search results following the recent judgment from the European Court of Justice regarding
the right to be forgotten.
This right to be forgotten is the idea that people may demand to have truthful information about themselves selectively removed from the published public record or at least make it more difficult to find. This ruling, unfortunately, has compromised the
public's right to information and freedom of expression.
Links, including those to Wikipedia itself may now be quietly, silently deleted with no transparency, no notice, no judicial review and no appeals process. Some search engines are giving proper notice and some are not. We find this type of compelled
censorship unacceptable. But we find the lack of disclosure unforgivable.
As part of the Foundation's bid for greater transparency, it has issued its first transparency report, detailing the number of requests it has received from governments, individuals and organisations to disclose information about users or to change
content on web pages. According to the report, the Foundation received 56 requests for user data in the last two years. In 14% of those cases, information was produced. The report also revealed that 304 requests were made for content to be either altered
or removed, with the Foundation confirming that none of those requests were granted.
Geoff Brigham, general counsel at the Wikimedia Foundation, said:
The decision is going to have direct and critical repercussions for Wikipedia. Without safeguards, this decision hurts free information, and let me tell you why: the decisions are made without any real proof, there's no judicial review, no public
explanation, there's no appeals process.
Yet the decision allows censorship of truthful information when one would expect such judicial safeguards. If I may so say, in allowing this to happen, the European Court of Justice has basically abandoned its responsibility to protect the right to
freedom of expression and access to truthful information. Two extremely important rights for democratic society.
In our opinion, we are on a path to secret, online sanitation of truthful information. No matter how well it may be intended, it is compromising human rights, the freedom of expression and access to information, and we cannot forget that. So we have to
expose it and we have to reject this kind of censorship.