If you're looking to follow news and advocacy about an anticipated Vermont legislature vote this week on legalizing marijuana, a search for the latest tweets that use the combined terms Vermont and marijuana will for many Twitter users
yield zero results.
Same goes for searches for tweets using the terms pot, weed or cannabis. The latest results for jackass and jerk, words generally printed without censorship by news outlets, also yield a blank page with a message
claiming: Nothing came up for that search, which is a little weird. Maybe check what you searched for and try again.
The omissions are examples of a new censorship system introduced by Twitter, with users required to opt out of a filter to see uncensored results.
Top results for restricted terms still appear, but results for the most recent posts and for photos, videos and news content tabs do not.
Social media giants Facebook, Google and Twitter will be forced to change their terms of service for EU users within a month,
or face hefty fines from European authorities, an official said on Friday.
The move was initiated after politicians have decided to blame their unpopularity on 'fake news' rather than their own incompetence and their failure to listen to the will of the people.
The EU Commission sent letters to the three companies in December, stating that some terms of service were in breach of EU protection laws and urged them to do more to prevent fraud on their platforms. The EU has also urged social media companies to do
more when it comes to assessing the suitability of user generated content.
The letters, seen by Reuters, explained that the EU Commission also wanted clearer signposting for sponsored content, and that mandatory rights, such as cancelling a contract, could not be interfered with.
Germany said this week it is working on a new law that would see social media sites face fines of up to $53 million if they failed to strengthen their efforts to remove material that the EU does not like. German censorship minister Heiko Maas said:
There must be as little space for criminal incitement and slander on social networks as on the streets. Too few criminal comments are deleted and they are not erased quickly enough. The biggest problem is that networks do not take the complaints of their
own users seriously enough...it is now clear that we must increase the pressure on social networks.
Pakistan has threatened to ban social media networks if they fail to censor content considered insulting to Islam. The government's Federal
Investigation Agency (FIA) is also in talks with Interpol to identify supposedly blasphemous content.
The FIA has sent a formal request to Facebook but the company's management has yet to respond. Pakistan's Interior Minister Nisar Ali Khan urged Facebook to comply:
I hope that the management of Facebook will respect the religious sentiments of 200 million Pakistanis and tens of millions of users of Facebook in Pakistan and will cooperate in that regard.
These requests come after the Islamabad high court ordered the government to start an investigation into online blasphemy and threatened to ban social media networks if they failed to censor content deemed insulting to Islam, lawyers told AFP.
Earlier this week
we explained how the tide is turning against the European Commission's proposal for Internet platforms to adopt new compulsory copyright filters as part of its upcoming Directive on Copyright in the Digital Single Market. As we explained, users and even
the European Parliament's Committee on the Internal Market and Consumer Protection (IMCO) have criticized the Commission's proposal, which could stifle online expression, hinder competition, and suppress legal uses of copyrighted content, like creating
and sharing Internet memes.
Since then, a leaked report has revealed that one of the European Parliament's most influential committees has also come out against the proposal
. As the IMCO committee's report had done,
the report of the European Parliament's Legal Affairs (JURI) Committee not only criticizes the upload filtering proposal (aka. Article 13, or the #censorshipmachine), but renders even harsher judgment on a separate proposal to require online news
aggregators to pay copyright-like licensing fees to the publishers they link to (aka. Article 11, or the link tax). We'll take these one at a time.
JURI Committee Scales Back the EU's Censorship Machine
The JURI committee would maintain the requirement for Internet platforms to "take appropriate and proportionate measures to ensure the functioning of agreements concluded with rightsholders for the use of their works." But the committee rejects
the proposed requirement for automatic blocking or deletion of uploaded content, because it fails to take account of the limitations and exceptions to copyright that Europe recognizes, such as the right of quotation. The committee writes:
The process cannot underestimate the effects of the identification of user uploaded content which falls within an exception or limitation to copyright. To ensure the continued use of such exceptions and limitations, which are based on public interest
concerns, communication between users and rightsholders also needs to be efficient.
The committee also affirms that the agreements between rightsholders and platforms don't detract from the safe harbor protection for platforms that Europe's E-Commerce Directive already provides (which is analogous to the DMCA safe harbor in the U.S.).
This means that if user-uploaded content appears on a platform without a license from the copyright holder, the platform's only obligation is to remove that content on receipt of a request by the copyright holder.
We would have liked to see a stronger denunciation of the mandate for Internet platforms to enter into licensing agreements with copyright holders, and we maintain that the provision is better deleted altogether. Nonetheless, the committee's report, if
reflected in the final text, should rule out the worst-case scenario of platforms being required to automatically flag and censor copyrighted material as it is uploaded.
European Link Tax Faces its Toughest Odds Ever
The leaked report goes further in its response to the link tax, recommending that it be dropped from the new copyright directive altogether. Given the failure of smaller scale link tax schemes in Germany and Spain, this was the only sensible position
for the committee to take. The Explanatory Statement to the report correctly distinguishes between two separate aspects of the use of news reporting online that the Commission's original proposal incorrectly conflates:
Digitalisation makes it easier for content found in press publications to be copied or taken. Digitalisation also facilitates access to news and press by providing digital users a referencing or indexing system that leads them to a wide range of news and
press. Both processes need to be recognised as separate processes.
Instead of introducing new monopoly rights for publishers, the JURI committee suggests simplifying the process by which publishers can take copyright infringement action in the names of the journalists whose work is appropriated. This would address the
core problem of full news reports being republished without permission, but without creating new rights over mere snippets of news that accompany links to their original sources. Far from being a problem, this use is actually beneficial for news publishers.
The JURI committee report is just a recommendation for the amendment of the European Commission proposal, and it will still be some months before we learn whether these recommendations will be reflected in the final compromise text. Nonetheless, it is
heartening to see the extreme proposals of the Commission getting chiseled away by one of the Parliament's most influential committees.
The importance of this shouldn't be underestimated. Although the above proposals are limited to Europe at present, there is the very real prospect that, if they succeed, they will pop up in the United States as well. In fact, U.S. content industry groups
are already advocating for the adoption of an upload filtering proposal stateside. That's why it's vital not only for Europeans to speak out against these dangerous proposals, but also for Internet users around the world to stand on guard, and to be
ready to fight back.
Twitter is continuing its campaign to add controls and warnings to tweets.
It now presents a warning when users click on a profile that may include sensitive content. The warning greys out the profile's tweets, bio and profile picture, but gives users the option to view the profile if they wish.
Twitter used to only mark individual tweets with a sensitivity warning, but has now expanded this to censor whole profiles unless users agree to view them.
The warning message given with the greyed out profile says:
Caution: This profile may include sensitive content. You're seeing this warning because they tweet sensitive images or language. Do you still want to view it?
Twitter did not publicly announce the new feature, and tweeters with profiles being greyed out are not informed by Twitter.
Among other things, Amazon Prime makes a good many of its digital videos available to stream for free. Well, until now anyway. Many indie horror filmmakers are having their videos removed from the Prime service in an apparent new policy on the part of Amazon.
Amazon says it is cracking down on extreme content and is sending out emails to filmmakers to explain the new censorship policy.
Here is an example email supplied by Scott Schirmer regarding his film Harvest Lake:
Amazon Video Direct periodically revises our content policy in order to improve the Amazon Video customer experience. Effective March 1, 2017, Amazon Video Direct will no longer allow titles containing persistent or graphic sexual or violent acts,
gratuitous nudity and/or erotic themes ('adult content') to be offered as Included with Prime or Free with Pre-Roll Ad .
We have identified the following titles within your catalog which contain adult content:
In alignment with our new policy, the Included with Prime and/or Free with Pre-Roll Ad offers will be removed from these titles on March 1, 2017.
For any title to remain available to customers with an Included with Prime or Free with Pre-Roll Ad offer, its content including cover images, metadata, and/or video content must be free of persistent or graphic sexual or violent acts,
gratuitous nudity and/or erotic themes.
A politically correct Californian law targeting age discrimination has failed to win the immediate approval of a judge. The law requires dates of birth or ages
to be withheld from documents and publications used for job recruitment. One high profile consequence is that the Internet Movie Database (IMDb) would be banned from including age information in the profiles of stars and crew. This has led to the
challenge of the law on grounds of unconstitutional censorship.
This week's ruling does not look good for the Californian law, as the judge decided that the birthday prohibition shall not apply until the full legal challenge is decided. District Judge Vince Chhabria ruled:
[I]t's difficult to imagine how AB 1687 could not violate the First Amendment. The statute prevents IMDb from publishing factual information (information about the ages of people in the entertainment industry) on its website for public consumption. This
is a restriction of non-commercial speech on the basis of content.
To be sure, the government has identified a compelling goal -- preventing age discrimination in Hollywood. But the government has not shown how AB 1687 is 'necessary' to advance that goal. In fact, it's not clear how preventing one mere website from
publishing age information could meaningfully combat discrimination at all. And even if restricting publication on this one website could confer some marginal antidiscrimination benefit, there are likely more direct, more effective, and less
speech-restrictive ways of achieving the same end.
Chhabria held that -- because the law restricts IMDb's speech rights -- the site is suffering irreparable harm and enjoined the government from enforcing the law pending the resolution of this lawsuit.
Twitter has introduced a new censorship system with the unlikely-sounding capability to detect abusive tweets and suspend accounts
without waiting for complaints to be flagged. Transgressions result in the senders receiving half-day suspensions.
The company has refused to provide details on specifically how the new system works, but using a combination of behavioral and keyword indicators, the filter flags posts it deems to be violations of Twitter's acceptable speech policy and issues users
suspensions of half a day during which they cannot post new globally accessible tweets and their existing tweets are visible only to followers.
From the platform that once called itself the free speech wing of the free speech party, these new tools mark an incredible turn of events. The anti-censorship ethic seems to have been lost in a failed attempt to sell the company after prospective
buyers were unhappy with the lack of censorship control over the platform.
Inevitably Twitter has refused to provide even an outline of the indicators it is using, especially when it comes to the particular linguistic cues it is concerned with. While offering too much detail might give the upper hand to those who would try
to work around the new system, it is important for the broader community to have at least some understanding of the kinds of language flagged by Twitter's new tool so that they can try to stay within the rules.
It is also unclear why Twitter chose not to permit users to contest what they believe to be a wrongful suspension. Given that the feature is brand-new and bound to encounter plenty of unforeseen contexts where it could yield a wrong result, it is
surprising that Twitter chose not to provide a recovery mechanism where it could catch these before they become news.
And the first example of censorship was quick to follow. Many outlets this morning picked up on a frightening instance of the Twitter algorithm's new power to police not only the language we use but the thoughts we express. In this case a user allegedly
tweeted a response to a news report about comments made by Senator John McCain and argued that it was his belief that the senator was a traitor who had committed formal treason against the nation. Twitter did not respond to a request for more
information about what occurred in this case and if this was indeed the tweet that caused the user to be suspended, but did not dispute that the user had been suspended or that his use of the word traitor had factored heavily into that suspension.
A congressman has introduced a bill demanding that visitors to America hand over URLs to their social network accounts.
Representative Jim Banks says his proposed rules, titled the Visa Investigation and Social Media Act (VISA) of 2017, would require visa applicants to provide their social media handles to immigration officials. Banks said:
We must have confidence that those entering our country do not intend us harm. Directing Homeland Security to review visa applicants' social media before granting them access to our country is common sense. Employers vet job candidates this way, and I
think it's time we do the same for visa applicants.
Right now, at the US border you can be asked to give up your usernames by border officers. You don't have to reveal your public profiles, of course. However, if you're a non-US citizen, border agents don't have to let you in, either. Your devices can be
seized and checked, and you can be put on a flight back, if you don't cooperate.
Banks' proposed law appears to end any uncertainty over whether or not non-citizens will have their online personas vetted: if the bill is passed, visa applicants will be required to disclose their online account names so they can be scrutinized for any
unwanted behavior. For travellers on visa-waiver programs, revealing your social media accounts is and will remain optional, but again, being allowed into the country is optional, too.
Banks did not say how his bill would prevent hopefuls from deleting or simply not listing any accounts that may be unfavorable.
The Register reports that the bill is unlikely to progress.
Secretary of Homeland Security John Kelly told Congress this week that the Department of Homeland Security is exploring the
possibility of asking visa applicants not only for an accounting of what they do online, but for full access to their online accounts. In a hearing in the House of Representatives, Kelly said:
We want to say for instance, What sites do you visit? And give us your passwords. So that we can see what they do on the internet. And this might be a week, might be a month. They may wait some time for us to vet. If they don't want to give us
that information then they don't come. We may look at their -- we want to get on their social media with passwords. What do you do? What do you say? If they don't want to cooperate, then they don't come in.
As TechCrunch's Devin Coldewey pointed out, asking people to surrender passwords would raise "obvious" privacy and security problems. But beyond privacy and security, the proposed probing of online accounts -- including social media and
other communications platforms -- would, if implemented, be a major threat to free expression.
Disqus speaks of providing tools to get hate speech removed in a blog post:
Recently, many passionate users have reached out to us regarding instances of hate speech across our network. Language that offends, threatens, or insults groups solely based on race, color, gender, religion, national origin, sexual orientation, or other
traits is against our network terms and has no place on the Disqus network. Hate speech is the antithesis of community and an impediment to the type of intellectual discussion that we strive to facilitate.
We know that language published on our network does not exist within a vacuum. It has the power to reach billions of people, change opinions and incite action. Hate speech is a threat, not only to those it targets, but to constructive discourse of all
forms across all communities. Hate speech creates fear, deters participation in public debate, and hinders diversity of thoughts and opinions.
We have the opportunity and the responsibility to combat hate speech on our network. Our goal is to foster environments where users can express their diverse opinions without the fear of experiencing hate speech. We persistently remove content that
contains hate speech or that otherwise violates our terms and policies . However, we know that simply reactively removing hate speech is not sufficient. That is why we are dedicated to building tools for readers and publishers to combat hate speech, and
are open to partnering with other organizations who share our goal.
We recently released several features to help readers and publishers better control offensive and otherwise unwanted content. User Blocking and User Flagging allow users to block and report other users who are violating our terms of service. Our new
moderation panel makes it easier for publishers to identify and moderate comments based on user reputation .
Currently, we are working on improved tools to help publishers effectively prevent troublesome users from returning to their sites. And as we get smarter about identifying hate speech, we are working on ways to automatically remove it from our network.
As an organization, Disqus firmly stands against hate speech in all forms. To recap, in an effort to combat hate speech both on and off our network, we are making the following commitments:
We will enforce our terms of service by removing hate speech and harassment on our network. To report hate speech and other abusive behavior, please follow these instructions .
We will invest in new features for publishers and readers to better manage hate speech. We hope to talk more about this soon.
To support this philosophy, we will also be supporting organizations that are equipped to fight hate speech outside of Disqus. We are exploring several options and plan to dedicate portions of our advertising profits to fight hate speech.
Wikipedia editors have voted to ban the Daily Mail as a source for the website in all but exceptional circumstances after claiming the
newspaper was generally unreliable.
The move is highly unusual for the online encyclopaedia, which rarely puts in place a blanket ban on publications and which still allows links to more obvious sources of 'fake news' such as the Kremlin-backed news organisation Russia Today, and Fox News.
The Wikimedia Foundation, which runs Wikipedia but does not control its editing processes, said in a statement that volunteer editors on English Wikipedia had discussed the reliability of the Mail since at least early 2015. The foundation said:
This means that the Daily Mail will generally not be referenced as a 'reliable source' on English Wikipedia, and volunteer editors are encouraged to change existing citations to the Daily Mail to another source deemed reliable by the community.
Some editors opposed the move saying the Daily Mail was sometimes reliable, that historically its record may have been better, and that there were other publications that were also unreliable. Opponents also pointed to inaccurate stories in other
respected publications, and suggested the proposed ban was driven by a dislike of the publication.
However, the fact of the matter is that the DE Bill
gives the BBFC (the regulator, TBC) the power to block any pornographic website that doesn't use age verification tools. It can even block websites that publish pornography that doesn't fit their guidelines of taste and acceptability - which are
significantly narrower than what is legal, and certainly narrower than what is viewed as acceptable by US websites.
A single video of "watersports" or whipping that produces marks, for instance, would be enough for the BBFC to ban a website for every UK adult. The question is, how many sites does the regulator want to block, and how many can it block?
Parliament has been told that the regulator wants to block just a few, major websites, maybe 50 or 100, as an "incentive" to implement age checks. However, that's not what Clause 23 says. The "Age-verification regulator's power to direct
internet service providers to block access to material" just says that any site that fits the criteria can be blocked by an administrative request.
What could possibly go wrong?
Imagine, not implausibly, that some time after the Act is in operation, one of the MPs who pushed for this power goes and sees how it is working. This MP tries a few searches, and finds to their surprise that it is still possible to find websites that
are neither asking for age checks nor blocked.
While the first page or two of results under the new policy would find major porn sites that are checking, or else are blocked, the results on page three and four would lead to sites that have the same kinds of material available to anyone.
In short, what happens when MPs realise this policy is nearly useless?
They will, of course, ask for more to be done. You could write the Daily Mail headlines months in advance: BBFC lets kids watch porn .
MPs will ask why the BBFC isn't blocking more websites. The answer will come back that it would be possible, with more funding, to classify and block more sites, with the powers the BBFC has been given already. While individual review of millions of
sites would be very expensive, maybe it is worth paying for the first five or ten thousand sites to be checked. (And if that doesn't work, why not use machines to produce the lists?)
And then, it is just a matter of putting more cash the BBFC's way, and they can block more and more sites, to "make the Internet safe".
That's the point we are making. The power in the Digital Economy Bill given to the BBFC will create a mechanism to block literally millions of websites; the only real restraint is the amount of cash that MPs are willing to pour into the organisation.
The European Union agreed Tuesday on new rules allowing subscribers of online services in one E.U. country access to them
while traveling in another.
The new portability ruling is the first step of regulation under a drive by the European Commission to introduce a single digital market in Europe.
Announced in May 2015, the proposed Digital Single Market was met with full-throated opposition from Hollywood and Europe's movie and TV industry, which viewed it as a threat to its territory-by-territory licensing of movies and TV shows.
The European Commission, the European Parliament and the E.U.'s Council of Ministers all agreed to new laws which will allow consumers to fully use their online subscriptions to films, sports events, e-books, video games or music services when traveling
within the E.U.
The online service providers will have nine months to adapt to the new rules, which means they will come into force by the beginning of 2018.
An interesting article in Wired reports on a recent Westminster eForum meeting where the British establishment got
together to discuss porn, internet censorship and child protection.
A large portion of the article considers the issue that porn is not generally restricted just to 'porn websites'. It is widely available on more mainstream websites such as Google Images. Stephen Winyard, director and VP of ICM Registry and council
member of the digital policy alliance, argued that Twitter is in fact commercially benefiting from the proliferation of pornography on the network:
It's on Twitter, Reddit, Tumblr, mobile apps - Skype is used hugely for adult content. But Twitter is the largest platform for promoting pornography in the world - and it takes money for it. They pay Twitter money to advertise adult content.
Another good point was that the Digital Censorship Bill going through parliament targets the prevention of children 'stumbling across' porn, so even a partial blockade of porn might somewhat reduce that problem. However Adam Kinsley of Sky
pointed out that partial blocking may not be so effective in stopping kids actively looking for porn. He noted:
The Digital Economy Bill's exact objectives are a little uncertain, but we are trying to stop children stumbling on pornography -- but they are not 'stumbling', they are looking for it and Twitter is where they will [find] it. Whether what the government
is proposing will deal with that threat is unclear. Initially, it did not propose ISPs blocking content. When it comes to extremist sites, the Home Office asks social media platforms to take down content. The government does not ask us to block material
- it has never done that. So this is a big deal. It doesn't happen with the IWF; it doesn't happen with terrorist material, and it wasn't in the government's original proposal. Whether they got it right and how will we deal with these millions of sites,
We're not really achieving anything if only dealing with a few sites.
The Bill is incredibly complex, as it stands. David Austin, from the BBFC, pointed out that for it to implement the bill correctly, it needs to be effective, proportionate, respectful of privacy and accountable - and that the
tens of millions of adults that go online to see legal content must be able to continue to do so.
At the same time, he said:
There is no silver bullet, no one model, no one sector that can achieve all child protection goals.
Parliament's Culture, Media and Sport Select Committee said it would investigate the establishment's concerns about the
public being supposedly swayed by propaganda and untruths.
The inquiry will examine the sources of fake news, how it is spread and its impact on democracy. Damian Collins, the committee chairman, said the rise of propaganda and fabrications is:
A threat to democracy and undermines confidence in the media in general. Just as major tech companies have accepted they have a social responsibility to combat piracy online and the illegal sharing of content, they also need to help address the
spreading of fake news on social media platforms, he said.
Consumers should also be given new tools to help them assess the origin and likely veracity of news stories they read online.
The committee will be investigating these issues as well as looking into the sources of fake news, what motivates people to spread it and how it has been used around elections and other important political debates.
The MPs want to investigate whether the way advertising is bought, sold and placed online has encouraged the growth of fake news. They also want to address the responsibility of search engines and social media to stop spreading it.
New research suggests that online hoaxes and propaganda may have only had limited impact in the US presidential election, however. According to a study by two US economists, fake news which favoured Donald Trump was shared 30 million times in the three
months before the election, four times more than false stories favouring Hillary Clinton. But the authors said that only half of people who saw a false story believed it, and even the most widely circulated hoaxes were seen by only a fraction of voters.
As the internet censorship bill continues its progress through Parliament, news websites have noted a few opinions and soundbites.
A couple of weeks ago David Kaye, the UN's Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, wrote to ministers to warn them that their proposals could breach international law. In his
letter, he said:
I am concerned that the age-verification provisions give the Government access to information of viewing habits and citizen data. Data provided to one part of government can be shared with other parts of government and private sector companies without a
person's knowledge and consent.
He also warned:
While I am cognizant of the need to protect children against harmful content, I am concerned that the provisions under the bill are not an effective way of achieving this objective as they fall short of the standards of international human rights law.
The age-verification requirement may easily be subject to abuse such as hacking, blackmail and other potential credit card fraud.
He also expressed concern at the bill's lack of privacy obligations and at a significant tightening of control over the Internet in the UK.
Murray Perkins, a senior examiner with the BBFC, has indicated that the depiction of violent and criminal pornographic acts would be prohibited both online and off, in accordance with the way obscenity laws are interpreted by British prosecutors.
And the way British prosecutors interpret obscenity laws is very censorial indeed with many totally mainstream porn elements such as squirting and fisting being considered somehow obscene by these government censors.
Jim Killock, executive director of the Open Rights Group, said in an earlier statement the legislation would lead to unprecedented censorship. He noted:
Once this administrative power to block websites is in place, it will invariably be used to censor other content.
Of course pro-censorship campaigners are delighted. Vicki Shotbolt, chief executive officer for Parent Zone, gloated about the end of people's freedom to access porn.
This isn't about reducing anyone's freedom to access porn. It is simply bringing the online world more in line with the offline.
A major fetish forum, FetLife, has announced that censorship pressures have led to the removal of hundreds of groups and thousands of fetishes from the site.
John Baku of FetLife posted (edited for brevity):
I apologize for the deletion of 100s of groups and 1,000s of fetishes without any warning, let alone sufficient notice. I apologize for not making this announcement earlier and leaving everyone in the dark, and most importantly, I
apologize for letting many of you down.
I wish we could have done things differently, but even upon reflection, I believe we did what we had to do to protect the community and FetLife with the information we had when we made each decision along the way.
Before making any decisions, we consulted with multiple parties. We consulted with the team, partners, financial institutions, the NCSF (National Coalition for Sexual Freedom), the FSC (Free Speech Coalition), lawyers, and anyone
else we thought might have insight for us.
So, why did we make the announcement? Everything falls under one of three categories: financial risk, legal risk, and community risk.
Let's first talk quickly about the financial risk and get it out of the way, because I don't want it to detract from the higher-priority issues, i.e. the legal and community risks. A merchant account is what allows us to process credit cards on FetLife. The ads you see on FetLife cover approximately half the cost of our servers and bandwidth -- that's it. Hence, without a merchant account, FetLife runs at a loss every month -- and we are not talking a couple of dollars a month, we are talking significant losses.
Last Tuesday we got a notice that one of our merchant accounts was shutting us down. One of the card companies contacted them directly and told the bank to stop processing for us. The bank asked for more information, but the only
thing they could get from the card company was that part of it had to do with blood, needles, and vampirism.
Three days later, we got another notice, this time from our other merchant account. They got a similar call from the same card company, and they were asked to close our account. This time they were told it was for "Illegal or..."
Hence we can no longer process credit cards on FetLife and will most likely not be able to for a while.
The Legal Risk
There are numerous things at play here:
A highly publicized rape case in Australia involving a member of the community;
An organization that participated in the anti-porn bill and wants to see sites like FetLife taken off the internet;
Talk of reviving the obscenity task force in the US;
The Digital Economy Bill currently being debated in the UK;
The BPjM in Germany; and
The fact that we've been one of the most liberal, if not the most liberal, adult sites on the web, which makes us the perfect target.
We can put our heads in the sand, but that is both naive and irresponsible. All of the above carry real legal risks with potentially equally real consequences -- maybe not to you directly, but to FetLife, the team behind FetLife, and myself.
The Community Risk
The one thing that bonds us all together is our love for the kinky community. Without the kinky community, without sites like FetLife, many of us would not have a place to call home, a place in which we are accepted and understood,
and dare I say a place in which we feel free to be ourselves.
If we hope to win the war, if we want our society to be more accepting of us, then we can't give them a reason to vilify us. People always need someone to blame, and we need to stop making ourselves the easy target.
Both FetLife and the NCSF believe that the proposed changes will give us the opportunity to flourish as a community while better protecting ourselves from outside attack.
With the help of the NCSF, lawyers, partners, and merchant providers, we came up with the following pillars that will make up our guidelines:
Nothing non-consensual (abduction, rape, etc.)
Nothing that impairs consent (drugs, alcohol, etc.)
No permanent or lasting damage (snuff, lacerations, deep cutting, etc.)
No hate speech (Nazi roleplay, race play, etc.)
Nothing that falls under obscenity (incest, etc.)
We hope to be able to publish our new content guidelines shortly as well as implement changes to caretaking so that we don't ever find ourselves in a similar situation again.
French authorities ordered the blocking or removal of more than 2,700 websites in 2016, Interior Minister Bruno Le Roux announced. His government requested blocks for 834 websites and asked that 1,929 more be pulled from search engines' results as part of the fight against child pornography and terrorist content. He said:
To face an extremely serious terror threat, we've given ourselves unprecedented means to reinforce the efficacy of our actions.
Perhaps to obscure the details of the censorship, Le Roux unhelpfully gave no statistics on what types of websites were blocked.
French authorities can block sites without a judge's order under a 2011 law that was brought into effect after jihadist attacks killed 17 people at a satirical magazine and a kosher supermarket.
Virtual reality headset manufacturer Oculus have announced that all games made available on its Oculus Store must have an age
classification determined using tools from the International Age Rating Coalition (IARC). The company writes in a blog post:
We're committed to helping everyone on the Oculus platform make well-informed purchasing decisions. That's why we are now utilizing the International Age Rating Coalition (IARC) to give people trusted and familiar ratings for all Oculus experiences.
Moving forward, all titles in the Oculus Store will need to show age and content ratings assigned through the IARC rating process. This change will make it easier for developers to get age and content ratings for your app from multiple territories
simultaneously. It also provides consumers a consistent set of familiar and trusted ratings that reflect their own cultural norms regarding content and age-appropriateness.
In order to give people consistent ratings no matter where they live, all titles in the Oculus Store must have IARC assigned ratings. New titles submitted to the store will receive an automatic prompt to obtain their rating through IARC by answering a
simple set of questions. IARC will provide a rating for each applicable region and rating authority at the conclusion of the questionnaire. The ratings will then be automatically applied to the title. Existing titles will need to complete the IARC rating
process no later than March 1, 2017 to avoid removal from the Oculus Store.
Last year, a coalition of over 70 social justice groups and individuals released a list of demands to
Facebook founder and CEO, Mark Zuckerberg, asking him to address their concerns over Facebook's use of censorship in compliance with law enforcement.
Several organizations reported on activists whose Facebook accounts were censored while covering the civilian uprisings in Charlotte, NC. Other incidents include the removal of live footage from anti-Dakota Access Pipeline protests, the temporary
disabling of Palestinian journalists' accounts, and reports that Facebook sent data to help police track and surveil protesters in Ferguson, MO and Baltimore, MD. Reem Suleiman, campaigner at SumOfUs, said:
We're still in the dark about how Facebook censors users and collaborates voluntarily with law enforcement. Facebook needs to come clean with the hundreds of thousands of people asking for transparency and public accountability.
Brandi Collins, Campaign Director for Color Of Change said:
Social media platforms like Facebook are a powerful tool for Black people to draw attention to injustices our community faces. That's why we're so concerned that a powerful company like Facebook has been quick to silence Black voices by censoring individual Facebook users at the request of law enforcement. We recognize Facebook is under pressure from law enforcement, but the company has a responsibility to protect its users' freedom of expression. Unfortunately, each time we've tried to engage Facebook around these issues, our suggestions have been dismissed or ignored. We will continue to publicly call for an overhaul of Facebook's current policies and practices until the company refuses to enable the censorship of Black communities.
Although the group is calling on Facebook to censor its own activists less, the coalition wrote to Facebook to ask for its opponents to be censored more:
At the same time, harassment and threats directed at activists based on their race, religion, and sexual orientation are thriving on Facebook. Many of these activists have reported such harassment and threats by users and pages on Facebook, only to be told that they don't violate Facebook's Community Standards. Similar experiences have been reported by Facebook users from a variety of communities, yet your recent response indicates you believe you are adequately addressing the problem. We disagree.
Liberty is launching a landmark legal challenge to the extreme mass surveillance powers in the Government's new Investigatory Powers Act -- which lets the
state monitor everybody's web history and email, text and phone records, and hack computers, phones and tablets on an industrial scale.
Liberty is seeking a High Court judicial review of the core bulk powers in the so-called Snoopers' Charter -- and calling on the public to help it take on the challenge by donating via crowdfunding platform CrowdJustice.
Martha Spurrier, Director of Liberty, said:
Last year, this Government exploited fear and distraction to quietly create the most extreme surveillance regime of any democracy in history. Hundreds of thousands of people have since called for this Act's repeal because they see it for what it is -- an
unprecedented, unjustified assault on our freedom.
We hope anybody with an interest in defending our democracy, privacy, press freedom, fair trials, protest rights, free speech and the safety and cybersecurity of everyone in the UK will support this crowdfunded challenge, and make 2017 the year we
reclaim our rights.
The Investigatory Powers Act passed in an atmosphere of shambolic political opposition last year, despite the Government failing to provide any evidence that such indiscriminate powers were lawful or necessary to prevent or detect crime.
Liberty will seek to challenge the lawfulness of the following powers, which it believes breach the public's rights:
Bulk hacking -- the Act lets police and agencies access, control and alter electronic devices like computers, phones and tablets on an industrial scale, regardless of whether their owners are suspected of involvement
in crime -- leaving them vulnerable to further attack by hackers.
Bulk interception -- the Act allows the state to read texts, online messages and emails and listen in on calls en masse, without requiring suspicion of criminal activity.
Bulk acquisition of everybody's communications data and internet history -- the Act forces communications companies and service providers to hand over records of everybody's emails, phone calls and texts and entire web
browsing history to state agencies to store, data-mine and profile at will. This provides a goldmine of valuable personal information for criminal hackers and foreign spies.
Bulk personal datasets -- the Act lets agencies acquire and link vast databases held by the public or private sector. These contain details on religion, ethnic origin, sexuality, political leanings and health problems,
potentially on the entire population -- and are ripe for abuse and discrimination.
In a challenge to the Data Retention and Investigatory Powers Act (DRIPA) by MP Tom Watson, represented by Liberty, the CJEU ruled the UK Government was breaking the law by indiscriminately collecting and accessing the nation's internet activity
and phone records.
DRIPA forced communications companies to store records of everybody's emails, texts, phone calls and internet communications and let hundreds of public bodies grant themselves access with no suspicion of serious crime or independent sign-off.
Judges ruled the regime breached British people's rights because it:
Allowed indiscriminate retention of all communications data.
Did not restrict access to the purpose of preventing and detecting precisely defined serious crime.
Let police and public bodies authorise their own access, instead of requiring prior authorisation by a court or independent body.
Did not require that people be notified after their data had been accessed.
Did not require that the data be kept within the European Union.
DRIPA expired at the end of 2016 -- but its powers are replicated and vastly expanded in the Investigatory Powers Act, with no effort to counter the lack of safeguards found unlawful in the case.
For weeks, the German and international public sphere has been bombarded with a campaign
against so-called fake news. Now Der Spiegel is reporting that the government wants to establish a Defence Centre against Misinformation, a type of censorship and propaganda agency.
The Defence Centre will be set up in the Federal Press Office under Steffen Seibert. The new centre is supposed to strengthen the political power of defence of the population and force social networks such as Facebook, Google and Twitter to
censor content posted by users.
The acceptance of a post-factual age would amount to political capitulation, an internal paper quoted by Der Spiegel said. The paper insisted that authentic political communication remains crucial in the 21st century as well.
Accordingly, wide-reaching measures would have to be formulated to deal with the disinformation campaign, fake news and the manipulation of public opinion.
The World Socialist Web Site notes:
In reality the plans for an Orwellian Truth Ministry have nothing to do with concerns about false news reports. Instead, the established parties, the state media and private media corporations fear that they are losing their monopoly on public opinion.
The Internet has provided millions of people with the possibility, for the first time, of obtaining access to information that has not been selected and filtered by the official media. This has been behind the fear in the media and political parties.
The ruling class is reacting to growing social tensions and political discontent in the same way it has in the past: with police, prosecution and the suppression of free speech.
Maybe German politicians are just panicking about the unpopularity of their free-for-all immigration and refugee policy.
Last year the state of California passed a new law banning sites that offer paid subscriptions and allow people to post CVs and bios from
publishing individuals' ages. The law came into effect on 1st January 2017, and it is now being challenged by IMDb, who have not taken down celebrity birthdays.
The state of California introduced the new law as a politically correct move against age discrimination. Perhaps they would have done better to frame the birthday ban in terms of privacy protection: date of birth is a key piece of information enabling identity fraud.
IMDb believes that the law is a violation of the First Amendment, and says the state has chosen instead to chill free speech and to undermine access to factual information of public interest rather than trying to tackle age discrimination in a
more meaningful way. IMDb has now filed a lawsuit against the Californian law.
A bill allowing Israeli courts to force social media companies to remove content defined as incitement has passed its first reading in parliament.
The Facebook bill sponsored by ministers Gilad Erdan and Ayelet Shaked would allow Israeli courts to immediately order content taken down if it is deemed to pose a public, personal or state security risk and constitutes a criminal offense.
Facebook adheres to its own removal policy when it comes to online content and freedom of speech issues and has generally not removed as much as state censors would like.
Tehilla Shwartz Altshuler of the Israel Democracy Institute has criticized the Facebook bill as too broad. She commented that the bill will not solve the problem and will hurt freedom of expression for all.
Facebook has once again drawn sharp criticism over its censorship policies after the social media giant reportedly blocked a photo
of the historic naked statue of the sea god Neptune that stands in the Piazza del Nettuno in Bologna, Italy.
Local writer Elisa Barbari said she chose a photograph of the 16th century, 3.2-metre high bronze Renaissance statue of the sea god holding a trident to illustrate her Facebook page, titled Stories, curiosities and views of Bologna.
However, Facebook reportedly objected to the nude image of the iconic statue. In a statement, the social media company told Barbari:
The use of the image was not approved because it violates Facebook's guidelines on advertising. It presents an image with content that is explicitly sexual and which shows to an excessive degree the body, concentrating unnecessarily on body parts.
Inevitably, once sufficient bad press had been generated by Facebook's ludicrous aversion to trivial nudity, the company admitted that it had again made a ghastly mistake and grovelled:
Our team processes millions of advertising images each week, and in some instances we incorrectly prohibit ads. This image does not violate our ad policies. We apologise for the error and have let the advertiser know we are approving their ad.