Twitter is giving itself the facility to withhold content in specific countries, while keeping that content available for the rest of the world, the company has announced.
Until now, the only way for Twitter to censor content was to eliminate it from the site universally. This change means content deemed inappropriate by a specific government can be withheld locally, explains a blog post titled Tweets Still Must Flow.
When we receive a request from an authorized entity, we will act in accordance with appropriate laws and our terms of service, a Twitter rep told Mashable.
If and when content is withheld, affected users will be notified of the censorship of an account or tweet. Twitter will make each decision public on Chilling Effects, through an expanded partnership that charts Cease and Desist Notices.
Twitter's new approach to censoring tweets has users rallying around the hashtag #TwitterBlackout, a call to boycott the microblogging service.
The change lets Twitter withhold content on a country-by-country basis, when a government deems the tweets inappropriate. Rather than wholly removing the content from the site, it will now only be blocked locally.
Many users have expressed dissatisfaction with the change. Tweets have been streaming in, in various languages, all with the #TwitterBlackout hashtag.
Anonymous has also supported the blackout. One of its tweets read:
SPREAD THE WORD #TwitterBlackout I will not tweet for the whole of January 28th due to the new twitter censor rule #Twitter #J28
Offsite: What Does Twitter's Country-by-Country Takedown System Mean for Freedom of Expression?
So what should Twitter users do? Keep Twitter honest. First, pay attention to the notices that Twitter sends and to the archive being created on Chilling Effects. If Twitter starts honoring court orders from India to take down tweets that are
offensive to the Hindu gods, or tweets that criticize the king in Thailand, we want to know immediately. Furthermore, transparency projects such as Chilling Effects allow activists to track censorship all over the world, which is the first step
to putting pressure on countries to stand up for freedom of expression and put a stop to government censorship.
What else? Circumvent censorship. Twitter has not yet blocked a tweet using this new system, but when it does, that tweet will not simply disappear---there will be a message informing you that content has been blocked due to your geographical
location. Fortunately, your geographical location is easy to change on the Internet. You can use a proxy or a Tor exit node located in another country. Read Write Web also suggests that you can circumvent per-country censorship by simply changing
the country listed in your profile.
Twitter CEO Dick Costolo took the stage at AllThingsD's media conference to defend the company's new censorship policies. He argued that Twitter's new policies allow for greater freedom of speech on the platform. Previously, when a government
demanded that Twitter remove a tweet or block a user, access to that content would be blocked from the entire world. Now, Twitter can hide the tweet or user from that individual country, but allow the rest of the world to see it. Costolo said:
There's been no change in our stance or attitude or policy with respect to content on Twitter. What we announced is a greater capability we now have. Now, when we are issued a valid legal order in a country in which we operate, such as a DMCA
takedown notice, we are able to leave the content up for as many people around the world as possible, while still operating within the local law. You can't operate in these countries and choose the laws you want to abide by.
We don't proactively go do anything. This is purely a reactive capability to what we determine to be a valid and applicable legal order in a country in which we operate. We're fully blocked in Iran and China. And I don't see the current
environment in either country being one in which we could go and operate anytime soon.
The Thai government becomes the first to publicly endorse Twitter's decision to permit country-specific censorship of content
Thai information and communication technology minister, Jeerawan Boonperm, called Twitter's decision a welcome development and said the ministry already received good co-operation from internet companies such as Google and Facebook.
The Thai government would soon be contacting Twitter to discuss ways in which they can collaborate, she told the Bangkok Post.
Thailand has some of the most repressive censorship laws in the world, ranking 153rd out of 178 in Reporters Without Borders' 2011 Press Freedom Index. In particular these are used to target criticism of the monarchy. Lese-majeste laws carry
punishments of up to 15 years in prison, and under Thailand's 2007 computer crimes act prosecutors have been able to increase sentences further.
Thailand's endorsement could have profound ramifications across the region, said Sunai Phasuk of Human Rights Watch Thailand, adding that it compounds an already worrying trend in the country. Twitter gives space to different
opinions and views, and that is so important in a restricted society -- it gives people a chance to speak up, he said. But if this censorship is welcomed by Thailand, then other countries, with worse records for human rights and freedom of
speech, will find that they have an ally.
In early June, about three weeks before Beyonce's latest album came out, one of her songs, a collaboration with the rapper Andre 3000, made its way to the open seas of the Internet. Twitter recently published a batch of data that sheds light
on the leak and provides insight into how Twitter censors information on the Internet.
It began when a website called RapUp published a link to the song, Party. Someone tweeted the link and lots of people retweeted it. From the perspective of Beyonce's record label, Columbia, this was not cool. So Columbia turned to
a London-based contractor called Web Sheriff, which sent a takedown request to Twitter. It contained a list of over 100 of those copyright-infringing tweets and retweets. Twitter wrote back quickly: We have removed the reported materials from the site.
Twitter has removed thousands of tweets from its site over the years, and last month, it published the more than 4,000 takedown requests that have floated into its inbox since 2009.
A request for an injunction to stop Twitter users from alerting drivers to police roadblocks, radar traps and drunk-driving checkpoints could make Brazil the first country to take Twitter up on its plan to censor content at governments' requests.
Twitter unveiled plans last month that would allow country-specific censorship of tweets that might break local laws.
As far as we know this is the first time that a country has attempted to take Twitter up on their country-by-country takedown, said Eva Galperin of the San Francisco-based Electronic Frontier Foundation. Twitter has given these
countries the tool and now Brazil has chosen to use it, she said.
Carlos Eduardo Rodrigues Alves, a spokesman for the federal prosecutor's office, said the injunction request was filed Monday. He said a judge was expected to announce in the next few days whether he will issue the order against Twitter users.
Twitter is taking its first step towards censorship.
Chief Executive Dick Costolo, speaking to the Financial Times, described his frustration in tackling the problem of horrifying abuse. Irresponsible Twitter users apparently find the site ideal for expressing all kinds of extremist,
racist and sexist opinions.
To stop the hate speech anarchy, Twitter is considering starting off by blocking the very possibility of replies from so-called non-authoritative users, marked out by the absence of a profile picture, followers or bio information,
as FT.com reports. This is the first step, but there might be more to come.
However, the company's management is concerned that by introducing any kind of selective measures, it may put an end to the unique Twitter-style freedom of tweets that helped fuel the Arab revolutions.
The reason we want to allow pseudonyms is there are lots of places in the world where it's the only way you'd be able to speak freely, FT quotes Dick Costolo as saying. Twitter is basically the last harbor of anonymity, as it does
not have to be linked with such powerful database platforms as Facebook and Google. Silencing trolls may hit those revolutionary users as well.
In February 2012, Twitter introduced a policy that enables individual tweets and accounts to be blocked on a country-by-country basis. If a government submits a court order to Twitter, asking for a tweet or account to be blocked, Twitter will
comply. But the blocking will only occur in the country in question; to users throughout the rest of the world, the affected content will look no different.
This past October, Twitter enacted this policy for the first time to block tweets from the account of the German extreme right-wing group, Besseres Hannover. The German government has formally banned and seized the assets of the group, and some
of its members have been charged with inciting racial hatred and creating a criminal organization.
The group announced that it would challenge the blocking in court, but as things stand, Twitter's move to block the group's tweets was in accordance with local German law.
Twitter's general counsel, Alex MacGillivray, announced the issue on Twitter and linked to a copy of the request from German police to block the @hannoverticker account in Germany.
Twitter has announced new censorship rules related to tweets deemed to be abusive. Twitter explains in a blog post:
First, we are making two policy changes, one related to prohibited content, and one about how we enforce certain policy violations. We are updating our violent threats policy so that the prohibition is not limited to direct, specific threats
of violence against others but now extends to threats of violence against others or promot[ing] violence against others. Our previous policy was unduly narrow and limited our ability to act on certain kinds of threatening behavior.
The updated language better describes the range of prohibited content and our intention to act when users step over the line into abuse.
On the enforcement side, in addition to other actions we already take in response to abuse violations (such as requiring users to delete content or verify their phone number), we're introducing an additional enforcement option that gives our
support team the ability to lock abusive accounts for specific periods of time. This option gives us leverage in a variety of contexts, particularly where multiple users begin harassing a particular person or group of people.
Second, we have begun to test a product feature to help us identify suspected abusive Tweets and limit their reach. This feature takes into account a wide range of signals and context that frequently correlates with abuse including the age of
the account itself, and the similarity of a Tweet to other content that our safety team has in the past independently determined to be abusive. It will not affect your ability to see content that you've explicitly sought out, such as Tweets from
accounts you follow, but instead is designed to help us limit the potential harm of abusive content. This feature does not take into account whether the content posted or followed by a user is controversial or unpopular.
Twitter has introduced a new censorship system with the unlikely-sounding capability to detect abusive tweets and suspend accounts without waiting for complaints to be flagged. Transgressions result in the senders receiving half-day suspensions.
The company has refused to provide details on specifically how the new system works, but using a combination of behavioral and keyword indicators, the filter flags posts it deems to be violations of Twitter's acceptable speech policy and issues
users suspensions of half a day during which they cannot post new globally accessible tweets and their existing tweets are visible only to followers.
From the platform that once called itself the free speech wing of the free speech party, these new tools mark an incredible turn of events. The anti-censorship ethic seems to have been lost in a failed attempt to sell the company after
prospective buyers were unhappy with the lack of censorship control over the platform.
Inevitably Twitter has refused to provide even an outline of the indicators it is using, especially when it comes to the particular linguistic cues it is concerned with. While offering too much detail might give the upper hand to those who
would try to work around the new system, it is important for the broader community to have at least some understanding of the kinds of language flagged by Twitter's new tool so that they can try to stay within the rules.
It is also unclear why Twitter chose not to permit users to contest what they believe to be a wrongful suspension. Given that the feature is brand-new and bound to encounter plenty of unforeseen contexts where it could yield a wrong result, it is
surprising that Twitter chose not to provide a recovery mechanism that could catch such errors before they become news.
And the first example of censorship was quick to follow. Many outlets this morning picked up on a frightening instance of the Twitter algorithm's new power to police not only the language we use but the thoughts we express. In this case a user
allegedly tweeted a response to a news report about comments made by Senator John McCain and argued that it was his belief that the senator was a traitor who had committed formal treason against the nation. Twitter did not respond to a
request for more information about what occurred in this case and if this was indeed the tweet that caused the user to be suspended, but did not dispute that the user had been suspended or that his use of the word traitor had factored
heavily into that suspension.
Twitter is continuing its campaign to add controls and warnings to tweets.
It now presents a warning when users click on a profile that may include sensitive content. The warning greys out the profile's tweets, bio and profile picture, but gives users the option to view the profile if they wish.
Twitter used to only mark individual tweets with a sensitivity warning, but has now expanded this to censor whole profiles unless users agree to view them.
The warning message given with the greyed out profile says:
Caution: This profile may include sensitive content. You're seeing this warning because they tweet sensitive images or language. Do you still want to view it?
Twitter did not publicly announce the new feature, and tweeters with profiles being greyed out are not informed by Twitter.
If you're looking to follow news and advocacy about an anticipated Vermont legislature vote this week on legalizing marijuana, a search for the latest tweets that use the combined terms Vermont and marijuana will for many Twitter
users yield zero results.
Same goes for searches for tweets using the terms pot, weed or cannabis. The latest results for jackass and jerk, words generally printed without censorship by news outlets, also yield a blank page with a
message claiming: Nothing came up for that search, which is a little weird. Maybe check what you searched for and try again.
The omissions are examples of a new censorship system introduced by Twitter, with users required to opt out of a filter to see uncensored results.
Top results for restricted terms still appear, but results for the most recent posts and for photos, videos and news content tabs do not.
There was plenty of strong language flying around on Twitter in response to the Harvey Weinstein scandal. Twitter got a bit confused about who was harassing whom, and ended up suspending Weinstein critic Rose McGowan for harassment. Twitter ended
up being boycotted over its wrong call, and so Twitter bosses have been putting their heads together to do something.
Wired has got hold of an email outlining an expansion of the content liable to Twitter censorship, along with more severe sanctions for errant tweeters. Twitter's head of safety policy wrote of new measures to be rolled out in the coming weeks:
Our definition of "non-consensual nudity" is expanding to more broadly include content like upskirt imagery, "creep shots," and hidden camera content. Given that people appearing in this content often do not know the material
exists, we will not require a report from a target in order to remove it.
While we recognize there's an entire genre of pornography dedicated to this type of content, it's nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually. We would rather err on the
side of protecting victims and removing this type of content when we become aware of it.
Unwanted sexual advances
Pornographic content is generally permitted on Twitter, and it's challenging to know whether or not sexually charged conversations and/or the exchange of sexual media may be wanted. To help infer whether or not a conversation is consensual, we
currently rely on and take enforcement action only if/when we receive a report from a participant in the conversation.
We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation.
Hate symbols and imagery (new)
We are still defining the exact scope of what will be covered by this policy. At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence).
More details to come.
Violent groups (new)
We are still defining the exact scope of what will be covered by this policy. At a high level, we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause. More details to come
here as well.
Tweets that glorify violence (new)
We already take enforcement action against direct violent threats ("I'm going to kill you"), vague violent threats ("Someone should kill you") and wishes/hopes of serious physical harm, death, or disease ("I hope someone
kills you"). Moving forward, we will also take action against content that glorifies ("Praise be to for shooting up. He's a hero!") and/or condones ("Murdering makes sense. That way they won't be a drain on social
services"). More details to come.
Offsite Article: Changes to the way that 'sensitive' content is defined and blocked from Twitter search
Twitter announced yesterday that it would begin removing verification badges for famous tweeters that it does not approve of. Not for what is tweeted, but for offline behaviour Twitter does not like.
The key phrase in Twitter's policy update is this one: Reasons for removal may reflect behaviors on and off Twitter. Before yesterday, the rules explicitly applied only to behavior on Twitter. From now on, holders of verified badges will be held
accountable for their behavior in the real world as well. Twitter has promised further information about the new censorship policy in due course.
Many questions remain unanswered. What will the company's review consist of? How will it examine users' offline behavior? Will it simply respond to reports, or will it actively look for violations? Will it handle the work with its existing team,
or will it expand its trust and safety team?
Twitter has promptly rescinded blue tick verification from accounts belonging to far-right activists, including Jason Kessler, a US white supremacist, and Tommy Robinson, founder of the English Defence League.
Offsite Comment: Twitter has turned its back on free speech
The platform plans to exercise ideological control over its users.
A new undercover video from a group of conservative investigative journalists appears to show Twitter staff and former employees talking about how they censor content they disagree with.
James O'Keefe, Project Veritas founder, posted a video showing an undercover reporter speaking to Abhinov Vadrevu, a former Twitter software engineer, at a San Francisco restaurant on January 3.
There, he discussed a technique referred to as shadow banning, which means that users' content is quietly blocked without them ever knowing about it. Their tweets would still appear to their followers, but it wouldn't appear in search results or
anywhere else on Twitter. So posters just think that no one is engaging with their content, when in reality, no one is seeing it.
Olinda Hassan, a policy manager for Twitter's Trust and Safety team, was filmed talking about development of a system for down ranking shitty people.
Another Twitter engineer claimed that staff already have tools to censor pro-Trump or conservative content. One Twitter engineer appeared to suggest that the social network was trying to ban, like, a way of talking. Anyone found to be aggressive
or negative will just vanish.
Every single conversation is going to be rated by a machine and the machine is going to say whether or not it's a positive thing or a negative thing, Twitter software engineer Steven Pierre was filmed on December 8 saying as he discussed the
development of an automated censure system.
In the latest undercover Project Veritas video investigation, eight current and former Twitter employees are on camera explaining steps the social media giant is taking to censor political content that they don't like.
Twitter has outlined further censorship measures in a blog post:
In March, we introduced our new approach to improve the health of the public conversation on Twitter. One important issue we've been working to address is what some might refer to as "trolls." Some troll-like behavior is fun, good and
humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our
policies, and, in those cases, we take action on them. Others don't but are behaving in ways that distort the conversation.
To put this in context, less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what's reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large --
and negative -- impact on people's experience on Twitter. The challenge for us has been: how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?
A New Approach
Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we're tackling issues of behaviors that distort and detract
from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we're able to improve the health of the conversation, and
everyone's experience on Twitter, without waiting for people who use Twitter to report potential issues to us.
There are many new signals we're taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that
repeatedly Tweet and mention accounts that don't follow them, or behavior that might indicate a coordinated attack. We're also looking at how accounts are connected to those that violate our rules and how they interact with each other.
These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn't violate our policies, it will remain on Twitter, and will be available if you click on
"Show more replies" or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.
In our early testing in markets around the world, we've already seen this new approach have a positive impact, resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations. That means fewer people are seeing
Tweets that disrupt their experience on Twitter.
Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone's Twitter experience better. This technology and our team will learn over time and will make mistakes. There will be false
positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We'll continue to be open and honest about the mistakes we make and the progress we are making. We're encouraged by the results we've seen so
far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it.
The radio host and colourful conspiracy theorist Alex Jones has been permanently censored by Twitter.
One month after it distinguished itself from the rest of the tech industry by declining to bar the rightwing shock jock from its platform, Twitter fell in line with the other major social networks in banning Jones.
Twitter justified the censorship saying:
We took this action based on new reports of Tweets and videos posted yesterday that violate our abusive behavior policy, in addition to the accounts' past violations. We will continue to evaluate reports we receive regarding other accounts
potentially associated with @realalexjones or @infowars and will take action if content that violates our rules is reported or if other accounts are utilized in an attempt to circumvent their ban.
PayPal is the latest tech company to ban Infowars. PayPal told PCMag:
We undertook an extensive review of the Infowars sites, and found instances that promoted hate or discriminatory intolerance against certain communities and religions, which run counter to our core value of inclusion.
InfoWars said PayPal gave it 10 days to find an alternate payment provider before terminating the service. PayPal didn't cite the specific instances of hate speech, but Infowars says the content involved criticism of Islam and opposition to
transgenderism being taught to children in schools.
Twitter is consulting its users about new censorship rules banning 'dehumanising speech', in which people are compared to animals or objects. It said language that made people seem less than human had repercussions.
The social network already has a hateful-conduct policy but it is implemented selectively, allowing some types of insulting language to remain online. For example, countless tweets describing middle-aged white men as gammon can be found on the platform.
At present it bans insults based on a person's race, ethnicity, nationality, sexual orientation, sex, gender, religious beliefs, age, disability or medical condition, but there is an unwritten secondary rule which means that the prohibition excludes groups
not favoured under the conventions of political correctness.
Twitter said it intended to prohibit dehumanising language towards people in an identifiable group because some researchers claim it could lead to real-world violence. Asked whether calling men gammon would count as dehumanising speech, the
company said it would first seek the views of its members. Twitter's announcement reads in part:
For the last three months, we have been developing a new policy to address dehumanizing language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence. Some of this
content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity,
religious affiliation, age, disability, or serious disease), but there are still Tweets many people consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy public conversation.
With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target. Many scholars have examined
the relationship between dehumanization and violence. For example, Susan Benesch has described dehumanizing language as a hallmark of dangerous speech, because it can make violence seem acceptable, and Herbert Kelman has posited that
dehumanization can reduce the strength of restraining forces against violence.
Twitter's critics are now using the hashtag #verifiedhate to highlight examples of what they believe to be bias in what the platform judges to be unacceptable. The gammon insult gained popularity after a collage of contributors to the BBC's
Question Time programme - each middle-aged, white and male - was shared along with the phrase Great Wall of Gammon in 2017.
The scope of identifiable groups covered by the new rules will be decided after a public consultation that will run until 9 October.
PS: before filling in the consultation form, note that it was broken for me and didn't accept my submission. For the record, Melon Farmer tried to submit the comment:
This is yet another policy that restricts free speech. As always, the vagueness of the rules will allow Twitter, or its moderators, to arbitrarily apply its own morality anyway. But not to worry, the richness of language will always enable
people to dream up new ways to insult others.