Our goal is to help ensure that you're viewing content that's relevant to you, and not inadvertently coming across content that isn't. Here are a few things we came up with:
Stricter standard for mature content - While videos featuring pornographic images or sex acts are always removed from the site when they're flagged, we're tightening the standard for what is considered sexually suggestive. Videos with
sexually suggestive (but not prohibited) content will be age-restricted, which means they'll be available only to viewers who are 18 or older.
Demotion of sexually suggestive content and profanity - Videos that are considered sexually suggestive, or that contain profanity, will be algorithmically demoted on our Most Viewed, Top Favourited, and other browse pages. The
classification of these types of videos is based on a number of factors, including video content and descriptions. In testing, we've found that out of the thousands of videos on these pages, only a handful each day are automatically demoted for being too graphic or explicit. However, those videos are often the ones that end up being repeatedly flagged by the community as inappropriate.
Improved thumbnails - To make sure your thumbnail represents your video, your choices will now be selected algorithmically.
More accurate video information - Our Community Guidelines have always prohibited folks from attempting to game view counts by entering misleading information in video descriptions, tags, titles, and other metadata. We remain serious about
enforcing these rules. Remember, violations of these guidelines could result in removal of your video and repeated violations will lead to termination of your account.
In recent months, long-time users of video-sharing website YouTube have noticed that the Google-owned site's definition of acceptable content has narrowed considerably.
In addition to its longstanding campaign to crack down on illegally copied material, in September the site outlawed videos depicting drug abuse and last week tightened its guidelines further to restrict profanity and sexually suggestive content.
In other words, before the money wagons roll in, some law and order needs to be imposed.
YouTube has broadened the range of barred activities to include, amongst other things, invasions of privacy.
If a video you've recorded features people who are readily identifiable and who haven't consented to being filmed, there's a chance they'll file a privacy complaint seeking its removal, say its new guidelines: Don't post other people's
personal information, including phone numbers, addresses, credit card numbers, and government IDs. We're serious about keeping our users safe and suspend accounts that violate people's privacy.
It also said that material designed to harass people was not welcome. If you wouldn't say it to someone's face, don't say it on YouTube, say the new guidelines: And if you're looking to attack, harass, demean, or impersonate others, go elsewhere.
The new guidelines also seek to govern the behaviour of people reacting to videos: Users shouldn't feel threatened when they're on YouTube. Don't leave threatening comments on other people's videos.
YouTube has introduced a new tier of censorship designed to restrict the audience for videos deemed inappropriate or offensive.
The site is now putting videos into a limited state if they are deemed controversial enough to be considered objectionable, but not hateful, pornographic or violent enough to be banned altogether.
This policy was announced several months ago but has come into force in the past week, prompting anger among members of the YouTube community.
YouTube defines Limited Videos as follows:
Our Community Guidelines prohibit hate speech that either promotes violence or has the primary purpose of inciting hatred against individuals or groups based on certain attributes. YouTube also prohibits content intended to recruit for terrorist
organizations, incite violence, celebrate terrorist attacks, or otherwise promote acts of terrorism. Some borderline videos, such as those containing inflammatory religious or supremacist content without a direct call to violence or a primary
purpose of inciting hatred, may not cross these lines for removal. Following user reports, if our review teams determine that a video is borderline under our policies, it may have some features disabled.
These videos will remain available on YouTube, but will be placed behind a warning message, and some features will be disabled, including comments, suggested videos, and likes. These videos are also not eligible for ads.
Having features disabled on a video will not create a strike on your account.
Videos which are put into a limited state cannot be embedded on other websites. They also cannot be easily published on social media using the usual share buttons and other users cannot comment on them. Crucially, the person who made the video
will no longer receive any payment.
Earlier this week, Julian Assange wrote:
'Controversial' but contract-legal videos [which break YouTube's terms and conditions] cannot be liked, embedded or earn [money from advertising revenue].
What's interesting about the new method deployed is that it is a clear attempt at social engineering. It isn't just turning off the ads. It's turning off the comments, embeds, etc. too. Everything possible to strangle the reach without [...]
US Catholics have become early victims of newly introduced censorship measures from YouTube, presumably because their teaching is considered offensive due to politically incorrect attitudes towards gays and abortion. Catholic Online writes:
More media organizations are criticizing YouTube's increasingly oppressive soft censorship policies which are now eliminating mainstream news reports from the video sharing network. Many content creators on YouTube are losing millions in revenue
as the Google-owned firm reduces and cuts off payments in pursuit of profits and control.
YouTube is censoring content through various indirect means even if that content does not violate any terms of service. The Google-owned firm is removing content that it deems inappropriate or offensive, and is taking cues from the Southern
Poverty Law Center. The result seems to be a broad labeling of content, and the suppression of even mainstream news. Many of Catholic Online's bible readings have been caught up in YouTube's web of suppression, despite containing no commentary or
message other than the reading of the scriptures.
YouTube is not a government agency but a private platform, so it is free to ban or restrict content as it pleases. Therefore, its policies, no matter how arbitrary, are not true censorship. However, the firm is practicing what some call soft censorship.
Soft censorship is any kind of activity that suppresses speech, particularly that which is true and accurate. It takes many forms. For example, broadcasting celebrity gossip in place of news is a form of soft censorship. Placing real news lower
in search results, preventing content from being shared on social media, or depriving media outlets of ad revenue for reporting on certain topics, are all common forms of soft censorship.
For some unknown reason, Catholic Online has also been targeted by these policies. Saints videos and daily readings are the most common targets. None of this content can be considered objectionable by any means, and none of it infringes on
YouTube's terms and conditions. It is suspected that anti-Christian bigotry, such as that promoted by liberal extremist organizations like the Southern Poverty Law Center, is to blame.
The problem for content creators and media organizations is that there are few places for them to go. Most video viewing takes place on YouTube, and there are no video hosting sites as well known and widely used as YouTube. Other sites also
restrict content and some don't share revenues with content creators. This makes YouTube a monopoly; it is effectively the only show in town.
The time has come for governments around the world to recognize that Facebook, Google, and YouTube control the public forum. If freedom of speech is to be protected, then these firms must be compelled to abide by free speech rules.
YouTube's algorithms, which are used to censor and demonetize videos on the platform, are killing its creators, according to a report.
Most of the initial censorship is left to algorithms [which probably flag a video for censorship as soon as they detect something politically incorrect], which presumably leads to the over-censorship underpinning the complaints.
Creators complain that YouTube has set up a slow and inefficient appeals system to counter cases of unfair censorship. Ad-disabled videos on YouTube must get 1,000 views in the span of seven days just to qualify for a review.
This approach hurts smaller YouTube channels, because it removes the ability for creators to make money on the most important stage of a YouTube video's life cycle: the first seven days, the report explains. Typically, videos receive 70% or more
of their views in the first seven days, according to multiple creators.
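A minimal sketch of the review rule described above. The 1,000-view threshold, the seven-day window, and the 70% first-week figure are from the report; the function names and the idea of projecting an implied lifetime-view requirement are illustrative assumptions, not YouTube's actual implementation:

```python
def eligible_for_demonetization_review(views_in_first_seven_days: int) -> bool:
    """Hypothetical model of the stated appeal rule: an ad-disabled video
    qualifies for human review only once it reaches 1,000 views within
    seven days of upload."""
    return views_in_first_seven_days >= 1000


def implied_lifetime_views(threshold: int = 1000,
                           first_week_share: float = 0.70) -> int:
    """If roughly 70% of a video's lifetime views arrive in the first week,
    estimate how many lifetime views a video needs before it can even
    qualify for review under the threshold."""
    return round(threshold / first_week_share)
```

Under these assumptions, a typical video would need roughly 1,400-plus lifetime views before qualifying, which illustrates why smaller channels say the rule locks them out of their most valuable monetization window.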
Some of the platform's most popular creators are saying that the majority of their videos are being affected, dramatically reducing their revenue. Last week, liberal interviewer Dave Rubin, who has interviewed dozens of prominent political figures, announced that a large percentage of his videos had been demonetized, cutting him off from being able to make money on the millions of views he typically gets, perhaps due to the politically incorrect leanings of his guests, e.g. ex-Muslim Ayaan Hirsi Ali, former Minnesota Governor Jesse Ventura, feminist activist and scholar Christina Hoff Sommers, and Larry King.
YouTube issued a response saying little, except that they hope the algorithms get better over time.
Prager University, a nonprofit that creates educational videos with conservative slants, has filed a lawsuit against YouTube and its parent company, Google, alleging that the company is censoring its content.
PragerU claims that more than three dozen of its videos have been restricted by YouTube over the past year. As a result, those who browse YouTube in restricted mode -- including many college and high school students -- are prevented from viewing
the content. Furthermore, restricted videos cannot earn any ad revenue.
PragerU says that by limiting access to their videos without a clear reason, YouTube has infringed upon PragerU's First Amendment rights.
YouTube has restricted edgy content in order to protect advertisers' brands. A number of advertisers told Google that they did not want their brand to be associated with edgy content. Google responded by banning all advertising from videos
claimed to contain edgy content. It keeps the brands happy, but it has decimated many a small online business.
YouTube has announced an extension of its age restriction policy to parody videos that use children's characters but feature inappropriate themes.
The new policy was announced on Thursday and will see age restrictions applied to content featuring inappropriate use of family entertainment characters, such as unofficial videos depicting Peppa Pig. The company already had a policy that rendered such videos ineligible for advertising revenue, in the hope that doing so would reduce the motivation to create them in the first place. Juniper Downs, YouTube's director of policy, explained:
Earlier this year, we updated our policies to make content featuring inappropriate use of family entertainment characters ineligible for monetisation. We're in the process of implementing a new policy that age-restricts this content in the YouTube main app when flagged. Age-restricted content is automatically not allowed in YouTube Kids. The YouTube team is made up of parents who are committed to improving our apps and getting this right.
Age-restricted videos can't be seen by users who aren't logged in, or by those who have entered their age as below 18 on both the site and the app. More importantly, they also don't show up on YouTube Kids, a separate app aimed at parents who
want to let their children under 13 use the site unsupervised.
Google makes its internal processes difficult to track by design, but the author of a report by Karlaplan states that these changes are fairly recent, suspected to have been implemented on the 30th of August -- the changes having only been discovered in late October.
However, until the publication of this document, little more than anecdotal evidence had been presented to support complaints from YouTube content creators.
Through extensive analysis of the YouTube Data API and other sources, Karlaplan found that YouTube tags demonetized videos according to both severity and type of sensitive content -- neither of which is transparent to the uploader.
The report also notes that videos are more likely to be hidden from viewers if their likely viewership is low, perhaps because higher-viewership videos are more likely to be appealed, or more likely to be spotted as examples of censorship and hence to generate bad publicity for Google.
Google has published an information page that is quite useful in detailing which videos get censored. It outlines two levels of sensitivity that advertisers can select when they do not want to be associated with sensitive content. Google explains:
While the Standard content filter excludes the most inappropriate content, it doesn't exclude everything that a particular advertiser may find objectionable. The Sensitive content categories allow you to opt out of additional content that many
advertisers find inappropriate. E.g.:
Tragedy and conflict
Standard: Excludes graphic footage of combat or war
Sensitive: Excludes the above plus footage of soldiers marching with weapons
Sensitive social issues
Standard: Excludes videos intended to elicit a response about controversial issues
Sensitive: Excludes the above plus news commentary about controversial issues
Sexually suggestive content
Standard: Excludes videos about sex or sexual products
Sensitive: Excludes the above plus music videos with suggestive themes
Sensational and shocking
Standard: Excludes videos of disasters or accidents that show casualties or death
Sensitive: Excludes the above plus videos of moderate disasters or accidents that show minimal casualties or harm
Profanity and rough language
Standard: Excludes videos with frequent use of profanity
Sensitive: Excludes the above plus videos with profanity that has been bleeped out
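The two-tier scheme above can be modelled as nested exclusion sets, where each Sensitive tier always includes everything its Standard tier excludes. The category names and examples come from Google's page; the data structure and function are an illustrative sketch, not Google's actual representation:

```python
# Each category maps the content excluded at the Standard tier to the
# additional content excluded at the Sensitive tier (per Google's page).
AD_FILTERS = {
    "Tragedy and conflict": {
        "standard": {"graphic footage of combat or war"},
        "sensitive_extra": {"footage of soldiers marching with weapons"},
    },
    "Sensitive social issues": {
        "standard": {"videos intended to elicit a response about controversial issues"},
        "sensitive_extra": {"news commentary about controversial issues"},
    },
    "Sexually suggestive content": {
        "standard": {"videos about sex or sexual products"},
        "sensitive_extra": {"music videos with suggestive themes"},
    },
    "Sensational and shocking": {
        "standard": {"videos of disasters or accidents that show casualties or death"},
        "sensitive_extra": {"videos of moderate disasters or accidents with minimal casualties or harm"},
    },
    "Profanity and rough language": {
        "standard": {"videos with frequent use of profanity"},
        "sensitive_extra": {"videos with profanity that has been bleeped out"},
    },
}


def excluded(category: str, tier: str) -> set:
    """Return everything excluded for a category at the given tier.
    The Sensitive tier excludes the Standard tier's content plus more."""
    entry = AD_FILTERS[category]
    if tier == "standard":
        return set(entry["standard"])
    return set(entry["standard"]) | set(entry["sensitive_extra"])
```

The point of the nesting is that opting into the Sensitive filter can only ever exclude more content than the Standard filter, never less.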
Google is escalating its campaign of internet censorship, announcing that it will expand its workforce of human censors to over 10,000. The censors' primary focus will be videos and other content on YouTube, but they will also work across Google to censor content and train its automated systems, which remove videos at a rate four times faster than its human employees.
Human censors have already reviewed over 2 million videos since June. YouTube has already removed over 150,000 videos, 50 percent of which were removed within two hours of upload. The company is working to accelerate the rate of takedown through
machine-learning from manual censorship.
YouTube CEO Susan Wojcicki explained the move in an official blog post:
Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly
2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases
shutting down comments altogether. In the last few weeks we've used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with NCMEC, the IWF,
and other child safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies.
We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.
At the same time, we are expanding the network of academics, industry groups and subject matter experts who we can learn from and support to help us better understand emerging issues.
We will use our cutting-edge machine learning more widely to allow us to quickly and efficiently remove content that violates our guidelines. In June we deployed this technology to flag violent extremist content for human review and we've seen tremendous progress:
Since June we have removed over 150,000 videos for violent extremism.
Machine learning is helping our human reviewers remove nearly five times as many videos as they were previously.
Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms.
Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly half of it in two hours and we continue to accelerate that speed.
Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.
The conservative US news website, the Daily Caller, has revealed that Google has recruited several social justice organisations to assist in the censorship of videos on YouTube.
The Daily Caller notes:
The Southern Poverty Law Center is assisting YouTube in policing content on their platform. The left-wing nonprofit -- which has more recently come under fire for labeling legitimate conservative organizations as hate groups -- is one of the
more than 100 nongovernment organizations (NGOs) and government agencies in YouTube's Trusted Flaggers program.
The SPLC and other program members help police YouTube for extremist content, ranging from so-called hate speech to terrorist recruiting videos.
All of the groups in the program have confidentiality agreements. A handful of YouTube's Trusted Flaggers, including the Anti-Defamation League and No Hate Speech, a European organization, have gone public with their participation in the
program. The vast majority of the groups in the program have remained hidden behind their confidentiality agreements.
YouTube public policy director Juniper Downs said the third-party groups work closely with YouTube's employees to crack down on extremist content in two ways:
First, the flaggers are equipped with digital tools allowing them to mass flag content for review by YouTube personnel. Second, the partner groups act as guides to YouTube's content monitors and to the engineers designing the algorithms policing the video platform, who may lack the expertise needed to tackle a given subject.
We work with over 100 organizations as part of our Trusted Flagger program and we value the expertise these organizations bring to flagging content for review. All trusted flaggers attend a YouTube training to learn about our policies and
enforcement processes. Videos flagged by trusted flaggers are reviewed by YouTube content moderators according to YouTube's Community Guidelines. Content flagged by trusted flaggers is not automatically removed or subject to any different policies than content flagged by other users.
Nasim Najafi Aghdam, the woman who allegedly opened fire at YouTube's headquarters in a suburb of San Francisco, injuring three before killing herself, was apparently furious with the video website because it had stopped paying her for her clips.
No evidence had been found linking her to any individuals at the company where she allegedly opened fire on Tuesday.
Two of the three shooting victims from the incident were released from hospital on Tuesday night. A third is currently in serious condition.
Aghdam's online profile shows she was a vegan activist who ran a website called NasimeSabz.com, meaning Green Breeze in Persian, where she posted about Persian culture and veganism, as well as long passages critical of YouTube.
Her father, Ismail Aghdam, told the Bay Area News Group from his San Diego home on Tuesday that she was angry with the Google-owned site because it had stopped paying her for videos she posted on the platform, and that he had warned the police
that she might be going to the company's headquarters.
TruNews is a YouTube channel run by the outlandish evangelist Rick Wiles. It has just been targeted by Google's censorship policies and has been kicked into the unsearchable long grass.
Perhaps it was banned for being 'fake news', but in reality it is a little too unbelievable to even count as 'fake'.
freethinker.co.uk offer an amusing description of why the channel has been censored:
Why? Because Wiles's broadcasts are so damned nutty they serve as a warning to viewers that this is what happens when people's brains are running on Jesus.
Of course, Wiles is even more miffed. He alludes to Google not following its 'don't be evil' mantra:
I have warned for years that a spirit of Nazism is rising up inside the USA. The new Nazis are here. America is on the verge of a French Revolution-style upheaval during which leftist mobs will seek to execute Christians and conservatives in
order to purge American society.
But this isn't the only example of Google being 'evil'.
Video from YouTube titled 'YouTube Admits Not Notifying Subscribers & Screwing With Algorithms'
Jimmy Dore notes that independent news sites often no longer qualify for monetisation and are booted into the unsearchable long grass (as noted by TruNews), and that Google no longer informs subscribers when new videos are added. He contends that the powers that be want news videos from mainstream media to be the dominant news source for YouTube viewers.