Our goal is to help ensure that you're viewing content that's relevant to you, and not inadvertently coming across content that isn't. Here
are a few things we came up with:
Stricter standard for mature content - While videos featuring pornographic images or sex acts are always removed from the site when they're flagged, we're tightening the standard for what is considered sexually suggestive. Videos with sexually
suggestive (but not prohibited) content will be age-restricted, which means they'll be available only to viewers who are 18 or older.
Demotion of sexually suggestive content and profanity - Videos that are considered sexually suggestive, or that contain profanity, will be algorithmically demoted on our Most Viewed, Top Favourited, and other browse pages. The classification of
these types of videos is based on a number of factors, including video content and descriptions. In testing, we've found that out of the thousands of videos on these pages, only a handful each day are automatically demoted for being too graphic or
explicit. However, those videos are often the ones that end up being repeatedly flagged by the community as inappropriate.
Improved thumbnails - To make sure your thumbnail represents your video, your choices will now be selected algorithmically.
More accurate video information - Our Community Guidelines have always prohibited folks from attempting to game view counts by entering misleading information in video descriptions, tags, titles, and other metadata. We remain serious about enforcing
these rules. Remember, violations of these guidelines could result in removal of your video and repeated violations will lead to termination of your account.
In recent months, long-time users of video-sharing website YouTube have noticed that the Google-owned site's definition of acceptable
content has narrowed considerably.
In addition to its longstanding campaign to crack down on illegally copied material, in September the site outlawed videos depicting drug abuse and last week tightened its guidelines further to restrict profanity and sexually suggestive content.
In other words, before the money wagons roll in, some law and order needs to be imposed.
YouTube have increased the range of activities that are barred to include, amongst other things, invasions of privacy.
If a video you've recorded features people who are readily identifiable and who haven't consented to being filmed, there's a chance they'll file a privacy complaint seeking its removal, say its new guidelines: Don't post other people's personal
information, including phone numbers, addresses, credit card numbers, and government IDs. We're serious about keeping our users safe and suspend accounts that violate people's privacy.
It also said that material designed to harass people was not welcome. If you wouldn't say it to someone's face, don't say it on YouTube, say the new guidelines. And if you're looking to attack, harass, demean, or impersonate others, go elsewhere.
The new guidelines also seek to govern the behaviour of people reacting to videos: Users shouldn't feel threatened when they're on YouTube. Don't leave threatening comments on other people's videos.
YouTube has introduced a new tier of censorship designed to restrict the audience for videos deemed inappropriate or offensive to some audiences.
The site is now putting videos into a limited state if they are deemed controversial enough to be considered objectionable, but not hateful, pornographic or violent enough to be banned altogether.
This policy was announced several months ago but has come into force in the past week, prompting anger among members of the YouTube community.
YouTube defines Limited Videos as follows:
Our Community Guidelines prohibit hate speech that either promotes violence or has the primary purpose of inciting hatred against individuals or groups based on certain attributes. YouTube also prohibits content intended to recruit for terrorist
organizations, incite violence, celebrate terrorist attacks, or otherwise promote acts of terrorism. Some borderline videos, such as those containing inflammatory religious or supremacist content without a direct call to violence or a primary
purpose of inciting hatred, may not cross these lines for removal. Following user reports, if our review teams determine that a video is borderline under our policies, it may have some features disabled.
These videos will remain available on YouTube, but will be placed behind a warning message, and some features will be disabled, including comments, suggested videos, and likes. These videos are also not eligible for ads.
Having features disabled on a video will not create a strike on your account.
Videos which are put into a limited state cannot be embedded on other websites. They also cannot be easily published on social media using the usual share buttons and other users cannot comment on them. Crucially, the person who made the video
will no longer receive any payment.
Earlier this week, Julian Assange wrote:
'Controversial' but contract-legal videos [which break YouTube's terms and conditions] cannot be liked, embedded or earn [money from advertising revenue].
What's interesting about the new method deployed is that it is a clear attempt at social engineering. It isn't just turning off the ads. It's turning off the comments, embeds, etc too. Everything possible to strangle the reach without actually deleting the video.
US Catholics have become an early victim of a newly introduced censorship measure from YouTube, presumably because their teaching is considered offensive due to politically incorrect attitudes towards gays and abortion. Catholic Online writes:
More media organizations are criticizing YouTube's increasingly oppressive soft censorship policies which are now eliminating mainstream news reports from the video sharing network. Many content creators on YouTube are losing millions in revenue
as the Google-owned firm reduces and cuts off payments in pursuit of profits and control.
YouTube is censoring content though various indirect means even if that content does not violate any terms of service. The Google-owned firm is removing content that it deems inappropriate or offensive, and is taking cues from the Southern Poverty
Law Center. The result seems to be a broad labeling of content, and the suppression of even mainstream news. Many of Catholic Online's bible readings have been caught up in YouTube's web of suppression, despite containing no commentary or message
other than the reading of the scriptures.
YouTube is not a government agency but a private platform, so it is free to ban or restrict content as it pleases. Therefore, its policies, no matter how arbitrary, are not true censorship. However, the firm is practicing what some call soft censorship.
Soft censorship is any kind of activity that suppresses speech, particularly that which is true and accurate. It takes many forms. For example, broadcasting celebrity gossip in place of news is a form of soft censorship. Placing real news lower in
search results, preventing content from being shared on social media, or depriving media outlets of ad revenue for reporting on certain topics, are all common forms of soft censorship.
For some unknown reason, Catholic Online has also been targeted by these policies. Saints videos and daily readings are the most common targets. None of this content can be considered objectionable by any means, and none of it infringes on
YouTube's terms and conditions. It is suspected that anti-Christian bigotry, such as that promoted by liberal extremist organizations like the Southern Poverty Law Center, is to blame.
The problem for content creators and media organizations is that there are few places for them to go. Most video viewing takes place on YouTube, and there are no video hosting sites as well known and widely used as YouTube. Other sites also
restrict content and some don't share revenues with content creators. This makes YouTube a monopoly; they are literally the only show in town.
The time has come for governments around the world to recognize that Facebook, Google, and YouTube control the public forum. If freedom of speech is to be protected, then these firms must be compelled to abide by free speech rules.
YouTube's algorithms, which are used to censor and demonetize videos on the platform, are killing its creators, according
to a report.
Most of the initial censorship is left to algorithms [which probably flag that a video should be censored as soon as they detect something politically incorrect, which presumably leads to the overcensorship underpinning the complaints].
Creators complain that YouTube has set up a slow and inefficient appeals system to counter cases of unfair censorship. Ad-disabled videos on YouTube must get 1,000 views in the span of seven days just to qualify for a review.
This approach hurts smaller YouTube channels, because it removes the ability for creators to make money on the most important stage of a YouTube video's life cycle: the first seven days, the report explains. Typically, videos receive 70% or more
of their views in the first seven days, according to multiple creators.
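The interaction between the review threshold and the first-week view curve can be sketched numerically. A minimal illustration, assuming the 1,000-view/seven-day figures above and treating revenue as proportional to views; the example channel view counts are invented:

```python
# Illustration of the appeal-eligibility rule described above.
# The 1,000-view / 7-day threshold is from the report; the example
# view counts below are hypothetical.

REVIEW_THRESHOLD = 1000  # views needed within 7 days to qualify for review
FIRST_WEEK_SHARE = 0.70  # typical share of lifetime views in the first 7 days

def eligible_for_review(first_week_views: int) -> bool:
    """An ad-disabled video is only reviewed once it reaches the threshold."""
    return first_week_views >= REVIEW_THRESHOLD

def minimum_revenue_lost(first_week_views: int) -> float:
    """Fraction of lifetime revenue lost, assuming revenue tracks views and
    the video stays ad-disabled for its whole first week."""
    # A video too small to qualify for review never gets its ads restored.
    return 1.0 if not eligible_for_review(first_week_views) else FIRST_WEEK_SHARE

print(eligible_for_review(400))       # False: a small channel never qualifies
print(minimum_revenue_lost(250_000))  # 0.7: even a reviewed video loses week one
```

Under these assumptions, a small channel that never reaches 1,000 views in week one loses all of its ad revenue, while even a large channel whose appeal succeeds still forfeits roughly the 70% of views that arrive before review.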
Some of the platform's most popular creators say that the majority of their videos are being affected, dramatically reducing their revenue. Last week, liberal interviewer Dave Rubin, who has interviewed dozens of prominent political
figures, announced that a large percentage of his videos had been demonetized, cutting him off from making money on the millions of views he typically gets, perhaps due to the politically incorrect leanings of his guests, e.g. ex-Muslim
Ayaan Hirsi Ali, former Minnesota Governor Jesse Ventura, feminist activist and scholar Christina Hoff Sommers, and Larry King.
YouTube issued a response saying little, except that they hope the algorithms get better over time.
Prager University, a nonprofit that creates educational videos with conservative slants, has filed a lawsuit against YouTube and its
parent company, Google, alleging that the company is censoring its content.
PragerU claims that more than three dozen of its videos have been restricted by YouTube over the past year. As a result, those who browse YouTube in restricted mode -- including many college and high school students -- are prevented from viewing
the content. Furthermore, restricted videos cannot earn any ad revenue.
PragerU says that by limiting access to their videos without a clear reason, YouTube has infringed upon PragerU's First Amendment rights.
YouTube has restricted edgy content in order to protect advertisers' brands. A number of advertisers told Google that they did not want their brand to be associated with edgy content. Google responded by banning all advertising from videos
claimed to contain edgy content. It keeps the brands happy, but it has decimated many a small online business.
YouTube has announced an extension of its age restriction policy for parody videos using children's characters but with inappropriate themes.
The new policy was announced on Thursday and will see age restrictions applied to content featuring inappropriate use of family entertainment characters, like unofficial videos depicting Peppa Pig. The company already had a policy that rendered such
videos ineligible for advertising revenue, in the hope that doing so would reduce the motivation to create them in the first place. Juniper Downs, YouTube's director of policy, explained:
Earlier this year, we updated our policies to make content featuring inappropriate use of family entertainment characters ineligible for monetisation. We're in the process of implementing a new policy that age restricts this content in the YouTube
main app when flagged. Age-restricted content is automatically not allowed in YouTube Kids. The YouTube team is made up of parents who are committed to improving our apps and getting this right.
Age-restricted videos can't be seen by users who aren't logged in, or by those who have entered their age as below 18 on both the site and the app. More importantly, they also don't show up on YouTube Kids, a separate app aimed at parents who want
to let their children under 13 use the site unsupervised.
Google makes its internal processes difficult to track by design, but the author of a report by Karlaplan states that these changes are fairly
recent, suspected to have been implemented on the 30th of August -- the changes having only been discovered in late October.
However, until the publication of this document, little more than anecdotal evidence had been presented alongside complaints from YouTube content creators.
Through extensive analysis of the YouTube Data API and other sources, Karlaplan found that YouTube tags demonetized videos according to both severity and type of sensitive content -- neither of which is transparent to the uploader.
The report also notes that videos are more likely to be hidden from viewers if their likely viewership is low, perhaps because higher-viewership videos are more likely to be appealed, or more likely to be spotted as examples of censorship and hence
generate bad publicity for Google.
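For readers who want to reproduce the starting point of such an analysis: the documented `videos.list` endpoint of the YouTube Data API v3 returns a per-video `status` object when queried with `part=status`. The monetization severity and type tags the report describes are not part of the documented response, so this sketch only shows how a documented flag (here embeddability, which limited-state videos lose) would be extracted from a response of that shape. The response content itself is a made-up example:

```python
# Sketch: extracting per-video status flags from a YouTube Data API v3
# `videos.list` (part=status) response. The response below is a trimmed,
# hypothetical example of the documented shape; the demonetization tags
# discussed in the report are NOT exposed by the documented API.

sample_response = {
    "items": [
        {
            "id": "VIDEO_ID_1",
            "status": {
                "uploadStatus": "processed",
                "privacyStatus": "public",
                "embeddable": False,  # limited-state videos cannot be embedded
            },
        },
        {
            "id": "VIDEO_ID_2",
            "status": {
                "uploadStatus": "processed",
                "privacyStatus": "public",
                "embeddable": True,
            },
        },
    ]
}

def embeddability(response: dict) -> dict:
    """Map each video id to whether it can still be embedded off-site."""
    return {
        item["id"]: item["status"].get("embeddable", True)
        for item in response.get("items", [])
    }

print(embeddability(sample_response))
# {'VIDEO_ID_1': False, 'VIDEO_ID_2': True}
```

A real study would fetch these responses over HTTP with an API key and correlate the documented flags against the undocumented signals the report analysed.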
Google have published an information page that is quite useful in detailing which videos get censored. Google outlines two levels of sensitivity that advertisers can select when not wanting to be associated with sensitive content. Google explains:
While the Standard content filter excludes the most inappropriate content, it doesn't exclude everything that a particular advertiser may find objectionable. The Sensitive content categories allow you to opt out of additional content that many
advertisers find inappropriate. For example:
Tragedy and conflict
Standard: Excludes graphic footage of combat or war
Sensitive: Excludes the above plus footage of soldiers marching with weapons
Sensitive social issues
Standard: Excludes videos intended to elicit a response about controversial issues
Sensitive: Excludes the above plus news commentary about controversial issues
Sexually suggestive content
Standard: Excludes videos about sex or sexual products
Sensitive: Excludes the above plus music videos with suggestive themes
Sensational and shocking
Standard: Excludes videos of disasters or accidents that show casualties or death
Sensitive: Excludes the above plus videos of moderate disasters or accidents that show minimal casualties or harm
Profanity and rough language
Standard: Excludes videos with frequent use of profanity
Sensitive: Excludes the above plus videos with profanity that has been bleeped out
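The two filter levels form a strict superset relationship: everything excluded at the Standard level is also excluded at the Sensitive level. A small sketch of that relationship, using made-up label names paraphrased from the category list above (the data model is hypothetical, not Google's):

```python
# Sketch of the two-tier advertiser filter described above. The category
# labels are paraphrased from the list; the data model is hypothetical.

STANDARD_EXCLUDED = {
    "graphic_combat_footage",
    "controversial_issue_content",
    "sexual_content",
    "disaster_with_casualties",
    "frequent_profanity",
}

# The Sensitive level excludes everything Standard does, plus more.
SENSITIVE_EXCLUDED = STANDARD_EXCLUDED | {
    "soldiers_marching_with_weapons",
    "news_commentary_on_controversy",
    "suggestive_music_video",
    "moderate_disaster_minimal_harm",
    "bleeped_profanity",
}

def ads_allowed(video_labels: set, filter_level: str = "standard") -> bool:
    """True if a video with these labels may carry ads at the given level."""
    excluded = SENSITIVE_EXCLUDED if filter_level == "sensitive" else STANDARD_EXCLUDED
    return not (video_labels & excluded)

print(ads_allowed({"bleeped_profanity"}, "standard"))   # True: bleeping passes Standard
print(ads_allowed({"bleeped_profanity"}, "sensitive"))  # False: excluded at Sensitive
```

The practical consequence for creators is that a video acceptable to Standard-filter advertisers can still lose most of its ad inventory if enough advertisers opt into the Sensitive exclusions.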
Google is escalating its campaign of internet censorship, announcing that it will expand its workforce of human censors to over 10,000. The
censors' primary focus will be videos and other content on YouTube, but they will also work across Google to censor content and train its automated systems, which remove videos at a rate four times faster than human employees.
Human censors have already reviewed over 2 million videos since June. YouTube has already removed over 150,000 videos, 50 percent of which were removed within two hours of upload. The company is working to accelerate the rate of takedown through
machine learning trained on manual censorship decisions.
YouTube CEO Susan Wojcicki explained the move in an official blog post:
Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly
2 million videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases
shutting down comments altogether. In the last few weeks we've used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with NCMEC, the IWF,
and other child safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies.
We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.
At the same time, we are expanding the network of academics, industry groups and subject matter experts who we can learn from and support to help us better understand emerging issues.
We will use our cutting-edge machine learning more widely to allow us to quickly and efficiently remove content that violates our guidelines. In June we deployed this technology to flag violent extremist content for human review and we've seen tremendous progress:
Since June we have removed over 150,000 videos for violent extremism.
Machine learning is helping our human reviewers remove nearly five times as many videos as they were previously.
Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms.
Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly half of it in two hours and we continue to accelerate that speed.
Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.