Google has announced that it is now going to use its AI technology to detect YouTube videos that it would like to see restricted to adults. In addition, it announced that it would require hard ID to verify that EU-based users are over 18. (Surely
Google should be the last company on the planet that users would be willing to send their ID to). Google writes:
Today, our Trust & Safety team applies age-restrictions when, in the course of reviewing content, they encounter a
video that isn't appropriate for viewers under 18. Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions. Uploaders
can appeal the decision if they believe it was incorrectly applied. For creators in the YouTube Partner Program, we expect these automated age-restrictions to have little to no impact on revenue, as most of these videos also violate our
advertiser-friendly guidelines and therefore have limited or no ads.
To make sure the experience is consistent, viewers attempting to access age-restricted videos on most third-party websites will be redirected to YouTube where
they must sign-in and be over 18 to view it. This will help ensure that, no matter where a video is discovered, it will only be viewable by the appropriate audience.
Because our use of technology will result in more videos being
age-restricted, our policy team took this opportunity to revisit where we draw the line for age-restricted content. After consulting with experts and comparing ourselves against other global content rating frameworks, only minor adjustments were
necessary. Our policy pages have been updated to reflect these changes. All the changes outlined above will roll out over the coming months.
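As a rough illustration of how these restrictions surface to developers, the public YouTube Data API v3 marks age-restricted videos with a `ytRating` of `ytAgeRestricted` in the video's content details. The sketch below is purely illustrative, not from the announcement; the API key and video ID are placeholders you would supply yourself.

```python
# Sketch: checking whether a YouTube video is age-restricted via the
# public YouTube Data API v3 (videos.list with part=contentDetails).
# Age-restricted videos carry contentRating.ytRating == "ytAgeRestricted".
import json
from urllib.request import urlopen
from urllib.parse import urlencode

API_URL = "https://www.googleapis.com/youtube/v3/videos"

def is_age_restricted(video_resource: dict) -> bool:
    """Return True if a video resource from videos.list is age-restricted."""
    rating = (video_resource.get("contentDetails", {})
                            .get("contentRating", {}))
    return rating.get("ytRating") == "ytAgeRestricted"

def fetch_video(video_id: str, api_key: str) -> dict:
    """Fetch one video resource; returns {} if the video is not found."""
    query = urlencode({"part": "contentDetails", "id": video_id, "key": api_key})
    with urlopen(f"{API_URL}?{query}") as resp:
        items = json.load(resp).get("items", [])
    return items[0] if items else {}
```

Note that this flag only reveals the final restriction status; it says nothing about whether a human reviewer or the new automated system applied it.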
Expanding Age-verification in Europe
In line with upcoming
regulations, like the European Union's Audiovisual Media Services Directive (AVMSD), we will also be introducing a new age verification step over the next few months. As part of this process some European users may be asked to provide additional proof of
age when attempting to watch mature content. If our systems are unable to establish that a viewer is above the age of 18, we will request that they provide a valid ID or credit card to verify their age. We've built our age-verification process in keeping
with Google's Privacy and Security Principles.
We understand that many are turning to YouTube at this time to find content that is both educational and entertaining. We will continue to update our products and our policies with
features that make sure when they do, they find content that is age-appropriate.
Prager University (PragerU) is a right-wing group that creates videos explaining a right-wing perspective on political issues.
YouTube didn't much care for the content and shunted the videos up a 'restricted mode' back alley. PragerU challenged the censorship in court but has just lost its case. The First Amendment in the US bans the state from censoring free speech, but this protection does not extend to private companies. PragerU had tried to argue that Google has become so integral to
American life that it should be treated like a state institution.
The Ninth Circuit Court of Appeals on Wednesday affirmed that YouTube, a Google subsidiary, is a private platform and thus not subject to the First Amendment. In making that
determination, the Court also rejected a plea from a conservative content maker that sued YouTube in hopes that the courts would force it to behave like a public utility.
Headed by conservative radio host Dennis Prager, PragerU alleged in its suit
against YouTube that the video hosting platform violated PragerU's right to free speech when it placed a portion of the nonprofit's clips on Restricted Mode, an optional setting that approximately 1.5 percent of YouTube users select so as not to see
content with mature themes.
Writing for the appeals court, Circuit Judge Margaret McKeown said YouTube was a private forum despite its ubiquity and public accessibility, and hosting videos did not make it a state actor for purposes of the First Amendment.
YouTube has posted on its blog outlining recent changes to the moderation of comments on videos. YouTube writes:
Addressing toxic comments
We know that the comment section is an important
place for fans to engage with creators and each other. At the same time, we heard feedback that comments are often where creators and viewers encounter harassment. This behavior not only impacts the person targeted by the harassment, but can also have a
chilling effect on the entire conversation.
To combat this we remove comments that clearly violate our policies -- over 16 million in the third quarter of this year, specifically due to harassment. The policy updates we've
outlined above will also apply to comments, so we expect this number to increase in future quarters.
Beyond comments that we remove, we also empower creators to further shape the conversation on their channels and have a variety
of tools that help. When we're not sure a comment violates our policies, but it seems potentially inappropriate, we give creators the option to review it before it's posted on their channel. Results among early adopters were promising -- channels that
enabled the feature saw a 75% reduction in user flags on comments. Earlier this year, we began to turn this setting on by default for most creators.
We've continued to fine tune our systems to make sure we catch truly toxic
comments, not just anything that's negative or critical, and feedback from creators has been positive. Last week we began turning this feature on by default for YouTube's largest channels with the site's most active comment sections and will roll out to
most channels by the end of the year. To be clear, creators can opt-out, and if they choose to leave the feature enabled they still have ultimate control over which held comments can appear on their videos. Alternatively, creators can also ignore held
comments altogether if they prefer.
YouTube has been censoring cryptocurrency-related content with a new wave of rule enforcements, according to several hosts. Since 23rd December, the site has been deleting individual videos from cryptocurrency channels. Some hosts have also been given
warnings and strikes, which temporarily prevent them from uploading content.
YouTube has not publicly stated that crypto videos are against its rules, meaning that users must read between the lines to deduce what is being targeted.
YouTube creator Chris Dunn has noted that his own videos were removed on the grounds that they were responsible for the sale of regulated goods and contained harmful and dangerous content.
Many YouTube hosts are now considering moving to
decentralized and uncensorable video platforms, such as PeerTube, LBRY, BitChute, and DTube. Incidentally, Twitter is also planning to create a decentralized media platform.
Removal of hundreds of videos was an 'error'
YouTube said today that its
removal of hundreds of crypto-related video sites earlier this week was an 'error'. YouTube told Decrypt that the sites have since been put back online. However, a quick check today indicated that none had yet been restored. YouTube spouted:
With the massive volume of videos on our site, sometimes we make the wrong call. When it's brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it.
Offsite Update: After the dust has settled YouTube re-censors the crypto channels
After being heavily fined for child privacy issues about personalised advertising on YouTube, Google is trying to get its house in order. It will soon be rolling out new rules that prevent the profiling of younger viewers for advertising purposes.
The restrictions on personalised advertising will negatively affect the livelihoods of many YouTube creators. It is pretty clear that Peppa Pig videos will be deemed off limits for personalised adverts, but a more difficult question is what about more
general content that appeals to adults and children alike?
YouTube is demanding clearer guidelines about this situation from the government internet privacy censors of the Federal Trade Commission (FTC). The law underpinning the requirements is known
as COPPA [the Children's Online Privacy Protection Act]. YouTube wrote to the FTC asking:
We believe there needs to be more clarity about when content should be considered primarily child-directed
Content creators are also writing to the FTC out of fear that the changes and vague guidance could destroy their channels.
The FTC has responded by initiating a public consultation.
In comments filed with the FTC Monday, YouTube invoked arguments raised by
creators, writing that adult users also engage with videos that could traditionally be considered child-directed, like crafting videos and content focused on collecting old toys:
Sometimes, content that isn't intentionally
targeting kids can involve a traditional kids activity, such as DIY, gaming and art videos. Are these videos 'made for kids,' even if they don't intend to target kids? This lack of clarity creates uncertainty for creators.
By way of comparison, the British advert censors at the ASA have a basic rule that if the proportion of kids watching is greater than 25% of the total audience then child protection rules kick in. Presumably the figure of 25% is about what one would expect for content that
appeals to all ages equally.
YouTube has also announced a change to its policy on violence in video games. YouTube writes:
We know there's a difference between real-world violence and scripted or simulated violence -- such as what you see in movies, TV shows, or video games -- so we want to make sure we're enforcing our
violent or graphic content policies consistently.
Starting on 2nd December, scripted or simulated violent content found in video games will be
treated the same as other types of scripted content.
What does this mean for Gaming Creators?
Future gaming uploads that include scripted or simulated violence may be approved instead of being age-restricted.
There will be fewer restrictions for violence in gaming, but this policy will still maintain our high bar to protect audiences
from real-world violence.
We may still age-restrict content if violent or gory imagery is the sole focus of the video. For instance, if the video focuses entirely on the most graphically violent part of a video game.
The US authorities came down heavily on Google for YouTube's violations of the 1998 US children's data privacy law called COPPA. This ended up with Google handing over $170 million in settlement of claims from the US FTC (Federal Trade Commission).
COPPA restricts operators of websites and online services from collecting the personal information of under-13 users without parental permission. The definition of personal information includes personal identifiers used in cookies to profile internet
users for targeted advertising purposes.
So now YouTube has announced new procedures starting 1st January 2020. All content creators will have to designate whether or not each of their videos is directed to children (aka kid-directed aka
child-directed) by checking a box during the upload process. Checking that box will prevent the video from running personalized ads. This rule applies retrospectively, so all videos will have to be reviewed and flagged accordingly.
It is probably
quite straightforward to identify children's videos, but creators are worried about more general videos for people of all ages that also appeal to kids.
And of course there are massive concerns for all those creators affected about revenues decreasing
as adverts switch from personalised to general untargeted ads.
tubefilter.com ran a small
experiment suggesting that revenues will drop by between 60% and 90% for videos denied targeted advertising.
And of course this will have a knock-on effect on the viability of producing videos for a young audience. No doubt the small creators will be hit
hardest, leaving the market more open for those that can make up the shortfall by working at scale.
Google have announced potentially far-reaching new policies about kids' videos on YouTube. A Google blog post explains:
An update on kids and data protection on YouTube
From its earliest days, YouTube
has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We've been taking a hard look at areas where we can do more to address this,
informed by feedback from parents, experts, and regulators, including COPPA concerns raised by the U.S. Federal Trade Commission and the New York Attorney General that we are addressing with a settlement announced today.
New data practices for children's content on YouTube
We are changing how we treat data for children's content on YouTube. Starting in about four months, we will treat data from anyone watching children's content on YouTube as
coming from a child, regardless of the age of the user. This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this
content entirely, and some features will no longer be available on this type of content, like comments and notifications. In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and
we'll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.
Improvements to YouTube Kids
We continue to recommend parents use YouTube Kids if they plan to allow kids under 13 to watch independently. Tens of millions of people use YouTube Kids every week but we want even more parents to be aware of the app and its benefits. We're increasing our
investments in promoting YouTube Kids to parents with a campaign that will run across YouTube. We're also continuing to improve the product. For example, we recently raised the bar for which channels can be a part of YouTube Kids, drastically reducing
the number of channels on the app. And we're bringing the YouTube Kids experience to the desktop.
Investing in family creators
We know these changes will have a significant business impact on family
and kids creators who have been building both wonderful content and thriving businesses, so we've worked to give impacted creators four months to adjust before changes take effect on YouTube. We recognize this won't be easy for some creators and are
committed to working with them through this transition and providing resources to help them better understand these changes.
We are also going to continue investing in the future of quality kids, family and educational content. We
are establishing a $100 million fund, disbursed over three years, dedicated to the creation of thoughtful, original children's content on YouTube and YouTube Kids globally.
Today's changes will allow us to better protect kids and
families on YouTube, and this is just the beginning. We'll continue working with lawmakers around the world in this area, including as the FTC seeks comments on COPPA . And in the coming months, we'll share details on how we're rethinking our overall
approach to kids and families, including a dedicated kids experience on YouTube.
After a long introduction about how open and diverse YouTube is, CEO Susan Wojcicki gets down to the nitty gritty of how YouTube censorship works. She writes in a blog:
Problematic content represents a fraction of one percent of the
content on YouTube and we're constantly working to reduce this even further. This very small amount has a hugely outsized impact, both in the potential harm for our users, as well as the loss of faith in the open model that has enabled the rise of your
creative community. One assumption we've heard is that we hesitate to take action on problematic content because it benefits our business. This is simply not true -- in fact, the cost of not taking sufficient action over the long term results in lack of
trust from our users, advertisers, and you, our creators. We want to earn that trust. This is why we've been investing significantly over the past few years in the teams and systems that protect YouTube. Our approach towards responsibility involves four Rs:
We REMOVE content that violates our policy as quickly as possible. And we're always looking to make our policies clearer and more effective, as we've done with pranks and challenges , child safety , and hate speech just this year.
We aim to be thoughtful when we make these updates and consult a wide variety of experts to inform our thinking, for example we talked to dozens of experts as we developed our updated hate speech policy. We also report on the removals we make in our
quarterly Community Guidelines enforcement report. I also appreciate that when policies aren't working for the creator community, you let us know. One area we've heard loud and clear needs an update is creator-on-creator harassment. I said in my last
letter that we'd be looking at this and we will have more to share in the coming months.
We RAISE UP authoritative voices when people are looking for breaking news and information, especially during breaking news moments. Our breaking and top news shelves are available in 40 countries and we're continuing to expand
We REDUCE the spread of content that brushes right up against our policy line. Already, in the U.S. where we made changes to recommendations earlier this year, we've seen a 50% drop of views from recommendations to this type of
content, meaning quality content has more of a chance to shine. And we've begun experimenting with this change in the UK, Ireland, South Africa and other English-language markets.
And we set a higher bar for what channels can make money on our site, REWARDING trusted, eligible creators. Not all content allowed on YouTube is going to match what advertisers feel is suitable for their brand, so we have to be sure
they are comfortable with where their ads appear. This is also why we're enabling new revenue streams for creators like Super Chat and Memberships. Thousands of channels have more than doubled their total YouTube revenue by using these new tools in
addition to advertising.
A little while ago there was an issue on YouTube about parody videos using well known children's cartoons as a baseline for adult humour. The videos were not in themselves outside of what YouTube allows but were not suitable for the child audience of the
original shows. YouTube has now responded as follows:
Content that contains mature or violent themes that explicitly targets younger minors and families in the title, description and/or tags will no longer be allowed on the platform. This content was
previously age-restricted, but today we're updating our child safety policies to better protect the family experience.
What content will be removed?
We're removing misleading family content, including videos that target younger
minors and families, that contain sexual themes, violence, obscenity, or other mature themes not suitable for young audiences. Here are some examples of content that will be removed:
A video with tags like "for children" featuring family-friendly cartoons engaging in inappropriate acts like injecting needles.
Videos with prominent children's nursery rhymes targeting younger minors and families in the video's
title, description or tags, that contain adult themes such as violence, sex, death, etc.
Videos that explicitly target younger minors and families with phrasing such as "for kids" or "family fun" in the video's title,
description and/or tags that contain vulgar language.
What content will be age-restricted?
Content that is meant for adults and not targeting younger minors and families won't be removed, but it may be age-restricted. If you create adult content that could be confused as family entertainment, make
sure your titles, descriptions, and tags match the audience you are targeting. Remember you can age-restrict your content upon upload if it's intended for mature audiences. Here is an example of content that may still be allowed on YouTube but will be age-restricted:
Adult cartoons with vulgar language and/or violence that is explicitly targeted at adults.
A YouTube chief has proposed giving precedence to mainstream media over indie creators
The company's chief product officer Neal Mohan claims that the platform has grown so much that it now needs new rules to regulate bad actors. Amid the recent
observations of YouTube's biased censorship, the company announced it will crackdown further on what it calls racist content and disinformation. Mohan said:
YouTube has now grown to a big city. More bad actors have
come into place. And just like in any big city, you need a new set of rules and laws and kind of regulatory regime.
We want to make sure that YouTube remains an open platform because that's where a lot of the magic comes from,
even though there may be some opinions and voices on the platform that I don't agree with, that you don't agree with.
Mohan suggested that positive discrimination could be applied to authoritative sources like traditional media outlets such as AFP or CNN or BBC or the AP or whoever, raising an issue already mentioned by
the independent channels that made YouTube what it is today: their content is often obscured in search results and their subscribers miss new content, while corporate media (which ironically is often a competitor to YouTube) is already being heavily
promoted by YouTube.
YouTube has announced new censorship rules for videos featuring pranks and challenges. Google writes in a blog post:
YouTube is home to many beloved viral challenges and pranks, like Jimmy Kimmel's Terrible Christmas Presents
prank or the water bottle flip challenge. That said, we've always had policies to make sure what's funny doesn't cross the line into also being harmful or dangerous. Our Community Guidelines prohibit content that encourages dangerous activities that are
likely to result in serious harm, and today we're clarifying what this means for dangerous challenges and pranks.
Q: What exactly are you clarifying related to challenges?
We've updated our external guidelines to
make it clear that challenges like the Tide pod challenge or the Fire challenge, that can cause death and/or have caused death in some instances, have no place on YouTube.
Q: What exactly are you clarifying related to pranks?
We've made it clear that our policies prohibiting harmful and dangerous content also extend to pranks with a perceived danger of serious physical injury. We don't allow pranks that make victims believe they're in serious physical
danger -- for example, a home invasion prank or a drive-by shooting prank. We also don't allow pranks that cause children to experience severe emotional distress, meaning something so bad that it could leave the child traumatized for life.
Q: What are examples of pranks that cause children severe emotional distress?
We've worked directly with child psychologists to develop guidelines around the types of pranks that cross this line. Examples
include, the fake death of a parent or severe abandonment or shaming for mistakes.
Q: Can I appeal strikes related to dangerous challenges and pranks?
Yes, you can appeal the strike if you think the video
content doesn't violate Community Guidelines.
Q: How long is the grace period for me to review and clean up content?
The next two months -- during this time challenges and pranks that violate Community
Guidelines will be removed but the channel will not receive a strike. Additionally, content posted prior to these enforcement updates may be removed, but will not receive a strike.
TruNews is a YouTube channel run by the outlandish evangelist Rick Wiles. It has just been targeted by Google's censorship policies and has been kicked into the unsearchable long grass.
Perhaps banned for being 'fake news' but in reality it is a
little too unbelievable to even count as 'fake'. freethinker.co.uk offers an amusing description of why the channel has been censored:
Why? Because Wiles's broadcasts are so damned nutty they serve as a warning
to viewers that this is what happens when people's brains are running on Jesus.
Of course, Wiles is even more miffed. He alludes to Google not following its 'don't be evil' mantra:
I have warned
for years that a spirit of Nazism is rising up inside the USA. The new Nazis are here. America is on the verge of a French Revolution-style upheaval during which leftist mobs will seek to execute Christians and conservatives in order to purge American
But this isn't the only example of Google being 'evil'.
See video from YouTube titled YouTube Admits Not Notifying Subscribers & Screwing With Algorithms
Jimmy Dore notes that independent
news sites often no longer qualify for monetisation, that they are booted into the unsearchable long grass (as noted by TruNews), and that Google no longer informs subscribers when new videos are added. He contends that the powers that be want news videos from
mainstream media to be the dominant news source for YouTube viewers.
Nasim Najafi Aghdam, the woman who allegedly opened fire at YouTube's headquarters in a suburb of San Francisco, injuring three before killing herself, was apparently furious with the video website because it had stopped paying her for her clips.
No evidence had been found linking her to any individuals at the company where she allegedly opened fire on Tuesday.
Two of the three shooting victims from the incident were released from hospital on Tuesday night. A third is currently in a serious condition.
Aghdam's online profile shows she was a vegan activist who ran a website called NasimeSabz.com, meaning Green Breeze in Persian, where she posted about Persian culture and veganism, as well as long passages critical of YouTube.
Her father, Ismail Aghdam, told the Bay Area News Group from his San Diego home on Tuesday that she was angry with the Google-owned site because it had stopped paying her for videos she posted on the platform, and that he had warned the police that she might be going to the company's headquarters.
The conservative US news website, the Daily Caller, has revealed that Google has recruited several social justice organisations to assist in the censorship of videos on YouTube.
The Daily Caller notes:
The Southern Poverty Law Center (SPLC) is assisting YouTube in policing content on their platform. The left-wing nonprofit -- which has more recently come under fire for labeling legitimate conservative organizations as hate groups -- is one of the more than 100
nongovernment organizations (NGOs) and government agencies in YouTube's Trusted Flaggers program.
The SPLC and other program members help police YouTube for extremist content, ranging from so-called hate speech to terrorist content.
All of the groups in the program have confidentiality agreements. A handful of YouTube's Trusted Flaggers, including the Anti-Defamation League and No Hate Speech, a European organization, have gone public with
their participation in the program. The vast majority of the groups in the program have remained hidden behind their confidentiality agreements.
YouTube public policy director Juniper Downs said the third-party groups work closely
with YouTube's employees to crack down on extremist content in two ways:
First, the flaggers are equipped with digital tools allowing them to mass flag content for review by YouTube personnel. Second, the partner
groups act as guides to YouTube's content monitors and engineers designing the algorithms policing the video platform, who may lack the expertise needed to tackle a given subject.
We work with over 100 organizations as part of our
Trusted Flagger program and we value the expertise these organizations bring to flagging content for review. All trusted flaggers attend a YouTube training to learn about our policies and enforcement processes. Videos flagged by trusted flaggers are
reviewed by YouTube content moderators according to YouTube's Community Guidelines. Content flagged by trusted flaggers is not automatically removed or subject to any differential policies than content flagged from other users.
Google is escalating its campaign of internet censorship, announcing that it will expand its workforce of human censors to over 10,000. The censors' primary focus will be videos and other content on YouTube, but they will work across Google to censor content
and train its automated systems, which remove videos at a rate four times faster than its human employees.
Human censors have already reviewed over 2 million videos since June. YouTube has already removed over 150,000 videos, 50 percent of which were
removed within two hours of upload. The company is working to accelerate the rate of takedown through machine-learning from manual censorship.
YouTube CEO Susan Wojcicki explained the move in an official blog post:
Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content. Since June, our trust and safety teams have manually reviewed nearly 2 million
videos for violent extremist content, helping train our machine-learning technology to identify similar videos in the future. We are also taking aggressive action on comments, launching new comment moderation tools and in some cases shutting down
comments altogether. In the last few weeks we've used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with NCMEC, the IWF, and other child
safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies.
We will continue the significant growth of our teams into next year, with the goal of bringing the total
number of people across Google working to address content that might violate our policies to over 10,000 in 2018.
At the same time, we are expanding the network of academics, industry groups and subject matter experts who we can
learn from and support to help us better understand emerging issues.
We will use our cutting-edge machine learning more widely to allow us to quickly and efficiently remove content that violates our guidelines. In June we deployed
this technology to flag violent extremist content for human review and we've seen tremendous progress.
Since June we have removed over 150,000 videos for violent extremism.
Machine learning is helping our human reviewers remove nearly five times as many videos as they were previously.
Today, 98 percent of the videos we remove for violent extremism are flagged by our machine-learning algorithms.
Our advances in machine learning let us now take down nearly 70 percent of violent extremist
content within eight hours of upload and nearly half of it in two hours and we continue to accelerate that speed.
Since we started using machine learning to flag violent and extremist content in June, the technology has
reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess.
Google makes their internal processes difficult to track by design, but the author of a report by Karlaplan states that these changes are fairly recent, suspected to have been implemented on the 30th of August -- the changes having only been discovered
in late October.
However, until the publication of this document, little other than anecdotal evidence had been presented, in the form of complaints from YouTube content creators.
Through extensive analysis of the YouTube Data API and other sources, Karlaplan
found that YouTube tags demonetized videos according to both severity and type of sensitive content -- neither of which is transparent to the uploader.
The report also notes that videos are more likely to be hidden from viewers if their likely
viewership is low, perhaps because higher-viewership videos are more likely to be appealed, or more likely to be spotted as examples of censorship and hence generate bad publicity for Google.
Google have published an information page that is quite
useful in detailing which videos get censored. Google outlines two levels of sensitivity that advertisers can select when not wanting to be associated with sensitive content. Google explains:
While the Standard content
filter excludes the most inappropriate content, it doesn't exclude everything that a particular advertiser may find objectionable. The Sensitive content categories allow you to opt out of additional content that many advertisers find inappropriate. For example:
Tragedy and conflict
Standard: Excludes graphic footage of combat or war
Sensitive: Excludes the above plus footage of soldiers marching with weapons
Sensitive social issues
Standard: Excludes videos intended to elicit a response about controversial issues
Sensitive: Excludes the above plus news commentary about controversial issues
Sexually suggestive content
Standard: Excludes videos about sex or sexual products
Sensitive: Excludes the above plus music videos with suggestive themes
Sensational and shocking
Standard: Excludes videos of disasters or accidents that show casualties or death
Sensitive: Excludes the above plus videos of moderate disasters or accidents that show
minimal casualties or harm
Profanity and rough language
Standard: Excludes videos with frequent use of profanity
Sensitive: Excludes the above plus videos with profanity that has been bleeped out
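The two-tier structure above amounts to a simple superset rule: whatever the Standard filter excludes, the Sensitive filter excludes too, plus additional material. A minimal sketch of that rule follows; the data structure is purely illustrative, not any real advertiser-facing API:

```python
# Model two of Google's published filter categories as data, encoding
# the rule that Sensitive = Standard exclusions + extra exclusions.
FILTERS = {
    "Tragedy and conflict": {
        "standard": {"graphic footage of combat or war"},
        "sensitive_extra": {"footage of soldiers marching with weapons"},
    },
    "Profanity and rough language": {
        "standard": {"frequent use of profanity"},
        "sensitive_extra": {"profanity that has been bleeped out"},
    },
}

def excluded(category, tier):
    """Return the set of content exclusions for a category at a given tier."""
    entry = FILTERS[category]
    if tier == "standard":
        return set(entry["standard"])
    # Sensitive is a strict superset of Standard.
    return set(entry["standard"]) | set(entry["sensitive_extra"])

print(excluded("Tragedy and conflict", "sensitive"))
```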
YouTube has announced an extension of its age-restriction policy to cover parody videos that use children's characters but carry inappropriate themes.
The new policy was announced on Thursday and will see age restrictions applied to content featuring
inappropriate use of family entertainment characters, such as unofficial videos depicting Peppa Pig. The company already had a policy that rendered such videos ineligible for advertising revenue, in the hope that doing so would reduce the motivation to create
them in the first place. Juniper Downs, YouTube's director of policy, explained:
Earlier this year, we updated our policies to make content featuring inappropriate use of family entertainment characters ineligible for
monetisation. We're in the process of implementing a new policy that age-restricts this content in the YouTube main app when flagged. Age-restricted content is automatically not allowed in YouTube Kids. The YouTube team is made up of parents who are
committed to improving our apps and getting this right.
Age-restricted videos can't be seen by users who aren't logged in, or by those who have entered their age as below 18 on both the site and the app. More importantly, they also
don't show up on YouTube Kids, a separate app aimed at parents who want to let their children under 13 use the site unsupervised.
Prager University, a nonprofit that creates educational videos with conservative slants, has filed a lawsuit against YouTube and its parent company, Google, alleging that the company is censoring its content.
PragerU claims that more than three
dozen of its videos have been restricted by YouTube over the past year. As a result, those who browse YouTube in restricted mode -- including many college and high school students -- are prevented from viewing the content. Furthermore, restricted videos
cannot earn any ad revenue.
PragerU says that by limiting access to their videos without a clear reason, YouTube has infringed upon PragerU's First Amendment rights.
YouTube has restricted edgy content in order to protect advertisers'
brands. A number of advertisers told Google that they did not want their brand to be associated with edgy content. Google responded by banning all advertising from videos claimed to contain edgy content. It keeps the brands happy but it has decimated
many an online small business.
YouTube's algorithms, which are used to censor and demonetize videos on the platform, are killing its creators, according to a report.
Most of the initial censorship is left to algorithms [which probably flag that a video should be censored as soon
as they detect something politically incorrect, and which presumably lead to the overcensorship underpinning the complaints].
Creators complain that YouTube has set up a slow and inefficient appeals system to counter cases of unfair censorship.
Ad-disabled videos on YouTube must get 1,000 views in the span of seven days just to qualify for a review.
This approach hurts smaller YouTube channels, because it removes the ability for creators to make money on the most important stage of a
YouTube video's life cycle: the first seven days, the report explains. Typically, videos receive 70% or more of their views in the first seven days, according to multiple creators.
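A back-of-the-envelope calculation shows why that first week matters; the CPM figure below is hypothetical, used only to illustrate the shape of the loss:

```python
# If a typical video earns ~70% of its lifetime views in the first seven
# days, a video that sits demonetized until a post-week review forfeits
# at least that share of its ad-eligible views. The CPM ($2.00 per 1,000
# monetized views) is a hypothetical illustrative figure.
def lost_revenue(lifetime_views, first_week_share=0.70, cpm=2.00):
    """Ad revenue forgone while a video is demonetized for its first week."""
    lost_views = lifetime_views * first_week_share
    return lost_views / 1000 * cpm

# A video with 100,000 lifetime views forfeits ~70,000 monetizable views.
print(round(lost_revenue(100_000), 2))  # 140.0
```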
Some of the platform's most popular creators are saying that the
majority of their videos are being affected, dramatically reducing their revenue. Last week, liberal interviewer Dave Rubin, who has interviewed dozens of prominent political figures, announced that a large percentage of his videos had been demonetized,
cutting him off from making money on the millions of views he typically gets, perhaps due to the politically incorrect leanings of his guests, e.g. ex-Muslim Ayaan Hirsi Ali, former Minnesota Governor Jesse Ventura, feminist activist and
scholar Christina Hoff Sommers, and Larry King.
YouTube issued a response saying little, except that they hope the algorithms get better over time.
US Catholics have become an early victim of newly introduced censorship measures from YouTube, presumably because their teaching is considered offensive due to politically incorrect attitudes towards gays and abortion. Catholic Online writes:
More media organizations are criticizing YouTube's increasingly oppressive soft censorship policies which are now eliminating mainstream news reports from the video sharing network. Many content creators on YouTube are losing millions
in revenue as the Google-owned firm reduces and cuts off payments in pursuit of profits and control.
YouTube is censoring content through various indirect means even if that content does not violate any terms of service. The
Google-owned firm is removing content that it deems inappropriate or offensive, and is taking cues from the Southern Poverty Law Center. The result seems to be a broad labeling of content, and the suppression of even mainstream news. Many of Catholic
Online's bible readings have been caught up in YouTube's web of suppression, despite containing no commentary or message other than the reading of the scriptures.
YouTube is not a government agency but a private platform, so it is
free to ban or restrict content as it pleases. Therefore, its policies, no matter how arbitrary, are not true censorship. However, the firm is practicing what some call soft censorship.
Soft censorship is any kind of
activity that suppresses speech, particularly that which is true and accurate. It takes many forms. For example, broadcasting celebrity gossip in place of news is a form of soft censorship. Placing real news lower in search results, preventing content
from being shared on social media, or depriving media outlets of ad revenue for reporting on certain topics, are all common forms of soft censorship.
For some unknown reason, Catholic Online has also been targeted by these
policies. Saints videos and daily readings are the most common targets. None of this content can be considered objectionable by any means, and none of it infringes on YouTube's terms and conditions. It is suspected that anti-Christian bigotry, such as
that promoted by liberal extremist organizations like the Southern Poverty Law Center, is to blame.
The problem for content creators and media organizations is that there are few places for them to go. Most video viewing takes
place on YouTube, and there are no video hosting sites as well known and widely used as YouTube. Other sites also restrict content, and some don't share revenues with content creators. This makes YouTube a monopoly; they are literally the only show in town.
The time has come for governments around the world to recognize that Facebook, Google, and YouTube control the public forum. If freedom of speech is to be protected, then these firms must be compelled to abide by free speech principles.
YouTube has introduced a new tier of censorship designed to restrict the audience for videos deemed to be inappropriate or offensive to some audiences.
The site is now putting videos into a limited state if they are deemed controversial enough to
be considered objectionable, but not hateful, pornographic or violent enough to be banned altogether.
This policy was announced several months ago but has come into force in the past week, prompting anger among members of the YouTube community.
YouTube defines Limited Videos as follows:
Our Community Guidelines prohibit hate speech that either promotes violence or has the primary purpose of inciting hatred against individuals or groups based on certain
attributes. YouTube also prohibits content intended to recruit for terrorist organizations, incite violence, celebrate terrorist attacks, or otherwise promote acts of terrorism. Some borderline videos, such as those containing inflammatory religious or
supremacist content without a direct call to violence or a primary purpose of inciting hatred, may not cross these lines for removal. Following user reports, if our review teams determine that a video is borderline under our policies, it may have some features disabled.
These videos will remain available on YouTube, but will be placed behind a warning message, and some features will be disabled, including comments, suggested videos, and likes. These videos are also not eligible for ads.
Having features disabled on a video will not create a strike on your account.
Videos which are put into a limited state cannot be embedded on other websites. They also cannot be easily published on
social media using the usual share buttons and other users cannot comment on them. Crucially, the person who made the video will no longer receive any payment.
Earlier this week, Julian Assange wrote:
'Controversial' but contract-legal videos [i.e. videos which do not break YouTube's terms and conditions] cannot be liked, embedded or earn [money from advertising revenue].
What's interesting about the new method deployed is that it is a clear attempt at social engineering. It isn't just turning off the ads. It's turning off the comments, embeds, etc too. Everything possible to strangle the
reach without deleting it.
YouTube have increased the range of barred activities to include, amongst other things, invasions of privacy.
If a video you've recorded features people who are readily identifiable and who haven't consented to being filmed, there's a
chance they'll file a privacy complaint seeking its removal, say its new guidelines: Don't post other people's personal information, including phone numbers, addresses, credit card numbers, and government IDs. We're serious about keeping our users
safe and suspend accounts that violate people's privacy.
It also said that material designed to harass people was not welcome. If you wouldn't say it to someone's face, don't say it on YouTube, say the new guidelines: And if you're
looking to attack, harass, demean, or impersonate others, go elsewhere.
The new guidelines also seek to govern the behaviour of people reacting to videos: Users shouldn't feel threatened when they're on YouTube. Don't leave threatening
comments on other people's videos.
In recent months, long-time users of video-sharing website YouTube have noticed that the Google-owned site's definition of acceptable content has narrowed considerably.
In addition to its longstanding campaign to crack down on illegally copied
material, in September the site outlawed videos depicting drug abuse and last week tightened its guidelines further to restrict profanity and sexually suggestive content.
In other words, before the money wagons roll in, some law and order needs
to be imposed.
Our goal is to help ensure that you're viewing content that's relevant to you, and not inadvertently coming across content that isn't. Here are a few things we came up with:
Stricter standard for mature content - While videos featuring pornographic images or sex acts are always removed from the site when they're flagged, we're tightening the standard for what is considered sexually suggestive. Videos with sexually
suggestive (but not prohibited) content will be age-restricted, which means they'll be available only to viewers who are 18 or older.
Demotion of sexually suggestive content and profanity - Videos that are considered sexually
suggestive, or that contain profanity, will be algorithmically demoted on our Most Viewed, Top Favourited, and other browse pages. The classification of these types of videos is based on a number of factors, including video content and
descriptions. In testing, we've found that out of the thousands of videos on these pages, only several each day are automatically demoted for being too graphic or explicit. However, those videos are often the ones which end up being repeatedly flagged by
the community as being inappropriate.
Improved thumbnails - To make sure your thumbnail represents your video, your choices will now be selected algorithmically.
More accurate video information - Our Community
Guidelines have always prohibited folks from attempting to game view counts by entering misleading information in video descriptions, tags, titles, and other metadata. We remain serious about enforcing these rules. Remember, violations of these
guidelines could result in removal of your video and repeated violations will lead to termination of your account.