Your Daily Broadsheet

Latest news
Extracts: Friends and Censors...

A Facebook Blueprint for Content Governance and Enforcement. By Mark Zuckerberg


Link Here 16th November 2018
Full story: Facebook Censorship...Facebook quick to censor

Mark Zuckerberg has been publishing a series of articles addressing the most important issues facing Facebook. This is the second in the series. Here are a few selected extracts:

Community Standards

The team responsible for setting these policies is global -- based in more than 10 offices across six countries to reflect the different cultural norms of our community. Many of them have devoted their careers to issues like child safety, hate speech, and terrorism, including as human rights lawyers or criminal prosecutors.

Our policy process involves regularly getting input from outside experts and organizations to ensure we understand the different perspectives that exist on free expression and safety, as well as the impacts of our policies on different communities globally. Every few weeks, the team runs a meeting to discuss potential changes to our policies based on new research or data. For each change the team gets outside input -- and we've also invited academics and journalists to join this meeting to understand this process. Starting today, we will also publish minutes of these meetings to increase transparency and accountability.

The team responsible for enforcing these policies is made up of around 30,000 people, including content reviewers who speak almost every language widely used in the world. We have offices in many time zones to ensure we can respond to reports quickly. We invest heavily in training and support for every person and team. In total, they review more than two million pieces of content every day. We issue a transparency report with a more detailed breakdown of the content we take down.

For most of our history, the content review process has been very reactive and manual -- with people reporting content they have found problematic, and then our team reviewing that content. This approach has enabled us to remove a lot of harmful content, but it has major limits: we can't remove harmful content before people see it, and we can't catch content that people do not report.

Accuracy is also an important issue. Our reviewers work hard to enforce our policies, but many of the judgements require nuance and exceptions. For example, our Community Standards prohibit most nudity, but we make an exception for imagery that is historically significant. We don't allow the sale of regulated goods like firearms, but it can be hard to distinguish those from images of paintball or toy guns. As you get into hate speech and bullying, linguistic nuances get even harder -- like understanding when someone is condemning a racial slur as opposed to using it to attack others. On top of these issues, while computers are consistent at highly repetitive tasks, people are not always as consistent in their judgements.

The vast majority of mistakes we make are due to errors enforcing the nuances of our policies rather than disagreements about what those policies should actually be. Today, depending on the type of content, our review teams make the wrong call in more than 1 out of every 10 cases.

Proactively Identifying Harmful Content

The single most important improvement in enforcing our policies is using artificial intelligence to proactively report potentially problematic content to our team of reviewers, and in some cases to take action on the content automatically as well.

This approach helps us identify and remove a much larger percent of the harmful content -- and we can often remove it faster, before anyone even sees it rather than waiting until it has been reported.

Moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence -- and because of the multi-billion dollar annual investments we can now fund. To be clear, the state of the art in AI is still not sufficient to handle these challenges on its own. So we use computers for what they're good at -- making basic judgements on large amounts of content quickly -- and we rely on people for making more complex and nuanced judgements that require deeper expertise.
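This division of labour is essentially a confidence-threshold triage. As a hedged illustration only -- Facebook has not published its pipeline, and the thresholds and names below are invented -- the general pattern looks something like this:

# Illustrative sketch of confidence-based triage; not Facebook's actual
# pipeline. The thresholds and function names are hypothetical.

AUTO_ACTION = 0.98      # assumed: act automatically only when very confident
HUMAN_REVIEW = 0.60     # assumed: queue borderline cases for reviewers

def triage(violation_confidence: float) -> str:
    """Route content based on a model's confidence that it violates policy."""
    if violation_confidence >= AUTO_ACTION:
        return "remove_automatically"
    if violation_confidence >= HUMAN_REVIEW:
        return "queue_for_human_review"
    return "no_action"

for confidence in (0.99, 0.75, 0.30):
    print(confidence, "->", triage(confidence))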

In training our AI systems, we've generally prioritized proactively detecting content related to the most real-world harm. For example, we prioritized removing terrorist content -- and now 99% of the terrorist content we remove is flagged by our systems before anyone on our services reports it to us. We currently have a team of more than 200 people working on counter-terrorism specifically.

Some categories of harmful content are easier for AI to identify, and in others it takes more time to train our systems. For example, visual problems, like identifying nudity, are often easier than nuanced linguistic challenges, like hate speech. Our systems already proactively identify 96% of the nudity we take down, up from close to zero a few years ago. We are also making progress on hate speech, now with 52% identified proactively. This work will require further advances in technology as well as hiring more language experts to get to the levels we need.

In the past year, we have prioritized identifying people and content related to spreading hate in countries with crises like Myanmar. We were too slow to get started here, but in the third quarter of 2018, we proactively identified about 63% of the hate speech we removed in Myanmar, up from just 13% in the last quarter of 2017. This is the result of investments we've made in both technology and people. By the end of this year, we will have at least 100 Burmese language experts reviewing content.

Discouraging Borderline Content

One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.

Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average -- even when they tell us afterwards they don't like the content.

This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By reshaping the distribution curve so that distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.
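As a rough sketch of that inversion -- nothing here is Facebook's actual ranking code; the borderline_score input and the quadratic demotion factor are invented for illustration:

# Illustrative sketch only. Assumes a hypothetical classifier score in
# [0, 1], where 1.0 means content sits right on the policy line.

def natural_engagement(borderline_score: float) -> float:
    """Stylised version of the observed pattern: engagement rises as
    content approaches the policy line."""
    return 1.0 + 2.0 * borderline_score

def penalized_distribution(borderline_score: float) -> float:
    """Invert the incentive: a demotion factor that shrinks towards zero
    as content nears the line, so net distribution declines instead."""
    demotion = (1.0 - borderline_score) ** 2   # 1.0 when benign, ~0 at the line
    return natural_engagement(borderline_score) * demotion

for score in (0.0, 0.25, 0.5, 0.75, 0.95):
    print(f"score={score:.2f}  natural={natural_engagement(score):.2f}  "
          f"penalized={penalized_distribution(score):.2f}")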

The categories we're most focused on are click-bait and misinformation. People consistently tell us these types of content make our services worse -- even though they engage with them. As I mentioned above, the most effective way to stop the spread of misinformation is to remove the fake accounts that generate it. The next most effective strategy is reducing its distribution and virality.

Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don't come within our definition of hate speech but are still offensive.

This pattern may apply to the groups people join and pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.

One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it's important to remember that won't address the underlying incentive problem, which is often the bigger issue. This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content.

Building an Appeals Process

Any system that operates at scale will make errors, so how we handle those errors is important. This matters both for ensuring we're not mistakenly stifling people's voices or failing to keep people safe, and also for building a sense of legitimacy in the way we handle enforcement and community governance.

We began rolling out our content appeals process this year. We started by allowing you to appeal decisions that resulted in your content being taken down. Next we're working to expand this so you can appeal any decision on a report you filed as well. We're also working to provide more transparency into how policies were either violated or not.

...Read the full article from facebook.com

 

 

Offsite Article: PayPal's corporate censorship...


Link Here 16th November 2018
Why are left-wingers demanding that Silicon Valley police political opinions? By Fraser Myers

See article from spiked-online.com

 

 

Offsite Article: Partly free...


Link Here 16th November 2018
Detailed report on Internet censorship laws in South Korea

See article from lawless.tech

 

 

Suffocating European livelihoods at the behest of big business...

Julia Reda outlines amendments to censorship machines and link tax as the upcoming internet censorship law gets discussed by the real bosses of the EU


Link Here 15th November 2018
Full story: Copyright in the EU...Copyright law for Europe

The closed-door trilogue efforts to finalise the EU Copyright Directive continue. The Presidency of the Council, currently held by Austria, has now circulated among the EU member state governments a new proposal for a compromise between the differing drafts currently on the table for the controversial Articles 11 and 13.

Under this latest proposal, both upload filters and the link tax would be here to stay -- with some changes for the better, and others for the worse.

Upload filters / Censorship machines

Let's recall: In its final position, the European Parliament had tried its utmost to avoid specifically mentioning upload filters, in order to avoid the massive public criticism of that measure. The text they ended up with, however, was even worse: It would make online platforms inescapably liable for any and all copyright infringement by their users, no matter what action they take. Not even the strictest upload filter in the world could possibly hope to catch 100% of unlicensed content.

This is what prompted YouTube's latest lobbying efforts in favour of upload filters and against the EP's proposal of inescapable liability. Many have mistaken this as lobbying against Article 13 as a whole -- it is not. In Monday's Financial Times, YouTube spelled out that they would be quite happy with a law that forces everyone else to build (or, presumably, license from them) what they already have in place: Upload filters like Content ID.

In this latest draft, the Council Presidency sides with YouTube, going back to rather explicitly prescribing upload filters. The Council proposes two alternative options on how to phrase that requirement, but they match in effect:

Platforms are liable for all copyright infringements committed by their users, EXCEPT if they:

  • cooperate with rightholders
  • by implementing effective and proportionate steps to prevent works they've been informed about from ever going online
  • determining which steps those are must take into account suitable and effective technologies

Under this text, wherever upload filters are possible, they must be implemented: all your uploads will require prior approval by error-prone copyright bots.

On the good side, the Council Presidency seems open to adopting the Parliament's exception for platforms run by small and micro businesses. It also takes on board the EP's better-worded exception for open source code sharing platforms like GitHub.

On the bad side, Council rejects Parliament's efforts for a stronger complaint mechanism requiring reviews by humans and an independent conflict resolution body. Instead it takes on board the EP's insistence that licenses taken out by a platform don't necessarily have to cover uses of these works by the users of that platform. So, for example, even if YouTube takes out a license to show a movie trailer, that license could still prevent you as an individual YouTuber from using that trailer in your own uploads.

Article 11: Link tax

On the link tax, the Council is mostly sticking to its position: It wants the requirement to license even short snippets of news articles to last for one year after an article's publication, rather than the five years the Parliament proposed.

In a positive development, the Council Presidency adopts the EP's clarification that at least the facts included in news articles as such should not be protected. So a journalist would be allowed to report on what they read in another news article, in their own words.

Council fails to clearly exclude hyperlinks -- even those that aren't accompanied by snippets from the article. It's not uncommon for the URLs of news articles themselves to include the article's headline. While the Council wants to exclude insubstantial parts of articles from requiring a license, it's not certain that headlines count as insubstantial. (The Council's clause allowing acts of hyperlinking when they do not constitute communication to the public would not apply to such cases, since reproducing the headline would in fact constitute such a communication to the public.)

The Council continues to want the right to only apply to EU-based news sources -- which could in effect mean fewer links and listings in search engines, social networks and aggregators for European sites, putting them at a global disadvantage.

However, it also proposes spelling out that news sites may give out free licenses if they so choose -- contrary to the Parliament, which stated that listing an article in a search engine should not be considered sufficient payment for reproducing snippets from it.

 

 

Er...it's easy, just claim it transgresses 'community guidelines'...

Facebook will train up French censors in the art of taking down content deemed harmful


Link Here 15th November 2018
Full story: Facebook Censorship...Facebook quick to censor

The French President, Emmanuel Macron, has announced a plan to effectively embed French state censors with Facebook to learn more about how to better censor the platform. He announced a six-month partnership with Facebook aimed at figuring out how the European country should police hate speech on the social network.

As part of the cooperation both sides plan to meet regularly between now and May, when the European election is due to be held. They will focus on how the French government and Facebook can work together to censor content deemed 'harmful'. Facebook explained:

It's a pilot program of a more structured engagement with the French government so that both sides can better understand the other's challenges in dealing with the issue of hate speech online. The program will allow a team of regulators, chosen by the Elysee, to familiarize [itself] with the tools and processes set up by Facebook to fight against hate speech. The working group will not be based in one location but will travel to different Facebook facilities around the world, with likely visits to Dublin and California. The purpose of this program is to enable regulators to better understand Facebook's tools and policies to combat hate speech and, for Facebook, to better understand the needs of regulators.

 

 

Rainbow 6 Siege...

Games developer announces it will remove sex and gambling references worldwide so as to comply with Chinese censorship requirements


Link Here 13th November 2018
In order to prepare Rainbow 6 Siege for expansion into China, Ubisoft announced that it will be making some global censor cuts to the game's visuals to remove gore and references to sex and gambling.

In a blog post, Ubisoft explained:

A Single, Global Version

We want to explain why these changes are coming to the global version of the game, as opposed to branching and maintaining two parallel builds. We want to streamline our production time to increase efficiency.

By maintaining a single build, we are able to reduce the duplication of work on the development side. This will allow us to be more agile as a development team, and address issues more quickly.

Ubisoft provided examples of their censorship:

  • Icons featuring knives become fists
  • Icons featuring skulls are replaced
  • Skulls in artwork are fleshed out into faces
  • Images of slot machines are removed
  • Blood spatters are removed from a Chinese landscape painting
  • Strip club neon nudity is removed

 

 

Offsite Article: The Potential Unintended Consequences of Article 13...


Link Here 13th November 2018
Full story: Copyright in the EU...Copyright law for Europe
Susan Wojcicki, CEO of YouTube, explains how the EU's copyright rewrite will destroy the livelihood of a huge number of Europeans

See article from youtube-creators.googleblog.com

 

 

A modern swear box...

It's probably not a good idea to leave much money in a Skype or Xbox Live account, as Microsoft can now seize it if it catches you using a vaguely offensive word


Link Here 8th November 2018
Full story: Microsoft Censorship Rules...For Microsoft services, Xbox, Skype, OneDrive…
Microsoft has just inflicted a new 'code of conduct' that prohibits customers from communicating nudity, bestiality, pornography, offensive language, graphic violence and criminal activity, whilst allowing Microsoft to seize the money in your account.

If users are found to have shared, or be in possession of, these types of content, Microsoft can suspend or ban the particular user and remove funds or balance on the associated account.

It also appears that Microsoft reserves the right to view user content to investigate violations to these terms. This means it has access to your message history and shared files (including on OneDrive, another Microsoft property) if it thinks you've been sharing prohibited material.

Unsurprisingly, few users are happy that Microsoft is willing to delve through their personal data.

Microsoft has not made it clear if it will automatically detect and censor prohibited content or if it will rely on a reporting system. On top of that, Microsoft hasn't clearly defined its vague terms. Nobody is clear on what the limit on offensive language is.

 

 

Creeping about your life...

Facebook friend suggestion: Ms Tress who visits your husband upstairs at your house for an hour every Thursday afternoon whilst you are at work


Link Here 8th November 2018
Full story: Facebook Privacy...Facebook criticised for discouraging privacy
Facebook has filed a patent that describes a method of using the devices of Facebook app users to identify various wireless signals from the devices of other users.

It explains how Facebook could use those signals to measure exactly how close the two devices are to one another and for how long, and analyses that data to infer whether it is likely that the two users have met. The patent also suggests the app could record how often devices are close to one another, along with the duration and time of meetings, and could even use a device's gyroscope and accelerometer to analyse movement patterns, for example whether the two users may be going for a jog, smooching or catching a bus together.

Facebook's algorithm would use this data to analyse how likely it is that the two users have met, even if they're not friends on Facebook and have no other connections to one another. This might be based on the pattern of inferred meetings, such as whether the two devices are close to one another for an hour every Thursday, and an algorithm would determine whether the two users meeting was sufficiently significant to recommend them to each other and/or friends of friends.
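The patent describes this only at a high level, but the kind of recurring co-location inference it implies can be sketched roughly as follows. Everything here is hypothetical: the event format, the thresholds and the function name are invented for illustration.

# Illustrative sketch of recurring co-location detection; the patent
# describes the idea, not this code. Event format and thresholds invented.
from collections import defaultdict

MIN_MEETINGS = 4        # assumed: "recurring" means seen at least 4 times
MIN_DURATION_MIN = 30   # assumed: ignore brief passes in the street

def recurring_meetings(events):
    """Return user pairs whose devices share the same weekday/hour slot
    often enough to suggest a regular meeting."""
    slots = defaultdict(int)
    for user_a, user_b, weekday, hour, minutes in events:
        if minutes >= MIN_DURATION_MIN:
            pair = tuple(sorted((user_a, user_b)))
            slots[(pair, weekday, hour)] += 1
    return {slot: count for slot, count in slots.items() if count >= MIN_MEETINGS}

# e.g. two devices together for an hour every Thursday at 14:00
events = [("alice", "bob", "Thu", 14, 60)] * 5
print(recurring_meetings(events))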

I don't suppose that Facebook can claim this patent though as police and the security services have no doubt been using this technique for years.

 

 

Hell to pay...

Satanic Temple sues Netflix with a copyright claim over a statue of Baphomet


Link Here 8th November 2018
The Satanic Temple in Salem, Massachusetts, is suing Netflix and producers Warner Brothers over a statue of the goat-headed deity Baphomet that appears in the TV series Chilling Adventures of Sabrina.

The temple is claiming that Netflix and Warners are violating the copyright and trademark of the temple's own Baphomet statue, which it built several years ago.

Historically, the androgynous deity has been depicted with a goat's head on a female body, but The Satanic Temple created its statue with Baphomet having a male chest, an idea that was picked up by Netflix.

The Temple is seeking damages of at least $50 million for copyright infringement, trademark violation and injury to business reputation. In the Sabrina storyline, the use of the statue as the central focal point of the school associated with evil, cannibalism and possibly murder is injurious to TST's business, the Temple says in its suit.

 

 

Campaign: Contract for the Web...

Tim Berners-Lee launches campaign to defend a free and open internet


Link Here 7th November 2018
Speaking at the Web Summit conference in Lisbon, Tim Berners-Lee, inventor of the World Wide Web, has launched a campaign to persuade governments, companies and individuals to sign a Contract for the Web with a set of principles intended to defend a free and open internet.

Contract for the Web: CORE PRINCIPLES

The web was designed to bring people together and make knowledge freely available. Everyone has a role to play to ensure the web serves humanity. By committing to the following principles, governments, companies and citizens around the world can help protect the open web as a public good and a basic right for everyone.

GOVERNMENTS WILL
  • Ensure everyone can connect to the internet so that anyone, no matter who they are or where they live, can participate actively online.
  • Keep all of the internet available, all of the time so that no one is denied their right to full internet access.
  • Respect people's fundamental right to privacy so everyone can use the internet freely, safely and without fear.
COMPANIES WILL
  • Make the internet affordable and accessible to everyone so that no one is excluded from using and shaping the web.
  • Respect consumers' privacy and personal data so people are in control of their lives online.
  • Develop technologies that support the best in humanity and challenge the worst so the web really is a public good that puts people first.
CITIZENS WILL
  • Be creators and collaborators on the web so the web has rich and relevant content for everyone.
  • Build strong communities that respect civil discourse and human dignity so that everyone feels safe and welcome online.
  • Fight for the web so the web remains open and a global public resource for people everywhere, now and in the future.
We commit to uphold these principles and to engage in a deliberative process to build a full "Contract for the Web", which will set out the roles and responsibilities of governments, companies and citizens. The challenges facing the web today are daunting and affect us in all our lives, not just when we are online. But if we work together and each of us takes responsibility for our actions, we can protect a web that truly is for everyone.

See more from fortheweb.webfoundation.org

 

 

Updated: An unwanted gift...

Social media site Gab censored by internet companies


Link Here 5th November 2018
Gab, the social media site that prides itself as being uncensored, has been forced offline by its service providers after it became clear that the alleged Pittsburgh shooter Robert Bowers had a history of anti-semitic postings on the site.

Formed in August 2016 after Twitter began cracking down on hate speech on its social network, Gab describes itself as a free speech website and nothing more. But the platform has proved popular among the alt-right and far right, including the man accused of opening fire on a synagogue in Pennsylvania on Saturday, killing 11.

In the hours following the attack, when the suspect's postings were discovered on the site, Gab's corporate partners abandoned it one by one. PayPal and Stripe, two of the company's payment providers, dropped it, arguing that it breached policies around hate speech.

Cloud-hosting company Joyent also withdrew service on Sunday, giving Gab 24 hours' notice of its suspension, as did GoDaddy, the site's domain registrar, which provides the Gab.com address. Both companies said the site had breached their terms of service.

Gab responded in a statement:

We have been systematically no-platformed by App Stores, multiple hosting providers, and several payment processors. We have been smeared by the mainstream media for defending free expression and individual liberty for all people and for working with law enforcement to ensure that justice is served for the horrible atrocity committed in Pittsburgh.

Update: A new home

5th November 2018. See article from engadget.com

Gab is back online following censorship in the wake of the anti-Semitic shooting at a Pittsburgh synagogue. The social network had been banned by its hosting provider Joyent and domain registrar GoDaddy, and blacklisted by other services such as PayPal, Stripe and Shopify.

Now, Gab has come back online and has found a new hosting provider in Epik. According to a blog post published on November 3rd, Epik CEO Robert Monster spoke out against the idea of digital censorship and decided to provide hosting privileges to Gab because he looks forward to partnering with a young, and once brash, CEO who is courageously doing something that looks useful.

 

 

It would prevent them from censoring conservative voices...

Google claims that it is impractical to require it to implement US constitutional free speech


Link Here 2nd November 2018
Full story: Google Censorship...Google censors adult material from its websites
Prior to Google's bosses being called in to answer for its supposed policy of silencing conservative voices, the company has filed a statement to court arguing that, even if it does discriminate on the basis of political viewpoints, it is entitled to do so as a private company. It said:

Not only would it be wrong to compel a private company to guarantee free speech in the way that government censorship is forbidden by the Constitution, but it would also have disastrous practical consequences.

Google argued that the First Amendment appropriately limits the government's ability to censor speech, but applying those limitations to private online platforms would undermine important content regulation. If they are bound by the same First Amendment rules that apply to the government, YouTube and other service providers would lose much of their ability to protect their users against offensive or objectionable content -- including pornography, hate speech, personal attacks, and terrorist propaganda.

 

