Among other things, Amazon Prime makes a good many of its digital videos available to stream for free. Well, until now anyway. Many indie horror filmmakers are having their videos removed from the Prime service under what appears to be a new policy on Amazon's part.
Amazon says it is cracking down on extreme content and is sending out emails to filmmakers to explain the new censorship policy.
Here is an example email supplied by Scott Schirmer regarding his film Harvest Lake:
Amazon Video Direct periodically revises our content policy in order to improve the Amazon Video customer experience. Effective March 1, 2017, Amazon Video Direct will no longer allow titles containing persistent or graphic sexual or violent acts,
gratuitous nudity and/or erotic themes ('adult content') to be offered as Included with Prime or Free with Pre-Roll Ad.
We have identified the following titles within your catalog which contain adult content:
In alignment with our new policy, the Included with Prime and/or Free with Pre-Roll Ad offers will be removed from these titles on March 1, 2017.
For any title to remain available to customers with an Included with Prime or Free with Pre-Roll Ad offer, its content including cover images, metadata, and/or video content must be free of persistent or graphic sexual or violent acts,
gratuitous nudity and/or erotic themes.
A politically correct Californian law targeting age discrimination has failed to win the immediate approval of a judge. The law requires dates of birth and ages to be withheld from documents and publications used for job recruitment. One high-profile consequence is that the Internet Movie Database (IMDb) would be banned from including age information in the profiles of stars and crew. This has led to a legal challenge of the law on grounds of unconstitutional censorship.
This week's ruling does not look good for the Californian law: the judge decided that the birthdate prohibition shall not apply until the full legal challenge is decided. District Judge Vince Chhabria ruled:
[I]t's difficult to imagine how AB 1687 could not violate the First Amendment. The statute prevents IMDb from publishing factual information (information about the ages of people in the entertainment industry) on its website for public consumption. This
is a restriction of non-commercial speech on the basis of content.
To be sure, the government has identified a compelling goal -- preventing age discrimination in Hollywood. But the government has not shown how AB 1687 is 'necessary' to advance that goal. In fact, it's not clear how preventing one mere website from
publishing age information could meaningfully combat discrimination at all. And even if restricting publication on this one website could confer some marginal antidiscrimination benefit, there are likely more direct, more effective, and less
speech-restrictive ways of achieving the same end.
Chhabria held that -- because the law restricts IMDb's speech rights -- the site is suffering irreparable harm and enjoined the government from enforcing the law pending the resolution of this lawsuit.
Twitter has introduced a new censorship system with the unlikely-sounding capability to detect abusive tweets and suspend accounts without waiting for complaints. Transgressions result in the senders receiving half-day suspensions.
The company has refused to provide details of specifically how the new system works, but using a combination of behavioral and keyword indicators, the filter flags posts it deems to be violations of Twitter's acceptable speech policy and issues users half-day suspensions, during which they cannot post new globally accessible tweets and their existing tweets are visible only to followers.
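Since Twitter has disclosed nothing about the actual mechanism, the combination of behavioral and keyword indicators can only be guessed at. The following is a purely hypothetical sketch: the term list, the history threshold, and the suspension length are invented for illustration, with only the half-day figure taken from the reports above.

```python
# Hypothetical sketch of a keyword-plus-behavioral abuse filter of the
# sort described above. All terms and thresholds are invented.
from dataclasses import dataclass

FLAGGED_TERMS = {"traitor", "treason"}   # invented example keywords
SUSPENSION_HOURS = 12                    # the reported half-day suspension


@dataclass
class Account:
    handle: str
    prior_hits: int = 0        # crude behavioral indicator: past keyword hits
    suspended_hours: int = 0


def review_tweet(account: Account, text: str) -> bool:
    """Suspend the account if a flagged term appears AND the account
    already has a history of hits; otherwise just record the hit."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    hit = bool(words & FLAGGED_TERMS)
    if hit and account.prior_hits > 0:
        account.suspended_hours = SUSPENSION_HOURS
        return True
    account.prior_hits += hit
    return False
```

In this sketch a first keyword match only records a behavioral signal, and a repeat match triggers the suspension, which is one plausible reading of "behavioral and keyword indicators" working together.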
From the platform that once called itself the free speech wing of the free speech party, these new tools mark an incredible turn of events. The anti-censorship ethic seems to have been lost in a failed attempt to sell the company after prospective
buyers were unhappy with the lack of censorship control over the platform.
Inevitably, Twitter has refused to provide even an outline of the indicators it is using, especially when it comes to the particular linguistic cues it is concerned with. While offering too much detail might give the upper hand to those who would try to work around the new system, it is important for the broader community to have at least some understanding of the kinds of language flagged by Twitter's new tool so that they can try to stay within the rules.
It is also unclear why Twitter chose not to let users contest what they believe to be a wrongful suspension. Given that the feature is brand-new and bound to encounter plenty of unforeseen contexts where it could yield a wrong result, it is surprising that Twitter did not provide a recovery mechanism to catch these mistakes before they become news.
And the first example of censorship was quick to follow. Many outlets this morning picked up on a frightening instance of the Twitter algorithm's new power to police not only the language we use but the thoughts we express. In this case a user allegedly
tweeted a response to a news report about comments made by Senator John McCain, arguing that it was his belief that the senator was a traitor who had committed formal treason against the nation. Twitter did not respond to a request for more information about what occurred in this case or whether this was indeed the tweet that caused the user to be suspended, but did not dispute that the user had been suspended or that his use of the word traitor had factored heavily into that suspension.
A congressman has introduced a bill demanding that visitors to America hand over URLs to their social network accounts.
Representative Jim Banks says his proposed rules, titled the Visa Investigation and Social Media Act (VISA) of 2017, would require visa applicants to provide their social media handles to immigration officials. Banks said:
We must have confidence that those entering our country do not intend us harm. Directing Homeland Security to review visa applicants' social media before granting them access to our country is common sense. Employers vet job candidates this way, and I
think it's time we do the same for visa applicants.
Right now, at the US border you can be asked to give up your usernames by border officers. You don't have to reveal your public profiles, of course. However, if you're a non-US citizen, border agents don't have to let you in, either. Your devices can be
seized and checked, and you can be put on a flight back, if you don't cooperate.
Banks' proposed law appears to end any uncertainty over whether or not non-citizens will have their online personas vetted: if the bill is passed, visa applicants will be required to disclose their online account names so they can be scrutinized for any
unwanted behavior. For travellers on visa-waiver programs, revealing your social media accounts is and will remain optional, but again, being allowed into the country is optional, too.
Banks did not say how his bill would prevent hopefuls from deleting or simply not listing any accounts that may be unfavorable.
The Register reports that the bill is unlikely to progress.
Secretary of Homeland Security John Kelly told Congress this week that the Department of Homeland Security is exploring the
possibility of asking visa applicants not only for an accounting of what they do online, but for full access to their online accounts. In a hearing in the House of Representatives, Kelly said:
We want to say for instance, What sites do you visit? And give us your passwords. So that we can see what they do on the internet. And this might be a week, might be a month. They may wait some time for us to vet. If they don't want to give us that information then they don't come. We may look at their -- we want to get on their social media with passwords. What do you do? What do you say? If they don't want to cooperate, then they don't come in.
As TechCrunch's Devin Coldewey pointed out, asking people to surrender passwords would raise "obvious" privacy and security problems. But beyond privacy and security, the proposed probing of online accounts -- including social media and other communications platforms -- would, if implemented, be a major threat to free expression.
Comments platform Disqus speaks of providing tools to get hate speech removed in a blog post:
Recently, many passionate users have reached out to us regarding instances of hate speech across our network. Language that offends, threatens, or insults groups solely based on race, color, gender, religion, national origin, sexual orientation, or other
traits is against our network terms and has no place on the Disqus network. Hate speech is the antithesis of community and an impediment to the type of intellectual discussion that we strive to facilitate.
We know that language published on our network does not exist within a vacuum. It has the power to reach billions of people, change opinions and incite action. Hate speech is a threat, not only to those it targets, but to constructive discourse of all
forms across all communities. Hate speech creates fear, deters participation in public debate, and hinders diversity of thoughts and opinions.
We have the opportunity and the responsibility to combat hate speech on our network. Our goal is to foster environments where users can express their diverse opinions without the fear of experiencing hate speech. We persistently remove content that
contains hate speech or that otherwise violates our terms and policies. However, we know that simply reactively removing hate speech is not sufficient. That is why we are dedicated to building tools for readers and publishers to combat hate speech, and
are open to partnering with other organizations who share our goal.
We recently released several features to help readers and publishers better control offensive and otherwise unwanted content. User Blocking and User Flagging allow users to block and report other users who are violating our terms of service. Our new
moderation panel makes it easier for publishers to identify and moderate comments based on user reputation.
Currently, we are working on improved tools to help publishers effectively prevent troublesome users from returning to their sites. And as we get smarter about identifying hate speech, we are working on ways to automatically remove it from our network.
As an organization, Disqus firmly stands against hate speech in all forms. To recap, in an effort to combat hate speech both on and off our network, we are making the following commitments:
We will enforce our terms of service by removing hate speech and harassment on our network. To report hate speech and other abusive behavior, please follow these instructions.
We will invest in new features for publishers and readers to better manage hate speech. We hope to talk more about this soon.
To support this philosophy, we will also be supporting organizations that are equipped to fight hate speech outside of Disqus. We are exploring several options and plan to dedicate portions of our advertising profits to fight hate speech.
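Disqus has not published how its user flagging and reputation-based moderation panel interact, so the following is only a speculative sketch of how such features might route a comment; the reputation threshold and queue names are invented.

```python
# Hypothetical sketch of reputation-gated moderation like the tools Disqus
# describes: a user flag, or a low-reputation author, routes the comment
# to a moderation queue instead of publishing it. Threshold is invented.

def triage_comment(author_reputation: float, flagged_by_user: bool) -> str:
    """Route a comment: a user flag or low author reputation (below an
    assumed 0.3 cutoff) sends it to the moderation queue for human
    review; everything else publishes directly."""
    if flagged_by_user or author_reputation < 0.3:
        return "moderation queue"
    return "publish"


print(triage_comment(0.9, False))  # publish
print(triage_comment(0.9, True))   # moderation queue
print(triage_comment(0.1, False))  # moderation queue
```

The design choice worth noting is that a human queue, rather than automatic deletion, is what reputation gating typically feeds -- consistent with Disqus's stated plan to move toward automatic removal only "as we get smarter".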
Wikipedia editors have voted to ban the Daily Mail as a source for the website in all but exceptional circumstances, after deeming the newspaper 'generally unreliable'.
The move is highly unusual for the online encyclopaedia, which rarely puts in place a blanket ban on publications and which still allows links to more obvious sources of 'fake news', such as the Kremlin-backed news organisation Russia Today, and Fox News.
The Wikimedia Foundation, which runs Wikipedia but does not control its editing processes, said in a statement that volunteer editors on English Wikipedia had discussed the reliability of the Mail since at least early 2015. The foundation said:
This means that the Daily Mail will generally not be referenced as a 'reliable source' on English Wikipedia, and volunteer editors are encouraged to change existing citations to the Daily Mail to another source deemed reliable by the community.
Some editors opposed the move, saying the Daily Mail was sometimes reliable, that historically its record may have been better, and that there were other publications that were also unreliable. Opponents also pointed to inaccurate stories in other
respected publications, and suggested the proposed ban was driven by a dislike of the publication.
However, the fact of the matter is that the Digital Economy (DE) Bill gives the BBFC (the regulator, TBC) the power to block any pornographic website that doesn't use age verification tools. It can even block websites that publish pornography that doesn't fit its guidelines of taste and acceptability - which are significantly narrower than what is legal, and certainly narrower than what is viewed as acceptable by US websites.
A single video of "watersports", or of whipping that produces marks, for instance, would be enough for the BBFC to ban a website for every UK adult. The question is, how many sites does the regulator want to block, and how many can it block?
Parliament has been told that the regulator wants to block just a few, major websites, maybe 50 or 100, as an "incentive" to implement age checks. However, that's not what Clause 23 says. The "Age-verification regulator's power to direct
internet service providers to block access to material" just says that any site that fits the criteria can be blocked by an administrative request.
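The mechanism Clause 23 describes really is that simple, which is the point: no court is in the loop. A minimal sketch, with placeholder domain names, of what an administrative blocking request amounts to on the ISP side:

```python
# Minimal sketch of the administrative blocking mechanism described above:
# the regulator maintains a list of domains, and the ISP simply refuses
# them. Domain names are placeholders; no real sites are implied.

regulator_block_list = {"example-adult-site.test", "another-site.test"}


def isp_allows(domain: str, block_list: set) -> bool:
    """ISP-side check: refuse any domain on the regulator's list.
    No court order is involved -- the list itself is the instruction,
    so the list can grow as fast as the regulator can add to it."""
    return domain.lower() not in block_list


print(isp_allows("example-adult-site.test", regulator_block_list))  # False
print(isp_allows("news-site.test", regulator_block_list))           # True
```

Because blocking is just set membership, the only real constraint is how many sites the regulator can afford to classify and add -- exactly the funding dynamic the rest of this piece describes.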
What could possibly go wrong?
Imagine, not implausibly, that some time after the Act is in operation, one of the MPs who pushed for this power goes and sees how it is working. This MP tries a few searches, and finds to their surprise that it is still possible to find websites that
are neither asking for age checks nor blocked.
While the first page or two of results under the new policy would find major porn sites that are checking, or else are blocked, the results on page three and four would lead to sites that have the same kinds of material available to anyone.
In short, what happens when MPs realise this policy is nearly useless?
They will, of course, ask for more to be done. You could write the Daily Mail headlines months in advance: BBFC lets kids watch porn .
MPs will ask why the BBFC isn't blocking more websites. The answer will come back that it would be possible, with more funding, to classify and block more sites, with the powers the BBFC has been given already. While individual review of millions of
sites would be very expensive, maybe it is worth paying for the first five or ten thousand sites to be checked. (And if that doesn't work, why not use machines to produce the lists?)
And then it is just a matter of putting more cash the BBFC's way, and it can block more and more sites, to "make the Internet safe".
That's the point we are making. The power in the Digital Economy Bill given to the BBFC will create a mechanism to block literally millions of websites; the only real restraint is the amount of cash that MPs are willing to pour into the organisation.
The European Union agreed Tuesday on new rules allowing subscribers to online services in one E.U. country to access them while traveling in another.
The new portability ruling is the first step of regulation under a drive by the European Commission to introduce a single digital market in Europe.
Announced in May 2015, the proposed Digital Single Market was met with full-throated opposition from Hollywood and Europe's movie and TV industry, which viewed it as a threat to its territory-by-territory licensing of movies and TV shows.
The European Commission, the European Parliament and the E.U.'s Council of Ministers all agreed to new laws which will allow consumers to fully use their online subscriptions to films, sports events, e-books, video games or music services when traveling
within the E.U.
The online service providers will have nine months to adapt to the new rules, which means they will come into force by the beginning of 2018.