Politicians, about to vote in favor of mandatory upload filtering in Europe, get channel deleted by
YouTube's upload filtering.
French politicians of the former Front National are furious: their entire YouTube channel was just taken down by automatic filters at YouTube for alleged copyright violations. Perhaps this will cause them to reconsider next week's vote, which they
have announced they will support: the bill that will make exactly this arbitrary, political, and unilateral upload filtering mandatory all across Europe.
The French party Front National, now renamed Rassemblement National (National Rally), which is one of the biggest parties in France, has had its YouTube channel disappeared on grounds of alleged copyright violations. In an interview with French Europe1, their leader Marine Le Pen calls the takedown arbitrary, political, and unilateral.
Europe is about to vote on new copyright law next week. Next Wednesday or Thursday. So let's disregard here for a moment that this happened to a party normally described as far-right, and observe that if it can happen to one of France's biggest
parties regardless of their policies, then it can happen to anyone for political reasons or any other reason.
The broadcast, named TVLibertés, is gone; YouTube describes the takedown thus: YouTube has blocked the broadcast of the newscast of Thursday, June 14 for copyright infringement.
Marine Le Pen was quoted as saying, This measure is completely false; we can easily assert a right of quotation [to illustrate why the material was well within the law to broadcast].
She's right. Automated upload filters do not take into account when you have a legal right to broadcast copyrighted material for one of a myriad of valid reasons. They will just assume that such reasons never exist; if nothing else, to make sure that the hosting platform steers clear of any liability. Political messages will be disappeared on mere allegations by a political opponent, just as might have happened here.
And yet, the Rassemblement National is going to vote in favor of exactly this mandatory upload filtering: the very horror they just described on national TV as arbitrary, political, and unilateral.
It's hard to illustrate more clearly that Europe's politicians have absolutely no idea about the monster they're voting on next week.
The decisions to come will be unilateral, political, and arbitrary. Freedom of speech will be unilateral, political, and arbitrary. Just as Marine Le Pen says. Just as YouTube's Content ID filtering is today, as has just been illustrated.
The article mandating this unilateral, political, and arbitrary censorship is called Article 13 of the upcoming European Copyright bill, and it must be removed entirely. There is no fixing automated censorship machines.
Privacy remains your own responsibility. So do your freedoms of speech, information, and expression.
Social media censor announced to tackle gang-related online content
The Home Secretary Sajid Javid has announced £1.38 million to strengthen the police's response to violent and gang-related online content.
Funding from the government's £40 million Serious Violence Strategy will be used to create a 20-strong team of police staff and officers tasked with disrupting and removing overt and covert gang-related online content.
The social media censor will proactively flag illegal and harmful online content for social media companies to take down. Hosted by the Metropolitan Police, the new capability will also prevent violence on our streets by identifying gang-related
messages generating the most risk and violence.
The move follows the Serious Violence Taskforce chaired by the Home Secretary urging social media companies to do more to take down these videos. The Home Secretary invited representatives from Facebook and Google to Monday's meeting to explain
the preventative action they are already taking against gang material hosted on their platforms.
Home Secretary Sajid Javid said:
Street gangs are increasingly using social media as a platform to incite violence, taunt each other and promote crime.
This is a major concern and I want companies such as Facebook and Google to do more.
We are taking urgent action and the new social media hub will improve the police's ability to identify and remove this dangerous content.
Duncan Ball, Deputy Assistant Commissioner of the Metropolitan Police Service and National Policing lead for Gangs, said:
Police forces across the country are committed to doing everything we can to tackle violent crime and the impact that it has on our communities. Through this funding we can develop a team that is a centre of expertise and excellence that will
target violent gangs and those plotting and encouraging violence online.
By working together with social media companies we will ensure that online material that glamourises murder, lures young people into a dangerous, violent life of crime, and encourages violence is quickly dealt with to cut off this outlet for
gangs and criminals.
Looking to the future we aim to develop a world class capability that will tackle the type of dangerous social media activity that promotes or encourages serious violence.
It is already an offence to incite, assist, or encourage violence online, and the Home Office is focused on building on the relationships made with social media providers to identify where we can take action relevant to tackling serious violence.
David Kaye, the UN's Special Rapporteur on freedom of expression, has now chimed in with a very thorough report, highlighting how Article 13 of the
Directive -- the part about mandatory copyright filters -- would be a disaster for free speech and would violate the UN's Declaration on Human Rights, and in particular Article 19 which says:
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
As Kaye's report notes, the upload filters of Article 13 of the Copyright Directive would almost certainly violate this principle.
Article 13 of the proposed Directive appears likely to incentivize content-sharing providers to restrict at the point of upload user-generated content that is perfectly legitimate and lawful. Although the latest proposed versions of Article 13 do not explicitly refer to upload filters and other content recognition technologies, they couch the obligation to prevent the availability of copyright protected works in vague terms, such as demonstrating best efforts and taking effective and proportionate measures. Article 13(5) indicates that the assessment of effectiveness and proportionality will take into account factors such as the volume and type of works and the cost and availability of measures, but these still leave considerable leeway for interpretation.
The significant legal uncertainty such language creates does not only raise concern that it is inconsistent with the Article 19(3) requirement that restrictions on freedom of expression should be provided by law. Such uncertainty would also raise
pressure on content sharing providers to err on the side of caution and implement intrusive content recognition technologies that monitor and filter user-generated content at the point of upload. I am concerned that the restriction of
user-generated content before its publication subjects users to restrictions on freedom of expression without prior judicial review of the legality, necessity and proportionality of such restrictions. Exacerbating these concerns is the reality
that content filtering technologies are not equipped to perform context-sensitive interpretations of the valid scope of limitations and exceptions to copyright, such as fair comment or reporting, teaching, criticism, satire and parody.
Kaye further notes that copyright is not the kind of thing that an algorithm can readily determine, and the fact-specific and context-specific nature of copyright requires much more than just throwing algorithms at the problem -- especially when a
website may face legal liability for getting it wrong.
The designation of such mechanisms as the main avenue to address users' complaints effectively delegates content blocking decisions under copyright law to extrajudicial mechanisms, potentially in violation of minimum due process guarantees under
international human rights law. The blocking of content -- particularly in the context of fair use and other fact-sensitive exceptions to copyright -- may raise complex legal questions that require adjudication by an independent and impartial
judicial authority. Even in exceptional circumstances where expedited action is required, notice-and-notice regimes and expedited judicial process are available as less invasive means for protecting the aims of copyright law.
In the event that content blocking decisions are deemed invalid and reversed, the complaint and redress mechanism established by private entities effectively assumes the role of providing access to remedies for violations of human rights law. I
am concerned that such delegation would violate the State's obligation to provide access to an effective remedy for violations of rights specified under the Covenant. Given that most of the content sharing providers covered under Article 13 are
profit-motivated and act primarily in the interests of their shareholders, they lack the qualities of independence and impartiality required to adjudicate and administer remedies for human rights violations. Since they also have no incentive to
designate the blocking as being on the basis of the proposed Directive or other relevant law, they may opt for the legally safer route of claiming that the upload was a terms of service violation -- this outcome may deprive users of even the
remedy envisioned under Article 13(7). Finally, I wish to emphasize that unblocking, the most common remedy available for invalid content restrictions, may often fail to address financial and other harms associated with the blocking of legitimate content.
He goes on to point out that while large platforms may be able to deal with all of this, smaller ones are going to be in serious trouble:
I am concerned that the proposed Directive will impose undue restrictions on nonprofits and small private intermediaries. The definition of an online content sharing provider under Article 2(5) is based on ambiguous and highly subjective criteria
such as the volume of copyright protected works it handles, and it does not provide a clear exemption for nonprofits. Since nonprofits and small content sharing providers may not have the financial resources to establish licensing agreements with
media companies and other right holders, they may be subject to onerous and legally ambiguous obligations to monitor and restrict the availability of copyright protected works on their platforms. Although Article 13(5)'s criteria for effective
and proportionate measures take into account the size of the provider concerned and the types of services it offers, it is unclear how these factors will be assessed, further compounding the legal uncertainty that nonprofits and small providers
face. It would also prevent a diversity of nonprofit and small content-sharing providers from potentially reaching a larger size, and result in strengthening the monopoly of the currently established providers, which could be an impediment to the
right to science and culture as framed in Article 15 of the ICESCR.
Several video downloading and MP3 conversion tools have thrown in the towel this week, disabling all functionality following
legal pressure. Pickvideo.net states that it received a cease and desist order, while Video-download.co and EasyLoad.co reference the lawsuit against YouTube-MP3 as the reason for their decision.
The music industry sees stream ripping as one of the largest piracy threats. The RIAA, IFPI, and BPI showed that they're serious about the issue when they filed legal action against YouTube-MP3, the largest stream ripping site at the time.
This case eventually resulted in a settlement where the site, once good for over a million daily visitors, agreed to shut down voluntarily last year.
YouTube-MP3's demise was a clear victory for the music groups, which swiftly identified their next targets, putting them under pressure, both in public and behind the scenes.
This week this appears to have taken its toll on several stream ripping sites, which allowed users to download videos from YouTube and other platforms, with the option to convert files to MP3s. The targets include Pickvideo.net, Video-download.co and Easyload.co, which all inform their users that they've thrown in the towel.
With several million visits per month, Pickvideo is the largest of the three. According to the site, it took this drastic measure following a cease-and-desist letter.
The UK Supreme Court has today ruled that trade mark holders are not able to compel ISPs to bear the cost of implementing orders to block websites
selling counterfeit goods.
Open Rights Group acted as an intervener in this case. We argued that Internet service providers (ISPs) as innocent parties should not bear the costs of website blocking, and that this was a long-standing principle of English law.
Jim Killock, Executive Director of Open Rights Group said:
This case is important because if ISPs paid the costs of blocking websites, the result would be an increasing number of blocks for relatively trivial reasons and the costs would be passed to customers.
While rights holders may want websites blocked, it needs to be economically rational to ask for this.
Solicitor in the case David Allen Green said:
I am delighted to have acted, through my firm Preiskel, successfully for the Open Rights Group in their intervention.
We intervened to say that those enforcing private rights on the internet should bear the costs of doing so, not others. This morning, the UK Supreme Court held unanimously that the rights holders should bear the costs.
The main party to the case was BT, which opposed being forced to pay for costs incurred in blocking websites. Now rights-holders must reimburse ISPs for the costs of blocking rights-infringing material.
Supreme Court judge Lord Sumption, one of five on the panel, ruled:
There is no legal basis for requiring a party to shoulder the burden of remedying an injustice if he has no legal responsibility for the infringement and is not a volunteer but is acting under the compulsion of an order of the court.
It follows that in principle the rights-holders should indemnify the ISPs against their compliance costs. Section 97A of the Copyright, Designs and Patents Act 1988 allows rights-holders to go to court and get a blocking order -- the question in
the current case is who stumps up for the costs of complying with that order?
Of course this now raises the question of who should pay for the mass porn website blocking that will be needed when the BBFC porn censorship regime starts its work.
As Europe's latest copyright proposal
heads to a critical vote
on June 20-21, more than 70 Internet and computing luminaries have spoken out against a dangerous provision, Article 13, that would require Internet platforms to automatically filter uploaded content. The group, which includes Internet pioneer
Vint Cerf, the inventor of the World Wide Web Tim Berners-Lee, Wikipedia co-founder Jimmy Wales, co-founder of the Mozilla Project Mitchell Baker, Internet Archive founder Brewster Kahle, cryptography expert Bruce Schneier, and net neutrality
expert Tim Wu, wrote in a joint letter that was released today:
By requiring Internet platforms to perform automatic filtering of all the content that their users upload, Article 13 takes an unprecedented step towards the transformation of the Internet, from an open platform for sharing and innovation, into a
tool for the automated surveillance and control of its users.
The prospects for the elimination of Article 13 have continued to worsen. Until late last month, there was hope that Member States (represented by the Council of the European Union) would find a compromise. Instead, their final
negotiating mandate doubled down on it.
The last hope for defeating the proposal now lies with the European Parliament. On June 20-21 the Legal Affairs (JURI) Committee will vote on the proposal. If it votes against upload filtering, the fight can continue in the Parliament's subsequent
negotiations with the Council and the European Commission. If not, then automatic filtering of all uploaded content may become a mandatory requirement for all user content platforms that serve European users. Although this will pose little
impediment to the largest platforms such as YouTube, which already uses its Content ID
system to filter content, the law will create an expensive barrier to entry for smaller platforms and startups, which may choose to establish or move their operations overseas in order to avoid the European law.
For those platforms that do establish upload filtering, users will find that their contributions--including video, audio, text, and even source code--will be monitored and potentially blocked if the automated system detects what it believes to be a copyright infringement. Inevitably, mistakes will happen. There is no way for an automated system to reliably determine when the use of a copyrighted work falls within a copyright limitation or exception under European law, such as quotation or parody.
Moreover, because these exceptions are not consistent across Europe, and because there is no broad fair use right as in the United States, many harmless uses of copyright works in memes, mashups, and remixes probably are technically
infringing even if no reasonable copyright owner would object. If an automated system monitors and filters out these technical infringements, then the permissible scope of freedom of expression in Europe will be radically curtailed, even without
the need for any substantive changes in copyright law.
The upload filtering proposal stems from a misunderstanding about the purpose of copyright. Copyright isn't designed to compensate creators for each and every use of their works. It is meant to incentivize creators as part of an effort to promote the public interest in innovation and expression. But that public interest isn't served unless there are limitations on copyright that allow new generations to build on and comment on previous contributions. Those limitations are both legal, like fair dealing, and practical, like the zone of tolerance for harmless uses. Automated upload filtering will undermine both.
The authors of today's letter write:
We support the consideration of measures that would improve the ability for creators to receive fair remuneration for the use of their works online. But we cannot support Article 13, which would mandate Internet platforms to embed an automated
infrastructure for monitoring and censorship deep into their networks. For the sake of the Internet's future, we urge you to vote for the deletion of this proposal.
What began as a bad idea offered up to copyright lobbyists as a solution to an imaginary "value gap" has now become an outright crisis for the future of the Internet as we know it. Indeed, if those who created and sustain the operation of the Internet recognize the scale of this threat, we should all be sitting up and taking notice.
If you live in Europe or have European friends or family, now could be your last opportunity to avert the upload filter. Please take action by clicking the button below, which will take you to a campaign website where you can phone, email, or
Tweet at your representatives, urging them to stop this threat to the global Internet before it's too late.
The Committee to Protect Journalists condemned a new cybersecurity law passed today by Vietnam's National Assembly as a clear threat to press
freedom and called on the Vietnamese government immediately to repeal it.
The legislation, which goes into effect January 1, 2019, gives broad powers to government authorities to surveil the internet, including the ability to force international technology companies with operations in the country to reveal their users'
personal information and censor online information on demand, according to news reports.
The law's vague and broad provisions ban any online posts deemed as opposing the State of the Socialist Republic of Vietnam, or which [offend] the nation, the national flag, the national anthem, great people, leaders, notable people and national
heroes, according to the reports. The same sources state that the law's Article 8 prohibits the use of the internet to distort history, deny revolutionary achievements or undermine national solidarity.
The law also prohibits disseminating online incorrect information which causes confusion among people, damages socio-economic activities [or] creates difficulties for authorities and those performing their duty, according to reports.
After January 1, 2019, companies will have 24 hours to remove content that the Information and Communications Ministry or the Public Security Ministry find to be in violation of the new law.
Shawn Crispin, CPJ's Southeast Asia representative, said:
Vietnam's new cybersecurity law represents a grave danger to journalists and bloggers who work online and should be promptly repealed. We expect international technology companies to use their best efforts to uphold their stated commitment to a
free and open internet and user privacy and to resist any attempts to undermine those commitments.
Swiss voters will decide on Sunday whether to back a new gambling law designed to restrict online gambling to a state
monopoly or reject what opponents say amounts to internet censorship.
Recent polls indicate a clear majority plan to support the new law, which has already been passed by both houses of parliament and is now being put to a referendum.
The Swiss government says the Gambling Act updates legislation for the digital age. If approved by voters, the law would be among the strictest in Europe and would only allow casinos and gaming companies certified in Switzerland to operate,
including on the internet. This would enable Swiss companies for the first time to offer online gambling, but would basically block foreign-based companies from the market.
Bern also wants all of the companies' proceeds to be taxed in Switzerland, with revenues helping fund anti-addiction measures, as well as social security and sports and culture programmes.
The new law represents a windfall for Switzerland's casinos, which had put huge amounts of money into campaigning.
Opponents have slammed Bern for employing methods worthy of an authoritarian state, with a measure that they claim is censorship of the internet.
Swiss voters have overwhelmingly approved blocking foreign-based betting sites in a referendum on a new gambling law designed to create a local monopoly.
72.9% of voters came out in favor of the new gambling law.
The law, which is set to take effect next year, will be among the strictest in Europe, allowing only casinos and gaming companies certified in Switzerland to operate in the country, including on the internet.
It will enable Swiss companies for the first time to offer online gambling, but will basically block foreign-based companies from the market.
The pending update to the EU Copyright Directive is coming up for a committee vote on June 20 or 21 and a parliamentary vote
either in early July or late September. While the directive fixes some longstanding problems with EU rules, it creates much, much larger ones: problems so big that they threaten to wreck the Internet itself.
Under Article 13 of the proposal
, sites that allow users to post text, sounds, code, still or moving images, or other copyrighted works for public consumption will have to filter all their users' submissions against a database of copyrighted works. Sites will have to pay to
license the technology to match submissions to the database, and to identify near matches as well as exact ones. Sites will be required to have a process to allow rightsholders to update this list with more copyrighted works.
Even under the best of circumstances, this presents huge problems. Algorithms that do content-matching are frankly terrible at it. The made-in-the-USA version of this is YouTube's Content ID system, which improperly flags legitimate works all the time, but still gets flak from entertainment companies for not doing more.
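The core difficulty can be seen in a toy sketch (plain Python, emphatically not YouTube's actual algorithm, and the names `chunk_hashes` and `overlap` are made up for illustration): hashing fixed-size chunks of a file flags verbatim copies perfectly, but even a trivial edit destroys the match. That is why real filters must fall back on fuzzy fingerprinting, and fuzzy matching is exactly where the false positives come from.

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 16) -> set:
    """Hash fixed-size chunks of a byte stream -- a toy stand-in for
    the acoustic/visual fingerprints a real upload filter computes."""
    return {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    }

def overlap(reference: bytes, upload: bytes) -> float:
    """Fraction of the upload's chunks that match the reference work."""
    ref, up = chunk_hashes(reference), chunk_hashes(upload)
    return len(ref & up) / len(up)

# A pretend copyrighted work and two "uploads" of it.
original = b"All copyrighted audio frames, repeated " * 8
verbatim = original                                # exact re-upload
edited = original.replace(b"frames", b"fra-es")    # one character altered throughout

print(overlap(original, verbatim))   # 1.0 -- exact copies are trivial to catch
print(overlap(original, edited))     # well below 1.0 -- a tiny edit defeats exact matching
```

Loosen the matcher enough to catch the edited copy and it will also start catching lawful quotation, parody, and background music, because the fingerprints carry no information about context or legal purpose.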
There are lots of legitimate reasons for Internet users to upload copyrighted works. You might upload a clip from a nightclub (or a protest, or a technical presentation) that includes some copyrighted music in the background. Or you might just be
wearing a t-shirt with your favorite album cover in your Tinder profile. You might upload the cover of a book you're selling on an online auction site, or you might want to post a photo of your sitting room in the rental listing for your flat,
including the posters on the wall and the picture on the TV.
Wikipedians have even more specialised reasons to upload material: pictures of celebrities, photos taken at newsworthy events, and so on.
But the bots that Article 13 mandates will not be perfect. In fact, by design, they will be wildly imperfect.
Article 13 punishes any site that fails to block copyright infringement, but it won't punish people who abuse the system. There are no penalties for falsely claiming copyright over someone else's work, which means that someone could upload all of Wikipedia to a filter system (for instance, one of the many sites that incorporate Wikipedia's content into their own databases) and then claim ownership over it on Twitter, Facebook and Wordpress, and everyone else would be prevented from quoting Wikipedia on any of those services until they sorted out the false claims. It will be a lot easier to make these false claims than it will be to figure out which of the hundreds of millions of copyright claims are real and which ones are pranks or hoaxes or censorship attempts.
Article 13 also leaves you out in the cold when your own work is censored thanks to a malfunctioning copyright bot. Your only option when you get censored is to raise an objection with the platform and hope they see it your way--but if they fail
to give real consideration to your petition, you have to go to court to plead your case.
Article 13 gets Wikipedia coming and going: not only does it create opportunities for unscrupulous or incompetent people to block the sharing of Wikipedia's content beyond its bounds, it could also require Wikipedia to filter submissions to the encyclopedia and its surrounding projects, like Wikimedia Commons. The drafters of Article 13 have tried to carve Wikipedia out of the rule, but thanks to sloppy drafting, they have failed: the exemption is limited to "noncommercial activity". Every file on Wikipedia is licensed for commercial use.
Then there are the websites that Wikipedia relies on as references. The fragility and impermanence of links is already a serious problem for Wikipedia's crucial footnotes, but after Article 13 becomes law, any information hosted in the EU might disappear--and links to US mirrors might become infringing--at any moment thanks to an overzealous copyright bot. For these reasons and many more, the Wikimedia Foundation has taken a public position condemning Article 13.
Speaking of references: the problems with the new copyright proposal don't stop there. Under Article 11, each member state will get to create a new copyright in news. If it passes, in order to link to a news website, you will either have to do so
in a way that satisfies the limitations and exceptions of all 28 laws, or you will have to get a license. This is fundamentally incompatible with any sort of wiki (obviously), much less Wikipedia.
It also means that the websites that Wikipedia relies on for its reference links may face licensing hurdles that would limit their ability to cite their own sources. In particular, news sites may seek to withhold linking licenses from critics who
want to quote from them in order to analyze, correct and critique their articles, making it much harder for anyone else to figure out where the positions are in debates, especially years after the fact. This may not matter to people who only pay
attention to news in the moment, but it's a blow to projects that seek to present and preserve long-term records of noteworthy controversies. And since every member state will get to make its own rules for quotation and linking, Wikipedia posts
will have to satisfy a patchwork of contradictory rules, some of which are already so severe that they'd ban any items in a "Further Reading" list unless the article directly referenced or criticized them.
The controversial measures in the new directive have been tried before. For example, link taxes were tried in Spain and Germany, and publishers don't want them. Indeed, the only country to embrace this idea as workable is China, where mandatory copyright enforcement bots have become part of the national toolkit for controlling public discourse.
Articles 13 and 11 are poorly thought through, poorly drafted, unworkable--and dangerous. The collateral damage they will impose on every realm of public life can't be overstated. The Internet, after all, is inextricably bound up in the daily
lives of hundreds of millions of Europeans and an entire constellation of sites and services
will be adversely affected by Article 13. Europe can't afford to place education, employment, family life, creativity, entertainment, business, protest, politics, and a thousand other activities at the mercy of unaccountable algorithmic filters.
If you're a European concerned about these proposals, here's a tool for contacting your MEP.
Who is liable if a user posts copyrighted music to YouTube without authority? Is it the user or is it YouTube? The answer is of course that it is the user who would be held liable should copyright holders seek compensation. YouTube would be held
responsible only if they were informed of the infringement and refused to take it down.
This is the practical compromise that lets the internet work.
So what would happen if the government changed the liability laws so that YouTube was held liable for unauthorised music as soon as it was posted? There may be millions of views before it is spotted. If YouTube were immediately liable, it might have to pay millions in court judgements against it.
There is a lot of blather about YouTube having magic Artificial Intelligence that can detect copyrighted music and block it before it is uploaded. But this is nonsense: music is copyrighted by default, even a piece that has never been published and is not held in any computer database.
YouTube does not have a database that contains all the licensing and authorisation details, nor who exactly is allowed to post copyrighted material. Even big companies lie, so how could YouTube really know what could be posted and what could not?
If the law were to be changed, and YouTube were held responsible for the copyright infringement of their posters, then the only possible outcome would be for YouTube to use its AI to detect any music at all and block all videos which contain
music. The only music allowed to be published would be from the music companies themselves, and even then after providing YouTube with paperwork to prove that they had the necessary authorisation.
So when the government speaks of changes to liability law they are speaking of a massive step up in internet censorship as the likely outcome.
In fact the censorship power of such liability tweaks has been proven in the US. The recently passed FOSTA law changed liability law so that internet companies are now held liable for user posts facilitating sex trafficking. The law was sold as a 'tweak' just to take action against trafficking. But it resulted in the immediate and almost total internet censorship of all user postings facilitating adult consensual sex work, and a fair amount of personal small ads and dating services as well.
The rub was that sex traffickers do not in any way specify that their sex workers have been trafficked, their adverts are exactly the same as for adult consensual sex workers. With all the artificial intelligence in the world, there is no way that
internet companies can distinguish between the two.
When they are told they are liable for sex trafficking adverts, then the only possible way to comply is to ban all adverts or services that feature anything to do with sex or personal hook ups. Which is of course exactly what happened.
So when UK politicians speak of internet liability changes and sex trafficking then they are talking about big time, large scale internet censorship.
And Theresa May said today via a government press release as reported in the Daily Mail:
Web giants such as Facebook and Twitter must automatically remove vile abuse aimed at women, Theresa May will demand today.
The Prime Minister will urge companies to utilise the same technology used to take down terrorist propaganda to remove rape threats and harassment.
Speaking at the G7 summit in Quebec, Mrs May will call on firms to do more to tackle content promoting and depicting violence against women and girls, including illegal violent pornography.
She will also demand the automatic removal of adverts that are linked to people-trafficking.
May will argue they must ensure women can use the web without fear of online rape threats, harassment, cyberstalking, blackmail or vile comments.
She will say: We know that technology plays a crucial part in advancing gender equality and empowering women and girls, but these benefits are being undermined by vile forms of online violence, abuse and harassment.
What is illegal offline is illegal online and I am calling on world leaders to take serious action to deal with this, just like we are doing in the UK with our commitment to legislate on online harms such as cyber-stalking and harassment.
In a world that is being ripped apart by identitarian intolerance of everyone else, it seems particularly unfair that men should be expected to happily put up with the fear of online threats, harassment, cyberstalking, blackmail or vile comments.
Surely laws should be written so that all people are treated totally equally.
Online platforms need to take responsibility for the content they host. They need to proactively tackle harmful behaviours and content. Progress has been made in removing illegal content, particularly terrorist material, but more needs to be done
to reduce the amount of damaging content online, legal and illegal.
We are developing options for increasing the liability online platforms have for illegal content on their services. This includes examining how we can make existing frameworks and definitions work better, as well as what the liability regime
should look like in the long-run.
Terms and Conditions
Platforms use their terms and conditions to set out key information about who can use the service, what content is acceptable and what action can be taken if users don't comply with the terms. We know that users frequently break these rules. In
such circumstances, the platforms' terms state that they can take action, for example they can remove the offending content or stop providing services to the user. However, we do not see companies proactively doing this on a routine basis. Too
often companies simply do not enforce their own terms and conditions.
Government wants companies to set out clear expectations of what is acceptable on their platforms in their terms, and then enforce these rules using sanctions when necessary. By doing so, companies will be helping users understand what is and is not acceptable.
We believe that it is right for Government to set out clear standards for social media platforms, and to hold them to account if they fail to live up to these. DCMS and Home Office will jointly work on the White Paper which will set out our
proposals for forthcoming legislation. We will focus on proposals which will bring into force real protections for users that will cover both harmful and illegal content and behaviours. In parallel, we are currently
assessing legislative options to modify the online liability regime in the UK, including both the smaller changes consistent with the EU's eCommerce directive, and the larger changes that may be possible when we leave the EU.
Is there an internet equivalent of a Freudian slip? Well if so, Google obliged with this listing on its Google
Perhaps Google should have pointed out that such a ban on false information would also usefully silence all politicians from all parties in the run-up to elections.
Politicians in France have proposed introducing a new law to fight fake news in the run up to an election next year. The draft law, designed to stop what the government calls manipulation of information in the run-up to elections, will be debated
in parliament Thursday with a view to it being put into action during next year's European parliamentary polls.
The idea for the bill came straight from President Emmanuel Macron , who was himself targeted during his 2017 campaign by online rumours that he was gay and had a secret bank account in the Bahamas.
Under the law, French authorities would be able to immediately halt the publication of information deemed to be false ahead of elections. Social networks would have to introduce measures allowing users to flag up false reports, pass their data on
such articles to authorities, and make public their efforts against fake news. And the law would authorise the state to take foreign broadcasters off the air if they were attempting to destabilise France - a measure seemingly aimed at Russian
state-backed outlet RT in particular.
The government claims measures will be built into the law to protect freedom of speech, with only reports that are manifestly false and that have gone viral - notably with the help of bots - taken down.
Update: French MPs criticise 'fake news' censorship law
The French government was accused by right and leftwing opponents of trying to create a form of thought police
and institute censorship, as parliament began debating Emmanuel Macron's proposed law to ban 'fake news' on the internet during election campaigns.
The draft law would allow political parties to complain about widely spread assertions deemed to be false or implausible and a French judge could immediately move to stop their publication.
President Macron has personally backed the censorship law after he complained his presidential campaign was targeted by online fake news rumours, including that he was gay and that he had a secret bank account in the Bahamas. He has claimed a law
was needed against the spread of 'fake news' in order to protect democracy.
The government wants the law to come into force before next spring's European parliament elections . It is likely to pass because Macron has a parliamentary majority.
The EU's plans to modernize copyright law in Europe are moving ahead. With a crucial vote coming up later this month, protests
from various opponents are on the rise as well. They warn that the proposed plans will result in Internet filters which threaten people's ability to freely share content online. According to Pirate Party MEP Julia Reda, these filters will hurt
regular Internet users, but also creators and businesses.
In September 2016, the European Commission published its proposal for a modernized copyright law. Among other things, it proposed measures to require online services to do more to fight piracy.
Specifically, Article 13 of the proposed Copyright Directive will require online services to track down and delete pirated content, in collaboration with rightsholders.
The Commission stressed that the changes are needed to support copyright holders. However, many legal scholars , digital activists , politicians , and members of the public worry that they will violate the rights of regular Internet users.
Last month the EU Council finalized the latest version of the proposal. This means that the matter now goes to the Legal Affairs Committee of the Parliament (JURI), which must decide how to move ahead. This vote is expected to take place in two weeks.
Although the term filter is commonly used to describe Article 13, it is not directly mentioned in the text itself .
According to Pirate Party Member of Parliament (MEP) Julia Reda , the filter keyword is avoided in the proposal to prevent a possible violation of EU law and the Charter of Fundamental Rights. However, the outcome is essentially the same.
In short, the relevant text states that online services are liable for any uploaded content unless they take effective and proportionate action to prevent copyright infringements, identified by copyright holders. That also includes preventing
these files from being reuploaded.
The latter implies some form of hash filtering and continuous monitoring of all user uploads. Several companies, including Google Drive, Dropbox, and YouTube, already have these types of filters, but many others don't.
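To illustrate what a crude version of such a filter looks like, here is a minimal sketch in Python. The blocklist entry is a hypothetical example (the digest of the bytes b"test"), not a real rightsholder database; it rejects any upload whose bytes exactly match a previously flagged file.

```python
import hashlib

# Hypothetical blocklist: SHA-256 digests of files flagged by rightsholders.
# (The entry below is just the digest of the bytes b"test", for illustration.)
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_blocked(upload: bytes) -> bool:
    """Return True if the upload's exact bytes match a flagged file."""
    return hashlib.sha256(upload).hexdigest() in BLOCKED_HASHES

def handle_upload(upload: bytes) -> str:
    # Hash matching cannot consider fair use, quotation rights, or licences:
    # it sees only bytes, so lawful uses are blocked along with unlawful ones.
    return "rejected" if is_blocked(upload) else "published"
```

Exact-hash matching like this is trivially evaded by re-encoding a file, which is why deployed systems such as YouTube's Content ID rely on fuzzier perceptual fingerprinting instead; and that fuzziness is precisely what produces the false positives critics describe.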
A main point of critique is that the automated upload checks will lead to overblocking, as they are often ill-equipped to deal with issues such as fair use.
The proposal would require platforms to filter all uploads by their users for potential copyright infringements -- not just YouTube and Facebook, but also services like WordPress, TripAdvisor, or even Tinder. We know from experience that these
algorithmic filters regularly make mistakes and lead to the mass deletion of legal uploads, Julia Reda tells TF.
Especially small independent creators frequently see their content taken down because others wrongfully claim copyright on their works. There are no safeguards in the proposal against such cases of copyfraud.
Besides affecting uploads of regular Internet users and smaller creators, many businesses will also be hit. They will have to make sure that they can detect and prevent infringing material from being shared on their systems.
This will give larger American Internet giants, who already have these filters in place, a competitive edge over smaller players and new startups, the Pirate Party MEP argues.
It will make those Internet giants even stronger, because they will be the only ones able to develop and sell the filtering technologies necessary to comply with the law. A true lose-lose situation for European Internet users, authors and
businesses, Reda tells us.
Based on the considerable protests in recent days, the current proposal is still seen as a clear threat by many.
In fact, the Save Your Internet campaign, backed by prominent organizations such as Creative Commons, EFF, and Open Media, is ramping up again. They urge the European public to reach out to their Members of Parliament before it's too late.
Should Article 13 of the Copyright Directive proposal be adopted, it will impose widespread censorship of all the content you share online. The European Parliament is the only one that can step in and Save your Internet, they write.
The full Article 13 text includes some language to limit its scope. The nature and size of online services must be taken into account, for example. This means that a small and legitimate niche service with a few dozen users might not be directly
liable if it operates without these anti-piracy measures.
Similarly, non-profit organizations will not be required to comply with the proposed legislation, although there are calls from some member states to change this.
In addition to Article 13, there is also considerable pushback from the public against Article 11, which is regularly referred to as the link tax.
At the moment, several organizations are planning a protest day next week, hoping to mobilize the public to speak out. A week later, following the JURI vote, it will be judgment day.
If they pass the Committee, the plans will progress towards the final vote on copyright reform next spring. This also means that they'll become much harder to stop or change. That has been done before, such as with ACTA, but achieving that type of momentum will be a tough challenge.
Lawmakers in Russia's State Duma have adopted a final draft of legislation that imposes fines on violations of Russia's ban on Internet
anonymizers that grant access to online content blocked by the state internet censor.
According to the bill, individuals who break the law will face fines of 5,000 rubles ($80), officials will face fines up to 50,000 rubles ($800), and legal entities could be fined up to 700,000 rubles ($11,230).
Internet search engines will also be required to connect to the Federal State Information System, which will list the websites banned in Russia. Failure to connect to this system can result in fines up to 300,000 rubles ($4,800).
Russia's law on VPN services and Internet anonymizers entered force on November 1, 2017. The Federal Security Service and other law enforcement agencies are authorized to designate websites and online services that violate Russia's Internet restrictions.
Open Rights Group today released figures showing that High Court blocking injunctions are being improperly administered by ISPs and rights holders.
A new tool added to its blocked.org.uk
project examines over 1,000 domains blocked under the UK's 30 injunctions against over 150 services.
ORG found 37% of those domains are blocked in error, or without any legal basis. The majority of the domains blocked are parked domains, or domains no longer used by infringing services. One Sci-Hub domain is blocked without an injunction, and a likely trademark-infringing site is also blocked without an injunction.
However, the full list of blocked domains is believed to run to around 2,500 domains and is not made public, so ORG are unable to check for all possible mistakes.
Jim Killock, Executive Director of Open Rights Group said:
It is not acceptable for a legal process to result in nearly 40% maladministration. These results show a great deal of carelessness.
We expect ISPs and rights holders to examine our results and remove the errors we have found as swiftly as possible.
We want ISPs to immediately release lists of previously blocked domains, so we can check blocks are being removed by everyone.
Rights holders must make public exactly what is being blocked, so we can ascertain how else these extremely wide legal powers are being applied.
ORG's conclusions are:
The administration process of adding and subtracting domains to be blocked is very poor
Keeping the lists secret makes it impossible to check errors
Getting mistakes corrected is opaque. The ISP pages suggest you go to court.
Spotify recently shared a new policy around hate content and conduct. And while we believe our intentions were good, the language
was too vague, we created confusion and concern, and didn't spend enough time getting input from our own team and key partners before sharing new guidelines.
It's important to note that our policy had two parts. The first was related to promotional decisions in the rare cases of the most extreme artist controversies. As some have pointed out, this language was vague and left too many elements open to
interpretation. We created concern that an allegation might affect artists' chances of landing on a Spotify playlist and negatively impact their future. Some artists even worried that mistakes made in their youth would be used against them.
That's not what Spotify is about. We don't aim to play judge and jury. We aim to connect artists and fans -- and Spotify playlists are a big part of how we do that. Our playlist editors are deeply rooted in their respective cultures, and their
decisions focus on what music will positively resonate with their listeners. That can vary greatly from culture to culture, and playlist to playlist. Across all genres, our role is not to regulate artists. Therefore, we are moving away from
implementing a policy around artist conduct.
The second part of our policy addressed hate content. Spotify does not permit content whose principal purpose is to incite hatred or violence against people because of their race, religion, disability, gender identity, or sexual orientation. As
we've done before, we will remove content that violates that standard. We're not talking about offensive, explicit, or vulgar content -- we're talking about hate speech.
We will continue to seek ways to impact the greater good and further the industry we all care so much about. We believe Spotify has an opportunity to help push the broader music community forward through conversation, collaboration and action.
We're committed to working across the artist and advocacy communities to help achieve that.
Instagram has censored the hashtag #stripper and several related keywords that dancers use to find each other and organize
online. Now, sex workers are taking to social media to spread the word, decry censorship, and suggest workarounds.
Currently, when you search Instagram for #stripper or #strippers, you are given a preview of just a couple top posts in the category. But if you click through to view the entire hashtag, the following message appears:
Recent posts from #strippers are currently hidden because the community has reported some content that may not meet Instagram's community guidelines.
The same thing was reportedly happening until very recently with a handful of related hashtags, including #yesastripper, #stripperstyle, and #stripperlife--but those appear to be back in action, demonstrating how quickly the sex work community has
to adapt and change.
Instagram has yet to comment about the censorship, but it is surely because of the recent US internet censorship law FOSTA, which would make Instagram responsible should any posts to #stripper be used to facilitate sex trafficking. As Instagram is unable to vet all such postings for possible trafficking, the only practical option is to ban all posts about sex work.
By Thursday morning, Instagram had apparently backed down, telling Jezebel that, the hashtag #stripper can again be used and seen by the community in the spirit in which they are intended.
Instagram sent a statement on Thursday effectively rescinding the ban:
The safety of our community is our number one priority and we spend a lot of time thinking about how we can create a safe and open environment for everyone, Instagram said in the statement. This includes constantly monitoring hashtag behavior by
using a variety of different signals, including community member reports. Access to recent posts and following hashtags are sometimes restricted based on content being posted with those hashtags. The hashtag #stripper can again be used and seen
by the community in the spirit in which they are intended.
The Advertising Standards Authority (ASA) has asked search engines Google and Bing to remove some listings and ads for ticketing site Viagogo, which controversially employs several misleading sales ploys.
ASA today judged the site to be misleading consumers by failing to be transparent about fees, wrongfully using the term official site to suggest it was an authorised ticket agent and falsely claiming it could 100% guarantee entry to events.
The ASA had previously issued a warning to Viagogo about editing such claims on its website and advertising content. However, ASA chief executive Guy Parker said it failed to respond by the 29 May deadline.
The ASA has now referred the Geneva-based company to National Trading Standards (NTS). In addition, it issued requests to search engines Google and Bing to remove any links which would take a consumer through to a page containing non-compliant content.
NTS has since opened an investigation into Viagogo, which could see the company issued fines or face legal action against staff.
Meanwhile, digital minister Margot James has also urged consumers to boycott the company.
A demonstration in Moscow against the Russian government's effort to block the messaging app Telegram quickly morphed on Monday
into a protest against President Vladimir Putin, with thousands of participants chanting against the Kremlin's increasingly restrictive censorship regime.
The key demand of the rally, with the hashtag #DigitalResistance, was that the Russian internet remain free from government censorship.
One speaker, Sergei Smirnov, editor in chief of the online news service Mediazona, asked the crowd: Is he to blame for blocking Telegram? The crowd responded with a resounding Yes!
Telegram is just the first step, Smirnov continued. If they block Telegram, it will be worse later. They will block everything. They want to block our future and the future of our children.
Russian authorities blocked Telegram after the company refused to hand over decryption keys. The censors also briefly blocked thousands of other websites sharing hosting facilities with Telegram, in the hope of pressuring the hosts into taking down Telegram.
The censorship effort has provoked anger and frustration far beyond the habitual supporters of the political opposition, especially in the business sector, where the collateral damage continues to hurt the bottom line. There has been a flood of
complaints on Twitter and elsewhere that the government broke the internet.
Russia's Internet commissioner, Dmitry Marinichev, is calling on the Attorney General's Office to investigate the legality and validity of Roskomnadzor's actions against Telegram, arguing that the federal censor has caused undue harm to the
country's business interests, by blocking millions of IP addresses in its campaign against the instant messenger, and disrupting hundreds of other online services.
Marinichev's suggestion is mentioned in the annual report submitted to Vladimir Putin by Russian Entrepreneurs' Rights Commissioner Boris Titov.
Alexander Zharov, the head of Russia's state internet censor, Roskomnadzor, has said that the government's decision to
block the instant messenger Telegram is justified because federal agents have reliably established that all recent terrorist attacks in Russia and the near abroad were coordinated through Telegram.
Zharov also accused Telegram of using other online services as human shields by redirecting its traffic to their servers and forcing Roskomnadzor to disrupt a wide array of websites, when it cuts access to the new IP addresses Telegram adopts.
Zharov claimed that Telegram's functionality has degraded by 15 to 30% in Russia, due to Roskomnadzor's blocking efforts.
Zharov added that the Federal Security Service has expressed similar concerns about the push-to-talk walkie-talkie app Zello, which Roskomnadzor banned in April 2017.
Update: Apple asked to block Telegram from its app store
The secure messaging app Telegram was banned in Russia back in April, but so far, it's still available in the Russian version of
Apple's App Store. Russia is now asking Apple to remove the app from the App Store. In a supposedly legally binding letter to Apple, authorities say they're giving the company one month to comply before they enforce punishment for violations.
Despite Russian censorship efforts so far, the majority of users in Russia are still accessing the app, the Kremlin's censorship arm Roskomnadzor announced yesterday. Only 15 to 30% of Telegram's operations have been disrupted so far.
Russian internet censors also say they are in talks with Google to ban the app from Google Play.
Beginning on May 10, Spotify users will no longer be able to find R. Kelly's music on any of the streaming service's editorial or algorithmic playlists. Under the terms of a new public hate content and hateful conduct policy Spotify is putting into effect, the company will no longer promote the R&B singer's music in any way, removing his songs from flagship playlists like RapCaviar, Discover Weekly or New Music Friday, for example, as well as its other genre- or mood-based playlists.
"We are removing R. Kelly's music from all Spotify owned and operated playlists and algorithmic recommendations such as Discover Weekly," Spotify told Billboard in a statement. "His music will still be available on the
service, but Spotify will not actively promote it. We don't censor content because of an artist's or creator's behavior, but we want our editorial decisions -- what we choose to program -- to reflect our values. When an artist or creator does
something that is especially harmful or hateful, it may affect the ways we work with or support that artist or creator."
Over the past several years, Kelly has been accused by multiple women of sexual violence, coercion and running a "sex cult," including two additional women who came forward to Buzzfeed this week. Though he has never been convicted of a
crime, he has come under increasing scrutiny over the past several weeks, particularly with the launch of the #MuteRKelly movement at the end of April. Kelly has vociferously defended himself, saying the accusations are an "attempt to distort my character and to destroy my legacy." And while RCA Records has thus far not dropped Kelly from his recording contract, Spotify has distanced itself from promoting his music.
Earlier this month, Swedish streaming giant Spotify announced that it would be introducing a policy on Hate Content and Hateful Conduct. The company left the policy intentionally vague, which allowed Spotify to remove artists from its playlists at will. When we are alerted to content that violates our policy, we may remove it (in consultation with rights holders) or refrain from promoting or playlisting it on our service, the company's PR team wrote in a statement at the time. They added that R. Kelly -- who, over the course of his career, has been repeatedly accused of sexual misconduct -- would be among those affected.
Now, following a backlash from artists and label executives, Bloomberg reports that Spotify has decided to back off the policy a little. That means restoring the rapper XXXTentacion's music to its playlists, even though he was charged with battering a pregnant woman.
Part of the blowback has to do with the broad scope of the company's content policy, which seemed to leave the door open to policing artists' personal lives and conduct. We've also thought long and hard about how to handle content that is not hate
content itself, but is principally made by artists or other creators who have demonstrated hateful conduct personally. So, in some circumstances, when an artist or creator does something that is especially harmful or hateful (for example, violence
against children and sexual violence), it may affect the ways we work with or support that artist or creator.
Spotify says R Kelly will remain banned from its playlists.
Age verification has been hanging over us for several years now - and enforcement has been put back to the end of 2018 after it was originally planned to start last month.
I'm enormously encouraged by how many people took the opportunity to speak up and reply to the BBFC consultation on the new regulations .
Over 500 people submitted a response using the tool provided by the Open Rights Group , emphasising the need for age verification tech to be held to robust privacy and security standards.
I'm told that around 750 consultation responses were received by the BBFC overall, which means that a significant majority highlighted the regulatory gap between the powers of the BBFC to regulate adult websites, and the powers of the Information
Commissioner to enforce data protection rules.
A woman has been convicted for performing offensive songs that included lyrics denying the Holocaust.
Alison Chabloz sang her compositions at a meeting of the far-right London Forum group.
A judge at Westminster Magistrates' Court found that Chabloz had violated laws criminalising offensive messages, and that she had intended to insult Jewish people.
District judge John Zani delayed her sentencing until 14 June but told the court: On the face of it this does pass the custody threshold.
Chabloz, a Swiss-British dual national, had uploaded tunes to YouTube including one defining the Nazi death camp Auschwitz as a theme park just for fools and the gas chambers a proven hoax. The songs remain available on YouTube.
The songs were partly set to traditional Jewish folk music, with lyrics like: Did the Holocaust ever happen? Was it just a bunch of lies? Seems that some intend to pull the wool over our eyes.
Adrian Davies, defending, previously told the judge his ruling would be a landmark one, setting a precedent on the exercise of free speech.
But Judge Zani said Chabloz failed by some considerable margin to persuade the court that her right to freedom of speech should provide her with immunity from prosecution. He said:
I am entirely satisfied that she will have intended to insult those to whom the material relates. Having carefully considered all evidence received and submissions made, I am entirely satisfied that the prosecution has proved beyond reasonable
doubt that the defendant is guilty.
Chabloz was convicted of two counts of causing an offensive, indecent or menacing message to be sent over a public communications network after performing two songs at a London Forum event in 2016. As there was nothing indecent or menacing in the songs, Chabloz was in effect convicted for an offensive message.
See The Britisher for an eloquent and passionate defence of free speech.
Pornhub, the dominant force amongst the world's porn websites, has sent a challenge to the BBFC's porn censorship regime by offering a free workaround to any porn viewer who would prefer to hide their tracks rather than open themselves up to the dangers of offering up their personal ID to age verifiers.
And rather bizarrely Pornhub are one of the companies offering age verification services to porn sites who want to comply with UK age verification requirements.
Pornhub describes its VPN service with references to UK censorship:
Browse all websites anonymously and without restrictions.
VPNhub helps you bypass censorship while providing secure and private access to the Internet. Access all of your favorite websites without fear of being monitored.
Hide your information and surf the Internet without a trace.
Enjoy the pleasure of protection with VPNhub. With full data encryption and guaranteed anonymity, go with the most trusted VPN to protect your privacy anywhere in the world.
Free and Unlimited
Enjoy totally free and unlimited bandwidth on your device of choice.
Culture Secretary Matt Hancock has issued the following press release from the Department for Digital, Culture, Media and Sport:
New laws to make social media safer
New laws will be created to make sure that the UK is the safest place in the world to be online, Digital Secretary Matt Hancock has announced.
The move is part of a series of measures included in the government's response to the Internet Safety Strategy green paper, published today.
The Government has been clear that much more needs to be done to tackle the full range of online harm.
Our consultation revealed users feel powerless to address safety issues online and that technology companies operate without sufficient oversight or transparency. Six in ten people said they had witnessed inappropriate or harmful content online.
The Government is already working with social media companies to protect users and while several of the tech giants have taken important and positive steps, the performance of the industry overall has been mixed.
The UK Government will therefore take the lead, working collaboratively with tech companies, children's charities and other stakeholders to develop the detail of the new legislation.
Matt Hancock, DCMS Secretary of State said:
Digital technology is overwhelmingly a force for good
across the world and we must always champion innovation and change for the better. At the same time I have been clear that we have to address the Wild West elements of the Internet through legislation, in a way that supports innovation. We
strongly support technology companies to start up and grow, and we want to work with them to keep our citizens safe.
People increasingly live their lives through online platforms so it's more important than ever that people are safe and parents can have confidence they can keep their children from harm. The measures we're taking forward today will help make
sure children are protected online and balance the need for safety with the great freedoms the internet brings just as we have to strike this balance offline.
DCMS and Home Office will jointly work on a White Paper with other government departments, to be published later this year. This will set out legislation to be brought forward that tackles a range of both legal and illegal harms, from
cyberbullying to online child sexual exploitation. The Government will continue to collaborate closely with industry on this work, to ensure it builds on progress already made.
Home Secretary Sajid Javid said:
Criminals are using the internet to further their exploitation and abuse of children, while terrorists are abusing these platforms to recruit people and incite atrocities. We need to protect our communities from these heinous crimes and vile
propaganda and that is why this Government has been taking the lead on this issue.
But more needs to be done and this is why we will continue to work with the companies and the public to do everything we can to stop the misuse of these platforms. Only by working together can we defeat those who seek to do us harm.
The Government will be considering where legislation will have the strongest impact, for example whether transparency or a code of practice should be underwritten by legislation, but also a range of other options to address both legal and illegal harms.
We will work closely with industry to provide clarity on the roles and responsibilities of companies that operate online in the UK to keep users safe.
The Government will also work with regulators, platforms and advertising companies to ensure that the principles that govern advertising in traditional media -- such as preventing companies targeting unsuitable advertisements at children -- also
apply and are enforced online.
It seems that the latest call for internet censorship is driven by some sort of revenge for having been snubbed by the tech companies themselves.
The culture secretary said he does not have enough power to police social media firms after admitting only four of 14 invited to talks showed up.
Matt Hancock told the BBC it had given him a big impetus to introduce new laws to tackle what he has called the internet's Wild West culture.
He said self-policing had not worked and legislation was needed.
He told BBC One's Andrew Marr Show, presented by Emma Barnett, that the government just doesn't know how many of the millions of children using social media were not old enough for an account, and that he was very worried about age
verification. He told the programme he hopes we get to a position where all users of social media have to have their age verified.
Two government departments are working on a White Paper expected to be brought forward later this year. Asked about the same issue on ITV's Peston on Sunday, Hancock said the government would be legislating in the next couple of years
because we want to get the details right.
Update: Internet safety just means internet censorship
This week, Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announced the launch of a consultation on
new legislative measures to clean up the Wild West elements of the Internet. In response, music group BPI says the government should use the opportunity to tackle piracy with advanced site-blocking measures, repeat infringer policies, and new
responsibilities for service providers.
This week, the Government published its response to the Internet Safety Strategy green paper , stating unequivocally that more needs to be done to tackle online harm. As a result, the Government will now carry through with its threat to introduce
new legislation, albeit with the assistance of technology companies, children's charities and other stakeholders.
While emphasis is being placed on hot-button topics such as cyberbullying and online child exploitation, the Government is clear that it wishes to tackle the full range of online harms. That has been greeted by UK music group BPI with a request
that the Government introduces new measures to tackle Internet piracy.
In a statement issued this week, BPI chief executive Geoff Taylor welcomed the move towards legislative change and urged the Government to encompass the music industry and beyond. He said:
This is a vital opportunity to protect consumers and boost the UK's music and creative industries. The BPI has long pressed for internet intermediaries and online platforms to take responsibility for the content that they promote to users.
Government should now take the power in legislation to require online giants to take effective, proactive measures to clean illegal content from their sites and services. This will keep fans away from dodgy sites full of harmful content and
prevent criminals from undermining creative businesses that create UK jobs.
The BPI has published four initial requests, each of which provides food for thought.
The demand to establish a new fast-track process for blocking illegal sites is not entirely unexpected, particularly given the expense of launching applications for blocking injunctions at the High Court.
The BPI has taken a large number of actions against individual websites -- 63 injunctions are in place against sites that are wholly or mainly infringing and whose business is simply to profit from criminal activity, the BPI says.
Those injunctions can be expanded fairly easily to include new sites operating under similar banners or facilitating access to those already covered, but it's clear the BPI would like something more streamlined. Voluntary schemes, such as the one
in place in Portugal , could be an option but it's unclear how troublesome that could be for ISPs. New legislation could solve that dilemma, however.
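Whatever form a fast-track process takes, the underlying mechanism at UK ISPs is typically resolver-level blocking: the ISP's DNS server simply refuses to answer for domains on the court-ordered list. A minimal sketch of the idea (the domains and address below are invented placeholders, and a real resolver would perform an actual lookup):

```python
# Court-ordered blocklist; these domain names are invented placeholders.
BLOCKED = {"example-piracy-site.invalid", "another-blocked-site.invalid"}

def resolve(domain: str) -> str:
    """Answer a DNS query, unless the domain is on the blocklist."""
    if domain in BLOCKED:
        raise PermissionError(f"{domain} is blocked by court order")
    return "203.0.113.10"  # stand-in for a real DNS lookup result

print(resolve("news.example"))  # → 203.0.113.10
```

Expanding an injunction to cover proxies and mirrors then amounts to adding entries to the set, which is why rightsholders favour a streamlined process over fresh High Court applications for each new domain.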
Another big thorn in the side for groups like the BPI are people and entities that post infringing content. The BPI is very good at taking these listings down from sites and search engines in particular (more than 600 million requests to date) but
it's a game of whac-a-mole the group would rather not engage in.
With that in mind, the BPI would like the Government to impose new rules that would compel online platforms to stop content from being re-posted after it's been taken down while removing the accounts of repeat infringers.
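The BPI does not specify a mechanism, but such stay-down obligations are generally implemented as content fingerprinting: the platform keeps fingerprints of everything it has removed and checks new uploads against them. A minimal sketch using exact SHA-256 digests (function names are illustrative; real systems use perceptual hashes, since a one-byte change defeats an exact match):

```python
import hashlib

# Registry of fingerprints for content that has already been taken down.
# A real platform would persist this and use perceptual hashing.
takedown_hashes = set()

def register_takedown(content: bytes) -> None:
    """Record a fingerprint of removed content so re-posts can be caught."""
    takedown_hashes.add(hashlib.sha256(content).hexdigest())

def allow_upload(content: bytes) -> bool:
    """Reject uploads whose fingerprint matches previously removed content."""
    return hashlib.sha256(content).hexdigest() not in takedown_hashes

register_takedown(b"previously removed video bytes")
print(allow_upload(b"previously removed video bytes"))  # → False (blocked)
print(allow_upload(b"unrelated video bytes"))           # → True
```

Note that the check is purely mechanical: it cannot tell whether a re-upload is infringement or a lawful quotation, which is precisely the problem with making such filtering mandatory.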
Thirdly, the BPI would like the Government to introduce penalties for online operators who do not provide transparent contact and ownership information. The music group isn't any more specific than that, but the suggestion is that operators of
some sites have a tendency to hide in the shadows, something which frustrates enforcement activity.
Finally, and perhaps most interestingly, the BPI is calling on the Government to legislate for a new duty of care for online intermediaries and platforms. Specifically, the BPI wants effective action taken against businesses that use the Internet
to encourage consumers to access content illegally.
While this could easily encompass pirate sites and services themselves, this proposal has the breadth to include a wide range of offenders, from people posting piracy-focused tutorials on monetized YouTube channels to those selling fully-loaded
Kodi devices on eBay or social media.
Overall, the BPI clearly wants to place pressure on intermediaries to take action against piracy when they're in a position to do so, and particularly those who may not have shown much enthusiasm towards industry collaboration in the past.
Legislation in this Bill, to take powers to intervene with respect to operators that do not co-operate, would bring focus to the roundtable process and ensure that intermediaries take their responsibilities seriously, the BPI says.
Democrats in the United States House of Representatives have gathered 90 of the 218 signatures they'll need to force a vote on
reversing the FCC's repeal of net neutrality rules, while Federal Communications Commission Chair Ajit Pai has already predicted that the House effort will fail and large telecommunications companies publicly expressed their anger at last
Wednesday's Senate vote to keep the Obama-era open internet rules in place.
Led by Pai, a Donald Trump appointee, the FCC voted 3-2 along party lines in December to scrap the net neutrality regulations, effectively creating an internet landscape dominated by whichever companies can pay the most to get into the online fast lanes.
Telecommunications companies could also choose to block some sites simply based on their content, a threat to which the online porn industry would be especially vulnerable, given that five states have either passed or are considering legislation
labeling porn a public health hazard.
While the House Republican leadership has taken the position that the net neutrality issue should not even come to a vote, on May 17 Pennsylvania Democrat Mike Doyle introduced a discharge petition that would force the issue to the House floor. A
discharge petition needs 218 signatures of House members to succeed in forcing the vote. As of Monday morning, May 21, Doyle's petition had received 90 signatures . The effort would need all 193 House Democrats plus 25 Republicans to sign on, in
order to bring the net neutrality rollback to the House floor.
For its updated news application, Google is claiming it is using artificial intelligence as part of an effort to weed out
disinformation and feed users with viewpoints beyond their own filter bubble.
Google chief Sundar Pichai, who unveiled the updated Google News earlier this month, said the app now surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. It marks Google's latest
effort to be at the centre of online news and includes a new push to help publishers get paid subscribers through the tech giant's platform.
In reality, Google has just banned news from the likes of the Daily Mail, whilst all the 'trusted sources' are politically correct papers such as the Guardian and the Independent.
According to product chief Trystan Upstill, the news app uses the best of artificial intelligence to find the best of human intelligence - the great reporting done by journalists around the globe. While the app will enable users to get
personalised news, it will also include top stories for all readers, aiming to break the so-called filter bubble of information designed to reinforce people's biases.
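Google has not published its ranking formula, but the idea of top stories for all readers can be illustrated as a weighted blend of a personal-affinity score and a global-popularity score. All story names, scores, and the 0.7 weighting below are invented for the example:

```python
# Toy illustration of blending personalised scores with global top stories.
# Every number and story name here is invented for the example.
stories = {
    "royal wedding":  {"personal": 0.1, "global": 0.9},
    "cup final":      {"personal": 0.2, "global": 0.8},
    "privacy ruling": {"personal": 0.9, "global": 0.3},
}

def blended_rank(stories, weight_personal=0.7):
    """Rank story names by a mix of personal affinity and global popularity."""
    def score(s):
        return weight_personal * s["personal"] + (1 - weight_personal) * s["global"]
    return sorted(stories, key=lambda name: score(stories[name]), reverse=True)

print(blended_rank(stories))  # → ['privacy ruling', 'cup final', 'royal wedding']
```

Setting weight_personal to 0 reproduces a pure top-stories feed; setting it to 1 reproduces the pure filter bubble. Where Google places itself on that dial is exactly what remains opaque.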
Nicholas Diakopoulos, a Northwestern University professor specialising in computational and data journalism, said the impact of Google's changes remain to be seen. Diakopoulos said algorithmic and personalised news can be positive for engagement
but may only benefit a handful of news organisations. His research found that Google concentrates its attention on a relatively small number of publishers; it's quite concentrated, he said. Google's effort to identify and prioritise trusted news
sources may also be problematic, according to Diakopoulos. Maybe it's good for the big guys, or the (publishers) who have figured out how to game the algorithm, he said. But what about the local news sites, what about the new news sites that don't
have a long track record?
I tried it out and no matter how many times I asked it not to provide stories about the royal wedding and the cup final, it just served up more of the same. And indeed as Diakopoulos said, all it wants to do is push news stories from the
politically correct papers, most notably the Guardian. I can't see it proving very popular. I'd rather have an app that feeds me what I actually like, not what I should like.