Amazon has banned a book co-authored by Tommy Robinson. Mohammed's Koran: Why Muslims Kill for Islam, which he wrote with Peter McLoughlin, has now been removed from the store. According to McLoughlin, the book was removed from the Amazon database last month, and even second-hand copies can no longer be sold. Despite scathing reviews, the authors claimed it was the No. 1 best-selling exegesis of the Koran.
Amazon joins a long list of internet giants that have banned Tommy Robinson with only YouTube currently giving him a platform.
Robinson has accused major companies and media outlets, including the BBC, of censorship for removing his content, which he claims should be protected under freedom of speech. He wrote:
This is the twenty-first century equivalent of the Nazis taking out the books from university libraries and burning them.
A spokesman for Amazon said:
As a bookseller, we provide our customers with access to a variety of viewpoints, including books that some customers may find objectionable. That said, we reserve the right not to sell certain inappropriate content.
It is difficult to see how such censorship will soothe a divided society. It will surely mean that people leaning towards progressive politics see less that opposes their viewpoint. But on the other side of the coin, decisions like this will add to the anger of the substantial number of people sympathetic to Tommy Robinson's views. They will likely feel that silencing Tommy Robinson is equivalent to silencing his supporters.
The Counter-Terrorism Internet Referral Unit (CTIRU) was set up in 2010 by ACPO (and run by the Metropolitan Police) to remove unlawful terrorist material content from the Internet, with a specific focus on UK based material.
CTIRU works with internet platforms to identify content which breaches their terms of service and requests that they remove the content.
CTIRU also compiles a list of URLs for material hosted outside the UK, which are blocked on networks of the public estate.
As of December 2017, CTIRU is linked to the removal of 300,000 pieces of illegal terrorist material from the internet.
Censor or not censor?
The CTIRU considers its scheme to be voluntary, but detailed notification under the e-Commerce Directive has legal effect, as it may strip a platform of liability protection. A platform that receives a well-formed notification may be held to have "actual knowledge" of potentially criminal material, with the result that it would be regarded in law as the publisher from that point on.
At volume, any agency will make mistakes. The CTIRU is said to be reasonably accurate: platforms say they decline only 20 or 30% of its requests. That still leaves considerable scope for error. Errors could unduly restrict the speech of individuals, including journalists, academics, commentators and others who hold normal, legitimate opinions.
A handful of CTIRU notices have been made public via the Lumen transparency project. Some of these reveal very poor decisions to send a notification. In one case, UKIP Voices, an obviously fake, unpleasant and defamatory blog portraying the UKIP party as cartoon figures, and also as vile racists and homophobes, was considered to be an act of violent extremism. Two notices were filed by the CTIRU to have it removed for extremism. However, it is hard to see how the site could fall within the CTIRU's remit, as its content is clearly fictional.
In other cases, we believe the CTIRU had requested removal of extremist material that had been posted in an academic or journalistic context.
Some posters, for instance at wordpress.com, are notified by the service's owner, Automattic, that the CTIRU has asked for content to be removed. This gives a user greater scope to contest or object to a request. However, the CTIRU is not held to account for bad requests. Most people will find it impossible to stop the CTIRU from making requests to remove lawful material, which might still be actioned by companies, even though the CTIRU would be attempting to remove legal material, which is clearly beyond its remit.
When content is removed, there is no requirement to tell people viewing the content that it has been removed because it may be unlawful, which laws it may have broken, or that the police asked for it to be removed. People who may have seen the content, or who return to view it again, are given no warning that it may have been intended to draw them into illegal and dangerous activities, nor any advice about how to seek help.
There is also no external review, as far as we are aware. External review would help limit mistakes. Companies regard the CTIRU as quite accurate, and cite a 70 or 80% success rate for its requests. That still leaves potentially a lot of requests that should not have been filed, and that might not have been accepted if put before a legally trained and independent professional for review.
As many companies perform little or no review, and requests for the same content are filed to many companies, where it will sometimes be removed in error and sometimes not, any errors at all should be concerning.
Crime or not crime?
The CTIRU is organised as part of a counter-terrorism programme, and claims its activities warrant operating in secrecy, including rejecting freedom of information requests on the grounds of national security and the detection and prevention of crime.
However, its work does not directly relate to specific threats or attempt to prevent crimes. Rather, it is aimed at frustrating criminals by giving them extra work to do, and at reducing the availability of material deemed to be undesirable.
Taking material down via notification runs against the principles of normal criminal investigation. Firstly, it means that the criminal is "tipped off" that someone is watching what they are doing. Some platforms forward
notices to posters, and the CTIRU does not suggest that this is problematic.
Secondly, even if the material is archived, a notification results in the destruction of evidence. Account details, IP addresses and other evidence normally vital for investigations are destroyed.
This suggests that law enforcement has little interest in prosecuting the posters of the content at issue. Enforcement agencies are more interested in the removal of content, potentially prioritised on political rather than law
enforcement grounds, as it is sold by politicians as a silver bullet in the fight against terrorism.
Beyond these considerations, because there is an impact on free expression if material is removed, and because police may make mistakes, their work should be seen as relating to content removal rather than as a secretive matter.
Little is known about the CTIRU's work, but it claims to be removing up to 100,000 "pieces of content" from around 300 platforms annually. This statistic is regularly quoted to parliament, and is given as an indication of the major platforms' failure to remove content responsibly. It has therefore had a great deal of influence on the public policy agenda.
However, the statistic is inconsistent with transparency reports at major platforms, where we would expect most of the takedown notices to be filed. The CTIRU insists that its figure is based on individual URLs removed. If so, much
further analysis is needed to understand the impact of these URL removals, as the implication is that they must be hosted on small, relatively obscure services.
Additionally, the CTIRU claims that there are no other management statistics routinely created about its work. This seems somewhat implausible, but also, assuming it is true, negligent. For instance, the CTIRU should know its success
and failure rate, or the categorisation of the different organisations or belief systems it is targeting. An absence of collection of routine data implies that the CTIRU is not ensuring it is effective in its work. We find this position, produced in
response to our Freedom of Information requests, highly surprising and something that should be of interest to parliamentarians.
Lack of transparency increases the risks of errors and bad practice at the CTIRU, and reduces public confidence in its work. Given the government's legitimate calls for greater transparency on these matters at platforms, it should
apply the same standards to its own work.
Both government and companies can improve transparency at the CTIRU. The government should provide specific oversight, much as CCTV and biometrics each have a Commissioner. Companies should publish notifications, redacted if necessary, to the Lumen database or elsewhere. Companies should also make the full notifications available for analysis to any suitably qualified academic, using the least restrictive agreements practical.
Tommy Robinson has been permanently banned from Facebook and its sister website Instagram. In a blog post, Facebook said:
When ideas and opinions cross the line and amount to hate speech that may create an environment of intimidation and exclusion for certain groups in society -- in some cases with potentially dangerous offline implications -- we take
action. Tommy Robinson's Facebook page has repeatedly broken these standards, posting material that uses dehumanizing language and calls for violence targeted at Muslims. He has also behaved in ways that violate our policies around organized hate.
Robinson is already banned from Twitter and the decision to cut him off from Instagram and Facebook will leave him reliant on YouTube as the only major online platform to provide him with a presence.
The ban comes a month after Facebook issued a final written warning against Robinson, warning him that he would be removed from its platform permanently if he continued to break the company's hate speech policies.
Mainstream outlets have struggled to deal with Robinson. When he was interviewed by Sky News last year, Robinson responded by uploading an unedited video of the discussion, showing that Sky News had in fact misled viewers by mixing and matching questions and answers to make Robinson look bad. The video became an online success and was shared far more widely online than the original interview.
Robinson adopted a similar tactic with the BBC's Panorama, which is investigating the far-right activist. Two weeks ago, Robinson agreed to be interviewed by the programme, only to turn the tables on reporter John Sweeney by revealing he had sent an
associate undercover to film the BBC reporter.
Several other accounts were removed from Facebook on Tuesday, including one belonging to former Breitbart London editor Raheem Kassam.
The BBC said in a statement: We received complaints following the third-party release of secretly recorded material related to a BBC Panorama investigation.
BBC Panorama is investigating Tommy Robinson, whose real name is Stephen Yaxley-Lennon. The BBC strongly rejects any suggestion that our journalism is faked or biased. Any programme we broadcast will adhere to the BBC's strict
editorial guidelines. BBC Panorama's investigation will continue.
John Sweeney made some offensive and inappropriate remarks whilst being secretly recorded, for which he apologises. The BBC has a strict expenses policy and the drinks bill in this video was paid for in full by John.
Facebook has restored several RT-linked pages a week after blocking them without prior notice. The pages were only freed up after their administrators posted data about their management and funding.
The Facebook pages of InTheNow, Soapbox, Back Then and Waste-Ed -- all operated by the Germany-based company Maffick Media -- were made accessible again as of Monday evening.
Facebook said in a statement at the time of the ban that it wanted the pages' administrators to reveal their ties to Russia to their audience in the name of greater transparency. Facebook's measure followed a CNN report, which ludicrously accused the pages of concealing their ties to the Kremlin, even though their administrators had never actually made a secret of their relations to Ruptly and RT. In fact, RT is very blatantly a propaganda channel supporting Russia.
Maffick CEO Anissa Naouai revealed that the social media giant agreed to unblock the pages, but only after their administration "updated our 'About' section, in a manner NO other page has been required to do". The accounts now indeed feature information related to their funding and management, visible under the pages' logos.
YouTube is continuing to take down drill music videos at the request of London police. The Metropolitan Police has repeatedly argued that the underground rap genre is partly responsible for a spate of knife attacks, linking the violence to its lyrics.
As of last month, the police had requested the removal of 129 videos, of which the music sharing platform deleted 102. This purge has escalated since May last year, at which point the Press Association reported that police had requested 50 to 60 videos be removed over the course of two years, and YouTube, in response, deleted 30. Some of the videos that were removed later resurfaced on Pornhub.
Mike West heads a London police unit that has compiled a database of around 1,900 drill videos which, he told the Press Association, "generate purely a violent retaliatory response".
Last month police closed a landmark case against Skengdo and AM, two of the biggest names in the UK drill scene. The duo pleaded guilty to breaching a gang injunction by performing their song Attempted 1.0 during a sold-out concert at Koko, London. They received suspended nine-month jail sentences, making it the first time in British history that artists have been sentenced to prison for performing a song.
A musician found guilty of broadcasting grossly offensive anti-Semitic songs has had her conviction upheld.
Alison Chabloz has written many politically incorrect, humorous and insulting songs, often targeted at Jews but also more generally at the PC establishment. The songs have been published on many internet platforms, including YouTube.
In May she was convicted of three charges relating to the songs and was given a suspended jail sentence by magistrates, against which she appealed.
A judge at Southwark Crown Court has upheld her conviction, ruling that the content was "particularly repellent". In the songs, Chabloz suggested the Holocaust was "a bunch of lies" and referred to Auschwitz as a "theme park".
Chabloz was convicted of two counts of sending an offensive, indecent or menacing message through a public communications network and a third charge relating to a song on YouTube.
She was sentenced to 20 weeks' imprisonment, suspended for two years and banned from social media for 12 months.
During the appeal Adrian Davies, defending, told judge Christopher Hehir: It would be a very, very strong thing to say that a criminal penalty should be imposed on someone for singing in polemical terms about matters on which she feels so strongly.
The case started as a private prosecution by the Campaign Against Anti-Semitism before the Crown Prosecution Service took over. The group's chairman, Gideon Falter, said: This is the first conviction in the UK over Holocaust denial on social media.
Two men who breached an injunction banning them from making drill music have been given suspended jail sentences of nine months each.
The ruling comes as Scotland Yard continues its controversial crackdown on the rap genre, a strategy which has attracted significant criticism from drill fans.
The Metropolitan Police has repeatedly blamed the music genre for rising knife crime in London and has launched a wide-ranging crackdown on drill music videos. Detective Inspector Luke Williams of the Lambeth and Southwark Gangs Unit said:
I am pleased with the sentences passed in these cases which reflect that the police and courts are unwilling to accept behaviour leading to serious violence.