UK Parliamentary committee claims that people failing to vote the 'correct' way is nothing to do with politicians' crap policies that don't look after British people, and must be all to do with fake news
31st July 2018

28th July 2018. See article from bbc.co.uk. See committee report [pdf] from dominiccummings.files.wordpress.com
Parliament's Digital, Culture, Media and Sport (DCMS) Committee has been investigating disinformation and fake news following the Cambridge Analytica data scandal and is claiming that the UK faces a democratic crisis due to the spread of pernicious
views and the manipulation of personal data. In its first report it will suggest social media companies should face tighter censorship. It also proposes measures to combat election interference. The report claims that the relentless targeting
of hyper-partisan views, which play to the fears and prejudices of people, in order to influence their voting plans is a threat to democracy. The report was very critical of Facebook, which has been under increased scrutiny following the Cambridge
Analytica data scandal. Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when
it is pressed, the report said. It provided witnesses who have been unwilling or unable to give full answers to the committee's questions. The committee suggests:

1. Social media sites should be held responsible for harmful content on their services

Social media companies cannot hide behind the claim of being merely a 'platform', claiming that they are tech companies and have no role themselves in regulating the content of their sites, the committee said. They continually change what is and is not seen on their sites, based on algorithms and human intervention. They reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model. The committee suggested a new category of tech company should be created, which was not necessarily a platform or a publisher but something in between. This should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms, the report said.

2. The rules on political campaigns should be made fit for the digital age

The committee said electoral law needed to be updated to reflect changes in campaigning techniques. It suggested:
- creating a public register for political advertising so that anybody can see what messages are being distributed online
- political advertisements should have a digital imprint stating who was responsible, as is required with printed leaflets and advertisements
- social media sites should be held responsible for interference in elections by malicious actors
- electoral fraud fines should be increased from a maximum of £20,000 to a percentage of organisations' annual turnover

3. Technology companies should be taxed to fund education and regulation

Increased regulation of social media sites would result in more work for organisations such as the Electoral Commission and Information Commissioner's Office (ICO). The committee suggested a levy on tech companies should fund the expanded responsibilities of the regulators. The money should also be spent on educational programmes and a public information campaign, to help people identify disinformation and fake news.

4. Social networks should be audited

The committee warned that fake accounts on sites such as Facebook and Twitter not only damage the user experience, but potentially defraud advertisers. It suggested an independent authority such as the Competition and Markets Authority should audit the social networks. It also said security mechanisms and algorithms used by social networks should be available for audit by a government regulator, to ensure they are operating responsibly.
Offsite Comment: Now MPs want to police political discussion

31st July 2018. See article from spiked-online.com by Mick Hume
Those members of parliament are half right at least. Democracy in Britain and the West is at risk today. But contrary to the wild claims in their fake-news report, the real risk does not come from Russian bloggers or shady groups farming Facebook users'
data. The big threat comes from political elitists like the cross-party clique of Remainer MPs who dominate the DCMS committee. ...Read the full
article from spiked-online.com

Offsite Comment: British MPs, like authoritarians from Moscow to Malaysia...

31st July 2018. See article from nationalreview.com by Andrew Stuttaford

It looks a lot as if these
MPs, like authoritarians from Moscow to Malaysia, have been inspired by the strikingly illiberal precedent set by Angela Merkel's social media law . In particular, part of the idea behind sticking social media companies with legal liability is to scare
them into going even further in muzzling free speech than the strict letter of the law requires. ...Read the full article
from nationalreview.com

Germany's highest court upholds legislation allowing public Wi-Fi previously impractical due to laws holding networks responsible for copyright infringement by users

31st July 2018

See article from xbiz.com
Germany's highest court last week upheld legislation that offers Wi-Fi operators immunity from acts carried out by third-party users. The decision by the Federal Court of Justice now makes it easier for individuals and businesses to offer Wi-Fi
without fearing civil liability for acts of copyright infringement committed by others. Prior to the ruling, under the legal concept known as Störerhaftung, or interferer's liability, a third party who played no deliberate part in someone else's actions could be held responsible for them. As a result, Wi-Fi hotspots have been few and far between in Germany. Visitors from abroad have found themselves shut out at public venues and unable to access the web as they could in other
countries. Copyright holders are still able to get court orders requiring WiFi providers to block copyright infringing websites.
In light of Facebook's disgraceful disregard for its users' digital wellbeing, Trump's government seems to be stepping in and preparing a GDPR style privacy law

30th July 2018

See article from foxnews.com
The US Federal Government is quietly meeting with top tech company representatives to develop a proposal to protect web users' privacy amid the ongoing global fallout of scandals that have rocked Facebook and other companies. Over the past month, the Commerce Department has met with representatives from Facebook and Google, along with Internet providers like AT&T and Comcast, and consumer advocates, sources told the Washington Post. The goal of these meetings is to come up with a data privacy proposal at the federal level that could serve as a blueprint for Congress to pass sweeping legislation in the mode of the European Union's GDPR. There is currently no comprehensive federal law governing how tech companies harness and monetize US users' data.
A total of 22 meetings with more than 80 companies have been held on this topic over the last month. One official at the White House told the Post this week that recent developments have been seismic in the privacy policy world, prompting the
government to discuss what a modern U.S. approach to privacy protection might look like.

30th July 2018

A detailed explanation of how Google ended domain fronting so as to make it easier for countries like Russia to censor the internet. See article from thenextweb.com
Egypt Sentences Tourist to Eight Years Jail for Complaining about Vacation Online
29th July 2018

See article from eff.org
When she went to Egypt for vacation, Mona el-Mazbouh surely didn't expect to end up in prison. But after the 24-year-old Lebanese tourist posted a video in which she complained of sexual harassment--calling Egypt a lowly, dirty country and its citizens
pimps and prostitutes--el-Mazbouh was arrested at Cairo's airport and found guilty of deliberately spreading false rumors that would harm society, attacking religion, and public indecency. She was sentenced to eight years in prison. The video that
el-Mazbouh posted was ten minutes long, and went viral on Facebook, causing an uproar in Egypt. In the video, el-Mazbouh also expressed anger about poor restaurant service during Ramadan and complained of her belongings being stolen. Egyptian men and
women posted videos in response, prompting el-Mazbouh to delete the original video and post a second video on Facebook apologizing to Egyptians. Nevertheless, she was arrested at the end of her trip at Cairo airport on May 31, 2018, and charged with spreading false rumors that aim to undermine society, attacking religion, and public indecency. Under Egyptian law, defaming and insulting the Egyptian people is illegal. Unhappy tourists have always criticized the
conditions of the countries they visit; doing so online, or on video, is no different from the centuries of similar complaints that preceded them offline or in written reviews. Beyond the injustice of applying a harsher standard to online speech than to offline speech, this case also punishes Mona for a reaction that was beyond her control. Mona had no influence over whether her video went viral. She did not intend her language or her actions to reach a wider audience or become a national topic of discussion.
It was angry commenters' reactions and social media algorithms that made the video viral and gave it significance beyond a few angry throwaway insults. Mona el-Mazbouh is just one of many innocent Internet users who have been caught up in the
Egyptian government's attempts to vilify and control the domestic use of online media. At minimum, she should be released from her ordeal and returned to her country immediately. But more widely, Egypt's leaders need to pull back from their hysterical and arbitrary enforcement of repressive laws, before more people -- including the foreign visitors on whom much of Egypt's economy depends -- are hurt.
28th July 2018

US House Judiciary Committee Falsely Claims Credit For Stopping 90% Of All Sex Trafficking Because Of FOSTA internet censorship. See article from techdirt.com
Using fake 'outrage' to censor programmes people don't like
27th July 2018

See article from standard.co.uk. See Ban fat-shaming show Insatiable, its critics cry. But none of them have seen it from theguardian.com
Over 100,000 people have signed a petition against the release of the Netflix TV show Insatiable, accusing it of 'fat shaming'. But to date it is still unknown exactly what the plot is and whether any 'fat shaming' actually goes on. Twelve hour-long episodes of Insatiable will be released on Netflix on August 10. Netflix describes Insatiable as a dark, twisted revenge comedy that will also delve into topics such as bullying, eating disorders and body image. It follows Debby Ryan as the unfortunately-nicknamed Fatty Patty, who is bullied for her weight by her high school peers. After having her jaw wired shut as a result of someone punching her in the face, she undergoes a transformation, becomes slim and hot, and vows to take revenge on the mean girls who tormented her. Social justice warriors went on the warpath after Netflix released the official trailer for Insatiable. An online petition was subsequently created by a woman named Florence, calling for the
programme to be banned. In the petition, Florence writes: The toxicity of this series, is bigger than just this one particular series. This is not an isolated case, but part of a much larger problem that I can promise
you every single woman has faced in her life, sitting somewhere on the scale of valuing their worth on their bodies, to be desirable objects for the male gaze. That is exactly what this series does. It perpetuates not only the toxicity of diet culture
but the objectification of women's bodies.
Patrolling Rubens House in Antwerp to protect social media users from nudity

26th July 2018

See article from news.artnet.com. See video from YouTube
The Flemish Tourism Board has responded to Facebook's relentless censorship of nudity in classical paintings by Peter Paul Rubens. In the satirical video, a team of Social Media Inspectors block gallery goers from seeing paintings at the Rubens House in Antwerp. Facebook-branded security guards, called fbi, redirect unwitting crowds away from paintings that depict nude figures. We need to direct you away from nudity, even if artistic in nature, says one Social Media Inspector. The Flemish video, as well as a cheeky open letter from the tourism board and a group of Belgian museums, asks Facebook to roll back its censorship standards so that they can promote Rubens. Breasts, buttocks and Peter Paul Rubens cherubs are all considered indecent. Not by us, but by you, the letter, addressed to Facebook CEO Mark Zuckerberg, says. Even though we secretly have to laugh about it, your cultural censorship is making life rather difficult for us. The Guardian reported that Facebook is
planning to have talks with the Flemish tourist board.

26th July 2018

Microsoft comes clean over Windows 10 snooping as part of its GDPR compliance. See article from v3.co.uk
India parliament considers a new internet censorship bill based on the recent US FOSTA law
25th July 2018

See article from livemint.com
Indian politicians have been admiring the effectiveness of the recent US censorship law FOSTA, which in practice bans anything adult on the internet by making websites responsible for any content that facilitates sex trafficking. Since websites can't reliably distinguish trafficking from consensual adult sex work, internet companies are forced to ban anything to do with sex work and even dating. A new session of the Indian Parliament kicked off on 18 July with the introduction of the Trafficking of
Persons (Prevention, Protection and Rehabilitation) Bill . There are a few problematic provisions in the proposed legislation, which may severely impact freedom of expression. For instance, Section 36 of the Bill, which aims to prescribe
punishment for the promotion or facilitation of trafficking, proposes a minimum three-year sentence for producing, publishing, broadcasting or distributing any type of material that promotes trafficking or exploitation. An attentive reading of the
provision, however, reveals that it has been worded loosely enough to risk criminalizing many unrelated activities as well. The phrase any propaganda material that promotes trafficking of person or exploitation of a trafficked person in any manner
has wide amplitude, and many unconnected or even well-intentioned actions can be construed to come within its ambit as the Bill does not define what constitutes promotion. For example, in moralistic eyes, any sexual content online could be seen as
promoting prurient interests, and thus also promoting trafficking. In July 2015, the government asked internet service providers (ISPs) to block 857 pornography websites on grounds of outraging morality and decency, but later rescinded the order after widespread criticism. If the historical record is any indication, Section 36 of the present Bill will legitimize such acts of censorship. Section 39 proposes an even weaker standard for criminal acts by proposing that any act of
publishing or advertising which may lead to the trafficking of a person shall be punished (emphasis added) with imprisonment for 5-10 years. In effect, the provision mandates punishment for vaguely defined actions that may not actually be connected to
the trafficking of a person at all. Another by-product of passing the proposed legislation would be a dramatic shift in India's landscape of intermediary liability laws, i.e., rules which determine the liability of platforms such as Facebook and
Twitter, and messaging services like Whatsapp and Signal for hosting or distributing unlawful content. Provisions in the Bill that criminalize the publication and distribution of content, ignore that unlike the physical world, modern electronic
communication requires third-party intermediaries to store and distribute content. This wording can implicate neutral communication pipeways, such as ISPs, online platforms, mobile messengers, which currently cannot even know of the presence of such
material unless they surveil all their users. Under the proposed legislation, the fact that human traffickers used Whatsapp to communicate about their activities could be used to hold the messaging service criminally liable.
Catholic scientists develop AI based software to cover nude female images with bikinis

25th July 2018

See article from dailymail.co.uk
An AI system developed by a Catholic institute in Brazil seeks out lewd pictures and digitally adds swimwear to censor the images. Researchers warned that while the AI was designed to be used for good, cyber criminals could one day reverse the process to erase bikinis from people's photos. The AI was trained by software engineers at the Pontifical Catholic University of Rio Grande do Sul using 2,000 images of women. It is a type of AI known as a generative adversarial network, which learns to perform tasks by recognising patterns commonly found in a set of images. Project scientist Dr Rodrigo Barros told the Register: When we train the network, it attempts to learn how to map data from one domain - nude pictures - to another domain - swimsuit pictures. He added that the AI was developed to test out a novel way of censoring images on the internet.
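For readers unfamiliar with the technique Barros describes, the sketch below shows the general shape of an adversarial training loop that maps images from one domain to another. It is only a hedged illustration: the tiny network sizes, random placeholder tensors and training settings are assumptions for demonstration, not the Pontifical Catholic University team's actual model or data.

```python
# Minimal sketch of a domain-to-domain GAN training loop (illustrative only;
# not the researchers' actual code, data or architecture).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a source-domain image to a target-domain image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether an image looks like it belongs to the target domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Placeholder batches standing in for the two image domains (64x64 RGB).
source = torch.rand(8, 3, 64, 64)   # domain A images
target = torch.rand(8, 3, 64, 64)   # domain B images

for step in range(10):
    # Train discriminator: real target images vs. generated ones.
    fake = G(source).detach()
    loss_d = bce(D(target), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train generator: try to fool the discriminator.
    fake = G(source)
    loss_g = bce(D(fake), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The adversarial setup is what the quote alludes to: the generator only "knows" it has succeeded when the discriminator can no longer tell its output from genuine target-domain images, which is also why the process could in principle be trained in the reverse direction.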
Infowars set to petition the British parliament for a digital rights law to guarantee free speech on the internet

24th July 2018

See article from metro.co.uk. See Infowars petition from change.org
The well-known alt-right news website Infowars is preparing to launch a campaign aimed at persuading politicians to stop tech giants censoring its content. It notes that Facebook, Google and Twitter are using algorithms to automatically clamp down on right-wing publications as well as those which support Donald Trump. Infowars has now started its first petition on the website Change.org demanding that social media companies end censorship of alternative voices online. It is calling for a new
Digital Rights Act to guarantee free speech on the internet. Metro.co.uk adds: We have been told that Infowars staff have approached Conservative politicians in the UK and arranged for an MP to ask a question in
parliament about the issue.
Infowars is also planning to contact the White House, where its calls are likely to reach the ears of Donald Trump himself. Paul Joseph Watson, the British editor-at-large of Infowars, told
Metro.co.uk: Since social media platforms are now de facto becoming the Internet and have formed into monopolies, the argument that they are private companies who can behave with impunity is no longer a valid argument.
We demand congressional and parliamentary scrutiny. We demand a Digital Rights Act to secure free speech online.
24th July 2018

Stories that are satirical, ludicrous or (obviously) fictional might do well online, but this doesn't mean people are believing them en masse. See article from spiked-online.com
24th July 2018

Be Careful What You Wish For. By Antonio García Martínez. See article from wired.com
24th July 2018

But what can you do? shrugs judge. See article from theregister.co.uk
BBFC boss writes a 'won't somebody think of the children' campaigning piece in support of the upcoming porn censorship law, disgracefully from behind a paywall

22nd July 2018

See article from telegraph.co.uk by David Austin
David Austin has penned what looks like an official BBFC campaigning piece trying to drum up support for the upcoming internet porn censorship regime. Disgracefully, the article is hidden behind a paywall and is restricted to paying Telegraph subscribers.

Are children protected by endangering their parents or their marriage?

The article is very much a one-sided piece, focusing almost entirely on the harms to children. It says nothing about the extraordinary dangers faced by adults when
handing over personal identifying data to internet companies. Not a word about the dangers of being blackmailed, scammed or simply outed to employers, communities or wives, where the standard punishment for a trivial transgression of PC rules is the sack
or divorce. Austin speaks of the scale of the internet business and the scope of the expected changes. He writes: There are around five million pornographic websites across the globe. Most of them have no
effective means of stopping children coming across their content. It's no great surprise, therefore, that Government statistics show that 1.4 million children in the UK visited one of these websites in one month. ...
The BBFC will be looking for a step change in the behaviour of the adult industry. We have been working with the industry to ensure that many websites carry age-verification when the law comes into force. ...
Millions of British adults watch pornography online. So age-verification will have a wide reach. But it's not new. It's been a requirement for many years for age-restricted goods and services, including some UK hosted pornographic
material.
I guess at this last point readers will be saying: I never knew that; I've never come across age verification before. But the point here is that these previous rules devastated the British online porn industry, and the reason people never come across it is that there are barely any British sites left.

Are children being protected by impoverishing their parents?

Not that any proponents of age verification care about British people being able to make money. Inevitably the new age verification regime will further compound foreign corporate monopoly control of yet another internet industry. Having presided over a regime that threatens to devastate lives, careers and livelihoods, Austin
ironically notes that it probably won't work anyway: The law is not a silver bullet. Determined, tech-savvy teenagers may find ways around the controls, and not all pornography online will be age-restricted. For
example, the new law does not require pornography on social media platforms to be placed behind age-verification controls.
22nd July 2018

The context behind the controversy over Mark Zuckerberg's comments on Holocaust denial. By Ezra Klein. See article from vox.com
Daily Telegraph reports that the upcoming porn censorship regime looks set to be delayed by a few months

21st July 2018

See article from telegraph.co.uk
The Telegraph reveals: The government is braced for criticism next week over an anticipated delay in its prospective curbs on under 18s' access to hardcore porn sites.
The current timetable, culminating in the implementation of UK porn censorship by the end of the year, required that the final censorship guidelines be presented to MPs before they go on holiday on Thursday. They would then be ready to approve them when they return to work in the autumn. It sounds like they won't be ready for publishing by this Thursday. The BBFC noted that it was due to send the results of the public consultation, along with the BBFC censorship rules, to the government by late May of this year, so presumably the government is still pondering what to do.

'Best practice' just like Facebook and Cambridge Analytica

Back in April, when the BBFC put its rather naive draft rules out for public consultation, its prose tried to suggest that we can trust age verifiers with our most sensitive porn browsing data because they will voluntarily follow 'best practice'. But in light of a major industry player, in this case Facebook, allowing Cambridge Analytica to so dramatically abuse our personal data, the hope that these people will follow 'best practice' is surely forlorn.

GDPR

And there was the implementation of GDPR. The BBFC seemed to think that this was all that was needed to keep our data safe. But when it comes down to it, all GDPR seems to have done is to train us, like Pavlov's dogs, to endlessly tick the consent box for all these companies to do what the hell they like with our data.

Ingenious kids

Then there was a nice little piece of research this week that revealed that network-level ISP filtering of porn has next to no impact on preventing young porn seekers from obtaining their kicks. The research seems to suggest that it is not enough to block porn for one lad, because he has 30 mates whose houses he can go round to surf the web; or else it only takes a few lads to be able to download porn and it will soon be circulated to the whole community on a memory stick or whatever.

Mass buy-in

I guess the government is finding it tough to find age verification ideas that are both convenient for adult users and robust about preventing access by under 18s. I think the government needs to find a solution that will achieve mass buy-in from adult users. If adults don't want to play ball with the age verification process, then the first fallback position is for them to use a VPN. I know from my use of VPNs that they are very good, and once you turn one on it tends to get left on all day. I am sure millions of people using VPNs would not go down well with the security services on the trail of more serious crimes than underage porn viewing. I think the most likely age verification method proposed to date that has a chance of mass buy-in is the AVSecure system of anonymously buying a porn access card from a local shop and using a PIN, perhaps typed in once a day. Then users are able to browse without further hassle on all participating websites. But I think it would require a certain pragmatism from government to accept this idea, as it would be so open to over 18s buying a card and then selling the PIN to under 18s, or sons nicking their dad's PIN when they see the card lying around (or even installing a keyboard logger to nick it). The government would probably like something more robust where PINs have to be matched to people's proven ID. But I think porn users would be stupid to hand over their ID to anyone on the internet who can monitor porn use. The risks are enormous: reputational damage, blackmail, fraud etc, and in this nasty PC world, the penalty for the most trivial of moral transgressions is to lose your job or even career.

A path to failure

The government is also setting out on a path where it can do nothing but fail. The Telegraph piece mentioned above is already lambasting the government for not applying the rules to social media websites such as Twitter, which host a fair bit of porn. The
Telegraph comments: Children will be free to watch explicit X-rated sex videos on social media sites because of a loophole in a new porn crackdown, Britain's chief censor has admitted. David
Austin, chief executive of the BBFC, has been charged by ministers with enforcing new laws that require people to prove they are over 18 to access porn sites. However, writing for telegraph.co.uk, Mr Austin admitted it would not be a silver bullet as
online porn on sites such as Facebook and YouTube would escape the age restrictions. Social media companies will not be required to carry age-verification for pornographic content on their platforms. He said it was a matter for government to review this
position.
Uganda introduces a significant tax on social media usage

21st July 2018

3rd July 2018. See article from torrentfreak.com
Uganda has just introduced a significant tax on social media usage. It is set at 200 shillings a day, which adds up to about 3% of the average annual income if used daily. Use of a long list of websites and apps including Facebook, Whatsapp, Twitter and Tinder triggers the daily tax, collected through billing by ISPs. And as you may expect, Ugandan internet users are turning to VPNs so that ISPs can't detect access to taxed apps and websites. In response, the government says it has ordered local ISPs to
begin blocking VPNs. In a statement, Uganda Communications Commission Executive Director, Godfrey Mutabazi said that Internet service providers would be ordered to block VPNs to prevent citizens from avoiding the social media tax. Mutabazi told
Dispatch that ISPs are already taking action to prevent VPNs from being accessible but since there are so many, it won't be possible to block them all. In the meantime, the government is trying to portray VPNs as more expensive to use than the tax. In a
post on Facebook this morning, Mutabazi promoted the tax as the sensible economic option. It appears that many Ugandans are outraged at the prospect of yet another tax and see VPN use as a protest, despite any additional cost. Opposition figures
have already called for a boycott with support coming in from all corners of society. The government appears unmoved, however. Frank Tumwebaze, Minister of Information Technology and Communications said: If we tax
essentials like water, why not social media?
Update: And the people were not impressed

13th July 2018. See article from bbc.com

Uganda is reviewing its decision to impose taxes on the use of social media and on money transactions by mobile phone, following a public backlash. Prime Minister Ruhakana Rugunda made the announcement soon after police broke up a protest against the taxes. President Yoweri Museveni had pushed for the taxes to boost government revenue and to restrict criticism via WhatsApp, Facebook and Twitter. The social media tax is 6000 Uganda shillings a month (£1.25), which represents about 3% of the average wage. Activists argue that while the amount may seem little, it represents a significant slice of what poorer people are paying for getting online. There is also a 1% levy on the total value of mobile
phone money transactions, affecting poorer Ugandans who rarely use banking services. In a statement to parliament, Rugunda said: Government is now reviewing the taxes taking into consideration the concerns of
the public and its implications on the budget.
A revised budget is due to be tabled in parliament on 19 July.

Update: And the government continues to repress the people

21st July 2018. See
article from qz.com Uganda's government has 'reviewed' its new social media tax and has decided to stick with it. Matia Kasaija, the finance minister, decided against rescinding the social media tax. His reasoning echoed Museveni's initial reason for floating the tax: stopping gossip.
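As a rough sanity check on the figures quoted above (200 shillings a day, 6000 a month or about £1.25, said to be roughly 3% of the average wage), here is a back-of-envelope calculation. The implied average income is an inference from the article's own numbers, not an official statistic.

```python
# Back-of-envelope check of the Uganda social media tax figures quoted above.
# The implied average income is inferred from the article's numbers only.

daily_tax_ugx = 200                     # tax per day, Uganda shillings
monthly_tax_ugx = daily_tax_ugx * 30    # ~6000 UGX, matching the article
annual_tax_ugx = daily_tax_ugx * 365
share_of_income = 0.03                  # "about 3% of the average annual income"

implied_annual_income_ugx = annual_tax_ugx / share_of_income
ugx_per_gbp = monthly_tax_ugx / 1.25    # from "6000 shillings a month (£1.25)"

print(f"Monthly tax: {monthly_tax_ugx} UGX (~£{monthly_tax_ugx / ugx_per_gbp:.2f})")
print(f"Annual tax if paid daily: {annual_tax_ugx} UGX")
print(f"Implied average annual income: ~{implied_annual_income_ugx:,.0f} UGX "
      f"(~£{implied_annual_income_ugx / ugx_per_gbp:,.0f})")
```

On those assumptions the implied average annual income is around 2.4 million shillings, roughly £500, which is why a pound or so a month is a meaningful burden for poorer users.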
Poland ratchets up the oppression of internet users by requiring ISPs to snitch on attempts to access banned websites

20th July 2018

See article from europeangaming.eu
The Polish government is demanding that ISPs snitch on their customers who attempt to access websites it deems illegal. The government wants to make the restrictions stricter for unauthorised online gambling sites and will require local ISPs to
inform it about citizens' attempts to access them. According to the Panoptykon Foundation, a digital rights watchdog, the government will compile a central registry of unauthorized websites to monitor. According to the digital rights body, the
government seeks to introduce a chief snooper that could compel data from ISPs disclosing which citizens tried to access unauthorised websites. In addition, the ISPs would have to keep the snooping requests secret from the customer. Local organisations are unsurprisingly worried that this expansion of censorship could turn out to be the first of many steps in an escalation of online restrictions.
FOSTA Is Causing Online Censorship and Removal of Protected Speech

20th July 2018

18th July 2018. See press release from eff.org
On Thursday, July 19, at 4 pm, the Electronic Frontier Foundation (EFF) will urge a federal judge to put enforcement of FOSTA on hold during the pendency of its lawsuit challenging the constitutionality of the federal law. The hold is needed, in
part, to allow plaintiff Woodhull Freedom Foundation, a sex worker advocacy group, to organize and publicize its annual conference, held August 2-5. FOSTA, or the Allow States and Victims to Fight Online Sex Trafficking Act, was passed by Congress in March. But despite its name, FOSTA attacks online speakers who speak favorably about sex work by imposing harsh penalties on any website that might be seen as facilitating prostitution or contributing to sex trafficking. In Woodhull
Freedom Foundation v. U.S. , filed on behalf of two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist, EFF maintains the law is unconstitutional because it muzzles constitutionally protected
speech that protects and advocates for sex workers and forces speakers and platforms to censor themselves. Enforcement of the law should be suspended because the plaintiffs are likely to win the case and because it has caused, and
will continue to cause, irreparable harm to the plaintiffs, EFF co-counsel Bob Corn-Revere of Davis Wright Tremaine will tell the court at a hearing this week on the plaintiffs' request for a preliminary injunction. Because of the risk of criminal
penalties, the plaintiffs have had their ads removed from Craigslist and censored information on their websites. Plaintiff Woodhull Freedom Foundation has censored publication of information that could assist sex workers negatively impacted by the law.
FOSTA threatens Woodhull's ability to engage in protected online speech, including livestreaming and live tweeting its August meeting, unless FOSTA is put on hold.

Update: A little reserved

20th July 2018. See article from avn.com

Judge Richard Leon of the United States District Court in Washington D.C. heard Woodhull's request for a preliminary injunction that would stop the law from remaining in effect until the group's lawsuit is resolved, but did not issue a judgement, nor did he announce a date when he would issue a ruling. According to one account from inside the courtroom, Leon sounded skeptical that the law had actually caused harm to the plaintiffs in the case.
Sikhs threaten to protest against a Sunny Leone biopic because of her real name

19th July 2018

See article from bbc.co.uk
Sikh leaders in India have threatened to protest over the title of a biopic because it uses the name Kaur. Sunny Leone is a former porn star turned Bollywood actress who plays herself in the web series Karenjit Kaur: The Untold Story of Sunny Leone. Kaur - Leone's real name - is used by Sikh women as a surname or middle name and symbolises gender equality. The web series depicts her life and premiered on 16 July on Zee5, a streaming platform in India. In a letter to Subhash Chandra,
the chairman of Essel Group which owns Zee5, Indian politician Manjinder Singh Sirsa called for the show to be pulled from the network or have the name Kaur removed from the title. But Chandra responded simply by explaining that her name can't be
changed. Other Sikh groups and leaders have expressed similar sentiments and have threatened to protest outside the network's offices if their demands aren't met. |
|
Israel set to adopt a new internet censorship law
|
|
|
 | 18th July 2018
|
|
| See article from jta.org
|
The Israeli government would have far-reaching power to remove or block content from social media sites under legislation coming up for a vote in the Knesset. The so-called Facebook Law would allow content to be deleted for reasons that include incitement to terrorism, without criminal proceedings and without any admissible evidence. The legislation, which was approved Sunday by the Law, Constitution and Justice Committee, is expected to be voted on before the Knesset ends its summer session on July 22.
Along with Facebook, among the social media outlets that would be covered by the legislation are Twitter, WhatsApp, Telegram, YouTube and reddit.

The BBFC consultation on UK internet porn censorship

17th July 2018

See BBFC minutes May 2018 [pdf] from bbfc.co.uk
Nobody seems to have heard much about the progress of the BBFC consultation about the process to censor internet porn in the UK. The sketchy timetable laid out so far suggests that the result of the consultation should be published prior to the
Parliamentary recess scheduled for 26th July. Presumably this would provide MPs with some light reading over their summer hols, ready for them to approve as soon as the hols are over. This publication may have to be hurried along though, as pesky MPs are messing up Theresa May's plans for a non-Brexit, and she would like to send them packing a week early before they can cause trouble. (Update 18th July: The early holidays idea has now been shelved). The BBFC published meeting minutes this week that mention the consultation: The public consultation on the draft Guidance on Age Verification Arrangements and the draft Guidance on Ancillary
Service Providers closed on 23 April. The BBFC received 620 responses, 40 from organisations and 580 from individuals. Many of the individual responses were encouraged by a campaign organised by the Open Rights Group. Our proposed
response to the consultation will be circulated to the Board before being sent to DCMS on 21 May.
So assuming that the response was sent to the government on the appointed day, someone has been sitting on the results for quite a long time now. Meanwhile it's good to see that people are still thinking about the monstrosity that is coming our way. Ethical porn producer Erika Lust has been speaking to New Internationalist. She comments on the way the new law will compound MindGeek's monopolistic dominance of the online porn market: The age verification laws are going to disproportionately affect smaller low-traffic sites and independent sex workers who cannot cover the costs of installing age verification tools. It will also impact smaller sites by giving MindGeek even more dominance in the adult industry. This is because the BBFC draft guidance does not require sites to offer more than one age verification product. So, all of MindGeek's sites (again, 90% of the mainstream porn sites) will only offer their own product, AgeID. The BBFC have also stated that users do not have to verify their age on each visit if access is restricted by password or a personal ID number. So users visiting a MindGeek site will only have to verify their age once using AgeID and then will be able to log in to any complying site without having to verify again. Therefore, viewers will be less likely to visit competitor
sites not using the AgeID technology, and simultaneously competitor sites will feel pressured to use AgeID to protect themselves from losing viewers. ...Read the full
article from newint.org
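The AgeID arrangement described in Lust's quote above works like a single sign-on scheme: the user verifies once with the provider, then presents a reusable credential to any participating site. The sketch below illustrates that general pattern only; the names and the token format are hypothetical, and this is not MindGeek's actual AgeID implementation.

```python
# Minimal sketch of the verify-once, log-in-anywhere pattern described above,
# in the style of a signed single sign-on token. All names are hypothetical;
# this is not MindGeek's actual AgeID implementation.
import hashlib
import hmac
import time

SECRET = b"shared-secret-known-to-the-age-verification-provider"

def issue_age_token(account_id: str) -> str:
    """Issued once, after the user has proved to the provider they are over 18."""
    payload = f"{account_id}:{int(time.time())}"
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{signature}"

def site_accepts(token: str, max_age_seconds: int = 86400) -> bool:
    """Any participating site checks the token instead of re-verifying age."""
    account_id, issued_at, signature = token.rsplit(":", 2)
    payload = f"{account_id}:{issued_at}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    fresh = time.time() - int(issued_at) < max_age_seconds
    return hmac.compare_digest(signature, expected) and fresh

token = issue_age_token("some-anonymous-account")
print(site_accepts(token))   # True on any site sharing the provider's secret
```

The design point Lust is making follows directly from this shape: whoever controls the credential scheme that most sites accept gains a gatekeeping position, since visitors gravitate to sites where their existing token already works.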
A Key Victory Against European Copyright Filters and Link Taxes - But What's Next?

17th July 2018

See article from eff.org CC by Danny O'Brien. See Guy In Charge Of Pushing Draconian EU Copyright Directive, Evasive About His Own Use Of Copyright Protected Images from techdirt.com
Against all the odds, but with the support of nearly a million Europeans, MEPs voted earlier this month to reject the EU's proposed copyright reform--including controversial proposals to create a new "snippet" right for news publishers, and mandatory copyright filters for sites that publish user-uploaded content. The change was testimony to how powerful and fast-moving Net activists can be. Four weeks ago, few knew that these crazy provisions were even being considered. By
the June 20th vote, Internet experts were weighing in, and
wider conversations were starting on sites like Reddit. The result was a vote on July
5th of all MEPs, which ended in a 318 to 278 victory in favour of withdrawing the Parliament's support for the language. Now all MEPs will have a chance in September to submit new amendments and vote on a final text -- or reject the directive entirely. While re-opening the text was a surprising setback for Articles 13 and 11, the battle isn't over: the language to be discussed in September will be based on
the original proposal by the European Commission, from two years ago -- which included the first versions of the copyright filters, and
snippet rights. German MEP Axel Voss's controversial modifications will also be included in the debate, and there may well be a flood of other proposals, good and bad, from the rest of the European Parliament. There's still
sizeable support for the original text: Article 11 and 13's loudest proponents, led by Voss, persuaded many MEPs to support them by arguing that these new powers would restore the balance between American tech giants and Europe's newspaper and creative
industries -- or "close the value gap", as their arguments have it. But using mandatory algorithmic censors and new intellectual property rights to restore balance is like Darth Vader bringing balance to the Force: the
fight may involve a handful of brawling big players, but it's everybody else who would have to deal with the painful consequences. That's why it remains so vital for MEPs to hear voices that represent the wider public interest.
Librarians, academics, and redditors, everyone from small Internet businesses to celebrity Youtubers, spoke up in a way that was impossible for the Parliament to ignore. The same Net-savvy MEPs and activists that wrote and fought for the GDPR put their names to challenge the idea that these laws would rein back American tech companies. Wikipedians stood up and were counted: seven independent, European-language encyclopedias agreed to shut down on the day of the vote. European alternatives to Google, Facebook and Twitter argued that this would set back their cause. And European artists spoke up that the EU shouldn't be setting up censorship and ridiculous link rights in their name.
To make sure the right amendments pass in September, we need to keep that conversation going. Read on to find out what you can do, and who you should be speaking to.

Who Spoke Up In The European Parliament?
As we noted last week, the decision to challenge the JURI committee's language on Article 13 and 11 last week was not automatic -- a minimum of 78 MEPs needed to petition for it to be put to the vote. Here's
the list of those MEPs who actively stepped forward to stop the bill. Also heavily involved was Julia Reda, the Pirate Party MEP who worked
so hard on making the rest of the proposed directive so positive for copyright reform, and then re-dedicated herself to stopping the worst excesses of the JURI language, and
Marietje Schaake, the Parliament's foremost advocate for human rights online. These are the core of the opposition to
Article 13 and 11. A look at that list, and the final list of
votes on July 5th, shows that the proposals have opponents in every corner of Europe's political map. It also shows that every MEP who voted for Articles 13 and 11 has someone close to them politically who knows why they're wrong.

What happens now?

In the next few weeks, those deep in the minutiae of the Copyright directive will be crafting amendments for MEPs to vote on in September. The tentative schedule is that the amendments
are accepted until Wednesday September 5th, with a vote at 12:00 Central European Time on Wednesday September 12th. The European Parliament has a fine tradition of producing a rich supply of amendments (the GDPR had thousands).
We'll need to coalesce support around a few key fixes that will keep the directive free of censorship filters and snippet rights language, and replace them with something less harmful to the wider Net. Julia Reda already proposed
amendments. And one of Voss' strongest critics in the latest vote was Catherine Stihler, the Scottish MEP who had created and passed consumer-friendly directive language in her committee, which Voss ignored. (Here's her
barnstorming speech before the final vote.) While we wait for those amendments to appear, the next step
is to keep the pressure on MEPs to remember what's at stake -- no mandatory copyright filters, and no new ancillary rights on snippets of text. In particular, if you
talk to your MEP, it's important to convey how you feel these proposals will affect you. MEPs are hearing from giant tech and media companies. But they are only just beginning to hear from a broader camp: the people of the Internet.
Ofcom boss Sharon White sneers at the British people, and volunteers Ofcom to be their internet news censor

16th July 2018

13th July 2018. See article from theguardian.com
Sharon White, the CEO of Ofcom, has put her case to be the British internet news censor, disgracefully from behind the paywalled website of The Times. White says Ofcom has done research showing how little users trust what they read on social media. She said that only 39% consider social media to be a trustworthy news source, compared with 63% for newspapers and 70% for TV. But then again, many people don't much trust the biased moralising from the politically correct mainstream media, including the likes of Ofcom. White claims social media platforms need to be more accountable in how they curate and police content, or face regulation. In reality, Facebook's algorithm seems pretty straightforward: it just gives readers more of what they have liked in the past. But of course the powers that be don't like people choosing their own media sources; they would much prefer that the BBC, or the Guardian, or Ofcom do the choosing. Sharon White wrote in the Times: The argument for independent regulatory oversight of [large online players] has never been stronger. In practice, this would place much greater scrutiny on how effectively the
online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met.
She continued, disgracefully revealing her complete contempt for the British people:

Many people admit they simply don't have the time or inclination to think critically when engaging with news, which has important implications for our democracy.

White joins a growing number of the establishment elite arguing that social media needs censorship. The government has frequently suggested as much, with Matt Hancock, then digital, culture, media and sport secretary, telling Facebook in April: Social
media companies are not above the law and will not be allowed to shirk their responsibilities to our citizens.
Update: The whole pitch to offer Ofcom's services as a news censor

15th July 2018. See Sunday Times article republished by Ofcom from ofcom.org.uk

Ofcom has published Sharon White's pitch for Ofcom to become the internet news censor. White is nominally commenting on two research reports. There seem to be four whinges about modern news reading via smartphones, and all of them are just characteristics of the medium that will never change regardless of whether we have news censors or not.

- Fake News: mostly only exists in the minds of politicians. No one else can find much of it at all. So internet news readers are not much bothered by trying to detect it.
- Passive news reading. It's far too much trouble typing stuff into a smartphone to go out and find things for yourself. So the next best thing is to use apps that do the best job of feeding you articles that are of interest.
- Skimming and shallow reading of news feeds. Well, there's so much news out there, and the news feed algorithm isn't too hot anyway, so if anything isn't quite 100% interesting, just scroll on. This isn't going to change any time soon.
- Echo chambers. This is just a put-down phrase for phone users choosing to read the news that they like. If a news censor thinks that more worthy news should be force fed into people's news readers, then it will just suffer the indignity of being rapidly swiped into touch.
Anyway this is Sharon White's take: Picking up a newspaper with a morning coffee. Settling down to watch TV news after a day's work. Reading the sections of the Sunday papers in your favourite order.
For decades, habit and routine have helped to define our relationship with the news. In the past, people consumed news at set times of day, but heard little in between. But for many people, those habits, and the news landscape that
shapes them, have now changed fundamentally. Vast numbers of news stories are now available 24/7, through a wide range of online platforms and devices, with social media now the most popular way of accessing news on the internet.
Today's readers and viewers face the challenge to keep up. So too, importantly, does regulation. The fluid environment of social media certainly brings benefits to news, offering more choice, real-time updates, and a platform for
different voices and perspectives. But it also presents new challenges for readers and regulators alike -- something that we, as a regulator of editorial standards in TV and radio, are now giving thought for the online world. In
new Ofcom research, we asked people about their relationship with news in our always-on society, and the findings are fascinating. People feel there is more news than ever before, which presents a challenge for their time and
attention. This, combined with fear of missing out, means many feel compelled to engage with several sources of news, but only have the capacity to do so superficially. Similarly, as many of us now read news through social media
on our smartphones, we're constantly scrolling, swiping and clearing at speed. We're exposed to breaking news notifications, newsfeeds, shared news and stories mixed with other types of content. This limits our ability to process, or even recognise, the
news we see. It means we often engage with it incidentally, rather than actively. In fact, our study showed that, after being exposed to news stories online, many participants had no conscious recollection of them at all. For
example, one recalled seeing nine news stories online over a week -- she had actually viewed 13 in one day alone. Others remembered reading particular articles, but couldn't recall any of the detail. Social media's attraction as a
source of news also raises questions of trust, with people much more likely to doubt what they see on these platforms. Our research shows only 39% consider social media to be a trustworthy news source, compared to 63% for newspapers, and 70% for TV.
Fake news and clickbait articles persist as common concerns among the people taking part in our research, but many struggle to check the validity of online news content. Some rely on gut instinct to tell fact from fiction, while
others seek second opinions from friends and family, or look for established news logos, such as the Times. Many people admit they simply don't have the time or inclination to think critically when engaging with news, which has important implications for
our democracy. Education on how to navigate online news effectively is, of course, important. But the onus shouldn't be on the public to detect and deal with fake and harmful content. Online companies need to be much more
accountable when it comes to curating and policing the content on their platforms, where this risks harm to the public. We welcome emerging actions by the major online players, but consider that the argument for independent
regulatory oversight of their activities has never been stronger. Such a regime would need to be based on transparency, and a set of clear underpinning principles. In practice, this would place much greater scrutiny on how
effectively the online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met. We will outline further thoughts on the role independent regulation could play in the
autumn. When it comes to trust and accountability, public service broadcasters like the BBC also have a vital role to play. Their news operations provide the bedrock for much of the news content we see online, and as the
broadcasting regulator, Ofcom will continue to hold them to the highest standards. Ofcom's research can help inform the debate about how to regulate effectively in an online world. We will continue to shine a light on the
behavioural trends that emerge, as people's complex and evolving relationship with the media continues to evolve.
And perhaps if you have skimmed over White's piece a bit rapidly, here is the key paragraph again:
In practice, this would place much greater scrutiny on how effectively the online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met. We will
outline further thoughts on the role independent regulation could play in the autumn.

16th July 2018

A detailed critique of censorship clauses of the bill. By Index on Censorship. See article from indexoncensorship.org
YouTube bans Erika Lust's series In Conversation with Sex Workers

15th July 2018

14th July 2018. See article from motherboard.vice.com. See banned videos from erikalust.com
YouTube has banned Erika Lust's series In Conversation with Sex Workers. There was NO explicit content, NO sex, NO naked bodies, NO (female) nipples or anything else that breaks YouTube's strict guidelines in the series, Lust wrote on her
website. It was simply sex workers speaking about their work and experiences. Presumably the censorship is inspired by the US FOSTA internet censorship law, under which YouTube could be held liable for content that facilitates sex trafficking. It is cheaper and easier for YouTube to take down any content that could be in any way connected to sex trafficking than to spend time checking it out. Erika Lust, a Barcelona-based erotic filmmaker, wrote in a blog post on Wednesday that YouTube terminated her
eponymous channel on July 4, when it had around 11,000 subscribers. The ban came after an interviewee for the company's series In Conversation With Sex Workers, which had been on YouTube for about a week, tweeted to promote her involvement in the film.
Within hours of that tweet the channel was terminated, citing violation of community guidelines.

Update: Charlotte Rose too

15th July 2018. See article from twitter.com
Charlotte Rose, a well known sex industry campaigner tweets: We've just received an email to say that my YouTube channel has been suspended with no room for appeal - looks like no more live streams of
@rosetalkssex? I don't understand why I've been a target of a total censorship! @YouTube removes my Chanel and 5 years worth of content (which made them money) with no warning.
14th July 2018

Now the US has pulled out of the UN Human Rights Council, its direction accelerates away from human rights. See article from accessnow.org
14th July 2018

Russia's State Duma has adopted a draft law that aims to tackle apps through which pirated content is distributed. See article from torrentfreak.com
Research finds that ISP porn filters have an insignificant effect on preventing adolescents from seeking out porn
13th July 2018

See article from liebertpub.com
A paper has been published on the effects of network-level website blocking intended to prevent adolescents from seeking out porn.

Internet Filtering and Adolescent Exposure to Online Sexual Material, by Andrew K. Przybylski and Victoria Nash

Abstract

Early adolescents are spending an increasing amount of time online, and a significant share of caregivers now use Internet
filtering tools to shield this population from online sexual material. Despite wide use, the efficacy of filters is poorly understood. In this article, we present two studies: one exploratory analysis of secondary data collected in the European Union,
and one preregistered study focused on British adolescents and caregivers to rigorously evaluate their utility. In both studies, caregivers were asked about their use of Internet filtering, and adolescent participants were interviewed about their recent
online experiences. Analyses focused on the absolute and relative risks of young people encountering online sexual material and the effectiveness of Internet filters. Results suggested that caregivers' use of Internet filtering had inconsistent and practically insignificant links with young people's reports of encountering online sexual material.

Conclusions

The struggle to shape the experiences young
people have online is now part of modern parenthood. This study was conducted to address the value of industry, policy, and professional advice concerning the appropriate role of Internet filtering in this struggle. Our preliminary findings suggested
that filters might have small protective effects, but evidence derived from a more stringent and robust empirical approach indicated that they are entirely ineffective. These findings highlight the need for a critical cost -- benefit analysis in light of
the financial and informational costs associated with filtering and age verification technologies such as those now being developed in some European countries like the United Kingdom. Further, our results highlight the need for registered trials to
rigorously evaluate the effectiveness of costly technological solutions for social and developmental goals.
The write-up doesn't really put its conclusions in any real context as to what is actually happening, beyond the fact that kids are still
able to get hold of porn. The following paragraph gives the best clue of what is going on: We calculated absolute risk reduction of exposure to online sexual material associated with caregivers using filtering
technology in practical terms. These results were used to calculate the number of households which would have to be filtered to prevent one young person, who would otherwise see sexual material online, from encountering it over a 12-month period.
Depending on the form of content, results indicated that between 17 and 77 households would need to be filtered to prevent a young adolescent from encountering online sexual material. A protective effect lower than we would consider practically
significant.
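In concrete terms, the "number of households which would have to be filtered" is the reciprocal of the absolute risk reduction, i.e. the familiar number-needed-to-treat statistic applied to filtering. The short Python sketch below illustrates the arithmetic only: the exposure rates and the function name are hypothetical placeholders, not figures or code from the paper, which is quoted here only for its range of 17 to 77 households.

    import math

    def number_needed_to_filter(p_unfiltered: float, p_filtered: float) -> int:
        """Households that must be filtered to prevent one adolescent's exposure.

        Analogous to 'number needed to treat': the reciprocal of the absolute
        risk reduction (ARR). Inputs are 12-month exposure rates between 0 and 1.
        """
        arr = p_unfiltered - p_filtered
        if arr <= 0:
            raise ValueError("No protective effect; number needed to filter is undefined.")
        return math.ceil(1 / arr)

    # Hypothetical rates (NOT taken from the paper), chosen to land on the
    # endpoints of the 17-77 household range the study reports:
    print(number_needed_to_filter(0.50, 0.44))   # ARR = 0.06  -> 17 households
    print(number_needed_to_filter(0.20, 0.187))  # ARR = 0.013 -> 77 households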
This seems to suggest that if one kid has a censored internet then he just goes around to a mate's house where the internet isn't censored, and downloads from there. He wouldn't actually be blocked from viewing porn until his whole
circle of friends is similarly censored. It only takes one kid to be able to download porn, as it can then be loaded onto a memory stick and passed around. |
|
Russia calls on volunteers to snitch on websites
|
|
|
 | 13th July 2018
|
|
| See article from zdnet.com |
Russia's interior minister says he wants citizens to scour the internet for banned material. The Russian internet censor, Roskomnadzor, has an ever-expanding list of banned sites featuring material that Russian authorities don't like. The list
takes in everything from LGBT sites to critics of the Kremlin and sites that allegedly carry terrorist propaganda, the main justification for many of Russia's online censorship and surveillance laws. Free-speech activists reckon the number of
blocked websites now tops 100,000, but how best to keep adding to that list? Russia's interior minister, Vladimir Kolokoltsev, says volunteers should step up to aid the search for banned information. Whilst speaking about the challenges faced by
search and rescue volunteers, he said volunteers could help public authorities in preventing drug abuse, combating juvenile delinquency, and monitoring the internet networks to search for banned information.
|
|
Egypt's Draconian New Cybercrime Bill Will Only Increase Censorship
|
|
|
 | 13th July 2018
|
|
| See article from eff.org
|
The new 45-article cybercrime law, named the Anti-Cyber and Information Technology Crimes law, is divided into two parts. The first part of the bill stipulates that service providers are obligated to retain user information (i.e. tracking data) in the
event of a crime, whereas the second part of the bill covers a variety of cybercrimes under overly broad language (such as threat to national security). Article 7 of the law, in particular, grants the state the authority to shut
down Egyptian or foreign-based websites that incite against the Egyptian state or threaten national security through the use of any digital content, media, or advertising. Article 2 of the law authorizes broad surveillance capabilities, requiring
telecommunications companies to retain and store users' data for 180 days. And Article 4 explicitly enables foreign governments to obtain access to information on Egyptian citizens and does not make mention of requirements that the requesting country
have substantive data protection laws. Update: Passed 17th July 2018. See article from kveo.com
Egypt's parliament has passed three controversial draft bills regulating the press and media. The draft bills, which won the parliament's approval on Monday, will also regulate the Supreme Media Regulatory Council, the National Press
Authority and the National Media Authority. The bills still need to be approved by the president, Abdel-Fattah el-Sissi, before they can become laws.
|
|
|
|
|
 | 13th July 2018
|
|
|
Don't Fall for This Scam Claiming You Were Recorded Watching Porn See article from gizmodo.com |
|
FOSTA internet censorship has blind-sided police in their pursuit of pimps and traffickers
|
|
|
 |
12th July 2018
|
|
| See
article from techdirt.com |
Supporters of the US internet censorship law FOSTA were supposedly attempting to target pimps and traffickers, but of course their target was the wider sex work industry. Hence they weren't really interested in the warning that the law would make it
harder to target pimps and sex traffickers, as their activity would be driven off the radar. Anyway, it seems that the police at least have started to realise that the warning is coming true, but I don't suppose this will bother the politicians much.
Over in Indianapolis, the police have just arrested their first pimp of 2018, and it involved an undercover cop being approached by the pimp. The reporter asks why there have been so few such arrests, and the police point the finger right at the shutdown
of Backpage: The cases, according to Sgt. John Daggy, an undercover officer with IMPD's vice unit, have just dried up. The reason for that is pretty simple: the feds closed the police's best source of leads, the online personals site Backpage, earlier
this year. Daggy explained: We've been a little bit blinded lately because they shut Backpage down. I get the reasoning behind it, and the ethics behind it, however, it has blinded us. We used to look at Backpage as a
trap for human traffickers and pimps. With Backpage, we would subpoena the ads and it would tell a lot of the story. Also, with the ads we would catch our victim at a hotel room, which would give us a crime scene. There's a ton of
evidence at a crime scene. Now, since [Backpage] has gone down, we're getting late reports of them and we don't have much to go by.
|
|
Jeremy Wright is appointed as the new culture minister
|
|
|
 | 12th July 2018
|
|
| See article from en.wikipedia.org |
Jeremy Wright has been appointed as the new Secretary of State for Digital, Culture, Media and Sport. He is the government minister in charge of the up 'n' coming regime to censor internet porn. He will also be responsible for several government
initiatives attempting to censor social media. He is a QC and was previously the government's Attorney General. His parliamentary career to date has not really given any pointers to his views and attitudes towards censorship. The previous
culture minister, Matt Hancock, has moved upwards to become minister for health. Perhaps in his new post he can continue to whinge about limiting what he considers the excessive amount of screen time being enjoyed by children. |
|
|