AVN.com is reporting that the online forum Reddit has been blocked by India's largest ISPs.
Reddit is ranked as the 21st most heavily trafficked site in the world. It has 330 million users spread across 217 countries.
Reddit, of course, is one of the few remaining major social media platforms that does not ban porn, and according to the India Times report, that is likely the reason why the Indian ISPs would block the discussion forum site.
New Zealand police have charged a young man for sharing a meme based on Brenton Tarrant's live-streamed murderous attack on a Christchurch mosque.
The New Zealand authorities had previously banned the video, with the official film censor declaring it 'objectionable'. And apparently this makes even the use of still images totally illegal.
ABC News is reporting that at least six people have been charged with illegally sharing the video content with other people, but presumably this is referring to the whole video being passed on.
And again according to ABC, the meme-sharing young man has been held in jail since being arrested for his joke. He will reappear in court on July 31, when electronically monitored bail will be considered.
Meanwhile New Zealand's Prime Minister, Jacinda Ardern, will be meeting with executives from big tech, along with world leaders, in an effort to prohibit the spread of violent extremism and terrorism online altogether. This official policy calling for censorship has been tagged the Christchurch Call, but details haven't yet been made public.
Ironically this all seems to be playing into the hands of the Christchurch shooter, Brenton Tarrant. In his manifesto he specifically wanted governments and regulators to escalate censorship to the point of creating civil unrest.
A New Zealand man was jailed for 21 months yesterday for distributing the gruesome live-stream video of the Christchurch mosque attacks that killed 51 Muslim worshippers.
Christchurch District Court heard that the man distributed the raw footage to about 30 people and had another version modified to include crosshairs and a kill count, The New Zealand Herald reported.
This was in effect a hate crime against the Muslim community, Judge Stephen O'Driscoll said, adding that it was particularly cruel to share the video in the days after the attacks, when families were still waiting to hear news of their loved ones.
Starting with a little background on the authorship of the document under review, AVSecure CMO Steve Winyard told XBIZ:
The accreditation plan appears to have very strict rules and was crafted with significant input from various governmental bodies, including the DCMS (Department for Culture, Media & Sport), NCC Group plc (an expert security and audit firm), GCHQ
(U.K. Intelligence and Security Agency), ICO (Information Commissioner's Office) and of course the BBFC.
But computer security expert Alec Muffett writes:
This is the document which is being proffered to protect the facts & details of _YOUR_ online #Porn viewing. Let's read it together!
What could possibly go wrong?
This document's approach to data protection is fundamentally flawed.
The (considerably) safer approach - one easier to certificate/validate/police - would be to say everything is forbidden except for [...] upon [...]; you would then allow vendors to appeal for exceptions under review.
It makes a few passes at pretending that this is what it's doing, but with subjective holes (green) that you can drive a truck through:
What we have here is a rehash of quite a lot of reasonable physical/operational security, business continuity & personnel security management thinking -- with digital stuff almost entirely punted.
It's better than #PAS1296 , but it's still not fit for purpose.
NewsGuard is a US organisation trying to muscle in on governments' concerns about 'fake news'. It doesn't fact-check individual news stories but gives ratings to news organisations on what it considers to be indicators of 'trustworthiness'.
At the moment it is most widely known for providing browser add-ons that display a green shield when readers are browsing an 'approved' news website and a red shield when the website is disapproved.
Now the company is pushing something a little more Orwellian. It is in talks with UK internet providers such that the ISP would inject some sort of warning screen should an internet user [inadvertently] stray onto a 'wrong think' website.
The idea seems to be that users can select whether they want these intrusive warnings or not, via a similar mechanism used for the parental control of website blocking.
NewsGuard lost an awful lot of credibility in the UK when its first set of ratings singled out the Daily Mail as a 'wrong think' news source. It caused a bit of a stink and the decision was reversed, but it rather shows where the company is coming from.
Surely they are patronising the British people if they think that people want to be nagged about reading the Daily Mail. People are well aware of the biases and points of view of the news sources they read. They will not want to be nagged by those that think they know best what people should be reading.
I think it is only governments and politicians that are supposedly concerned about 'fake news' anyway. They see it as some sort of blame opportunity. It can't possibly be the politicians' own policies that are so disastrously unpopular with the people; surely it must be mischievous 'fake news' peddlers that are causing the grief.
Strengthening our approach to deliberate attempts to mislead voters
Voting is a fundamental human right and the public conversation occurring on Twitter is never more important than during elections. Any attempt to undermine the process of registering to vote or engaging in the electoral process is contrary to our company's core values.
Today, we are further expanding our enforcement capabilities in this area by creating a dedicated reporting feature within the product to allow users to more easily report this content to us. This is in addition to our existing proactive approach to
tackling malicious automation and other forms of platform manipulation on the service. We will start with the 2019 Lok Sabha elections in India and the EU elections, and then roll out to other elections globally throughout the rest of the year.
What types of content are in violation?
You may not use Twitter's services for the purpose of manipulating or interfering in elections. This includes but is not limited to:
Misleading information about how to vote or register to vote (for example, that you can vote by Tweet, text message, email, or phone call);
Misleading information about requirements for voting, including identification requirements; and
Misleading statements or information about the official, announced date or time of an election.
Video-sharing app TikTok has introduced an age gate feature for new users, which it claims will only allow those aged 13 years and above to create an account. TikTok also declared that it has removed more than six million videos that were in violation of
its community guidelines.
TikTok is said to be available in more than 20 countries, including India, and covers major Indian languages, including Hindi, Tamil, Telugu and Gujarati.
The app was banned by the Madras High Court earlier this month, chiefly on the ground that it posed a danger to children. The court said the app contained degrading culture, and that it encouraged pornography and pedophilia.
In February, TikTok was fined $5.7 million by the US Federal Trade Commission for violating the Children's Online Privacy Protection Act (COPPA) by collecting personal information of children below 13 years without parental consent.
As of April 15, the app remains available for download on Google's Play Store.
TikTok's push for user safety
A DNS server translates the text name of a website into a numerical IP address. At the moment ISPs provide the DNS servers, and they use this facility to block websites: if you want to access bannedwebsite.com, the ISP simply refuses to tell your browser the IP address of the website you are seeking. The ISPs use this capability to implement blocks on terrorist and child abuse material, copyright-infringing websites, porn websites without age verification, network-level parental control blocking and many more things envisaged in the Government's Online Harms white paper.
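To make the mechanics concrete, here is a minimal Python sketch of that lookup step; example.com stands in for any site, and the comment marks where a blocking resolver intervenes. This is an illustration, not how any particular ISP implements its blocks.

import socket

# A browser's first step: ask the configured DNS resolver (usually the
# ISP's) to translate a hostname into a numerical IP address.
try:
    ip = socket.gethostbyname("example.com")  # placeholder hostname
    print("resolved to", ip)
except socket.gaierror:
    # A resolver implementing a DNS-level block can simply refuse to answer
    # (or point the name at a "site blocked" page) for banned hostnames.
    print("no answer -- this is what a DNS-level block looks like")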
At the moment DNS requests are transmitted in the clear, so even if you choose another DNS server the ISP can see what you are up to, intercept the message and apply its own censorship rules anyway.
This is all about to change, as the internet authorities have introduced a change meaning that DNS requests can now be encrypted using the same web-standard encryption as used by https. The new protocol option is known as DNS over HTTPS, or DoH.
The address being requested cannot be monitored under several internet protocols, such as DNS over TLS and DNSCrypt, but DNS over HTTPS goes one step further in that ISPs cannot even detect that it is a DNS request at all. It appears exactly the same as a standard HTTPS request for website content. This prevents the authorities from refusing to allow DNS over HTTPS at all by blocking all such requests. If they tried, they would have to block all https websites.
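For illustration, here is a sketch of a DoH lookup using Cloudflare's public resolver and its JSON API; from the ISP's side this is just another HTTPS request to an ordinary web server. The resolver choice and hostname are examples, not a recommendation.

import json
import urllib.request

# DNS over HTTPS: the DNS question rides inside ordinary TLS-encrypted
# HTTPS, so an ISP cannot read it or tell it apart from normal web traffic.
url = "https://cloudflare-dns.com/dns-query?name=example.com&type=A"
req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})

with urllib.request.urlopen(req) as resp:
    answer = json.load(resp)

# Print each A record returned by the resolver.
for record in answer.get("Answer", []):
    print(record["name"], "->", record["data"])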
There's nothing to stop users from sticking with their ISP's DNS and submitting to all the familiar censorship policies. However, if your browser allows, you can ask it to use a non-censorial DNS server over HTTPS. There are already plenty of servers out there to choose from, but it is down to the browser to define the choice available to you. Firefox already allows you to select its own encrypted DNS server. Google is not far behind with its Chrome browser.
At the moment Firefox already allows those with a techie bent to opt for the Firefox DoH server, but Firefox recently made waves by suggesting that it would soon default to using its own server and make it a techie change to opt out and revert to ISP DNS. Perhaps this sounds a little unlikely.
The Government have got well wound up by the fear of losing censorship control over UK internet users, so no doubt will be calling in people from Firefox and Chrome to try to get them to enforce state censorship. However it may not be quite so easy.
The new protocol allows anyone to offer non-censorial (or even censorial) DoH servers. If Firefox can be persuaded to toe the government line then other browsers can step in instead.
The UK Government, broadband ISPs and the National Cyber Security Centre (NCSC) are now set to meet on 8th May 2019 to discuss Google's forthcoming implementation of encrypted DoH. It should be an interesting meeting, but I bet they'll never publish the minutes.
I rather suspect that the Government has shot itself in the foot over this with its requirements for porn users to identify themselves before being able to access porn. Suddenly it will have spurred millions of users to take an interest in censorship circumvention to avoid endangering themselves, and probably a couple of million more who will be wanting to avoid the blocks because they are too young. DNS, DoH, VPNs, Tor and the like will soon become everyday jargon.
On the 15th of March, the German Bundesrat (Federal Council) voted to amend the Criminal Code in relation to internet-based services such as The Onion Router (Tor).
The proposed law has been lambasted as being too vague, with privacy experts rightfully fearful that the law would be overapplied. The proposal, originating from the North Rhine-Westphalian Minister of Justice Peter Biesenbach, would amend and expand
criminal law and make running a Tor node or website illegal and punishable by up to three years in prison. According to Zeit.de, if passed, the expansion of the Criminal Code would be used to punish anyone who offers an internet-based service whose
access and accessibility is limited by special technical precautions, and whose purpose or activity is directed to commit or promote certain illegal acts.
What's worse is that the proposed changes are so vaguely worded that many other services that offer encryption could be seen as falling under this new law. While the proposal does seem to have been written to target Tor hidden services which are dark net
markets, the vague way that the proposal has been written makes it a very real possibility that other encrypted services such as messaging might be targeted under these new laws, as well.
Now that the motion to amend has been accepted by the Bundesrat, it will be forwarded to the Federal Government for drafting, consideration, and comment. Then, within a month and a half, the initiative will be forwarded to the German federal parliament, the Bundestag, where it will be finally voted on.
Private Internet Access and many others denounce this proposal and continue to support Tor and an open internet
Private Internet Access currently supports the Tor Project and runs a number of Tor exit nodes as a part of our commitment to online privacy. PIA believes this proposed amendment to the German Criminal Code is not just bad for Tor, which was named
specifically, but also for online privacy as a whole -- and we're not the only ones.
German criminal lawyer David Schietinger told Der Spiegel that he was concerned the law was too overreaching and could also catch an e-mail provider or the operator of a classic online platform with password protection.
The bill contains mainly rubber paragraphs with the clear goal to criminalize operators and users of anonymization services. Intentionally, the facts are kept very blurred. The intention is to create legal uncertainty and unavoidable risks of possible
criminal liability for anyone who supports the right to anonymous communication on the Internet.
It's not only China and the UK that want to identify internet users, Austria also wants to demand that forum contributors submit their ID before being able to post.
Austria's government has introduced a bill that would require larger social media websites and forums to obtain the identity of their users before they can post comments. Users will have to provide their name and address to websites, but nicknames are still allowed and the identity data will not be made public.
Punishments for non-complying websites will be fines of up to 500,000 euros, and double that for repeat offences.
It would only affect sites that have more than 100,000 registered users, bring in revenues above 500,000 euros per year, or receive press subsidies larger than 50,000 euros.
There would also be exemptions for retail sites as well as those that don't earn money from either ads or the content itself.
If passed and cleared by the EU, the law would take effect in 2020. The immediate issues noted are that some of the websites most offending the sensitivities of the government are often smaller than the trigger conditions. The law may also step on the toes of the EU in rules governing which EU state has regulatory control over websites.
Update: Identity data will be available to other users
The law on care and responsibility on the net forces media platforms with forums to store detailed data about their users in order to deliver it, in case of a possible offence, not only to police authorities but also to other users who want to legally prosecute another forum user. Looking at the law in detail, it is obvious that it contains so many problematic passages that its intended purpose is completely undermined.
According to the Minister of Media, Gernot Blümel, harmless software will deal with the personal data processing. One of the risks of such a system would be the potential for abuse by public authorities or individuals requesting a person's name and address from a platform provider on the pretext of wanting to investigate or sue them, and then using the information for entirely different purposes.
In the aftermath of the horrific mosque attack in New Zealand, internet companies were interrogated over their efforts to censor the livestream video of Brenton Tarrant's propaganda.
Some of their responses have included ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet.
Facebook, for example, describes plans for an expanded role for the Global Internet Forum to Counter Terrorism, or GIFCT. The GIFCT is an industry-led self-regulatory effort launched in 2017 by Facebook, Microsoft, Twitter, and YouTube. One of its
flagship projects is a shared database of hashes of files identified by the participating companies to be extreme and egregious terrorist content. The hash database allows participating companies (which include giants like YouTube and one-man operations
like JustPasteIt) to automatically identify when a user is trying to upload content already in the database.
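As a rough sketch of how such a database works in code: each participating platform hashes an upload and checks the digest against the shared list. The GIFCT is reported to use hashes designed for media matching; the plain SHA-256 below is a stand-in for illustration, and the stored digest is a dummy value.

import hashlib

# Shared database of hashes of files previously identified as violating
# content. The entry below is a dummy placeholder, not a real digest.
known_hashes = {
    "0" * 64,
}

def is_flagged(upload: bytes) -> bool:
    """Return True if an uploaded file's hash matches the shared list."""
    return hashlib.sha256(upload).hexdigest() in known_hashes

# Each participating company can run the same check at upload time.
print(is_flagged(b"example upload bytes"))  # False for this dummy payload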
In Facebook's post-Christchurch updates, the company discloses that it added 800 new hashes to the database, all related to the Christchurch video. It also mentions that the GIFCT is experimenting with sharing URLs systematically rather than just content
hashes -- that is, creating a centralized list of URLs that would facilitate widespread blocking of videos, accounts, and potentially entire websites or forums.
VPNCompare is reporting that internet users in Britain are responding to the upcoming porn censorship regime by investigating the option of getting a VPN, so as to work around most age verification requirements without handing over dangerous identity details.
VPNCompare says that the number of UK visitors to its website has increased by 55% since the start date of the censorship scheme was announced. The website also stated that Google searches for VPNs had tripled. Website editor Christopher Seward said:
We saw a 55 per cent increase in UK visitors alone compared to the same period the previous day. As the start date for the new regime draws closer, we can expect this number to rise even further and the number of VPN users in the UK is likely to go
through the roof.
The UK Government has completely failed to consider the fact that VPNs can be easily used to get around blocks such as these.
Whilst the immediate assumption is that porn viewers will reach for a VPN to avoid handing over dangerous identity information, there may be another reason to take out a VPN: a lack of choice of appropriate options for age validation.
Three companies run the six biggest adult websites. Mindgeek owns Pornhub, RedTube and Youporn; then there is Xhamster; and finally Xvideos and xnxx are connected.
Now Mindgeek has announced that it will partner with Portes Card for age verification, which has options for identity verification, providing an age-verified mobile phone number, or else buying a voucher in a shop and showing age ID to the shopkeeper (who hopefully does not copy or record it).
Meanwhile Xhamster has announced that it is partnering with 1Account, which accepts a verified mobile phone, credit card, debit card, or UK driving licence. It does not seem to have an option for anonymous verification beyond a phone that has been age-verified without having to show ID.
Perhaps most interesting is that both of these age verifiers are smartphone-based apps. Perhaps the only option for people without a phone is to get a VPN. I also spotted that most age verification providers that I have looked at seem to be interested only in UK cards, driving licences or passports. I'd have thought there may be legal issues in not accepting EU equivalents. But foreigners may also find themselves unable to age verify and so need a VPN.
And of course, given that there is no age verification option common to the major porn websites, it may just turn out to be an awful lot simpler to get a VPN.
The BBFC (on its Age Verification website)...err...no!...:
An assessment and accreditation under the AVC is not a guarantee that the age-verification provider and its solution (including its third party companies) comply with the relevant legislation and standards, or that all data is safe from malicious or criminal attack.
Accordingly the BBFC shall not be responsible for any losses, damages, liabilities or claims of whatever nature, direct or indirect, suffered by any age-verification provider, pornography services or consumers/users of age-verification providers' services or pornography services, or any other person, as a result of their reliance on the fact that an age-verification provider has been assessed under the scheme and has obtained an Age-verification Certificate, or otherwise in connection with the scheme.
Facebook has banned far-right groups including the British National Party (BNP) and the English Defence League (EDL) from having any presence on the social network. The banned groups, which also include Knights Templar International, Britain First and the National Front, as well as key members of their leadership, have been removed from both Facebook and Instagram.
Facebook said it uses an extensive process to determine which people or groups it designates as dangerous, using signals such as whether they have used hate speech, and called for or directly carried out acts of violence against others based on factors
such as race, ethnicity or national origin.
This week we have seen David Lammy doubling down on his ludicrous comparison of the European Research Group with the Nazi party, and Chris Key in the Independent calling for UKIP and the newly formed Brexit Party to be banned from television debates. It is clear that neither Key nor Lammy has a secure understanding of what far right actually means and, quite apart from the distasteful nature of such political opportunism, their strategy only serves to generate the kind of resentment upon which the far right thrives.
Offsite comment: Facebook is calling for Centralized Censorship. That Should Scare You
If we're going to have coherent discussions about the future of our information environment, we -- the public, policymakers, the media, website operators -- need to understand the technical realities and policy dynamics that shaped the response to the Christchurch massacre. But some of these responses have also included ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet.
The European Parliament has approved a draft version of new EU internet censorship law targeting terrorist content.
In particular the MEPs approved the imposition of a one-hour deadline to remove content marked for censorship by various national organisations. However the MEPs did not approve a key section of the law requiring internet companies to pre-process and censor terrorist content prior to upload.
A European Commission official told the BBC changes made to the text by parliament made the law ineffective. The Commission will now try to restore the pre-censorship requirement with the new parliament when it is elected.
The law would affect social media platforms including Facebook, Twitter and YouTube, which could face fines of up to 4% of their annual global turnover.
What does the law say?
In amendments, the European Parliament said websites would not be forced to monitor the information they transmit or store, nor have to actively seek facts indicating illegal activity. It said the competent authority should give the website
information on the procedures and deadlines 12 hours before the agreed one-hour deadline the first time an order is issued.
In February, German MEP Julia Reda of the European Pirate Party said the legislation risked the surrender of our fundamental freedoms [and] undermines our liberal democracy. Ms Reda welcomed the changes brought by the European Parliament but said the
one-hour deadline was unworkable for platforms run by individual or small providers.
As we have been explaining across media, we believe that by using default settings and vague privacy policies which allow Amazon employees to listen in on the recordings of users' interactions with their devices, Amazon risks deliberately deceiving its customers.
Amazon has so far been dismissive, arguing that people had the options to opt out from the sharing of their recordings -- although it is unclear how their customers could have done so if they were not aware this was going on in the first place.
Bloomberg reported workers listening to up to a thousand recordings per day, and sharing recordings with one another that they find to be "amusing".
As a result, today we wrote to Jeff Bezos to let him know we think Amazon needs to step up and do a lot better to protect the privacy of their customers.
If you use an Amazon Echo device and are concerned about this, read our instructions on how to opt out
Dear Mr. Bezos,
We are writing to call for your urgent action regarding last week's report in Bloomberg, which revealed that Amazon has been employing thousands of workers to listen in on the recordings of Amazon Echo users.
Privacy International (PI) is a registered charity based in London that works at the intersection of modern technologies and rights. Privacy International challenges overreaching state and corporate surveillance, so that people everywhere can have
greater security and freedom through greater personal privacy.
The Bloomberg investigation asserts that Amazon employs thousands of staff around the world to listen to voice recordings captured by the Amazon Alexa. Among other examples, the report states that your employees use internal chat rooms to share files
when they "come across an amusing recording", and that they share "distressing" recordings -- including one of a sexual assault.
Customers were not made aware that recordings of their interactions with the Amazon Echo could, by default, be listened to by your employees.
Millions of customers enjoy your product and they deserve better from you. As such, we ask whether you will:
Notify all users whose recordings have been accessed, and describe to them which recordings;
Notify all users whenever their recordings are accessed in the future, and describe to them which recordings;
Modify the settings of the Amazon Echo so that "Help Develop New Features" and "Use Messages to Improve Transcriptions" are turned off by default;
In your response to the Bloomberg investigation, you state you take the privacy of your customers seriously. It is now time for you to step up and walk the walk. We look forward to engaging with you further on this.
Reddit is a social media website that boasts 234 million members and approximately 8 billion page views per month.
Reddit's system is naturally built to highlight online influencers; all posts are automatically submitted to a voting process: The most up-voted messages receive the most visibility.
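For a sense of how up-votes translate into visibility, here is the "hot" ranking function roughly as it appeared in Reddit's formerly open-sourced codebase; the production system may differ today, so treat this as a sketch.

from math import log10

# Reddit's old open-sourced "hot" rank: net votes count logarithmically,
# while a time term steadily favours newer posts.
def hot(ups: int, downs: int, posted_epoch: float) -> float:
    s = ups - downs
    order = log10(max(abs(s), 1))        # 10 vs 100 vs 1000 votes: +1 each
    sign = 1 if s > 0 else (-1 if s < 0 else 0)
    seconds = posted_epoch - 1134028003  # offset from Reddit's 2005 epoch
    return round(sign * order + seconds / 45000, 7)

# A slightly newer post can outrank a much more up-voted older one:
print(hot(100, 2, 1_556_000_000) > hot(1000, 20, 1_555_900_000))  # True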
The site has a very passionate following and advertising on Reddit can be very successful. Companies are able to promote top posts to a very targeted audience, directly on its front page.
On Tuesday, Reddit posted an update about its Not Suitable for Work Advertising Policy. From now on, the platform doesn't allow any adult-oriented ads or targeting. Promoted posts pushing adult products or services are no longer permissible, and NSFW subreddits will no longer be eligible for ads or targeting.
The new policy specifically targets pornographic and sexually explicit content, as well as adult sexual recreational content, products and services.
The UK will become the first country in the world to bring in age-verification for online pornography when the measures come into force on 15 July 2019.
It means that commercial providers of online pornography will be required by law to carry out robust age-verification checks on users, to ensure that they are 18 or over.
Websites that fail to implement age-verification technology face having payment services withdrawn or being blocked for UK users.
The British Board of Film Classification (BBFC) will be responsible for ensuring compliance with the new laws. They have confirmed that they will begin enforcement on 15 July, following an implementation period to allow websites time to comply with the new rules.
Minister for Digital Margot James said that she wanted the UK to be the most censored place in the world to be online:
Adult content is currently far too easy for children to access online. The introduction of mandatory age-verification is a world-first, and we've taken the time to balance privacy concerns with the need to protect children from inappropriate content. We
want the UK to be the safest place in the world to be online, and these new laws will help us achieve this.
Government has listened carefully to privacy concerns and is clear that age-verification arrangements should only be concerned with verifying age, not identity. In addition to the requirement for all age-verification providers to comply with General Data
Protection Regulation (GDPR) standards, the BBFC have created a voluntary certification scheme, the Age-verification Certificate (AVC), which will assess the data security standards of AV providers. The AVC has been developed in cooperation with
industry, with input from government.
Certified age-verification solutions which offer these robust data protection conditions will be certified following an independent assessment and will carry the BBFC's new green 'AV' symbol. Details will also be published on the BBFC's age-verification
website, ageverificationregulator.com, so consumers can make an informed choice between age-verification providers.
BBFC Chief Executive David Austin said:
The introduction of age-verification to restrict access to commercial pornographic websites to adults is a ground breaking child protection measure. Age-verification will help prevent children from accessing pornographic content online and means the UK
is leading the way in internet safety.
On entry into force, consumers will be able to identify that an age-verification provider has met rigorous security and data checks if they carry the BBFC's new green 'AV' symbol.
The change in law is part of the Government's commitment to making the UK the safest place in the world to be online, especially for children. It follows last week's publication of the Online Harms White Paper which set out clear responsibilities for
tech companies to keep UK citizens safe online, how these responsibilities should be met and what would happen if they are not.
Twitter co-founder Jack Dorsey has said again there is much work to do to improve Twitter and cut down on the amount of abuse and misinformation on the platform. He said the firm might demote likes and follows, adding that in hindsight he would not have
designed the platform to highlight these.
Speaking at the TED technology conference he said that Twitter currently incentivised people to post outrage. Instead he said it should invite people to unite around topics and communities. Rather than focus on following individual accounts, users could
be encouraged to follow hashtags, trends and communities.
Doing so would require a systematic change that represented a huge shift for Twitter.
One of the choices we made was to make the number of people that follow you big and bold. If I started Twitter now I would not emphasise follows and I would not create likes. We have to look at how we display follows and likes, he added.
The EU Council of Ministers has approved the Copyright Directive, which includes the link tax and censorship machines. The legislation was voted through by a majority of EU ministers despite noble opposition from Italy, Luxembourg, Netherlands, Poland,
Finland, and Sweden.
As explained by Julia Reda MEP, a majority of 55% of Member States, representing 65% of the population, was required to adopt the legislation. That was easily achieved with 71.26% in favor, so the Copyright Directive will now pass into law.
Several countries voted against adoption, including Italy, Luxembourg, Netherlands, Poland, Finland, and Sweden. Belgium, Estonia, and Slovenia abstained.
But in the final picture that just wasn't enough: with both Germany and the UK voting in favor, the Copyright Directive is now adopted.
EU member states will now have two years to implement the law, which requires platforms like YouTube to sign licensing agreements with creators in order to use their content. If that is not possible, they will have to ensure that infringing content
uploaded by users is taken down and not re-uploaded to their services.
The entertainment lobby will not stop here, over the next two years, they will push for national implementations that ignore users' fundamental rights, comments Julia Reda:
It will be more important than ever for civil society to keep up the pressure in the Member States!
This is the biggest censorship event of the year. It is going to destroy the livelihoods of many. It is framed as if it were targeted at Facebook and the like, to sort out their abuse of user data, particularly for kids.
However the kicker is that the regulations will equally apply to all UK-accessed websites that earn at least some money and process user data in some way or other. Even small websites will then be required to default to treating all their readers as children, and only allow more meaningful interaction with them if they verify themselves as adults. The default kids-only mode bans likes, comments, suggestions, targeted advertising etc, even for non-adult content.
Furthermore the ICO expects websites to formally comply with the censorship rules using market researchers, lawyers, data protection officers, expert consultants, risk assessors and all the sort of people that cost a grand a day.
Of course only the biggest players will be able to afford the required level of red tape and instead of hitting back at Facebook, Google, Amazon and co for misusing data, they will further add to their monopoly position as they will be the only companies
big enough to jump over the government's child protection hurdles.
Another dark day for British internet users and businesses.
The ICO write in a press release:
Today we're setting out the standards expected of those responsible for designing, developing or providing online services likely to be accessed by children, when they process their personal data.
Parents worry about a lot of things. Are their children eating too much sugar, getting enough exercise or doing well at school? Are they happy?
In this digital age, they also worry about whether their children are protected online. You can log on to any news story, any day to see just how children are being affected by what they can access from the tiny computers in their pockets.
Last week the Government published its white paper covering online harms.
Its proposals reflect people's growing mistrust of social media and online services. While we can all benefit from these services, we are also increasingly questioning how much control we have over what we see and how our information is used.
There has to be a balancing act: protecting people online while embracing the opportunities that digital innovation brings.
And when it comes to children, that's more important than ever. In an age when children learn how to use a tablet before they can ride a bike, making sure they have the freedom to play, learn and explore in the digital world is of paramount importance.
The answer is not to protect children from the digital world, but to protect them within it.
When finalised, it will be the first of its kind and set an international benchmark.
It will leave online service providers in no doubt about what is expected of them when it comes to looking after children's personal data. It will help create an open, transparent and protected place for children when they are online.
Organisations should follow the code and demonstrate that their services use children's data fairly and in compliance with data protection law. Those that don't could face enforcement action, including a fine or an order to stop processing data.
Introduced by the Data Protection Act 2018, the code sets out 16 standards of age appropriate design for online services like apps, connected toys, social media platforms, online games, educational websites and streaming services, when they process
children's personal data. It's not restricted to services specifically directed at children.
The code says that the best interests of the child should be a primary consideration when designing and developing online services. It says that privacy must be built in and not bolted on.
Settings must be "high privacy" by default (unless there's a compelling reason not to); only the minimum amount of personal data should be collected and retained; children's data should not usually be shared; geolocation services should be
switched off by default. Nudge techniques should not be used to encourage children to provide unnecessary personal data, weaken or turn off their privacy settings or keep on using the service. It also addresses issues of parental control and profiling.
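As a purely hypothetical illustration of what those defaults might look like in practice, consider a settings object for a service in scope; every field name here is invented rather than taken from the code itself.

# Hypothetical high-privacy defaults reflecting the draft code's standards.
# All field names below are invented for illustration.
DEFAULT_CHILD_SETTINGS = {
    "profile_visibility": "private",  # "high privacy" unless a compelling reason
    "geolocation_enabled": False,     # geolocation switched off by default
    "data_sharing": False,            # children's data not usually shared
    "personalised_ads": False,        # minimal collection, no profiling
    "nudge_prompts": False,           # no nudges to weaken privacy settings
}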
The code is out for consultation until 31 May. We will draft a final version to be laid before Parliament and we expect it to come into effect before the end of the year.
Our Code of Practice is a significant step, but it's just part of the solution to online harms. We see our work as complementary to the current initiatives on online harms, and look forward to participating in discussions regarding the Government's white paper.
The proposals are now open for public consultation:
The Information Commissioner is seeking feedback on her draft code of practice
Age appropriate design -- a code of practice for online services likely to be accessed by children (the code).
The code will provide guidance on the design standards that the Commissioner will expect providers of online 'Information Society Services' (ISS), which process personal data and are likely to be accessed by children, to meet.
The code is now out for public consultation and will remain open until 31 May 2019. The Information Commissioner welcomes feedback on the specific questions set out below.
You can respond to this consultation via our online survey, or you can download the document below and email to email@example.com.
Alternatively, print off the document and post to:
Age appropriate design code consultation
Policy Engagement Department
Information Commissioner's Office
Today the Information Commissioner's Office announced a consultation on a draft Code of Practice to help protect children online.
The code forbids the creation of profiles on children, and bans data sharing and nudges of children. Importantly, the code also requires everyone be treated like a child unless they undertake robust age-verification.
The ASI believes that this code will entangle start-ups in red tape, and inevitably end up with everyone being treated like children, or face undermining user privacy by requiring the collection of credit card details or passports for every user.
Matthew Lesh, Head of Research at free market think tank the Adam Smith Institute, says:
This is an unelected quango introducing draconian limitations on the internet with the threat of massive fines.
This code requires all of us to be treated like children.
An internet-wide age verification scheme, as required by the code, would seriously undermine user privacy. It would require the likes of Facebook, Google and thousands of other sites to repeatedly collect credit card and passport details from millions of
users. This data collection risks our personal information and online habits being tracked, hacked and exploited.
There are many potential unintended consequences. The media could be forced to censor swathes of stories not appropriate for young people. Websites that cannot afford to develop 'children-friendly' services could just block children. It could force
start-ups to move to other countries that don't have such stringent laws.
This plan would seriously undermine the business model of online news and many other free services by making it difficult to target advertising to viewer interests. This would be both worse for users, who are less likely to get relevant advertisements,
and journalism, which is increasingly dependent on the revenues from targeted online advertising.
The Government should take a step back. It is really up to parents to keep their children safe online.
Offsite Comment: Web shake-up could force ALL websites to treat us like children
The information watchdog has been accused of infantilising web users, in a draconian new code designed to make the internet safer for children.
Web firms will be forced to introduce strict new age checks on their websites -- or treat all their users as if they are children, under proposals published by the Information Commissioner's Office today.
The rules are so stringent that critics fear people could end up being forced to demonstrate their age for virtually every website they visit, or have the services that they can access limited as if they are under 18.
The ink has not yet dried on two enormous packages of internet censorship law, and yet the Government is already planning the next.
The Government is considering an overhaul of censorship rules for Netflix and Amazon Prime Video. The Daily Telegraph understands that the Department for Censorship, Media and Sport is looking at whether censorship rules for on-demand video streaming sites should be extended to match those suffered by traditional broadcasters.
Censorship Secretary Jeremy Wright had signalled this could be a future focus for DCMS last month, saying rules for Netflix and Amazon Prime Video were not as robust as they were for other broadcasters.
Public service broadcasters currently have set requirements to commission content from within the UK. The BBC, for example, must ensure that UK-made shows make up a substantial proportion of its content, and around 50% of that content must come from
outside the M25 area.
No such rules over specific UK-made content currently apply to Netflix and Amazon Prime Video, though. The European Union is currently finalising the details of rules for the bloc, which will require streaming companies to ensure at least 30% of their libraries are dedicated to content made in EU member states.
Age verification for online gambling is set to evolve into full identity verification from 7th May 2019. The other big change is that all verification will have to be completed prior to any bets being placed. Previously age verification was required only
when people tried to withdraw their winnings. There were many complaints that gambling companies would then inflict onerous validation requirements to try and avoid paying out.
I would hazard a guess that the new implementation will quash an awful lot of the TV and media adverts that try to entice new members with a small joiner's bonus. Now it will be a lot more hassle to join, and maybe there will be less interest in trying out new websites just to get a free introductory bet.
Wikileaks was a whistle-blowing website that shone a light on how the governments of the world have been running our lives. And it was not a pretty sight.
Julian Assange, who ran Wikileaks, is surely a freedom of speech hero; however he broke many serious state secret laws and has been evading the authorities via the diplomatic protection afforded to him by the Ecuadorean embassy in London. This has now been rescinded and Assange has been duly arrested. He is now in serious trouble and will surely end up being sent to the USA to answer the accusations.
It is hard to see that the prosecuting authorities will be convinced by ethics or morality of the ends justifying the means.
The legislators behind the Digital Economy Act couldn't be bothered to include any provisions for websites and age verifiers to keep the identity and browsing history of porn users safe. It has now started to dawn on the authorities that this was a mistake. They are currently implementing a voluntary kitemark scheme to try and assure users that porn websites' and age verifiers' claims of keeping data safe can be borne out.
It is hardly surprising that significant numbers of people are likely to be interested in avoiding having to register their identity details before being able to access porn.
It seems obvious that information about VPNs and Tor will therefore be readily circulated amongst any online community with an interest in keeping safe. But perhaps it is a little bit of a shock to see it in such large letters in a mainstream magazine on the shelves of supermarkets and newsagents.
And perhaps another thought is that once the BBFC starts requiring ISPs to block non-compliant websites, circumvention will be the only way to see your blocked favourite websites. So people stupidly signing up to age verification will have less access to porn and a worse service than those that circumvent it.
Critics of the government's flagship internet regulation policy are warning it could lead to a North Korean-style censorship regime, where regulators decide which websites Britons are allowed to visit, because of how broad the proposals are.
Index on Censorship has raised strong concerns about the government's focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy Green Paper in 2017. In October 2018, Index published a joint
statement with Global Partners Digital and Open Rights Group noting that any proposals that regulate content are likely to have a significant impact on the enjoyment and exercise of human rights online, particularly freedom of expression.
We have also met with officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns.
With the publication of the Online Harms White Paper , we would like to reiterate our earlier points.
While we recognise the government's desire to tackle unlawful content online, the proposals mooted in the white paper -- including a new duty of care on social media platforms, a regulatory body, and even the fining and banning of social media platforms as a sanction -- pose serious risks to freedom of expression online.
These risks could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10 of the
European Convention on Human Rights, amongst other international treaties.
Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and opinions. The scope of the right to freedom of expression includes speech which may be offensive, shocking or disturbing. The proposed responses for tackling online safety may lead to disproportionate amounts of legal speech being curtailed, undermining the right to freedom of expression.
In particular, we raise the following concerns related to the white paper:
Lack of evidence base
The wide range of different harms which the government is seeking to tackle in this policy process require different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm and the measures'
likely effectiveness. The evidence which formed the base of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures should be supported by clear and unambiguous evidence of their need and likely effectiveness.
Duty of care concerns/ problems with 'harm' definition
Index is concerned at the use of a duty of care regulatory approach. Although social media has often been compared to the public square, the duty of care model is not an exact fit because this would introduce regulation -- and restriction -- of speech between individuals based on criteria that are far broader than current law. A failure to accurately define "harmful" content risks incorporating legal speech, including political expression, expressions of religious views, expressions of sexuality and gender, and expression advocating on behalf of minority groups.
Risks in linking liability/sanctions to platforms over third party content
While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance or incentives to
use automated content moderation processes only heighten this risk, as has been evidenced by the approach taken in Germany via its Network Enforcement Act (or NetzDG), where there is evidence of the over-removal of lawful content.
Lack of sufficient protections for freedom of expression.
The obligation to protect users' rights online that is included in the white paper gives insufficient weight to freedom of expression. A much clearer obligation to protect freedom of expression should guide development of future regulation.
In recognition of the UK's commitment to the multistakeholder model of internet governance, we hope all relevant stakeholders, including civil society experts on digital rights and freedom of expression, will be fully engaged throughout the development
of the Online Harms bill.
PI welcomes the UK government's commitment to investigating and holding companies to account. When it comes to regulating the internet, however, we must move with care. Failure to do so will introduce, rather than reduce, "online harms". A
12-week consultation on the proposals has also been launched today. PI plans to file a submission to the consultation as it relates to our work. Given the breadth of the proposals, PI calls on others to respond to the consultation as well.
Here are our initial suggestions:
proceed with care: proposals of regulation of content on digital media platforms should be very carefully evaluated, given the high risks of negative impacts on expression, privacy and other human rights. This is a very complex challenge and we support
the need for broad consultation before any legislation is put forward in this area.
do not lose sight of how data exploitation facilitates the harms identified in the report and ensure any new regulator works closely with others working to tackle these issues.
assess carefully the delegation of sole responsibility to companies as adjudicators of content. This would empower corporate judgment over content, which would have implications for human rights, particularly freedom of expression and privacy.
require that judicial or other independent authorities, rather than government agencies, are the final arbiters of decisions regarding what is posted online and enforce such decisions in a manner that is consistent with human rights norms.
assess the privacy implications of any demand for "proactive" monitoring of content in digital media platforms.
ensure that any requirement or expectation of deploying automated decision making/AI is in full compliance with existing human rights and data protection standards (which, for example, prohibit, with limited exceptions, relying on solely automated
decisions, including profiling, when they significantly affect individuals).
ensure that company transparency reports include information related to how the content was targeted at users.
require companies to provide efficient reporting tools in multiple languages, to report on action taken with regard to content posted online. Reporting tools should be accessible, user-friendly, and easy to find. There should be full transparency
regarding the complaint and redress mechanisms available and opportunities for civil society to take action.
UK Now Proposes Ridiculous Plan To Fine Internet Companies For Vaguely Defined Harmful Content
Last week Australia rushed through a ridiculous bill to fine internet companies if they happen to host any abhorrent content. It appears the UK took one look at that nonsense and decided it wanted some too. On Monday it released a white paper calling for
massive fines for internet companies for allowing any sort of online harms. To call the plan nonsense is being way too harsh to nonsense
The plan would result in massive, widespread, totally unnecessary censorship solely for the sake of pretending to do something about the fact that some people sometimes do not so nice things online. And it will place all of the blame on the internet
companies for the (vaguely defined) not so nice things that those companies' users might do online.
We agree with your characterisation of the online harms white paper as a flawed attempt to deal with serious problems (Regulating the internet demands clear thought about hard problems, Editorial, 9 April). However, we would draw your attention to
several fundamental problems with the proposal which could be disastrous if it proceeds in its current form.
Firstly, the white paper proposes to regulate literally the entire internet, and censor anything non-compliant. This extends to blogs, file services, hosting platforms, cloud computing; nothing is out of scope.
Secondly, there are a number of undefined harms with no sense of scope or evidence thresholds to establish a need for action. The lawful speech of millions of people would be monitored, regulated and censored.
The result is an approach that would make China's state censors proud. It would be very likely to face legal challenge. It would give the UK the widest and most prolific internet censorship in an apparently functional democracy. A fundamental rethink is required.
Antonia Byatt Director, English PEN,
Silkie Carlo Big Brother Watch
Thomas Hughes Executive director, Article 19
Jim Killock Executive director, Open Rights Group
Joy Hyvarinen Head of advocacy, Index on Censorship
Comment: The DCMS Online Harms Strategy must design in fundamental rights
Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain
online content and make Facebook in particular liable for the uglier and darker side of its user-generated material.
DCMS talks a lot about the 'harm' that social media causes. But its proposals fail to explain how harm to free expression impacts would be avoided.
On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play are not
explored in any detail at all.
In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about what
content is pushed or downgraded are all geared towards ousting illegal activity and creating open and welcoming shared spaces. DCMS hasn't in the White Paper elaborated on what its proposed duty would entail. If it's drawn narrowly so that it only bites
when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it's drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.
If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms can't always know in advance the real-world harm that online content might cause, nor can they accurately predict
what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions
given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.
DCMS's policy is underpinned by societally-positive intentions, but in its drive to make the internet "safe", the government seems not to recognise that ultimately its proposals don't regulate social media companies, they regulate social media
users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.
Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.
The duty of care seems to be broadly about whether systemic interventions reduce overall "risk". But must the risk be always to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society as a whole? What
evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.
DCMS's approach appears to be that it will be up to the regulator to answer these questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing
government to distance itself from taking full responsibility for the fine detail of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the
government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will
affect UK citizens' free speech, both in the immediate future and for years to come.
How the government decides to legislate and regulate in this instance will set a global norm.
The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the outcome of the duty of care is to change the way social media platforms work that will apply worldwide. But to be a global leader, DCMS needs
to stop basing policy on isolated issues and anecdotes and engage with a broader conversation around how we as a society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerges from this process as a blueprint for more widespread internet censorship.
The House of Lords report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces
offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its
support for the fundamental right to freedom of expression - and in a way that goes beyond mere expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.
The White Paper expresses a clear desire for tech companies to "design in safety". As the process of consultation now begins, we call on DCMS to "design in fundamental rights". Freedom of expression is itself a framework, and must not
be lightly glossed over. We welcome the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.
Totalitarian-style new online code that could block websites and fine them £20 million for harmful content will not limit press freedom, Culture Secretary promises
Government proposals have sparked fears that they could backfire and turn Britain into the first Western nation to adopt the kind of censorship usually associated with totalitarian regimes.
Former culture secretary John Whittingdale drew parallels with China, Russia and North Korea. Matthew Lesh of the Adam Smith Institute, a free market think-tank, branded the white paper a historic attack on freedom of speech.
[However] draconian laws designed to tame the web giants will not limit press freedom, the Culture Secretary said yesterday.
In a letter to the Society of Editors, Jeremy Wright vowed that journalistic or editorial content would not be affected by the proposals.
And he reassured free speech advocates by saying there would be safeguards to protect the role of the Press.
But as for the safeguarding the free speech rights of ordinary British internet users, he more or less told them they could fuck off!
The European Parliament is set to vote on legislation that would require websites that host user-generated content to take down material reported as terrorist content within one hour. We have some examples of current notices sent to the Internet Archive
that we think illustrate very well why this requirement would be harmful to the free sharing of information and freedom of speech that the European Union pledges to safeguard.
In the past week, the Internet Archive has received a series of email notices from Europol's European Union Internet Referral Unit (EU IRU) falsely identifying hundreds of URLs on archive.org as terrorist propaganda. At least one of these mistaken URLs
was also identified as terrorist content in a separate takedown notice from the French government's L'Office Central de Lutte contre la Criminalité liée aux Technologies de l'Information et de la Communication (OCLCTIC).
The Internet Archive has a few staff members who process takedown notices from law enforcement, and they work in the Pacific time zone. Most of the falsely identified URLs mentioned here (including the report from the French government) were sent to us in the middle of the night, between midnight and 3am Pacific, and all of the reports arrived outside of the Internet Archive's business hours.
The one-hour requirement essentially means that we would need to take reported URLs down automatically and do our best to review them after the fact.
It would be bad enough if the mistaken URLs in these examples were for a set of relatively obscure items on our site, but the EU IRU's lists include some of the most visited pages on archive.org and materials that obviously have high scholarly and
research value. See a summary below with specific examples.
Zippyshare is a long-running file locker and file-sharing platform, particularly well known for the distribution of porn.
Last month UK users noticed that they had been blocked from accessing the website, which can now only be reached via a VPN.
Zippyshare itself has made no comment about the block, but TorrentFreak has investigated the censorship and determined that the block is self-imposed and not the result of action by UK courts or ISPs.
Alan wonders if this is a premature reaction to the Great British Firewall, noting it's quite a popular platform for free porn.
Of course it poses the interesting question: if websites generally decide to address UK porn censorship with self-imposed blocks, then keen users will simply have to get themselves VPNs, and a willingness to sign up for age verification won't help. Perhaps VPNs will become all but mandatory for British porn users, and age verification will end up an unused technology.
Facebook changes its terms and clarifies its use of data for consumers following discussions with the European Commission and consumer authorities
The European Commission and consumer protection authorities have welcomed Facebook's updated terms and services. They now clearly explain how the company uses its users' data to develop profiling activities and target advertising to finance the company.
The new terms detail what services Facebook sells to third parties based on the use of its users' data, how consumers can close their accounts and for what reasons accounts can be disabled. These developments follow exchanges aimed at obtaining full disclosure of Facebook's business model, in comprehensive and plain language, to users.
Vera Jourova, Commissioner for Justice, Consumers and Gender Equality, welcomed the agreement:
A company that wants to restore consumer trust should not hide behind complicated, legalistic jargon on how it is making billions on people's data. Now, users will clearly understand that their data is used by the social network to sell targeted ads. By joining forces, the consumer authorities and the European Commission stand up for the rights
of EU consumers.
In the aftermath of the Cambridge Analytica scandal and as a follow-up to the investigation on social media platforms in 2018 , the European Commission and national consumer protection authorities requested Facebook to clearly inform consumers how the
social network gets financed and what revenues are derived from the use of consumer data. They also requested the platform to bring the rest of its terms of service in line with EU Consumer Law.
As a result, Facebook will introduce new text in its Terms and Services explaining that it does not charge users for its services in return for users' agreement to share their data and to be exposed to commercial advertisements. Facebook's terms will now
clearly explain that their business model relies on selling targeted advertising services to traders by using the data from the profiles of its users.
In addition, following the enforcement action, Facebook has also amended:
its policy on limitation of liability and now acknowledges its responsibility in case of negligence, for instance in case data has been mishandled by third parties;
its power to unilaterally change terms and conditions by limiting it to cases where the changes are reasonable also taking into account the interest of the consumer;
the rules concerning the temporary retention of content which has been deleted by consumers. Such content can only be retained in specific cases, for instance to comply with an enforcement request by an authority, and for a maximum of 90 days in the case of technical reasons;
the language clarifying the right of users to appeal when their content has been removed.
Facebook will complete the implementation of all commitments at the latest by the end of June 2019.
In the first online safety laws of their kind, social media companies and tech firms will be legally required to protect their users and face tough penalties if they do not comply.
As part of the Online Harms White Paper, a joint proposal from the Department for Digital, Culture, Media and Sport and Home Office, a new independent regulator will be introduced to ensure companies meet their responsibilities.
This will include a mandatory 'duty of care', which will require companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services. The regulator will have effective enforcement tools, and we are
consulting on powers to issue substantial fines, block access to sites and potentially to impose liability on individual members of senior management.
A range of harms will be tackled as part of the Online Harms White Paper, including inciting violence and violent content, encouraging suicide, disinformation, cyber bullying and children accessing inappropriate material.
There will be stringent requirements for companies to take even tougher action to ensure they tackle terrorist and child sexual exploitation and abuse content.
The new proposed laws will apply to any company that allows users to share or discover user generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms, file
hosting sites, public discussion forums, messaging services, and search engines.
A regulator will be appointed to enforce the new framework. The Government is now consulting on whether the regulator should be a new or existing body. The regulator will be funded by industry in the medium term, and the Government is exploring options
such as an industry levy to put it on a sustainable footing.
A 12-week consultation on the proposals has also been launched today. Once this concludes we will then set out the action we will take in developing our final proposals for legislation.
Tough new measures set out in the White Paper include:
A new statutory 'duty of care' to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.
Further stringent requirements on tech companies to ensure child abuse and terrorist content is not disseminated online.
Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address this.
Making companies respond to users' complaints, and act to address them quickly.
Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.
A new "Safety by Design" framework to help companies incorporate online safety features in new apps and platforms from the start.
A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing, grooming and extremism.
The UK remains committed to a free, open and secure Internet. The regulator will have a legal duty to pay due regard to innovation, and to protect users' rights online, being particularly mindful to not infringe privacy and freedom of expression.
Recognising that the Internet can be a tremendous force for good, and that technology will be an integral part of any solution, the new plans have been designed to promote a culture of continuous improvement among companies. The new regime will ensure
that online firms are incentivised to develop and share new technological solutions, like Google's "Family Link" and Apple's Screen Time app, rather than just complying with minimum requirements. Government has balanced the clear need for tough
regulation with its ambition for the UK to be the best place in the world to start and grow a digital business, and the new regulatory framework will provide strong protection for our citizens while driving innovation by not placing an impossible burden
on smaller companies.
Bird Box is a 2018 USA Sci-Fi horror thriller by Susanne Bier.
Starring Rosa Salazar, Sandra Bullock and Sarah Paulson.
In the wake of an unknown global terror, a mother must find the strength to flee with her children down a treacherous river in search of safety. Due to unseen deadly forces, the perilous journey must be made blindly. Directed by Academy Award winner
Susanne Bier, Bird Box is a thriller starring Academy Award winner Sandra Bullock, John Malkovich, Sarah Paulson, and Trevante Rhodes.
Netflix announced in mid-March 2019 that the film would be cut for VoD going forwards.
The cuts follow months of social media pressure claiming that stock footage of a 2013 crash in the Quebec town of Lac-Megantic was exploiting the victims of the tragedy. The crash involved a train carrying crude oil that came off the tracks and exploded into a ball of fire, killing 47 people.
Netflix said that it will replace the footage with fictional scenes from a former TV series in the U.S. The company said it is sorry for any pain caused to the Lac-Megantic community.
In the UK the film was passed 15 uncut by the BBFC for strong violence, threat, language, suicide scenes for UK cinema and VoD release prior to the announcement of cuts.
A group of some of the best-known internet pioneers have written an open letter explaining how the EU's censorship law, nominally targeting terrorism, will chill the non-terrorist internet while advantaging US internet giants over smaller European businesses. The group writes:
EU Terrorist Content regulation will damage the internet in Europe without meaningfully
contributing to the fight against terrorism
Dear MEP Dalton,
Dear MEP Ward,
Dear MEP Reda,
As a group of pioneers, technologists, and innovators who have helped create and sustain today's internet, we write to you to voice our concern at proposals under consideration in the EU Terrorist Content Regulation.
Tackling terrorism and the criminal actors who perpetrate it is a necessary public policy objective, and the internet plays an important role in achieving this end. The tragic and harrowing incident in Christchurch, New Zealand
earlier this month has underscored the continued threat terrorism poses to our fundamental freedoms, and the need to confront it in all its forms. However, the fight against terrorism does not preclude lawmakers from their responsibility to implement
evidence-based law that is proportionate, justified, and supportive of its stated aim.
The EU Terrorist Content regulation, if adopted as proposed, will restrict the basic rights of European internet users and undercut innovation on the internet without meaningfully contributing to the fight against terrorism. We are
particularly concerned by the following aspects of the proposed Regulation:
Unclear definition of terrorist content: The definition of 'terrorist content' is extremely broad, and includes no clear exemption for educational, journalistic, or research purposes. This creates the risk of over-removal of lawful
and important public interest speech.
Lack of proportionality: The regulation applies equally to all internet hosting services, bringing thousands of services into scope that have no relevance to terrorist content. By not taking any account of the different types and
sizes of online services, nor their exposure to such illegal content, the new rules would be far out of proportion with the stated aim of the proposal.
Unworkable takedown timeframes: The obligation to remove content within a mere 60 minutes of notification will likely lead to significant over-removal of lawful content and place a catastrophic compliance burden on micro, small, and
medium-sized companies offering services within Europe. At the same time, it will greatly favour large multinational platforms that have already developed highly sophisticated content moderation operations.
Reliance on upload filters and other 'proactive measures': The draft regulation frames automated upload filters as the solution for terrorist content moderation at scale, and provides government agencies with the power to mandate how such upload filters and other proactive measures are designed and implemented. But upload filtering of 'terrorist content' is fraught with challenges and risks, and only a handful of online services have the resources and capacity to build or license such technology. As such, the proposal is setting a benchmark that only the largest platforms can meet. Moreover, upload filtering and related proactive measures risk suppressing important public interest content, such as news reports about terrorist incidents and dispatches from warzones.
We fully support efforts to combat dangerous and illegal information on the internet, including through new legislation where appropriate. Yet as currently drafted, this Regulation risks inflicting harm on free expression and due process, competition and
the possibility to innovate online.
Given these likely ramifications we urge you to undertake a proper assessment of the proposal and make the necessary changes to ensure that the perverse outcomes described above are not realised. At the very least, any legislation of this nature must
include far greater rights protection and be built around a proportionality criterion that ensures companies of all sizes and types can comply and compete in Europe.
Citizens in Europe look to you for leadership in developing progressive policy that protects their rights, ensures their companies can compete, and protects their public interest. This legislation in its current form runs contrary to those ambitions. We
urge you to amend it, for the sake of European citizens and for the sake of the internet.
Yours sincerely,
Mitchell Baker, Executive Chairwoman, The Mozilla Foundation and Mozilla Corporation
Tim Berners-Lee, Inventor of the World Wide Web and Founder of the Web Foundation
Vint Cerf, Internet Pioneer
Brewster Kahle, Founder & Digital Librarian, Internet Archive
Jimmy Wales, Founder of Wikipedia and Member of the Board of Trustees of the Wikimedia Foundation
Markus Beckedahl, Founder, Netzpolitik; Co-founder, re:publica
Brian Behlendorf, Member of the EFF Board of Directors; Executive Director of Hyperledger at the Linux Foundation
Cindy Cohn, Executive Director, Electronic Frontier Foundation
Cory Doctorow, Author; Co-Founder of Open Rights Group; Visiting Professor at Open University (UK)
Rebecca MacKinnon, Co-founder, Global Voices; Director, Ranking Digital Rights
Katherine Maher, Chief Executive Officer of the Wikimedia Foundation
Bruce Schneier, Public-interest technologist; Fellow, Berkman Klein Center for Internet & Society; Lecturer, Harvard Kennedy School
The Australian Government have announced the introduction of a new bill aimed at imposing criminal liability on executives of social media platforms if they fail to remove abhorrent violent content. The hastily drafted legislation could have serious
unintended consequences for human rights in Australia.
The rushed and secretive approach, the lack of proper open, democratic debate, and the placement of far-reaching and unclear regulatory measures on internet speech in the criminal code are all matters of grave concern for digital rights groups,
including Access Now and Digital Rights Watch.
Poorly designed criminal intermediary liability rules are not the right approach here, which the Government would know if it had taken the time to consult properly. It's simply wrong to assume that an amendment to the criminal code is going to solve the
wider issue of content moderation on the internet, said Digital Rights Watch Chair, Tim Singleton Norton.
The lack of any public consultation is particularly worrisome, as it suggests that impacts on human rights were unlikely to have been considered by the government in drafting the text. Forcing companies to regulate content under threat of criminal liability is likely to lead to over-removal and censorship as companies do their best to avoid jail time for their executives or hefty fines on their turnover. Also worryingly, the bill could encourage online companies to constantly surveil
internet users by requiring proactive measures for general content monitoring, a measure that would be a blow to free speech and privacy online. Lucie Krahulcova, Australia Policy Analyst at Access Now, said:
Reforming criminal law in a way that can heavily impact free expression online is unacceptable in a democracy. If Australian officials seek to ram half-cooked fixes through Parliament without proper expert advice and public scrutiny, the result is likely to be a law that undermines human rights. Last year's encryption-breaking powers are a prime example of this.
Regulating online speech in a few days is a tremendous mistake. Rather than pushing through reactionary proposals that make for good talking points, the Australian government and members of Parliament should invest in a measured, paced participatory
reflection carefully aimed at achieving their legitimate public policy goals.
The reality here is that there is no easy way to stop people from uploading or sharing links to videos of harmful content. No magic algorithm exists that can distinguish a violent massacre from videos of police brutality. The draft legislation creates a
great deal of uncertainty that can only be dealt with by introducing measures that may harm important documentation of hateful conduct. In the past, measures like these have worked to harm, rather than protect, the interests of marginalised and
vulnerable communities, said Mr. Singleton Norton.
This knee-jerk reaction will not make us safer or address the way that hatred circulates and grows in our society. We need to face up to the cause of this behaviour, and not look for quick fixes and authoritarian approaches to legislating over it, he said.
Singapore is set to introduce a new anti-fake news law, allowing authorities to remove articles deemed to breach government regulations.
The law, being read in parliament this week will further stifle dissent in an already tightly-controlled media environment. Singapore's Prime Minister Lee Hsien Loong suggested that the law would tackle the country's growing problem of online
misinformation. It follows an examination of fake news in Singapore by a parliamentary committee last year, which concluded that the city-state was a target of hostile information campaigns.
Lee said the law will require media outlets to correct fake news articles, and show corrections or display warnings about online falsehoods so that readers or viewers can see all sides and make up their own minds about the matter. In extreme and urgent
cases, the legislation will also require online news sources to take down fake news before irreparable damage is done.
Facebook, Twitter and Google have Asia headquarters in Singapore, with the companies expected to be under increased pressure to aid the law's implementation.
Facebook is set to begin telling its users why posts appear in their news feeds, presumably in response to government concerns over its influence over billions of people's reading habits.
The social network will today introduce a button on each post revealing why users are seeing it, including factors such as whether they have interacted often with the person who made the post or whether it is popular with other users.
It comes as part of a wider effort to make Facebook's systems more transparent and secure in advance of the EU elections in May and attempts by European and American politicians to regulate social media. John Hegeman, Facebook's vice president of news
feed, told the Telegraph:
We hear from people frequently that they don't know how the news feed algorithm works, why things show up where they do, or how their data is used. This is a step towards addressing that.
We haven't done as much as we could do to explain to people how the products work and help them access this information... I can't blame people for being a little bit uncertain or suspicious.
We recognise how important the platform that Facebook has become now is in the world, and that means we have a responsibility to ensure that people who use it have a good experience and that it can't be used in ways that are harmful.
We are making decisions that are really important, and so we are trying to be more and more transparent... we want the external world to be able to hold us accountable.
Google has an unfair bias against Ann Summers' domain name, making us significantly harder to find online than our lingerie competitors. Customers looking for our biggest category, lingerie, are actively diverted away from finding our website -- even if
we shut up shop tomorrow and started selling sofas, this prejudice would not change.
In a recent Google search for Ann Summers lingerie -- in organic search, ignoring paid -- we were served Very, Amazon, Asos, Debenhams, Simply Be, House of Fraser and eBay before our own website, which sits depressingly on page two.
Google's argument is that Ann Summers is an adult retailer -- non family safe to use its terminology. Yes, we sell sex toys. But let me be clear, the products we are talking about in this context are our mainstream lingerie range. And, of course, we do
not want to put inappropriate products in front of children.
Here's the irony. Ann Summers has a range of 293 sex toys. Amazon has over 50,000 sex products, many of them considerably more adult in their nature than those we sell.
Yet Google would not consider Amazon non family safe. Google would not impose the same restrictions on Amazon as it does on us. So, the good news is, Google's policy prevents my nine-year-old daughter searching for a sex toy from Ann Summers, but the bad
news is she can buy a gimp mask from Amazon -- in fact, Alexa will even order it for her.
Abuse of power