Melon Farmers Unrated

Online Safety Bill


UK Government legislates to censor social media


 

Vladimir would be proud...

UK Government introduces its Online Censorship Bill which significantly diminishes British free speech whilst terrorising British businesses with a mountain of expense and red tape


Link Here 17th March 2022
Full story: Online Safety Bill...UK Government legislates to censor social media
The UK government's new online censorship laws have been brought before parliament. The Government wrote in its press release:

The Online Safety Bill marks a milestone in the fight for a new digital age which is safer for users and holds tech giants to account. It will protect children from harmful content such as pornography and limit people's exposure to illegal content, while protecting freedom of speech.

It will require social media platforms, search engines and other apps and websites allowing people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions.

The regulator Ofcom will have the power to fine companies failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites.

Today the government is announcing that executives whose companies fail to cooperate with Ofcom's information requests could now face prosecution or jail time within two months of the Bill becoming law, instead of two years as it was previously drafted.

A raft of other new offences have also been added to the Bill to make in-scope companies' senior managers criminally liable for destroying evidence, failing to attend or providing false information in interviews with Ofcom, and for obstructing the regulator when it enters company offices.

In the UK, tech industries are blazing a trail in investment and innovation. The Bill is balanced and proportionate with exemptions for low-risk tech and non-tech businesses with an online presence. It aims to increase people's trust in technology, which will in turn support our ambition for the UK to be the best place for tech firms to grow.

The Bill will strengthen people's rights to express themselves freely online and ensure social media companies are not removing legal free speech. For the first time, users will have the right to appeal if they feel their post has been taken down unfairly.

It will also put requirements on social media firms to protect journalism and democratic political debate on their platforms. News content will be completely exempt from any regulation under the Bill.

And, in a further boost to freedom of expression online, another major improvement announced today will mean social media platforms will only be required to tackle 'legal but harmful' content, such as exposure to self-harm, harassment and eating disorders, set by the government and approved by Parliament.

Previously they would have had to consider whether additional content on their sites met the definition of legal but harmful material. This change removes any incentives or pressure for platforms to over-remove legal content or controversial comments and will clear up the grey area around what constitutes legal but harmful.

Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.

Bill introduction and changes over the last year

The Bill will be introduced in the Commons today. This is the first step in its passage through Parliament to become law, beginning a new era of accountability online. It follows a period in which the government has significantly strengthened the Bill since it was first published in draft in May 2021. Changes since the draft Bill include:

  • Bringing paid-for scam adverts on social media and search engines into scope in a major move to combat online fraud.

  • Making sure all websites which publish or host pornography, including commercial sites, put robust checks in place to ensure users are 18 years old or over.

  • Adding new measures to clamp down on anonymous trolls to give people more control over who can contact them and what they see online.

  • Making companies proactively tackle the most harmful illegal content and criminal activity quicker.

  • Criminalising cyberflashing through the Bill.

Criminal liability for senior managers

The Bill gives Ofcom powers to demand information and data from tech companies, including on the role of their algorithms in selecting and displaying content, so it can assess how they are shielding users from harm.

Ofcom will be able to enter companies' premises to access data and equipment, request interviews with company employees and require companies to undergo an external assessment of how they're keeping users safe.

The Bill was originally drafted with a power for senior managers of large online platforms to be held criminally liable for failing to ensure their company complies with Ofcom's information requests in an accurate and timely manner.

In the draft Bill, this power was deferred and so could not be used by Ofcom for at least two years after it became law. The Bill introduced today reduces the period to two months to strengthen penalties for wrongdoing from the outset.

Additional information-related offences have been added to the Bill to toughen the deterrent against companies and their senior managers providing false or incomplete information. They will apply to every company in scope of the Online Safety Bill. They are:

  • offences for companies in scope and/or employees who suppress, destroy or alter information requested by Ofcom;

  • offences for failing to comply with, obstructing or delaying Ofcom when exercising its powers of entry, audit and inspection, or providing false information;

  • offences for employees who fail to attend or provide false information at an interview.

Falling foul of these offences could lead to up to two years' imprisonment or a fine.

Ofcom must treat the information gathered from companies sensitively. For example, it will not be able to share or publish data without consent unless tightly defined exemptions apply, and it will have a responsibility to ensure its powers are used proportionately.

Changes to requirements on 'legal but harmful' content

Under the draft Bill, 'Category 1' companies - the largest online platforms with the widest reach including the most popular social media platforms - must address content harmful to adults that falls below the threshold of a criminal offence.

Category 1 companies will have a duty to carry out risk assessments on the types of legal harms against adults which could arise on their services. They will have to set out clearly in their terms of service how they will deal with such content and enforce these terms consistently. If companies intend to remove, limit or allow particular types of content they will have to say so.

The agreed categories of legal but harmful content will be set out in secondary legislation and subject to approval by both Houses of Parliament. Social media platforms will only be required to act on the priority legal harms set out in that secondary legislation, meaning decisions on what types of content are harmful are not delegated to private companies or at the whim of internet executives.

It will also remove the threat of social media firms being overzealous and removing legal content because it upsets or offends someone even if it is not prohibited by their terms and conditions. This will end situations such as the incident last year when TalkRadio was forced offline by YouTube for an "unspecified" violation, with no clear explanation of how it had breached the platform's terms and conditions.

The move will help uphold freedom of expression and ensure people remain able to have challenging and controversial discussions online.

The DCMS Secretary of State has the power to add more categories of priority legal but harmful content via secondary legislation should they emerge in the future. Companies will be required to report emerging harms to Ofcom.

Proactive technology

Platforms may need to use tools for content moderation, user profiling and behaviour identification to protect their users.

Additional provisions have been added to the Bill to allow Ofcom to set expectations for the use of these proactive technologies in codes of practice and force companies to use better and more effective tools, should this be necessary.

Companies will need to demonstrate they are using the right tools to address harms, they are transparent, and any technologies they develop meet standards of accuracy and effectiveness required by the regulator. Ofcom will not be able to recommend these tools are applied on private messaging or legal but harmful content.

Reporting child sexual abuse

A new requirement will mean companies must report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency.

The CSEA reporting requirement will replace the UK's existing voluntary reporting regime and reflects the Government's commitment to tackling this horrific crime.

Reports to the National Crime Agency will need to meet a set of clear standards to ensure law enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content.

In-scope companies will need to demonstrate existing reporting obligations outside of the UK to be exempt from this requirement, which will avoid duplication of companies' efforts.

 

 

Offsite Article: The Real-World Consequences Of Outlawing Porn...


Link Here 4th March 2022
Censoring adult entertainment does not reduce demand -- it just allows fraudsters, blackmailers and corruption to flourish

See article from reprobatepress.com

 

 

The UK Government masses its heavy censorship artillery at the borders of free speech...

Threatening to invade and repress the freedoms of a once proud people


Link Here 20th February 2022
The Financial Times is reporting that the cabinet has agreed to extend UK online censorship to cover legal but harmful content. The government will define what material must be censored via its internet censor Ofcom.

The FT reports:

A revised Online Safety bill will give Ofcom, the communications regulator, powers to require internet companies to use technology to proactively seek out and remove both illegal content and legal content which is harmful to children. The new powers were proposed in a recent letter to cabinet colleagues by home secretary Priti Patel and culture secretary Nadine Dorries.

It seems that the tech industry is not best pleased at being forced to pre-vet and censor content according to rules decreed by the government or Ofcom. The FT reports:

After almost three years of discussion about what was originally named the Online Harms bill, tech industry insiders said they were blindsided by the eleventh-hour additions.

The changes would make the UK a global outlier in how liability is policed and enforced online, said Coadec, a trade body for tech start-ups. It added that the UK would be a significantly less attractive place to start, grow and maintain a tech business.

Westminster insiders said ministers were reluctant to be seen opposing efforts to remove harmful material from the internet.

 

 

A scammer's wet dream...

UK Government announces that the Online Censorship Bill will now extend to requiring identity/age verification to view porn


Link Here 6th February 2022

On Safer Internet Day, Digital Censorship Minister Chris Philp has announced the Online Safety Bill will be significantly strengthened with a new legal duty requiring all sites that publish pornography to put robust checks in place to ensure their users are 18 years old or over.

This could include adults using secure age verification technology to verify that they possess a credit card and are over 18 or having a third-party service confirm their age against government data.

If sites fail to act, the independent regulator Ofcom will be able to fine them up to 10% of their annual worldwide turnover or block them from being accessible in the UK. Bosses of these websites could also be held criminally liable if they fail to cooperate with Ofcom.

A large amount of pornography is available online with little or no protections to ensure that those accessing it are old enough to do so. There are widespread concerns this is impacting the way young people understand healthy relationships, sex and consent. Half of parents worry that online pornography is giving their kids an unrealistic view of sex and more than half of mums fear it gives their kids a poor portrayal of women.

Age verification controls are one of the technologies websites may use to prove to Ofcom that they can fulfil their duty of care and prevent children accessing pornography.

Many sites where children are likely to be exposed to pornography are already in scope of the draft Online Safety Bill, including the most popular pornography sites as well as social media, video-sharing platforms and search engines. But as drafted, only commercial porn sites that allow user-generated content - such as videos uploaded by users - are in scope of the bill.

The new standalone provision ministers are adding to the proposed legislation will require providers who publish or place pornographic content on their services to prevent children from accessing that content. This will capture commercial providers of pornography as well as sites that allow user-generated content. Any company running such a pornography site accessible to people in the UK will be subject to the same strict enforcement measures as other in-scope services.

The Online Safety Bill will deliver more comprehensive protections for children online than the Digital Economy Act by going further and protecting children from a broader range of harmful content on a wider range of services. The Digital Economy Act did not cover social media companies, where a considerable quantity of pornographic material is accessible, and which research suggests children use to access pornography.

The government is working closely with Ofcom to ensure that online services' new duties come into force as soon as possible following the short implementation period that will be necessary after the bill's passage.

The onus will be on the companies themselves to decide how to comply with their new legal duty. Ofcom may recommend the use of a growing range of age verification technologies available for companies to use that minimise the handling of users' data. The bill does not mandate the use of specific solutions as it is vital that it is flexible to allow for innovation and the development and use of more effective technology in the future.

Age verification technologies do not require a full identity check. Users may need to verify their age using identity documents but the measures companies put in place should not process or store data that is irrelevant to the purpose of checking age. Solutions that are currently available include checking a user's age against details that their mobile provider holds, verifying via a credit card check, and other database checks including government held data such as passport data.

Any age verification technologies used must be secure, effective and privacy-preserving. All companies that use or build this technology will be required to adhere to the UK's strong data protection regulations or face enforcement action from the Information Commissioner's Office.

Online age verification is increasingly common practice in other online sectors, including online gambling and age-restricted sales. In addition, the government is working with industry to develop robust standards for companies to follow when using age assurance tech, which it expects Ofcom to use to oversee the online safety regime.

Notes to editors:

Since the publication of the draft Bill in May 2021 and following the final report of the Joint Committee in December, the government has listened carefully to the feedback on children's access to online pornography, in particular stakeholder concerns about pornography on online services not in scope of the bill.

To avoid regulatory duplication, video-on-demand services which fall under Part 4A of the Communications Act will be exempt from the scope of the new provision. These providers are already required under section 368E of the Communications Act to take proportionate measures to ensure children are not normally able to access pornographic content.

The new duty will not capture user-to-user content or search results presented on a search service, as the draft Online Safety Bill already regulates these. Providers of regulated user-to-user services which also carry published (i.e. non user-generated) pornographic content would be subject to both the existing provisions in the draft Bill and the new proposed duty.

 

 

Online Censorship Bill...

Government defines a wide range of harms that will lead to criminal prosecution and that will require censorship by internet intermediaries


Link Here 3rd February 2022

Online Safety Bill strengthened with new list of criminal content for tech firms to remove as a priority

Previously, companies would only have needed to take such content down after it had been reported to them by users, but now they must be proactive and prevent people being exposed to it in the first place.

It will clamp down on pimps and human traffickers, extremist groups encouraging violence and racial hate against minorities, suicide chatrooms and the spread of private sexual images of women without their consent.

Naming these offences on the face of the bill removes the need for them to be set out in secondary legislation later and Ofcom can take faster enforcement action against tech firms which fail to remove the named illegal content.

Ofcom will be able to issue fines of up to 10 per cent of annual worldwide turnover to non-compliant sites or block them from being accessible in the UK.

Three new criminal offences, recommended by the Law Commission, will also be added to the Bill to make sure criminal law is fit for the internet age.

The new communications offences will strengthen protections from harmful online behaviours such as coercive and controlling behaviour by domestic abusers; threats to rape, kill and inflict physical violence; and deliberately sharing dangerous disinformation about hoax Covid-19 treatments.

The government is also considering the Law Commission's recommendations for specific offences to be created relating to cyberflashing, encouraging self-harm and epilepsy trolling.

To proactively tackle the priority offences, firms will need to make sure the features, functionalities and algorithms of their services are designed to prevent their users encountering them and minimise the length of time this content is available. This could be achieved by automated or human content moderation, banning illegal search terms, spotting suspicious users and having effective systems in place to prevent banned users opening new accounts.

New harmful online communications offences

Ministers asked the Law Commission to review the criminal law relating to abusive and offensive online communications in the Malicious Communications Act 1988 and the Communications Act 2003.

The Commission found these laws have not kept pace with the rise of smartphones and social media. It concluded they were ill-suited to address online harm because they overlap and are often unclear for internet users, tech companies and law enforcement agencies.

It found the current law over-criminalises and captures 'indecent' images shared between two consenting adults - known as sexting - where no harm is caused. It also under-criminalises - resulting in harmful communications without appropriate criminal sanction. In particular, abusive communications posted in a public forum, such as posts on a publicly accessible social media page, may slip through the net because they have no intended recipient. It also found the current offences are sufficiently broad in scope that they could constitute a disproportionate interference in the right to freedom of expression.

In July the Law Commission recommended more coherent offences. The Digital Secretary today confirms new offences will be created and legislated for in the Online Safety Bill.

The new offences will capture a wider range of harms in different types of private and public online communication methods. These include harmful and abusive emails, social media posts and WhatsApp messages, as well as 'pile-on' harassment where many people target abuse at an individual such as in website comment sections. None of the offences will apply to regulated media such as print and online journalism, TV, radio and film.

The offences are:

A 'genuinely threatening' communications offence, where communications are sent or posted to convey a threat of serious harm.

This offence is designed to better capture online threats to rape, kill and inflict physical violence or cause people serious financial harm. It addresses limitations with the existing laws which capture 'menacing' aspects of the threatening communication but not genuine and serious threatening behaviour.

It will offer better protection for public figures such as MPs, celebrities or footballers who receive extremely harmful messages threatening their safety. It will address coercive and controlling online behaviour and stalking, including, in the context of domestic abuse, threats related to a partner's finances or threats concerning physical harm.

A harm-based communications offence to capture communications sent to cause harm without a reasonable excuse.

This offence will make it easier to prosecute online abusers by abandoning the requirement under the old offences for content to fit within proscribed yet ambiguous categories such as "grossly offensive," "obscene" or "indecent". Instead it is based on the intended psychological harm, amounting to at least serious distress, to the person who receives the communication, rather than requiring proof that harm was caused. The new offences will address the technical limitations of the old offences and ensure that harmful communications posted to a likely audience are captured.

The new offence will consider the context in which the communication was sent. This will better address forms of violence against women and girls, such as communications which may not seem obviously harmful but when looked at in light of a pattern of abuse could cause serious distress. For example, in the instance where a survivor of domestic abuse has fled to a secret location and the abuser sends the individual a picture of their front door or street sign.

It will better protect people's right to free expression online. Communications that are offensive but not harmful and communications sent with no intention to cause harm, such as consensual communication between adults, will not be captured. It will have to be proven in court that a defendant sent a communication without any reasonable excuse and did so intending to cause serious distress or worse, with exemptions for communication which contributes to a matter of public interest.

An offence for when a person sends a communication they know to be false with the intention to cause non-trivial emotional, psychological or physical harm.

Although there is an existing offence in the Communications Act that captures knowingly false communications, this new offence raises the current threshold of criminality. It covers false communications deliberately sent to inflict harm, such as hoax bomb threats, as opposed to misinformation where people are unaware what they are sending is false or genuinely believe it to be true. For example, if an individual posted on social media encouraging people to inject antiseptic to cure themselves of coronavirus, a court would have to prove that the individual knew this was not true before posting it.

The maximum sentences for each offence will differ. If someone is found guilty of a harm based offence they could go to prison for up to two years, up to 51 weeks for the false communication offence and up to five years for the threatening communications offence. The maximum sentence was six months under the Communications Act and two years under the Malicious Communications Act.

Notes

The draft Online Safety Bill in its current form already places a duty of care on internet companies which host user-generated content, such as social media and video-sharing platforms, as well as search engines, to limit the spread of illegal content on these services. It requires them to put in place systems and processes to remove illegal content as soon as they become aware of it but take additional proactive measures with regards to the most harmful 'priority' forms of online illegal content.

The priority illegal offences currently listed in the draft bill are terrorism and child sexual abuse and exploitation, with powers for the DCMS Secretary of State to designate further priority offences with Parliament's approval via secondary legislation once the bill becomes law. In addition to terrorism and child sexual exploitation and abuse, the further priority offences to be written onto the face of the bill include illegal behaviour which has been outlawed in the offline world for years, as well as newer illegal activity which has emerged alongside the ability to target individuals or communicate en masse online.

This list has been developed using the following criteria: (i) the prevalence of such content on regulated services, (ii) the risk of harm being caused to UK users by such content and (iii) the severity of that harm.

The offences will fall in the following categories:

  • Encouraging or assisting suicide
  • Offences relating to sexual images i.e. revenge and extreme pornography
  • Incitement to and threats of violence
  • Hate crime
  • Public order offences - harassment and stalking
  • Drug-related offences
  • Weapons / firearms offences
  • Fraud and financial crime
  • Money laundering
  • Controlling, causing or inciting prostitution for gain
  • Organised immigration offences

 

 

Upper Class Twit of the Year...

The Earl of Erroll spouts to Parliament that anal sex and blowjobs are not how to go about wooing a woman


Link Here 27th January 2022
The House of Lords has given a second reading to the Digital Economy Act 2017 (Commencement of Part 3) Bill [HL] which is attempting to resurrect the failed law requiring age verification for porn websites. The bill reads:

Commencement of Part 3 of the Digital Economy Act 2017

The Secretary of State must make regulations under section 118(6) (commencement) of the Digital Economy Act 2017 to ensure that by 20 June 2022 all provisions under Part 3 (online pornography) of that Act have come into force.

The full reasons for the law never being brought into force have never been published, but this is most likely due to the law totally failing to address the issue of keeping porn users' data safe from scammers, blackmailers and thieves. It also seems that the government would prefer to have general rules under which to harangue websites for not keeping children safe from harm, rather than set an expensive bunch of film censors to seek out individual transgressors.

The 2nd reading debate featured the usual pro-censorship peers queueing up to have a whinge about the availability of porn. And, as is always the case, most of them haven't bothered to think about the effectiveness of the measures, their practicality or their acceptability. And of course nothing was said about the safety of porn users who foolishly trust their very dangerous identity data to porn websites and age verification companies.

Merlin Hay, the Earl of Erroll, seems to be something of a shill for those age verification companies. He chairs the Digital Policy Alliance (dpalliance.org.uk), which acts as a lobby group for age verifiers. He excelled himself in the debate with a few words that have been noticed by the press. He spouted:

What really worries me is not just extreme pornography, which has quite rightly been mentioned, but the stuff you can access for free -- what you might call the teaser stuff to get you into the sites. It normalises a couple of sexual behaviours which are not how to go about wooing a woman. Most of the stuff you see up front is about men almost attacking women. It normalises -- to be absolutely precise about this, because I think people pussyfoot around it -- anal sex and blowjobs. I am afraid I do not think that is how you go about starting a relationship.

 

 

Open Letter: The current version of the Online Safety Bill is not the answer...

Individuals and LGBT organisations speak out against the Government's Online Safety Bill


Link Here4th September 2021

As proud members of the LGBTQ+ community, we know first-hand the vile abuse that regularly takes place online. The data is clear; 78% of us have faced anti-LGBTQ+ hate crime or hate speech online in the last 5 years. So we understand why the Government is looking for a solution, but the current version of the Online Safety Bill is not the answer -- it will make things worse not better.

The new law introduces the "duty of care" principle and would give internet companies extensive powers to delete posts that may cause 'harm.' But because the law does not define what it means by 'harm' it could result in perfectly legal speech being removed from the web.

As LGBTQ+ people we have seen what happens when vague rules are put in place to police speech. Marginalised voices are silenced. From historic examples of censors banning LGBTQ+ content to 'protect' the public, to modern day content moderation tools marking innocent LGBTQ+ content as explicit or harmful.

This isn't scaremongering. In 2017, Tumblr's content filtering system marked non-sexual LGBTQ+ content as explicit and blocked it. In 2020, TikTok censored depictions of homosexuality, such as two men kissing or holding hands, and reduced the reach of LGBTQ+ posts in some countries. And within the last two months, LinkedIn removed a coming out post from a 16-year-old following complaints.

This Bill, as it stands, would provide a legal basis for this censorship. Moreover, its vague wording makes it easy for hate groups to put pressure on Silicon Valley tech companies to remove LGBTQ+ content and would set a worrying international standard.

Growing calls to end anonymity online also pose a danger. Anonymity allows LGBTQ+ people to share their experiences and sexuality while protecting their privacy and many non-binary and transgender people do not hold a form of acceptable ID and could be shut out of social media.

The internet provides a crucial space for our community to share experiences and build relationships. 90% of LGBTQ+ young people say they can be themselves online and 96% say the internet has helped them understand more about their sexual orientation and/or gender identity. This Bill puts the content of these spaces at potential risk.

Racism, homophobia, transphobia, and threats of violence are already illegal. But data shows that when they happen online it is ignored by authorities. After the system for flagging online hate crime was underused by the police, the Home Office stopped including these figures in their annual report altogether, leaving us in the dark about the scale of the problem. The government's Bill should focus on this illegal content rather than empowering the censorship of legal speech.

This is why we are calling for "the duty of care", which in the current form of the Online Safety Bill could be used to censor perfectly legal free speech, to be reframed to focus on illegal content; for specific, written protections for legal LGBTQ+ content online; and for the LGBTQ+ community to be properly consulted throughout the process.

  • Stephen Fry, actor, broadcaster, comedian, director, and writer.
  • Munroe Bergdorf, model, activist, and writer.
  • Peter Tatchell, human rights campaigner.
  • Carrie Lyell, Editor-in-Chief of DIVA Magazine.
  • James Ball, Global Editor of The Bureau Of Investigative Journalism.
  • Jo Corrall, Founder of This is a Vulva.
  • Clara Barker, material scientist and Chair of LGBT+ Advisory Group at Oxford University.
  • Marc Thompson, Director of The Love Tank and co-founder of PrEPster and BlackOut UK.
  • Sade Giliberti, TV presenter, actor, and media personality.
  • Fox Fisher, artist, author, filmmaker, and LGBTQIA+ rights advocate.
  • Cara English, Head of Public Engagement at Gendered Intelligence, Founder OpenLavs.
  • Paula Akpan, journalist and founder of Black Queer Travel Guide.
  • Tom Rasmussen, writer, singer, and drag performer.
  • Jamie Wareham, LGBTQ journalist and host of the #QueerAF podcast.
  • Crystal Lubrikunt, international drag performer, host, and producer.
  • David Robson, Chair of London LGBT+ Forums Network.
  • Shane ShayShay Konno, drag performer, curator and host of the ShayShay Show, and founder of The Bitten Peach.
  • UK Black Pride, Europe's largest celebration for African, Asian, Middle Eastern, Latin American, and Caribbean-heritage LGBTQI+ people.

 

 

Updated - Lords comment: Censored comments...

Comments about the UK Government's new Internet Censorship Bill


Link Here 21st July 2021
Full story: Online Safety Bill...UK Government legislates to censor social media

Offsite Comment: The Online Safety Bill won’t solve online abuse

 2nd July 2021. See article by Heather Burns

The Online Safety Bill contains threats to freedom of expression, privacy, and commerce which will do nothing to solve online abuse, deal with social media platforms, or make the web a better place to be.

 

Update: House of Lords Committee considers that social media companies are not the best 'arbiters of truth'

21st July 2021. See article from dailymail.co.uk , See report from committees.parliament.uk

A House of Lords committee has warned that the government's plans for new online censorship laws will diminish freedom of speech by making Facebook and Google the arbiters of truth.

The influential Lords Communications and Digital Committee cautioned that legitimate debate is at risk of being stifled by the way major platforms filter out misinformation. Committee chairman Lord Gilbert said:

The benefits of freedom of expression online mustn't be curtailed by companies such as Facebook and Google, which are too often guided more by their commercial and political interests than by the rights and wellbeing of their users.

The report said:

We are concerned that platforms' approaches to misinformation have stifled legitimate debate, including between experts.

Platforms should not seek to be arbiters of truth. Posts should only be removed in exceptional circumstances.

The peers said the government should switch to enforcing existing laws more robustly, and criminalising any serious harms that are not already illegal.

 

 

Claiming that face analysis would provide a way of proving age without handing over identity...

But would you trust money-seeking age verification companies not to use facial identification to record who is watching porn anyway?


Link Here 10th July 2021
Full story: Online Safety Bill...UK Government legislates to censor social media
Our Big Brother government is seeking ways for all website users to be identified and tracked in the name of child protection. But for all the upcoming legislation that demands age verification, there aren't actually any methods yet that satisfy both strict age verification and protect people's personal data from hackers, thieves, scammers, spammers, money grabbing age verification companies, the government, and the provably data abusing social media companies.

The Observer has reported on a face scanning scheme whereby the age verification company claims not to look up your identity via facial recognition, and instead just tries to count the wrinkles on your photo.

See article from theguardian.com .

Security expert Alec Muffett has also posted some interesting and relevant background provided to the Observer that somehow did not make the cut.

See article from alecmuffett.com

 

 

Updated: Censored comments...

Comments about the UK Government's new Internet Censorship Bill


Link Here 28th June 2021
Full story: Online Safety Bill...UK Government legislates to censor social media

Comment: Disastrous

11th May 2021. See article from bigbrotherwatch.org.uk

Mark Johnson, Legal and Policy Officer at Big Brother Watch said:

The Online Safety Bill introduces state-backed censorship and monitoring on a scale never seen before in a liberal democracy.

This Bill is disastrous for privacy rights and free expression online. The Government is clamping down on vague categories of lawful speech. This could easily result in the silencing of marginalised voices and unpopular views.

Parliament should remove lawful content from the scope of this Bill altogether and refocus on real policing rather than speech-policing.

 

 

Offsite Comment: Online safety bill: a messy new minefield in the culture wars

13th May 2021. See article from theguardian.com by Alex Hern

The message of the bill is simple: take down exactly the content the government wants taken down, and no more. Guess wrong and you could face swingeing fines. Keep guessing wrong and your senior managers could even go to jail.

Content moderation is a hard job, and it's about to get harder.

 

 

Offsite Comment: Harm Version 3.0

15th May 2021. See article from cyberleagle.com by Graham Smith

Two years on from the April 2019 Online Harms White Paper, the government has published its draft Online Safety Bill. It is a hefty beast: 133 pages and 141 sections. It raises a slew of questions, not least around press and journalistic material and the newly coined 'content of democratic importance'. Also, for the first time, the draft Bill spells out how the duty of care regime would apply to search engines, not just to user generated content sharing service providers.

This post offers first impressions of a central issue that started to take final shape in the government's December 2020 Full Response to consultation: the apparent conflict between imposing content monitoring and removal obligations on the one hand, and the government's oft-repeated commitment to freedom of expression on the other - now translated into express duties on service providers.

The draft Bill represents the government's third attempt at defining harm (if we include the White Paper, which set no limit). The scope of harm proposed in its second version (the Full Response) has now been significantly widened.

See article from cyberleagle.com

 

 

Offsite Comment: The unstoppable march of state censorship

17th May 2021. See article from spiked-online.com

Vaguely worded hate-speech laws can end up criminalising almost any opinion.

 

 

Offsite Comment: Drowning internet services in red tape

 18th May 2021. See article from techmonitor.ai by Laurie Clarke

The UK government has unveiled sprawling new legislation that takes aim at online speech on internet services, stretching from illegal to legal yet harmful content. The wide-ranging nature of the proposals could leave internet businesses large and small facing a huge bureaucratic burden, and render the bill impractical to implement.

 

 

Offsite Comment: UK online safety bill raises censorship concerns and questions on future of encryption

24th May 2021. See article from cpj.org

 

 

Offsite Comment: Why the online safety bill threatens our civil liberties

26th May 2021. See article from politics.co.uk by Heather Burns

With the recent publication of the draft online safety bill, the UK government has succeeded in uniting the British population in a way not seen since the weekly clap for the NHS. This time, however, no one is applauding. After two years of dangled promises, the government's roadmap to making the UK the safest place in the world to be online sets up a sweeping eradication of our personal privacy, our data security, and our civil liberties.

 

 

Offsite Comment: Misguided Online Safety Bill will be catastrophic for ordinary people's social media

23rd June 2021. See article from dailymail.co.uk

The Government's new Online Safety Bill will be catastrophic for ordinary people's freedom of speech, former minister David Davis has warned.

The Conservative MP said forcing social networks to take down content in Britain that they deem unacceptable seems like something out of Orwell's 1984.

Davis slammed the idea that Silicon Valley firms could take down posts they think are not politically correct, even though the content is legal.

See full article from dailymail.co.uk

 

 

Offsite Comment: On the trail of the Person of Ordinary Sensibilities

28th June 2021. See article from cyberleagle.com by Graham Smith

  The bill boils down to what a mythical adult or child of 'ordinary sensibilities' considers to be 'lawful but awful' content.

 

 

 

 

 

 

Age of miserableness...

Strident Scottish feminist MSP tables motion calling for the resurrection of failed UK law requiring age verification for porn


Link Here 11th June 2021
Full story: Online Safety Bill...UK Government legislates to censor social media
Rhoda Grant is a campaigning MSP with a long and miserable history of calling for bans on sex work and lap dancing. She has now tabled a motion for consideration by the Scottish Parliament expressing concern at the UK government's reported failure to implement Part 3 of the Digital Economy Act 2017, which sought to impose age verification for porn, but without any consideration of the dangers to porn users of having their personal data hacked or abused.

Grant's motion has received the backing of Labour and SNP MSPs and notes that a coalition of women's organisations, headteachers, children's charities and parliamentarians want the government to enforce Part 3 without further delay. Grant said:

How we keep our children safe online should be an absolute priority, so the failure to implement Part 3 of the Digital Economy Act 2017 is a terrible reflection on the UK government.

 

 

Lords of Dreams...

House of Lords Private Members' Bills: one seeks the restoration of the failed age verification for porn, another demands more perfect age assurance methods


Link Here 9th June 2021
Full story: Online Safety Bill...UK Government legislates to censor social media
Members of the House of Lords are clamouring for more red tape and censorship in the name of protecting children from the dangers of the internet. Of course these people don't seem to give a shit about the safety of adults using the internet.

Maurice Morrow is attempting to revive the failed age verification for porn with his bill, the Digital Economy Act 2017 (Commencement of Part 3) Bill [HL]. The original scheme failed partly because it did not consider data protection for porn users' identity data. The original authors of the bill couldn't even be bothered to consider such security implications as porn users handing over identity data and porn browsing data directly to Russian porn sites, possibly acting as fronts for the Russian government's dirty tricks department.

Perhaps the bill also failed because the likes of GCHQ don't fancy half the porn using population of the UK using VPNs and Tor to work around age verification and ISP porn blocking.

See Morrow's bill progress from bills.parliament.uk and the bill text from bills.parliament.uk . The bill had its first reading on 9th June.

Meanwhile Beeban Kidron has proposed a bill demanding accurate age assurance. Age assurance is generally an attempt to determine age without the nightmare of dangerously handing over full identity data, e.g. estimating the age of social media users from the age of their friends.

See Kidron's bill progress from bills.parliament.uk and the bill text from bills.parliament.uk . The bill had its first reading on 27th May.

 

 

Offsite Article: OnlyFans pulled up by the BBC when youngsters fool age verification...


Link Here 29th May 2021
Full story: Online Safety Bill...UK Government legislates to censor social media
Asking the interesting question for future age verification laws: in today's blame society, who has to carry the can when people inevitably find ways to circumvent the system? Is it the user, the website, or the age verification service?

See article from bbc.co.uk

 

 

Unsafe legislation...

The Government publishes its draft Internet Censorship Bill


Link Here 11th May 2021
Full story: Online Safety Bill...UK Government legislates to censor social media

New internet laws will be published today in the draft Online Safety Bill to protect children online and tackle some of the worst abuses on social media, including racist hate crimes.

Ministers have added landmark new measures to the Bill to safeguard freedom of expression and democracy, ensuring necessary online protections do not lead to unnecessary censorship.

The draft Bill marks a milestone in the Government's fight to make the internet safe. Despite the fact that we are now using the internet more than ever, over three quarters of UK adults are concerned about going online, and fewer parents feel the benefits outweigh the risks of their children being online -- falling from 65 per cent in 2015 to 50 per cent in 2019.

The draft Bill includes changes to put an end to harmful practices, while ushering in a new era of accountability and protections for democratic debate, including:

  • New additions to strengthen people's rights to express themselves freely online, while protecting journalism and democratic political debate in the UK.

  • Further provisions to tackle prolific online scams such as romance fraud, which have seen people manipulated into sending money to fake identities on dating apps.

  • Social media sites, websites, apps and other services hosting user-generated content or allowing people to talk to others online must remove and limit the spread of illegal and harmful content such as child sexual abuse, terrorist material and suicide content.

  • Ofcom will be given the power to fine companies failing in a new duty of care up to £18 million or ten per cent of annual global turnover, whichever is higher, and have the power to block access to sites.

  • A new criminal offence for senior managers has been included as a deferred power. This could be introduced at a later date if tech firms don't step up their efforts to improve safety.

The draft Bill will be scrutinised by a joint committee of MPs before a final version is formally introduced to Parliament.

The following elements of the Bill aim to create the most progressive, fair and accountable system in the world. This comes only weeks after a boycott of social media by sports professionals and governing bodies in protest at the racist abuse of footballers online, while at the same time concerns continue to be raised at social media platforms arbitrarily removing content and blocking users.

Duty of care

In line with the government's response to the Online Harms White Paper , all companies in scope will have a duty of care towards their users so that what is unacceptable offline will also be unacceptable online.

They will need to consider the risks their sites may pose to the youngest and most vulnerable people and act to protect children from inappropriate content and harmful activity.

They will need to take robust action to tackle illegal abuse, including swift and effective action against hate crimes, harassment and threats directed at individuals and keep their promises to users about their standards.

The largest and most popular social media sites (Category 1 services) will need to act on content that is lawful but still harmful such as abuse that falls below the threshold of a criminal offence, encouragement of self-harm and mis/disinformation. Category 1 platforms will need to state explicitly in their terms and conditions how they will address these legal harms and Ofcom will hold them to account.

The draft Bill contains reserved powers for Ofcom to pursue criminal action against named senior managers whose companies do not comply with Ofcom's requests for information. These will be introduced if tech companies fail to live up to their new responsibilities. A review will take place at least two years after the new regulatory regime is fully operational.

The final legislation, when introduced to Parliament, will contain provisions that require companies to report child sexual exploitation and abuse (CSEA) content identified on their services. This will ensure companies provide law enforcement with the high-quality information they need to safeguard victims and investigate offenders.

Freedom of expression

The Bill will ensure people in the UK can express themselves freely online and participate in pluralistic and robust debate.

All in-scope companies will need to consider and put in place safeguards for freedom of expression when fulfilling their duties. These safeguards will be set out by Ofcom in codes of practice but, for example, might include having human moderators take decisions in complex cases where context is important.

People using their services will need to have access to effective routes of appeal for content removed without good reason and companies must reinstate that content if it has been removed unfairly. Users will also be able to appeal to Ofcom and these complaints will form an essential part of Ofcom's horizon-scanning, research and enforcement activity.

Category 1 services will have additional duties. They will need to conduct and publish up-to-date assessments of their impact on freedom of expression and demonstrate they have taken steps to mitigate any adverse effects.

These measures remove the risk that online companies adopt restrictive measures or over-remove content in their efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire.

Democratic content

Ministers have added new and specific duties to the Bill for Category 1 services to protect content defined as 'democratically important'. This will include content promoting or opposing government policy or a political party ahead of a vote in Parliament, election or referendum, or campaigning on a live political issue.

Companies will also be forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no matter their affiliation. Policies to protect such content will need to be set out in clear and accessible terms and conditions and firms will need to stick to them or face enforcement action from Ofcom.

When moderating content, companies will need to take into account the political context around why the content is being shared and give it a high level of protection if it is democratically important.

For example, a major social media company may choose to prohibit all deadly or graphic violence. A campaign group could release violent footage to raise awareness about violence against a specific group. Given its importance to democratic debate, the company might choose to keep that content up, subject to warnings, but it would need to be upfront about the policy and ensure it is applied consistently.

Journalistic content

Content on news publishers' websites is not in scope. This includes both their own articles and user comments on these articles.

Articles by recognised news publishers shared on in-scope services will be exempted and Category 1 companies will now have a statutory duty to safeguard UK users' access to journalistic content shared on their platforms.

This means they will have to consider the importance of journalism when undertaking content moderation, have a fast-track appeals process for journalists' removed content, and will be held to account by Ofcom for the arbitrary removal of journalistic content. Citizen journalists' content will have the same protections as professional journalists' content.

Online fraud

Measures to tackle user-generated fraud will be included in the Bill. It will mean online companies will, for the first time, have to take responsibility for tackling fraudulent user-generated content, such as posts on social media, on their platforms. This includes romance scams and fake investment opportunities posted by users on Facebook groups or sent via Snapchat.

Romance fraud occurs when a victim is tricked into thinking that they are striking up a relationship with someone, often through an online dating website or app, when in fact this is a fraudster who will seek money or personal information.

Analysis by the National Fraud Intelligence Bureau found in 2019/20 there were 5,727 instances of romance fraud in the UK (up 18 per cent year on year). Losses totalled more than £60 million.

Fraud via advertising, emails or cloned websites will not be in scope because the Bill focuses on harm committed through user-generated content.

The Government is working closely with industry, regulators and consumer groups to consider additional legislative and non-legislative solutions. The Home Office will publish a Fraud Action Plan after the 2021 spending review and the Department for Digital, Culture, Media and Sport will consult on online advertising, including the role it can play in enabling online fraud, later this year.




 
