Melon Farmers Unrated

Internet News


2019: October


 

Spanish authorities feel a bit insecure...

Spain blocks GitHub to try to disrupt a secure communications app used by Catalonian protesters


Link Here 30th October 2019
The Spanish government has quietly begun censoring the entirety of GitHub in response to mounting Catalan protests, causing problems for developers who use the Microsoft-owned service for their work projects.

While the issue is easily remedied by using a VPN, the fact that the Spanish government has been quick to try to censor such a valuable tool speaks volumes about the increasingly prominent but authoritarian idea that mass censorship is the best way to crush dissent.

One new pro-independence group, Tsunami Democratic, organizes online and is known for the mass occupation of Barcelona's El Prat airport by an estimated 10,000 protesters. In addition to other social media it has a website hosted on GitHub as well as an encrypted communication app that's also available on GitHub.

The Android app uses geolocation and end-to-end protocols to make sure that only trusted and verified users have access. Verification takes place by scanning the QR code of an already-verified member. The app isn't available via the Google Play Store, so the APK file containing the app needs to be downloaded and manually installed on a phone.

It's for this reason that the Spanish government has begun to block GitHub in the country, cutting off access to all users. Over the last week, several Spanish internet service providers have blocked access to the service.

 

 

US ISPs feel a bit insecure...

ISPs are lobbying Congress to ban encrypted DNS lest they lose the ability to snoop on their customers. By Ernesto Falcon


Link Here 30th October 2019
Full story: DNS Over Https...A new internet protocol will make government website blocking more difficult

An absurd thing is happening in the halls of Congress. Major ISPs such as Comcast, AT&T, and Verizon are banging on the doors of legislators to stop the deployment of DNS over HTTPS (DoH), a technology that will give users one of the biggest upgrades to their Internet privacy and security since the proliferation of HTTPS. This is because DoH ensures that when you look up a website, your query to the DNS system is secure through encryption and can't be tracked, spoofed, or blocked.
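
To make the mechanism concrete, here is a minimal Python sketch (not from the EFF article) of a DoH lookup against Cloudflare's public JSON resolver endpoint; the DNS question and answer travel inside an ordinary HTTPS request, so an ISP on the path sees only encrypted traffic to the resolver rather than the hostname being looked up.

```python
# Minimal sketch of a DNS-over-HTTPS (DoH) lookup via Cloudflare's public
# JSON resolver endpoint. Because the query rides inside TLS, an on-path
# ISP cannot read, spoof, or selectively block it.
import requests

def doh_lookup(hostname: str, record_type: str = "A") -> list[str]:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": hostname, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    # Each answer record carries its resolved value in the "data" field.
    return [answer["data"] for answer in resp.json().get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("example.com"))
```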

But despite these benefits, ISPs have written dozens of congressional leaders about their concerns, and are handing out misleading materials invoking Google as the boogeyman. EFF, Consumer Reports, and National Consumers League wrote this letter in response.

The reason the ISPs are fighting so hard is that DoH might undo their multi-million dollar political effort to take away user privacy. DoH isn't a Google technology--it's a standard, like HTTPS. They know that. But what is frustrating is that barely two years ago, these very same lobbyists, and these very same corporations, were meeting with congressional offices and asking them to undo federal privacy rules that protect some of the same data that encrypted DNS allows users to hide.

ISPs Want to Protect an Illegitimate Market of Privacy-Invasive Practices to "Compete" with Google's Privacy-Invasive Practices; Congress Should Abolish Both

Congress shouldn't take its cues from these players on user privacy. The last time they did, Congress voted to take away users' rights. As long as DNS traffic remains exposed, ISPs can exploit our data the same way that Facebook and Google do. That's the subtext of this ISP effort. Comcast and its rivals are articulating a race to the bottom. ISPs will compete with Internet giants on who can invade user privacy more, then sell that to advertisers.

The major ISPs have also pointed out that centralization of DNS may not be great for user privacy in the long run. That's true, but that would not be an issue if everyone adopted DoH across the board. Meaning, the solution isn't to just deny anyone a needed privacy upgrade. The solution is to create laws that abolish the corporate surveillance model that exists today for both Google and Comcast.

But that's not what the ISPs want Congress to do, because they're ultimately on the same side as Google and other big Internet companies--they don't want us to have effective privacy laws to handle these issues. Congress should ignore the bad advice it's getting from both the major ISPs and Big Tech on consumer privacy, and instead listen to the consumer and privacy groups.

EFF and consumer groups have been pleading with Congress to pass a real privacy law, which would give individuals a right to sue corporations that violate their privacy, mandate opt-in consent for use of personal information, and allow the states to take privacy law further, should the need arise. But many in Congress are still just listening to big companies, even holding Congressional hearings that only invite industry and no privacy groups to "learn" what to do next. In fact, the only reason we don't have a strong federal privacy law is that corporations like Comcast and Google want Congress to simply delete state laws like California's CCPA and Illinois's Biometric Protection Act while offering virtually nothing to users.

DNS over HTTPS Technology Advances More than Just Privacy, It Advances Human Rights and Internet Freedom

Missing from the debate is the impact DoH has on Internet freedom and human rights in authoritarian regimes where the government runs the broadband access network. State-run ISPs in Venezuela, China, and Iran have relied on insecure DNS traffic to censor content and target activists. Many of the tools such governments use to censor content rely on exposed DNS traffic that DoH would eliminate. In other words, widespread adoption of encrypted DNS will shrink the censorship toolbox of authoritarian regimes across the world: the old tools of censorship will be bypassed if DoH is systematically adopted globally. So while the domestic debate about DoH is centered on data privacy and advertising models, U.S. policymakers should recognize the bigger picture: DoH can further American efforts to promote Internet freedom around the world. They should in fact be encouraging Google and the ISPs to offer encrypted DNS services and to adopt them quickly, rather than listening to ISPs' pleas to stop DoH outright.

For ISPs to retain the power to censor the Internet, DNS needs to remain leaky and exploitable. That's where opposition to DoH is coming from. And the opposition to DoH today isn't much different from the early opposition to the adoption of HTTPS.

EFF believes this is the wrong vision for the Internet. We've believed, since our founding, that user empowerment should be the center focus. Let's try to advance the human right of privacy on all fronts. Establishing encrypted DNS can greatly advance this mission - fighting against DoH is just working on behalf of the censors.

 

 

BBFC introduces new symbols on Netflix to help a rather fragile sounding generation...

A BBFC tabloid-style survey of teens finds that unwanted content leaves 46% feeling anxious and 5% saying it had a negative impact on their mental health


Link Here 29th October 2019

Don't call us boring: 'Generation Conscious' want to make better decisions than ever before

The British Board of Film Classification (BBFC) is launching new age rating symbols which, for the first time, are designed for digital streaming platforms - a move which will give young people better and consistent guidance about film and TV content, enabling them to make conscious decisions about what they watch.

New research from the BBFC reveals that, given their access to more media, nine in 10 (87%) of 12-19 year olds want to make better decisions than ever before. Two thirds (66%) of young people resent the idea of being perceived as 'boring' or 'sensible' - something three quarters (74%) of adults admit to having thought.

Instead, almost all teens (97%) want more credit for being conscious decision makers, making informed and positive choices throughout all aspects of their life. The BBFC's own research showed 95% of teenagers want consistent age ratings that they recognise from the cinema and DVD to apply to content accessed through streaming services.

A majority (56%) of teens are concerned about watching content without knowing what it contains - and say they want clear age ratings to guide them. A third of teens (32%) say they see content they'd rather avoid on a weekly basis, with 46% saying it leaves them feeling uncomfortable or anxious, and one in twenty (5%) saying it had a negative impact on their mental health.

The BBFC's new digital classification symbols, launching on Thursday 31 October, will help young people to make conscious decisions when it comes to film and content on video on demand platforms. Netflix has welcomed the new symbols, and will begin rolling them out on the platform starting from Thursday 31 October. This builds on the ongoing partnership between the BBFC and Netflix, which will see the streaming service classify content using BBFC guidelines, with the aim that 100% of content on the platform will carry a BBFC age rating.

David Austin, Chief Executive of the BBFC, said: "It's inspiring to see young people determined to make conscious and thoughtful decisions. We want all young people to be empowered and confident in their film and TV choices. As the landscape of viewing content changes, so do we. We're proud to be launching digital symbols for a digital audience, to help them choose content well."

The move empowers young people to confidently engage with TV and film content in the right way. Half (50%) of young people say having access to online content and the internet helps them have tough conversations or navigate tricky subjects, like mental health and sexuality, when talking to parents.

Jack, 12, from Peterborough said: "It's difficult to choose what to watch online as there is so much choice out there. I like to think about things before I watch them. Sometimes my friends watch stuff I don't think is appropriate or I might find scary or it just isn't for me. I could definitely make better decisions and avoid uncomfortable situations if age ratings were more clearly signposted."

The BBFC is calling for streaming services to clearly label content with age ratings - and has this month launched its first set of VOD User Guidelines, developed in conjunction with video on demand platforms. These user guidelines outline how streaming services can help people by offering clearer, more consistent and comprehensive use of trusted, well understood, BBFC age ratings to support 'Generation Conscious'.

The BBFC commissioned Studio AKA to produce a short animation, showcasing the new age rating symbols, to help families view what's right for them. The film is currently being played nationwide in cinemas until Sunday 3 November.

 

 

Offsite Article: The Open Observatory of Network Interference...


Link Here 29th October 2019
These watchdogs track secret online censorship across the globe. They measure what's being blocked or removed, and why.

See article from cnet.com

 

 

Australian Government facial recognition services offered for checking access to porn...

More shitty politicians who couldn't give a damn about endangering adults and just want to virtue signal about protecting kids


Link Here 28th October 2019
Australia's Department of Home Affairs is hoping to use its Face Verification Service and Document Verification Service across the economy, and is backing their use for age verification for Australians wanting to look at pornography.

Writing in a submission to the House of Representatives Standing Committee on Social Policy and Legal Affairs' inquiry, launched in September, Home Affairs said it could provide a suite of identity-matching services.

Whilst the services are primarily designed to prevent identity crime, Home Affairs would support the increased use of the Document and Face Verification Services across the Australian economy to strengthen age verification processes, the department wrote.

Home Affairs conceded the Face Verification Service was not yet operational, as it relied on the passage of biometric legislation through Parliament.

 

 

Offsite Article: US ISP is lobbying against encrypted DNS...


Link Here 27th October 2019
Full story: DNS Over Https...A new internet protocol will make government website blocking more difficult
Lest it lose its ability to snoop on its customers' browsing history

See article from vice.com

 

 

An Iron Curtain for the Internet...

Russia to test cutting off its internet users from the outside world


Link Here 26th October 2019
Full story: Internet Censorship in Russia...Russia and its repressive state control of media
The Russian Government is set to begin tests of an internal version of the web -- isolated from the outside world -- in November, local sources claim.

Such a setup is supposedly intended to shield critical Russian systems from cyber-attack, allowing the federation to operate disconnected from the rest of the web.

However, critics have claimed that the tests are part of a wider attempt to isolate its citizens from the surrounding world and its influences.

Previous tests, announced in February for April, did not actually occur, presumably because of technical issues.

According to D-Russia, the tests of the network isolation will begin after November 1, 2019 and will be repeated at least annually.

 

 

How about a Government Harms Bill?...

The Government reveals that it spent £2.2 million on its failed Age Verification for porn policy, and that doesn't include the work from its own civil servants


Link Here 25th October 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust

More than £2m of taxpayers' money was spent preparing for the age verification for porn censorship regime before the policy was dropped in early October, the government has revealed.

The bulk of the spending, £2.2m, was paid to the BBFC to do the detailed work on the policy from 2016 onwards. Before then, additional costs were borne by the Department for Digital, Culture, Media and Sport, where civil servants were tasked with developing the proposals as part of their normal work.

Answering a written question from the shadow DCMS secretary, Tom Watson, Matt Warman for the government added: Building on that work, we are now establishing how the objectives of part three of the Digital Economy Act can be delivered through our online harms regime.

It is not just government funds that were wasted on the abortive scheme. Multiple private companies had developed systems in the hope of providing age verification services.

The bizarre thing is that all this money was spent when the government knew that it wouldn't even prevent determined viewers from getting access to porn. It was only ever considered effective at stopping kids from stumbling on porn.

So all that expense, and all that potential danger for adults stupidly submitting to age verification, and all for what?

Well, at least next time round the government may consider giving at least a modicum of thought to people's privacy.

It's not ALL about the kids. Surely the government has a duty of care for adults too. We need a Government Harms bill requiring a duty of care for ALL citizens. Now that would be a first!

 

 

BBC on Tor...

BBC joins the dark web so that those in the dark can see the light


Link Here 24th October 2019
The BBC has made its international news website available via Tor, in a bid to thwart censorship attempts.

Tor is a privacy-focused web browser used to access pages on the dark web and also to evade ISP censorship more generally.

The browser obscures who is using it and what data is being accessed, which can help people avoid government surveillance and censorship.

China, Iran and Vietnam are among the countries that have tried to block access to the BBC News website or programmes.

Instead of visiting bbc.co.uk/news or bbc.com/news, users of the Tor browser can visit the new bbcnewsv2vjtpsuy.onion web address. Clicking this web address will not work in a regular web browser.
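
As an illustration of how such an onion address can be reached outside the Tor Browser, here is a minimal, hypothetical Python sketch; it assumes a Tor client is already running locally with its default SOCKS proxy on port 9050 and that the requests library's SOCKS support is installed.

```python
# Minimal sketch: fetch the BBC's .onion mirror through a local Tor client.
# Assumes Tor is running with its default SOCKS proxy on 127.0.0.1:9050 and
# that SOCKS support is available (pip install "requests[socks]").
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: let Tor resolve the .onion name
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("http://bbcnewsv2vjtpsuy.onion",
                    proxies=TOR_PROXIES, timeout=60)
print(resp.status_code, len(resp.content), "bytes")
```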

In a statement, the BBC said:

The BBC World Service's news content is now available on the Tor network to audiences who live in countries where BBC News is being blocked or restricted. This is in line with the BBC World Service mission to provide trusted news around the world.

 

 

No erotic dancing, no suggestive offers, no obscured nudity...

Facebook has updated its censorship rules about sexual solicitation


Link Here 24th October 2019
Full story: Facebook Censorship...Facebook quick to censor
This past summer, without much fanfare, Facebook updated their censorship rules concerning sexual expression on the company's platforms, including Instagram.

The new language, under the guise of preventing sexual solicitation, restricts even further the posts that sex workers are allowed to share, making them even more exposed to targeted harassment campaigns by anti-sex crusaders.

Among the new things that could get someone's Instagram account flagged and/or removed for Sexual Solicitation: the eggplant or peach emoji in conjunction with any statement referring to being horny; nude pictures with digital alterations or emojis covering female nipples and buttocks.

The new rules include:

Do not post:

Attempted coordination of or recruitment for adult sexual activities, including but not limited to:

    • Filmed sexual activities
    • Pornographic activities, strip club shows, live sex performances, erotic dances
    • Sexual, erotic, or tantric massages

Explicit sexual solicitation by, including but not limited to the following, offering or asking for:

    • Sex or sexual partners
    • Sex chat or conversations
    • Nude photos/videos/imagery

Content that meets both of the following criteria:

    Criteria 1: Offer or Ask

      Content implicitly or indirectly offers or asks for:

        • Nude imagery, or
        • Sex or sexual partners, or
        • Sex chat conversations

    Criteria 2: Suggestive Elements

      Content makes the aforementioned offer or ask using one of the following sexually suggestive elements:

        • Contextually specific and commonly sexual emojis or emoji strings, or
        • Regional sexualized slang, or
        • Mentions or depictions of sexual activity (including hand drawn, digital, or real world art) such as: sexual roles, sex positions, fetish scenarios, state of arousal, act of sexual intercourse or activity (sexual penetration or self-pleasuring), or
        • Imagery of real individuals with nudity covered by human parts, objects, or digital obstruction, including long shots of fully nude butts

 

 

A Copyright Troll's Charter explained...

TorrentFreak outlines the pros and cons of the proposed new small claims copyright process


Link Here 24th October 2019

The U.S. House of Representatives has passed the CASE Act, a new bill that proposes to institute a small claims court for copyright disputes. Supporters see the legislation as the ideal tool for smaller creators to protect their works, but opponents warn that it will increase the number of damages claims against regular Internet users. The new bill, which passed with a clear 410-6 vote, will now progress to the Senate.

The bill is widely supported by copyright-heavy industry groups as well as many individual creators. However, as is often the case with new copyright legislation, there's also plenty of opposition from digital rights groups and Internet users who fear that the bill will do more harm than good.

Supporters of the CASE Act point out that the new bill is the missing piece in the present copyright enforcement toolbox. They believe that many creators are not taking action against copyright infringers at the moment, because filing federal lawsuits is too expensive. The new small claims tribunal will fix that, they claim.

Opponents, for their part, fear that the new tribunal will trigger an avalanche of claims against ordinary Internet users, with potential damages of up to $30,000 per case. While targeted people have the choice to opt-out, many simply have no clue what to do, they argue.

Thus far, legislators have shown massive support for the new plan. Yesterday the bill was up for a vote at the U.S. House of Representatives where it was passed with overwhelming bipartisan support. With a 410-6 vote, the passage of the CASE Act went smoothly.

Public Knowledge and other groups, such as EFF and Re:Create, fear that the bill will lead to more copyright complaints against regular Internet users. Re:Create's Executive Director Joshua Lamel hopes that the Senate will properly address these concerns. Lamel notes:

The CASE Act will expose ordinary Americans to tens of thousands of dollars in damages for things most of us do every day. We are extremely disappointed that Congress passed the CASE Act as currently written, and we hope that the Senate will do its due diligence to make much-needed amendments to this bill to protect American consumers and remove any constitutional concerns.

 

 

Copyright Troll's Charter...

The US House of Representatives Votes in Favour of Disastrous Copyright Bill


Link Here 23rd October 2019

The US House of Representatives has just voted in favor of the Copyright Alternative in Small-Claims Enforcement Act (CASE Act) by 410-6 (with 16 members not voting), moving forward a bill that Congress has had no hearings and no debates on so far this session. That means that there has been no public consideration of the serious harm the bill could do to regular Internet users and their expression online.

The CASE Act creates a new body in the Copyright Office which will receive copyright complaints, notify the person being sued, and then decide if money is owed and how much. This new Copyright Claims Board will be able to fine people up to $30,000 per proceeding. Worse, if you get one of these notices (maybe an email, maybe a letter--the law actually does not specify) and accidentally ignore it, you're on the hook for the money with a very limited ability to appeal. $30,000 could bankrupt or otherwise ruin the lives of many Americans.

The CASE Act also has bad changes to copyright rules, would let sophisticated bad actors get away with trolling and infringement, and might even be unconstitutional. It fails to help the artists it's supposed to serve and will put a lot of people at risk.

Even though the House has passed the CASE Act, we can still stop it in the Senate. Tell your Senators to vote "no" on the CASE Act.

Take Action

Tell the Senate not to bankrupt regular Internet users

 

 

Offsite Article: Twitter's Rules are Trumped...


Link Here 19th October 2019
Full story: Twitter Censorship...Twitter offers country by country take downs
Twitter details exactly how world leaders are partially exempted from the website's usual biased political censorship rules

See article from blog.twitter.com

 

 

A verified dud...

The government cancels current plans for age verification requirements for porn as defined in the Digital Economy Act. It will readdress the issue as part of its Online Harms bill


Link Here 16th October 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Nicky Morgan, Secretary of State for Digital, Culture, Media and Sport, issued a written statement cancelling the government's current plans to require age verification for porn. She wrote:

The government published the Online Harms White Paper in April this year. It proposed the establishment of a duty of care on companies to improve online safety, overseen by an independent regulator with strong enforcement powers to deal with non-compliance. Since the White Paper's publication, the government's proposals have continued to develop at pace. The government announced as part of the Queen's Speech that we will publish draft legislation for pre-legislative scrutiny. It is important that our policy aims and our overall policy on protecting children from online harms are developed coherently in view of these developments with the aim of bringing forward the most comprehensive approach possible to protecting children.

The government has concluded that this objective of coherence will be best achieved through our wider online harms proposals and, as a consequence, will not be commencing Part 3 of the Digital Economy Act 2017 concerning age verification for online pornography. The Digital Economy Act objectives will therefore be delivered through our proposed online harms regulatory regime. This course of action will give the regulator discretion on the most effective means for companies to meet their duty of care. As currently drafted, the Digital Economy Act does not cover social media platforms.

The government's commitment to protecting children online is unwavering. Adult content is too easily accessed online and more needs to be done to protect children from harm. We want to deliver the most comprehensive approach to keeping children safe online and recognised in the Online Harms White Paper the role that technology can play in keeping all users, particularly children, safe. We are committed to the UK becoming a world-leader in the development of online safety technology and to ensure companies of all sizes have access to, and adopt, innovative solutions to improve the safety of their users. This includes age verification tools and we expect them to continue to play a key role in protecting children online.

The BBFC sounded a bit miffed about losing the internet censor gig. The BBFC posted on its website:

The introduction of age-verification on pornographic websites in the UK is a necessary and important child protection measure. The BBFC was designated as the Age-verification Regulator under the Digital Economy Act 2017 (DEA) in February 2018, and has since worked on the implementation of age-verification, developing a robust standard of age-verification designed to stop children from stumbling across or accessing pornography online. The BBFC had all systems in place to undertake the role of AV Regulator, to ensure that all commercial pornographic websites accessible from the UK would have age gates in place or face swift enforcement action.

The BBFC understands the Government's decision, announced today, to implement age-verification as part of the broader online harms strategy. We will bring our expertise and work closely with government to ensure that the child protection goals of the DEA are achieved.

I don't suppose we will ever hear the real reasons why the law was ditched, but I suspect that there were serious problems with it. The amount of time and effort put into this, and the serious ramifications for the BBFC and the age verification companies that must now be facing hard times, surely make this cancellation a big decision.

It is my guess that a very troublesome issue for the authorities is how both age verification and website blocking would have encouraged a significant number of people to work around government surveillance of the internet. It is probably more important to keep tabs on terrorists and child abusers than to lose this capability for the sake of stopping kids stumbling on porn.

Although the news of the cancellation was reported today, Rowland Manthorpe, a reporter for Sky News suggested on Twitter that maybe the idea had already been shelved back in the summer. He tweeted:

When @AJMartinSky and I broke the news that the porn block was being delayed again, we reported that it was on hold indefinitely. It was. Then our story broke. Inside DCMS a sudden panic ensued. Quickly, they drafted a statement saying it was delayed for 6 months.

 

 

My Ministers will continue to develop proposals to extend internet censorship...

A summary of the Online Harms Bill as referenced in the Queen's Speech


Link Here 15th October 2019

The April 2019 Online Harms White Paper set out the Government's plan for world-leading legislation to make the UK the safest place in the world to be online.

The proposals, as set out in the White Paper, were:

  • A new duty of care on companies towards their users, with an independent regulator to oversee this framework.

  • We want to keep people safe online, but we want to do this in a proportionate way, ensuring that freedom of expression is upheld and promoted online, and businesses do not face undue burdens.

  • We are seeking to do this by ensuring that companies have the right processes and systems in place to fulfil their obligations, rather than penalising them for individual instances of unacceptable content.

  • Our public consultation on this has closed and we are analysing the responses and considering the issues raised. We are working closely with a variety of stakeholders, including technology companies and civil society groups, to understand their views.

Next steps:

  • We will publish draft legislation for pre-legislative scrutiny.

  • Ahead of this legislation, the Government will publish work on tackling the use of the internet by terrorists and those engaged in child sexual abuse and exploitation, to ensure companies take action now to tackle content that threatens our national security and the physical safety of children.

  • We are also taking forward additional measures, including a media literacy strategy, to empower users to stay safe online. A Safety by Design framework will help start-ups and small businesses to embed safety during the development or update of their products and services.

 

 

Offsite Article: Without encryption, we will lose all privacy. This is our new battleground...


Link Here 15th October 2019
Full story: Internet Encryption...Encryption, essential for security but governments don't see it that way
The US, UK and Australia are taking on Facebook in a bid to undermine the only method that protects our personal information. By Edward Snowden

See article from theguardian.com

 

 

Bountiful times for film censors in New Zealand...

Prime minister Jacinda Ardern's Christchurch Call leads to a doubling of funding for the censors


Link Here 14th October 2019
Full story: The Christchurch Call...Calls for censorship in response to New Zealand gun massacre
New Zealand's government is doubling the funding for its film censors. Prime Minister Jacinda Ardern says the extra money for the Office of Film & Literature Classification will let it crack down on terrorist content alongside child exploitation images.

The package is the main domestic component of Ardern's more globally-focused Christchurch Call. The Call is a set of pledges and practices she is promoting following the Christchurch terror attack of March 15.

The $17m funding boost will go towards the Chief Censor and the Censorship Compliance Unit and will see about 17 new censors employed.

The announcement came with a bit of a barb though as it was noted that it took two days for the chief censor to rule that the livestream of the Christchurch mosque attack was objectionable, something the officials said could be sped up with new funding for his office.

Stuff.co.nz commented that the prime minister has invested serious time and political capital into her Christchurch Call program and noted that it met its first real test last week after another racist attack was livestreamed from the German city of Halle. That video was deemed objectionable by the censor and the shared protocol created by the Christchurch Call was put into action. Presumably this time round it took less than two days to decide that it should be banned.

 

 

More about monitoring politicians' 'disinformation' rather than unrealistically trying to stop it...

Oxford researchers make recommendations to control politicians' social media campaigners during elections


Link Here 14th October 2019

The Market of Disinformation, a report produced by Oxford Information Labs on behalf of OxTEC, examines the impact of algorithmic changes made by social media platforms, designed to curb the spread of disinformation, through the lens of digital marketing.

The report highlights some of the techniques used by campaigners to attract, retain and persuade online audiences. It also sets out recommendations for the UK Electoral Commission.

Key findings:

  • Despite over 125 announcements in three years aimed at demoting disinformation and junk news, algorithmic changes made by platforms like Facebook, Google, and Twitter have not significantly altered brands' and companies' digital marketing

  • Election campaigns continue to generate a significant amount of organic engagement, with people typically accessing content that has not been supported by paid placement

  • Political campaigns blend paid and organic material to maximise reach and minimise spend

  • There has been growth in digital marketing techniques combining online and offline data to reach specific audiences

Stacie Hoffmann, cyber security and policy expert at Oxford Information Labs, said:

Today's successful online campaigns effectively blend organic and paid-for elements, standing or falling by the levels of engagement they provoke amongst users. Self-regulation of social media platforms has only succeeded in achieving higher profits for the platforms by reducing organic reach and increasing the amount of paid content required by advertisers to reach new audiences.

Professor Philip Howard, Director of the Oxford Internet Institute (OII) and OxTEC Commissioner said:

The report highlights how the algorithmic changes made by social media platforms have been inadequate in curbing the spread of low-quality content online. Those actors spreading disinformation have quickly identified algorithmic changes and have adjusted their strategies accordingly. Fundamentally self-regulation by social media platforms has failed to achieve the promised public policy benefit of improving the quality of the information ecosystem.

The Oxford Information Labs report also sets out a series of recommendations for consideration by OxTEC on how to protect the integrity of elections. The recommendations are based on developing and implementing guidance related to four distinct areas: digital imprints; sanctions; financial reporting and campaign spend; and foreign interference and location verification.

OxTEC, convened by the Oxford Internet Institute at the University of Oxford, consists of academics, researchers, technology experts and policymakers, and was established to explore how to protect the integrity of democracy in a digital age. It is due to publish a full report shortly.

 

 

Hook, line and sinker...

Just a reminder to those unnecessarily handing over their personal data to adult websites, eg for age verification. Hackers will attempt to steal your data, ask hookers.nl


Link Here 12th October 2019
Account details of more than 250,000 people who used a site for sex workers in the Netherlands have been stolen in a hack attack.

Email addresses, user names and passwords were stolen from a site called Hookers.nl.

The attacker is believed to have exploited a bug in its chat room software found last month. Reports suggest the malicious hacker who took the data has offered it for sale on a dark web marketplace.

The website's media spokesman Tom Lobermann told Dutch broadcaster NOS that the site had informed everyone who had an account about the breach. The message sent by the site's administrators also advised people to change their passwords.

Hookers.nl used a popular program for hosting online forums and discussions called vBulletin. In late September, security researchers identified a vulnerability in the program that could be exploited to steal data. vBulletin quickly produced a patch for the bug, but several sites were breached before they could install it.

 

 

An alternative internet governance...

China and Russia internet censors to work together at the World Internet Conference


Link Here 11th October 2019
Russia's state internet censor has announced that China and Russia will sign an agreement to cooperate in further censoring internet access for their citizens.

Roskomnadzor said it would formally sign the international treaty with their Chinese counterpart, the Cyberspace Administration of China, on October 20. That date is the first day of China's three-day World Internet Conference, to be held this year in the city of Wuzhen, in eastern Zhejiang province.

This co-operation seems to be based on the two countries promoting an alternative internet governance regime that is not controlled by the US. Such a regime would give national censorship processes a route deeper into the overall management of the internet, eg allowing them to disallow censorship-busting technology such as encrypted DNS.

 

 

Offsite Article: Upcoming automated censorship...


Link Here 11th October 2019
Former MEP Catherine Stihler keeps up the campaign against the EU's censorship machines

See article from eureporter.co

 

 

Offsite Article: In the name of safe browsing...


Link Here 11th October 2019
Apple's Safari browser hands over your browsing history to a company controlled by the Chinese government

See article from reclaimthenet.org

 

 

Breaking out...

China's Global Reach: Surveillance and Censorship Beyond the Great Firewall. By Danny O'Brien


Link Here 11th October 2019

Those outside the People's Republic of China (PRC) are accustomed to thinking of the Internet censorship practices of the Chinese state as primarily domestic, enacted through the so-called "Great Firewall"--a system of surveillance and blocking technology that prevents Chinese citizens from viewing websites outside the country. The Chinese government's justification for that firewall is based on the concept of "Internet sovereignty." The PRC has long declared that "within Chinese territory, the internet is under the jurisdiction of Chinese sovereignty."

Hong Kong, as part of the "one country, two systems" agreement, has largely lived outside that firewall: foreign services like Twitter, Google, and Facebook are available there, and local ISPs have made clear that they will oppose direct state censorship of its open Internet.

But the ongoing Hong Kong protests, and mainland China's pervasive attempts to disrupt and discredit the movement globally, have highlighted that China is not above trying to extend its reach beyond the Great Firewall, and beyond its own borders. In attempting to silence protests that lie outside the Firewall, in full view of the rest of the world, China is showing its hand, and revealing the tools it can use to silence dissent or criticism worldwide.

Some of those tools--such as pressure on private entities, including American corporations like the NBA and Blizzard--have caught U.S. headlines and outraged customers and employees of those companies. Others have been more technical, and less obvious to Western observers.

The "Great Cannon" takes aim at sites outside the Firewall

The Great Cannon is a large-scale technology deployed by ISPs based in China to inject javascript code into customers' insecure (HTTP) requests. This code weaponizes the millions of mainland Chinese Internet connections that pass through these ISPs. When users visit insecure websites, their browsers will also download and run the government's malicious javascript--which will cause them to send additional traffic to sites outside the Great Firewall, potentially slowing these websites down for other users, or overloading them entirely.

The Great Cannon's debut in 2015 took down GitHub, where Chinese users were hosting anti-censorship software and mirrors of otherwise-banned news outlets like the New York Times. Following widespread international backlash, this attack was halted.

Last month, the Great Cannon was activated once again, aiming this time at Hong Kong protestors. It briefly took down LIHKG, a Hong Kong social media platform central to organizing this summer's protests.
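
Because the injection only works against plaintext HTTP, the practical defence for site operators is to serve everything over HTTPS and instruct browsers never to fall back to HTTP. The hypothetical checker below (an illustrative sketch, not from the EFF article) probes whether a site does both: redirecting plain HTTP requests to HTTPS and sending a Strict-Transport-Security header.

```python
# Minimal sketch of checking whether a site leaves on-path attackers (such as
# the Great Cannon) any plaintext HTTP traffic to tamper with. A hardened site
# redirects http:// requests to https:// and sets Strict-Transport-Security
# so returning browsers skip the insecure request entirely.
import requests

def check_https_enforcement(host: str) -> dict:
    plain = requests.get(f"http://{host}/", allow_redirects=False, timeout=10)
    location = plain.headers.get("Location", "")
    secure = requests.get(f"https://{host}/", timeout=10)
    return {
        "redirects_to_https": plain.status_code in (301, 302, 307, 308)
                              and location.startswith("https://"),
        "hsts": secure.headers.get("Strict-Transport-Security"),
    }

if __name__ == "__main__":
    print(check_https_enforcement("example.com"))
```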

Targeting the global Chinese community through malware

Pervasive online surveillance is a fact of life within the Chinese mainland. But if the communities the Chinese government wants to surveil aren't at home, it is increasingly willing to invest in expensive zero-days to watch them abroad, or otherwise hold their families at home hostage.

Last month, security researchers uncovered several expensive and involved mobile malware campaigns targeting the Uyghur and Tibetan diasporas. One constituted a broad "watering hole" attack using several zero-days to target visitors of Uyghur-language websites.

As we've noted previously, this represents a sea-change in how zero-days are being used; while China continues to target specific high-profile individuals in spear-phishing campaigns, they are now unafraid to cast a much wider net, in order to place their surveillance software on entire ethnic and political groups outside China's border.

Censoring Chinese Apps Abroad

At home, China doesn't need to use zero-days to install its own code on individuals' personal devices. Chinese messaging and browser app makers are required to include government filtering on their client, too. That means that when you use an app created by a mainland Chinese company, it likely contains code intended to scan and block prohibited websites or language.
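
As a rough illustration of what such device-side filtering amounts to, here is a deliberately simplified, hypothetical sketch; the blocklist entries and helper names are invented, and real implementations are more sophisticated, but the principle of checking messages against a list shipped with the client is the same.

```python
# Toy illustration (not real code from any Chinese app) of client-side keyword
# filtering: before a message is sent, the client checks it against a blocklist
# bundled with the app and silently drops anything that matches.
BLOCKLIST = {"example banned phrase", "another banned phrase"}  # hypothetical entries

def is_blocked(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in BLOCKLIST)

def send_message(message: str) -> None:
    if is_blocked(message):
        # Clients of this kind often drop the message silently, so neither
        # sender nor recipient is told that filtering happened.
        return
    deliver(message)  # hypothetical delivery function

def deliver(message: str) -> None:
    print("delivered:", message)
```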

Until now, China has been largely content to keep the activation of this device-side censorship concentrated within its borders. The keyword filtering embedded in WeChat only occurs for users with a mainland Chinese phone number. Chinese-language versions of domestic browsers censor and surveil significantly more than the English-language versions. But as Hong Kong and domestic human rights abuses draw international interest, the temptation to enforce Chinese policy abroad has grown.

TikTok is one of the largest and fastest-growing global social media platforms spun out of Beijing. It heavily moderates its content, and supposedly has localized censors for different jurisdictions. But following a government crackdown on "short video" platforms at the beginning of this year, news outlets began reporting on the lack of Hong Kong-related content on the platform. TikTok's leaked general moderation guidelines expressly forbid any content criticizing the Chinese government, like content related to the Chinese persecution of ethnic minorities, or about Tiananmen Square.

Internet users outside the United States may recognise the dynamic of a foreign service exporting its domestic decision-making abroad. For many years, America's social media companies have been accused of exporting U.S. culture and policy to the rest of the world: Facebook imposes worldwide censorship of nudity and sexual language, even in countries that are more culturally permissive on these topics than the U.S. Most services obey DMCA takedown procedures for allegedly copyright-infringing content, even in countries that have alternative resolution laws. The influence that the United States has on its domestic tech industries has led to an outsized influence on those companies' international user base.

That said, U.S. companies have, as with developers in most countries, resisted the inclusion of state-mandated filters or government-imposed code within their own applications. In China, domestic and foreign companies have been explicitly mandated to comply with Chinese censorship under the national Cybersecurity Law passed in 2017, which provides aggressive yet vague guidelines for content moderation. China imposing its rules on global Chinese tech companies differs from the United States' influence on the global Internet in more than just degree.

Money Talks: But Critics Can't

This brings us to the most visible arm of China's new worldwide censorship toolkit: economic pressure on global companies. The Chinese domestic market is increasingly important to companies like Blizzard and the National Basketball Association (NBA). This means that China can use threats of boycotts or the denial of access to Chinese markets to silence these companies when they, or people affiliated with them, express support for the Hong Kong protestors.

Already, people are fighting back against the imposition of Chinese censorship on global companies. Blizzard employees staged a walk-out in protest, NBA fans continue to voice their support for the demonstrations in Hong Kong, and fans are rallying to boycott the two companies. But multi-national companies who can control their users' speech can expect to see more pressure from China as its economic clout grows.

Is China setting the Standard for Global Enforcement of Local Law?

Parochial "Internet sovereignty' has proven insufficient to China's needs: Domestic policy objectives now require it to control the Internet outside and inside its borders.

To be clear, China's government is not alone in this: rather than forcefully opposing and protesting their actions, other states--including the United States and the European Union-- have been too busy making their own justifications for the extra-territorial exercise of their own surveillance and censorship capabilities.

China now projects its Internet power abroad through the pervasive and unabashed use of malware and state-supported DDoS attacks; mandated client-side filtering and surveillance; economic sanctions to limit cross-border free speech; and pressure on private entities to act as a global cultural police.

Unless lawmakers, corporations, and individual users are as brave in standing up to authoritarian acts as the people of Hong Kong, we can expect to see these tactics adopted by every state, against every user of the Internet.

 

 

Film censors become internet censors...

A new internet censorship law is signed into law by South Africa's president


Link Here 8th October 2019
Full story: Internet Censorship in South Africa...Proposal to block all porn from South Africans
South Africa's Films and Publications Act, also known as the Internet censorship Bill, came into force when President Cyril Ramaphosa signed the controversial Bill.

Opponents of the law had criticised the vague and broad terminology used; stipulations that would see the Film and Publication Board overstepping into the Independent Communications Authority of South Africa's regulatory jurisdiction; and that it contains constitutional infringements on citizens' right to privacy and freedom of expression.

The new law provides for the establishment, composition and appointment of members of an enforcement committee that will, among other tasks, control online distribution of films and games.

The law extends the compliance obligations of the Films and Publications Act and the compliance and monitoring functions of the Film and Publication Board to online distributors.

 

 

Facebook excuses politicians from telling the truth in advertising...

If disinformation were to be banned there would be no politicians, no religion, no Christmas and no railway timetables


Link Here 7th October 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has quietly rescinded a policy banning false claims in advertising, creating a specific exemption that leaves political adverts unconstrained regarding how they could mislead or deceive.

Facebook had previously banned adverts containing deceptive, false or misleading content, a much stronger restriction than its general rules around Facebook posts. But, as reported by the journalist Judd Legum, in the last week the rules have narrowed considerably, only banning adverts that include claims debunked by third-party fact-checkers, or, in certain circumstances, claims debunked by organisations with particular expertise.

A separate policy introduced by the social network recently declared opinion pieces and satire ineligible for verification, including any website or page with the primary purpose of expressing the opinion or agenda of a political figure. The end result is that any direct statement from a candidate or campaign cannot be fact-checked and so is automatically exempted from policies designed to prevent misinformation. (After the publication of this story, Facebook clarified that only politicians currently in office or running for office, and political parties, are exempt: other political adverts still need to be true.)

 

 

Commented: Endangering the many to detect the few...

The government initiates a data sharing agreement with the US and takes the opportunity to complain about internet encryption that keeps us safe from snoops, crooks, scammers and thieves


Link Here 5th October 2019
Full story: UK Government vs Encryption...Government seeks to restrict peoples use of encryption

Home Secretary Priti Patel has signed an historic agreement that will enable British law enforcement agencies to directly demand electronic data relating to terrorists, child sexual abusers and other serious criminals from US tech firms.

The world-first UK-US Bilateral Data Access Agreement will dramatically speed up investigations and prosecutions by enabling law enforcement, with appropriate authorisation, to go directly to the tech companies to access data, rather than through governments, which can take years.

The Agreement was signed with US Attorney General William P. Barr in Washington DC, where the Home Secretary also met security partners to discuss the two countries' ever deeper cooperation and global leadership on security.

The current process, which sees requests for communications data from law enforcement agencies submitted and approved by central governments via Mutual Legal Assistance (MLA), can often take anywhere from six months to two years. Once in place, the Agreement will see the process reduced to a matter of weeks or even days.

The US will have reciprocal access, under a US court order, to data from UK communication service providers. The UK has obtained assurances which are in line with the government's continued opposition to the death penalty in all circumstances.

Any request for data must be made under an authorisation in accordance with the legislation of the country making the request and will be subject to independent oversight or review by a court, judge, magistrate or other independent authority.

The Agreement does not change anything about the way companies can use encryption and does not stop companies from encrypting data.

It gives effect to the Crime (Overseas Production Orders) Act 2019, which received Royal Assent in February this year and was facilitated by the CLOUD Act in America, passed last year.

Letter to Mark Zuckerberg asking him not to keep his internet users safe through encrypted messages

The Home Secretary has also published an open letter to Facebook, co-signed with US Attorney General William P. Barr, Acting US Homeland Security Secretary Kevin McAleenan and Australia's Minister for Home Affairs Peter Dutton, outlining serious concerns with the company's plans to implement end-to-end encryption across its messaging services. The letter reads:

Dear Mr. Zuckerberg,

We are writing to request that Facebook does not proceed with its plan to implement end-to-end encryption across its messaging services without ensuring that there is no reduction to user safety and without including a means for lawful access to the content of communications to protect our citizens.

In your post of 6 March 2019, 'A Privacy-Focused Vision for Social Networking', you acknowledged that "there are real safety concerns to address before we can implement end-to-end encryption across all our messaging services." You stated that "we have a responsibility to work with law enforcement and to help prevent" the use of Facebook for things like child sexual exploitation, terrorism, and extortion. We welcome this commitment to consultation. As you know, our governments have engaged with Facebook on this issue, and some of us have written to you to express our views. Unfortunately, Facebook has not committed to address our serious concerns about the impact its proposals could have on protecting our most vulnerable citizens.

We support strong encryption, which is used by billions of people every day for services such as banking, commerce, and communications. We also respect promises made by technology companies to protect users' data. Law abiding citizens have a legitimate expectation that their privacy will be protected. However, as your March blog post recognized, we must ensure that technology companies protect their users and others affected by their users' online activities. Security enhancements to the virtual world should not make us more vulnerable in the physical world. We must find a way to balance the need to secure data with public safety and the need for law enforcement to access the information they need to safeguard the public, investigate crimes, and prevent future criminal activity. Not doing so hinders our law enforcement agencies' ability to stop criminals and abusers in their tracks.

Companies should not deliberately design their systems to preclude any form of access to content, even for preventing or investigating the most serious crimes. This puts our citizens and societies at risk by severely eroding a company's ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries' attempts to undermine democratic values and institutions, preventing the prosecution of offenders and safeguarding of victims. It also impedes law enforcement's ability to investigate these and other serious crimes.

Risks to public safety from Facebook's proposals are exacerbated in the context of a single platform that would combine inaccessible messaging services with open profiles, providing unique routes for prospective offenders to identify and groom our children.

Facebook currently undertakes significant work to identify and tackle the most serious illegal content and activity by enforcing your community standards. In 2018, Facebook made 16.8 million reports to the US National Center for Missing & Exploited Children (NCMEC) -- more than 90% of the 18.4 million total reports that year. As well as child abuse imagery, these referrals include more than 8,000 reports related to attempts by offenders to meet children online and groom or entice them into sharing indecent imagery or meeting in real life. The UK National Crime Agency (NCA) estimates that, last year, NCMEC reporting from Facebook will have resulted in more than 2,500 arrests by UK law enforcement and almost 3,000 children safeguarded in the UK. Your transparency reports show that Facebook also acted against 26 million pieces of terrorist content between October 2017 and March 2019. More than 99% of the content Facebook takes action against -- both for child sexual exploitation and terrorism -- is identified by your safety systems, rather than by reports from users.

While these statistics are remarkable, mere numbers cannot capture the significance of the harm to children. To take one example, Facebook sent a priority report to NCMEC, having identified a child who had sent self-produced child sexual abuse material to an adult male. Facebook located multiple chats between the two that indicated historical and ongoing sexual abuse. When investigators were able to locate and interview the child, she reported that the adult had sexually abused her hundreds of times over the course of four years, starting when she was 11. He also regularly demanded that she send him sexually explicit imagery of herself. The offender, who had held a position of trust with the child, was sentenced to 18 years in prison. Without the information from Facebook, abuse of this girl might be continuing to this day.

Our understanding is that much of this activity, which is critical to protecting children and fighting terrorism, will no longer be possible if Facebook implements its proposals as planned. NCMEC estimates that 70% of Facebook's reporting -- 12 million reports globally -- would be lost. This would significantly increase the risk of child sexual exploitation or other serious harms. You have said yourself that "we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves". While this trade-off has not been quantified, we are very concerned that the right balance is not being struck, which would make your platform an unsafe space, including for children.

Equally important to Facebook's own work to act against illegal activity, law enforcement rely on obtaining the content of communications, under appropriate legal authorisation, to save lives, enable criminals to be brought to justice, and exonerate the innocent.

We therefore call on Facebook and other companies to take the following steps:

  • embed the safety of the public in system designs, thereby enabling you to continue to act against illegal content effectively with no reduction to safety, and facilitating the prosecution of offenders and safeguarding of victims

  • enable law enforcement to obtain lawful access to content in a readable and usable format

  • engage in consultation with governments to facilitate this in a way that is substantive and genuinely influences your design decisions

  • not implement the proposed changes until you can ensure that the systems you would apply to maintain the safety of your users are fully tested and operational

We are committed to working with you to focus on reasonable proposals that will allow Facebook and our governments to protect your users and the public, while protecting their privacy. Our technical experts are confident that we can do so while defending cyber security and supporting technological innovation. We will take an open and balanced approach in line with the joint statement of principles signed by the governments of the US, UK, Australia, New Zealand, and Canada in August 2018 and the subsequent communique agreed in July this year.

As you have recognised, it is critical to get this right for the future of the internet. Children's safety and law enforcement's ability to bring criminals to justice must not be the ultimate cost of Facebook taking forward these proposals.

Yours sincerely,

Rt Hon Priti Patel MP, United Kingdom Secretary of State for the Home Department

William P. Barr, United States Attorney General

Kevin K. McAleenan, United States Secretary of Homeland Security (Acting)

Hon Peter Dutton MP, Australian Minister for Home Affairs

Update: An All-Out Attack on Encryption

5th October 2019. See article from eff.org

Top law enforcement officials in the United States, United Kingdom, and Australia told Facebook today that they want backdoor access to all encrypted messages sent on all its platforms. In an open letter, these governments called on Mark Zuckerberg to stop Facebook's plan to introduce end-to-end encryption on all of the company's messaging products and instead promise that it will "enable law enforcement to obtain lawful access to content in a readable and usable format."

This is a staggering attempt to undermine the security and privacy of communications tools used by billions of people. Facebook should not comply. The letter comes in concert with the signing of a new agreement between the US and UK that will allow law enforcement in one jurisdiction to more easily obtain electronic data stored in the other. But the letter to Facebook goes much further: law enforcement and national security agencies in these three countries are asking for nothing less than access to every conversation that crosses every digital device.

The letter focuses on the challenges of investigating the most serious crimes committed using digital tools, including child exploitation, but it ignores the severe risks that introducing encryption backdoors would create. Many people--including journalists, human rights activists, and those at risk of abuse by intimate partners--use encryption to stay safe in the physical world as well as the online one. And encryption is central to preventing criminals and even corporations from spying on our private conversations, and to ensuring that the communications infrastructure we rely on is truly working as intended. What's more, the backdoors into encrypted communications sought by these governments would be available not just to governments with a supposedly functional rule of law. Facebook and others would face immense pressure to also provide them to authoritarian regimes, who might seek to spy on dissidents in the name of combatting terrorism or civil unrest, for example.

The Department of Justice and its partners in the UK and Australia claim to support "strong encryption," but the unfettered access to encrypted data described in this letter is incompatible with how encryption actually works.
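
As a rough illustration of that incompatibility, here is a minimal end-to-end encryption sketch in Python using the PyNaCl library; the key names and message are invented for the example. The point is structural: only the two endpoints ever hold the private keys, so a relaying service, or anyone compelling that service, sees nothing but ciphertext unless the scheme itself is weakened for everybody.

    # Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
    # A server relaying 'ciphertext' never holds either private key, so it cannot read the message.
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()   # generated and kept on Alice's device
    bob_key = PrivateKey.generate()     # generated and kept on Bob's device

    # Each endpoint combines its own private key with the other endpoint's public key.
    sender_box = Box(alice_key, bob_key.public_key)
    receiver_box = Box(bob_key, alice_key.public_key)

    ciphertext = sender_box.encrypt(b"a private message")   # this is all a relay ever sees
    assert receiver_box.decrypt(ciphertext) == b"a private message"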

 

 

Offsite Article: France becomes the first EU country to use facial recognition to create a digital ID card...


Link Here5th October 2019
No doubt soon to be the baseline ID required for logging people's porn browsing in the name of child protection.

See article from bloomberg.com

 

 

Facebook can be ordered to remove posts worldwide...

EU judges make up more internet censorship law without reference to practicality, definitions and consideration of consequences


Link Here4th October 2019
Full story: Internet Censorship in EU...EU introduces swathes of internet censorship law
Facebook and other social media can be ordered to censor posts worldwide after a ruling from the EU's highest court.

Platforms may also have to seek out similar examples of the illegal content and remove them, instead of waiting for each to be reported.

Facebook said the judgement raised critical questions around freedom of expression.

What was the case about?

The case stemmed from an insulting comment posted on Facebook about Austrian politician Eva Glawischnig-Piesczek, which the country's courts claimed damaged her reputation.

Under EU law, Facebook and other platforms are not held responsible for illegal content posted by users, until they have been made aware of it - at which point, they must remove it quickly. But it was unclear whether an EU directive, saying platforms cannot be made to monitor all posts or actively seek out illegal activity, could be overridden by a court order.

Austria's Supreme Court asked Europe's highest court to clarify this. The EU court duly obliged and ruled:

  • If an EU country finds a post illegal in its courts, it can order websites and apps to take down identical copies of the post
  • Platforms can be ordered to take down equivalent versions of an illegal post, if the message conveyed is essentially unchanged
  • Platforms can be ordered to take down illegal posts worldwide, if there is a relevant international law or treaty

Facebook has said countries would have to set out very clear definitions of what 'identical' and 'equivalent' mean in practice. It said the ruling undermines the long-standing principle that one country does not have the right to impose its laws on speech on another country.

Facebook is unable to appeal against this ruling.

 

 

Government deemed 'fake news' banned...

New internet censorship law comes into force in Singapore


Link Here4th October 2019
Full story: Internet Censorship in Singapore...Heavy handed censorship control of news websites
Singapore's sweeping internet censorship law, claimed to be targeting 'fake news', came into force this week. Under the Protection from Online Falsehoods and Manipulation Act, it is now illegal to spread statements deemed false in circumstances where that information is deemed prejudicial to Singapore's security, public safety, public tranquility, or to the friendly relations of Singapore with other countries, among numerous other topics.

Government ministers can decide whether to order something deemed fake news to be taken down, or for a correction to be put up alongside it. They can also order technology companies such as Facebook and Google to block accounts or sites spreading information that the government doesn't like.

The act also provides for prosecutions of individuals, who can face fines of up to SGD 50,000 (over $36,000) and/or up to five years in prison.

 

 

The EU's cookie law crumbles like something from Alice in Wonderland...

The EU Court decides that websites cannot use cookies until users have actively ticked a clear consent box, neutrally presented next to a decline option. Who is then going to sign up for tracking?


Link Here 3rd October 2019
  


White Rabbit: The EU Council is now in session.

Dormouse: Anyone for a magic cookie? If you accept it, you will be devoured by evil giants; if you decline, your community will be visited by pestilence and famine.

Alice: That is an impossible choice.

Mad Hatter: Only if you believe it is. Everyone wants some magical solution for their problem and everyone refuses to believe in magic.

Alice: Sometimes I've believed in as many as 6 impossible things before breakfast.

Mad Hatter: And we've legislated them into EU law by lunch.
 

European lawmakers (including judges) seem to live in an Alice in Wonderland world where laws are made up on the spur of the moment by either the Mad Hatter or the Dormouse. No thought is given to how they are supposed to work in practice or how they will pan out in reality.

For some reason EU lawmakers decided that the technology of internet cookies personified all that was bad about the internet, particularly that it is largely offered for 'free' whilst in reality being funded by the invasive extraction and exploitation of people's personal data.

Justifiably, there is something to be legislated against here. But why not follow the time-honoured, and effective, route of directing laws against the large companies doing the exploiting? It would have been straightforward to legislate that internet companies must not retain user data that records their behaviour and personal information. The authorities could back this up by putting people in prison, or wiping out companies that don't comply with the law.

But no, the EU came up with some bizarre nonsensical requirement that does little but train people to tick consent boxes without ever reading what they are consenting to. How can they call this data protection? It's data endangerment.

And unsurprisingly, the first wave of implementation by internet companies was to try to make the gaining of consent for tracking cookies a one-sided question, with a default answer of yes and no real mechanism to say no.

Well it didn't take long to see through this silly slice of chicanery, but that doesn't matter...it takes ages for the EU legal system to gear up and put a stop to such a ploy.

So several years on, the European Court of Justice has now ruled that companies should give real options and should not lead people down the garden path towards the option required by the companies.

In an excellent summary of this week's court judgement, the main court findings are listed below (a minimal sketch of a compliant consent flow follows the list):

  • pre-ticked boxes do not amount to valid consent,

  • expiration date of cookies and third party sharing should be disclosed to users when obtaining consent,

  • different purposes should not be bundled under the same consent ask,

  • in order for consent to be valid, 'an active behaviour with a clear view' (which I read as 'intention') of consenting should be obtained (so claiming in notices that consent is obtained by having users continue to use the website very likely does not meet this threshold) and,

  • these rules apply to cookies regardless of whether the data accessed is personal or not.
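
To make those findings concrete, here is a minimal sketch in Python/Flask of what a consent flow meeting them might look like. The route names, cookie names and 90-day lifetime are invented for illustration: nothing is pre-ticked, each purpose is a separate opt-in, the lifetime and third-party sharing are disclosed up front, and no cookie is set unless the user actively ticks the corresponding box.

    # Hypothetical consent-flow sketch (Flask). Nothing is pre-ticked and
    # each purpose is consented to separately, per the court's findings.
    from flask import Flask, request, make_response, render_template_string

    app = Flask(__name__)

    CONSENT_FORM = """
    <form method="post" action="/consent">
      <!-- unchecked by default: consent requires an active tick -->
      <label><input type="checkbox" name="analytics">
        Analytics cookies (expire after 90 days)</label>
      <label><input type="checkbox" name="advertising">
        Advertising cookies, shared with named third parties (expire after 90 days)</label>
      <button type="submit">Save choices</button>
    </form>
    """

    @app.route("/")
    def index():
        return render_template_string(CONSENT_FORM)

    @app.route("/consent", methods=["POST"])
    def consent():
        resp = make_response("Preferences saved")
        # A cookie is only set for a purpose the user explicitly ticked; silence means no.
        for purpose in ("analytics", "advertising"):
            if request.form.get(purpose):
                resp.set_cookie(f"consent_{purpose}", "granted", max_age=90 * 24 * 3600)
        return resp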

pdpecho.com commented on what the court carefully treated as the elephant in the room that would be better left unmentioned, ie what will happen next.

The latest court judgement really says that websites should present the cookie consent question something like this.


Website cookie consent
 
YES

I consent to this website building a detailed profile of my browsing history, personal information & preferences, financial standing and political leaning, to be used to monetise this website in whatever way this website sees fit.


NO

No, I do not consent

Now it does not need an AI system the size of a planet to guess which way internet users will then vote given a clearly specified choice.

There is already a bit of discussion around the EU tea party table worrying about the very obvious outcome that websites will simply block out users who refuse to sign up for tracking cookies. The EU refers to this as a cookie wall, and there are rumblings that this approach will be banned by law.

This would lead to an Alice in Wonderland type of tea shop where customers have the right to decline consent to being charged the price of a chocolate chip cookie, and so can enjoy it for free.

Perfect in Wonderland, but in the real world, European internet businesses would soon be following in the footsteps of declining European high street businesses.

 

 

Offsite Article: Come in Google Adwords, your time is up...


Link Here2nd October 2019
UK data 'protection' censor reiterates GDPR warning to ad tech companies about the blatant use of people's web browsing history without consent

See article from digiday.com

 

 

Rules edit...

Reddit announces censorship rule changes intended to make it easier for moderators to take action against bullying and harassment


Link Here1st October 2019
Social media site Reddit has announced new censorship rules to target bullying and harassment. It explains in a blog post:

These changes, which were many months in the making, were primarily driven by feedback we received from you all, our users, indicating to us that there was a problem with the narrowness of our previous policy. Specifically, the old policy required a behavior to be continued and/or systematic for us to be able to take action against it as harassment. It also set a high bar of users fearing for their real-world safety to qualify, which we think is an incorrect calibration. Finally, it wasn't clear that abuse toward both individuals and groups qualified under the rule. All these things meant that too often, instances of harassment and bullying, even egregious ones, were left unactioned. This was a bad user experience for you all, and frankly, it is something that made us feel not-great too. It was clearly a case of the letter of a rule not matching its spirit.

The changes we're making today are trying to better address that, as well as to give some meta-context about the spirit of this rule: chiefly, Reddit is a place for conversation. Thus, behavior whose core effect is to shut people out of that conversation through intimidation or abuse has no place on our platform.

We also hope that this change will take some of the burden off moderators, as it will expand our ability to take action at scale against content that the vast majority of subreddits already have their own rules against-- rules that we support and encourage.

We all know that context is critically important here, and can be tricky, particularly when we're talking about typed words on the internet. This is why we're hoping today's changes will help us better leverage human user reports. Where previously, we required the harassment victim to make the report to us directly, we'll now be investigating reports from bystanders as well. We hope this will alleviate some of the burden on the harassee.

You should also know that we'll also be harnessing some improved machine-learning tools to help us better sort and prioritize human user reports. But don't worry, machines will only help us organize and prioritize user reports. They won't be banning content or users on their own. A human user still has to report the content in order to surface it to us. Likewise, all actual decisions will still be made by a human admin.

 

 

Cryptic reasons...

US ISPs complain to the US government about loss of snooping capabilities when Google Chrome switches to encrypted DNS


Link Here1st October 2019

We would like to bring to your attention an issue that is of concern to all our organizations. Google is beginning to implement encrypted Domain Name System lookups into its Chrome browser and Android operating system through a new protocol for wireline and wireless service, known as DNS over HTTPS (DoH). If not coordinated with others in the internet ecosystem, this could interfere on a mass scale with critical internet functions, as well as raise data competition issues. We ask that the Committee seek detailed information from Google about its current and future plans and timetable for implementing encrypted DNS lookups, as well as a commitment not to centralize DNS lookups by default in Chrome or Android without first meeting with others in the internet ecosystem, addressing the implications of browser- and operating-system-based DNS lookups, and reaching consensus on implementation issues surrounding encrypted DNS.

Google is unilaterally moving forward with centralizing encrypted domain name requests within Chrome and Android, rather than having DNS queries dispersed amongst hundreds of providers. When a consumer or enterprise uses Google's Android phones or Chrome web browser, Android or Chrome would make Google the encrypted DNS lookup provider by default and most consumers would have limited practical knowledge or ability to detect or reject that choice. Because the majority of worldwide internet traffic (both wired and wireless) runs through the Chrome browser or the Android operating system, Google could become the overwhelmingly predominant DNS lookup provider.

While we recognize the potential positive effects of encryption, we are concerned about the potential for default, centralized resolution of DNS queries, and the collection of the majority of worldwide DNS data by a single, global internet company. By interposing itself between DNS providers and the users of the Chrome browser (> 60% worldwide share) and Android phones (> 80% worldwide share of mobile operating systems), Google would acquire greater control over user data across networks and devices around the world. This could inhibit competitors and possibly foreclose competition in advertising and other industries.

Moreover, the centralized control of encrypted DNS threatens to harm consumers by interfering with a wide range of services provided by ISPs (both enterprise and public-facing) and others. Over the last several decades, DNS has been used to build other critical internet features and functionality including: (a) the provision of parental controls and IoT management for end users; (b) connecting end users to the nearest content delivery networks, thus ensuring the delivery of content in the fastest, cheapest, and most reliable manner; and (c) assisting rights holders' and law enforcement's efforts in enforcing judicial orders in combatting online piracy, as well as law enforcement's efforts in enforcing judicial orders in combatting the exploitation of minors. Google's centralization of DNS would bypass these critical features, undermining important consumer services and protections, and likely resulting in confusion because consumers will not understand why these features are no longer working. This centralization also raises serious cybersecurity risks and creates a single point of failure for global Internet services that is fundamentally at odds with the decentralized architecture of the internet. By limiting the ability to spot network threat indicators, it would also undermine federal government and private sector efforts to use DNS information to mitigate cybersecurity risks.

For these reasons, we ask that the Committee call upon Google not to impose centralized, encrypted DNS as a default standard in Chrome and Android. Instead, Google should follow the Internet Engineering Task Force best practice of fully vetting internet standards, and the internet community should work together to build consensus to ensure that encrypted DNS is implemented in a decentralized way that maximizes consumer welfare and avoids the disruption to essential services identified above.

Sincerely

CTIA
NCTA - The Internet & Television Association
USTelecom - The Broadband Association
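
For reference, a DoH lookup is just an ordinary HTTPS request to a resolver, which is why a network carrying it sees only an encrypted connection rather than the name being queried. A minimal sketch in Python using the requests library against Google's public JSON DoH endpoint (the domain looked up here is arbitrary):

    # Resolve a hostname over DNS-over-HTTPS via Google's public JSON API.
    # On the wire this is just TLS traffic to dns.google; the queried name is not visible in transit.
    import requests

    response = requests.get(
        "https://dns.google/resolve",
        params={"name": "example.com", "type": "A"},
        timeout=5,
    )
    response.raise_for_status()

    for answer in response.json().get("Answer", []):
        print(answer["name"], answer["data"])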

