Melon Farmers Unrated

Internet News


2020: January


 

Young People, Pornography and Age-verification...

Research commissioned by the BBFC reveals that internet porn is part of normal life for 16 and 17 year olds, just like the over 18s


Link Here 31st January 2020
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The most immediately interesting point is that the BBFC has elected not to promote the research that they commissioned and not to publish it on their website. Maybe this simply reflects that the BBFC no longer has the job of internet porn censor. The job looks set to be handed over to Ofcom as part of the government's upcoming online harms bill.

The study by Revealing Reality combined a statistically representative survey of secondary school-age children with in-depth interviews and focus groups with parents. It found that adult material was a prominent feature in British childhood. Almost half of teenagers aged 16 and 17 said they had recently seen pornography, with the researchers believing this figure is substantially lower than the true figure because of respondents' awkwardness when faced with the question.

While 75% of parents did not believe their children would have watched pornography, the majority of these parents' children told the researchers that they had viewed adult material.

The report also found that while parents thought their sons would watch pornography for sexual pleasure, many erroneously believed their daughters would primarily see pornography by accident. It said: This is contrary to the qualitative research findings showing that many girls were also using pornography for sexual pleasure.

The researchers said that one side effect of early exposure to online pornography is that gay, lesbian or bisexual respondents often understood their sexuality at a younger age. It was common for these respondents to start by watching heterosexual pornography, only to realise that they did not find this sexually gratifying and then gradually move to homosexual pornography.

The research very much affirms the government campaign to seek restrictions on porn access for children and notes that such measures as age verification requirements are unsurprisingly supported by parents.

However the research includes a very interesting section on the thoughts of 16 and 17 year olds who have passed the age of consent and unsurprisingly use porn in just about the same way as adults, despite not yet having reached the official, as opposed to the biological and hormonal, age of maturity.

The report uses the term 'young people' to mean 16 - 18 year olds (included in the survey as speaking about their views and experiences as 16 and 17 year olds). The report notes:

While recognising the benefits of preventing younger children accessing pornography, young people had some concerns about age-verification restrictions. For example, some young people were worried that, in the absence of other adequate sources of sex education, they would struggle to find ways to learn about sex without pornography.

This was felt particularly strongly by LGB respondents in the qualitative research, who believed that pornography had helped them to understand their sexuality and learn about different types of sexual behaviours that they weren't taught in school.

Some young people also felt that the difference between the age of consent for having sex and the age at which age-verification is targeted was contradictory. They also struggled to understand why, for instance, they could serve in the armed forces and have a family and yet be blocked from watching pornography.

Young people also seemed well versed in knowing methods of working around age verification and website blocking:

The majority of parents and young people (aged 16 to 18) interviewed in the qualitative research felt that older children would be able to circumvent age-verification by a range of potential online workarounds. Additionally, many 16- to 18-year-olds interviewed in the qualitative work who could not identify a workaround at present felt they would be able to find a potential method for circumventing age-verification if required.

Some of the most commonly known workarounds that older children thought may potentially negate age-verification included:

  • Using a VPN to appear as if you are accessing adult content from elsewhere in the world
  • Torrenting files by downloading the data in chunks
  • Using Tor (the ‘onion’ router) to disguise the user’s location
  • By accessing the dark web
  • By using proxy websites

Maybe they missed another obvious workaround: sharing porn amongst themselves via internet messaging or memory sticks.

 

 

Regressive Taxation...

The German implementation of link tax from the new EU Copyright Directive is even more extreme than the directive


Link Here 26th January 2020
Full story: Copyright in the EU...Copyright law for Europe
Germany was a major force behind the EU's disgraceful copyright directive passed last year. It is perhaps no surprise that proposed implementation into German law is even more extreme than the directive.

In particular the link tax has been drafted so that it is nearly impossible to refer to a press article without an impossibly expensive licence to use the newspaper's content.

The former Pirate Party MEP Julia Reda has picked out the main bad ideas on Twitter.

Under the German proposals, now up for public consultation, only single words or very short extracts of a press article can be quoted without a license. Specifically, free quotation is limited to:

  • the headline
  • a small-format preview image with a resolution of 128-by-128 pixels
  • a sequence of sounds, images or videos with a duration of up to three seconds
Techdirt explains:

The proposal states that the new ancillary copyright does not apply to hyperlinks, or to private or non-commercial use of press publishers' materials by a single user. However, as we know from the tortured history of the Creative Commons non-commercial license, it is by no means clear what non-commercial means in practice.

Press publishers are quite likely to insist that posting memes on YouTube, Facebook or Twitter -- all undoubtedly commercial in nature -- is not allowed in general under the EU Copyright Directive.

We won't know until top EU courts rule on the details, which will take years. In the meantime, online services will doubtless prefer to err on the side of caution, keen to avoid the risk of heavy fines. It is likely they will configure their automated filters to block any use of press publishers' material that goes beyond the extremely restrictive limits listed above. Moreover, this will probably apply across the EU, not just in Germany, since setting up country-by-country upload filters is more expensive. Far easier to roll out the most restrictive rules across the whole region.

 

 

Offsite Article: Cloud extraction technology...


Link Here 26th January 2020
The secret tech that lets government agencies collect masses of data from your apps. By Privacy International

See article from privacyinternational.org

 

 

Offsite Article: Harmful government...


Link Here 26th January 2020
Full story: Online Harms White Paper...UK Government seeks to censor social media
Internet regulation is necessary but an overzealous Online Harms bill could harm our rights. By Michael Drury and Julian Hayes

See article from euronews.com

 

 

A Brexit bounty...

UK Government wisely decides not to adopt the EU's disgraceful Copyright Directive that requires YouTube and Facebook to censor people's uploads if they contain even a snippet of copyrighted material


Link Here 25th January 2020
Full story: Copyright in the EU...Copyright law for Europe
Universities and Science Minister Chris Skidmore has said that the UK will not implement the EU Copyright Directive after the country leaves the EU.

Several companies have criticised the disgraceful EU law, which would hold them accountable for not removing copyrighted content uploaded by users.

EU member states have until 7 June 2021 to implement the new reforms, but the UK will have left the EU by then.

It was Article 13 which prompted fears over the future of memes and GIFs - stills, animated or short video clips that go viral - since they mainly rely on copyrighted scenes from TV and film. Critics noted that Article 13 would make it nearly impossible to upload even the tiniest part of a copyrighted work to Facebook, YouTube, or any other site.

Other articles give the news industry total copyright control of news material that has previously been widely used in people's blogs and posts commenting on the news.

Prime Minister Boris Johnson criticised the law in March, claiming that it was terrible for the internet.

Google had campaigned fiercely against the changes, arguing they would harm Europe's creative and digital industries and change the web as we know it. YouTube boss Susan Wojcicki had also warned that users in the EU could be cut off from the video platform.

 

 

Do not snoop, do not profile, and do not earn any money...

Newspapers realise that the ICO default child protection policy may be very popular with adults too, and so it may prove tough to get them to age verify as required for monetisation


Link Here 24th January 2020
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
News websites will have to ask readers to verify their age or else comply with a new 15-point code from the Information Commissioner's Office (ICO) designed to protect children's online data, the ICO has confirmed.

Press campaign groups had hoped news websites would be exempt from the new Age Appropriate Design Code, thereby protecting their vital digital advertising revenues, which are currently enhanced by extensive profiled advertising.

Applying the code as standard will mean websites putting privacy settings to high and turning off default data profiling. If they want to continue enjoying revenues from behavioural advertising they will need to get adult readers to verify their age.

In its 2019 draft, the ICO had previously said such measures must be robust and that simply asking readers to declare their age would not be enough. But it has now confirmed to Press Gazette that, for news websites that adhere to an editorial code, such self-declaration measures are likely to be sufficient.

This could mean news websites asking readers to enter their date of birth or tick a box confirming they are over 18. An ICO spokesperson said sites using these methods might also want to consider some low level technical measures to discourage false declarations of age, but anything more privacy intrusive is unlikely to be appropriate.

But Society of Editors executive director Ian Murray predicted the new demands may prove unpopular even at the simplest level. Asking visitors to confirm their age [and hence submit to snooping and profiling] -- even a simple yes or no tick box -- could be a barrier to readers.

The ICO has said it will work with the news media industry over a 12-month transition period to enable proportionate and practical measures to be put in place for either scenario.

In fact ICO produced a separate document alongside the code to explain how it could impact news media, which it said would be allowed to apply the code in a risk-based and proportionate way.

 

 

Supreme Irony...

The lawmaker behind the EU's new copyright law that massively advantages US companies now whinges about US domination of the internet


Link Here 23rd January 2020
Full story: Copyright in the EU...Copyright law for Europe
The EU is a bizarre institution. It tries to resolve fairness issues, social ills and international competition rules, all by dreaming up reams of red tape without any consideration of where it will lead.

Well red tape ALWAYS works to the advantage of the largest players who have the scale and wealth to take the onerous and expensive rules in their stride. The smaller players end up being pushed out of the market.

And in the case of the internet the largest players are American (or more latterly Chinese), and indeed they have proven to be best able to take advantage of European rules. And of course the smaller players are European and are indeed being pushed out of the market.

Axel Voss is one of the worst examples of European politicians dreaming up red tape that advantages the USA. His latest effort was to push through an upcoming European law that will require social media companies to pre-censor uploaded content for copyright infringement. Of course the only way this can be done practically is to have some mega artificial intelligence effort to try and automatically scan all content for video, text and audio that may be copyrighted. And guess who are the only companies in the world that have the technology to perform such a feat...well the US and Chinese internet giants of course.

Now, in a supreme irony, Voss himself has had a whinge about the American and Chinese domination of the internet.

In a long whinge about the lack of European presence in the internet industry he commented on Europe's dependence on US companies. If Google decides to switch off all its services tomorrow, I would like to know what will be left in Europe, said Voss, painting a gloomy picture in which there are no search engines, no browsers and no Google Maps.

...Read the full article from euractiv.com

 

 

Updated: Children likely to prove toxic to a website's monetisation...

ICO backs off a little from an age gated internet but imposes masses of red tape for any website that is likely to be accessed by under 18s


Link Here 23rd January 2020
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
The Information Commissioner's Office (ICO) has just published its Age Appropriate Design Code:

The draft was published last year and was opened to a public consultation which came down heavily against ICO's demands that website users should be age verified so that the websites could tailor data protection to the age of the user.

Well in this final release ICO has backed off from requiring age verification for everything, and instead suggested something less onerous called age 'assurance'. The idea seems to be that age can be ascertained from behaviour, eg if a YouTube user watches Peppa Pig all day then one can assume that they are of primary school age.

However this does seem to lead to loads of contradictions, eg age can be assessed by profiling users' behaviour on the site, but the site isn't allowed to profile people until they are old enough to agree to this. The ICO recognises this contradiction but doesn't really offer much of a solution in practice.

The ICO defines the code as applying only to sites likely to be accessed by children (ie websites appealing to all ages are considered caught by the code even though they are not specifically for children).

On a wider point the code will pose a serious challenge to the monetisation methods of general websites. The code requires websites to default to no profiling, no geo-location, no in-game sales etc. It assumes that adults will identify themselves and so enable all these things to happen. However it may well be that adults will quite like this default setting and end up not opting for more, leaving the websites without income.

Note that these rules are in the UK interpretation of GDPR law and are not actually in the European directive. So they are covered by statute, but only in the UK. European competitors have no equivalent requirements.

The ICO press release reads:

Today the Information Commissioner's Office has published its final Age Appropriate Design Code -- a set of 15 standards that online services should meet to protect children's privacy.

The code sets out the standards expected of those responsible for designing, developing or providing online services like apps, connected toys, social media platforms, online games, educational websites and streaming services. It covers services likely to be accessed by children and which process their data.

The code will require digital services to automatically provide children with a built-in baseline of data protection whenever they download a new app, game or visit a website.

That means privacy settings should be set to high by default and nudge techniques should not be used to encourage children to weaken their settings. Location settings that allow the world to see where a child is, should also be switched off by default. Data collection and sharing should be minimised and profiling that can allow children to be served up targeted content should be switched off by default too.

Elizabeth Denham, Information Commissioner, said:

"Personal data often drives the content that our children are exposed to -- what they like, what they search for, when they log on and off and even how they are feeling.

"In an age when children learn how to use an iPad before they ride a bike, it is right that organisations designing and developing online services do so with the best interests of children in mind. Children's privacy must not be traded in the chase for profit."

The code says that the best interests of the child should be a primary consideration when designing and developing online services. And it gives practical guidance on data protection safeguards that ensure online services are appropriate for use by children.

Denham said:

"One in five internet users in the UK is a child, but they are using an internet that was not designed for them.

"There are laws to protect children in the real world -- film ratings, car seats, age restrictions on drinking and smoking. We need our laws to protect children in the digital world too.

"In a generation from now, we will look back and find it astonishing that online services weren't always designed with children in mind."

The standards of the code are rooted in the General Data Protection Regulation (GDPR) and the code was introduced by the Data Protection Act 2018. The ICO submitted the code to the Secretary of State in November and it must complete a statutory process before it is laid in Parliament for approval. After that, organisations will have 12 months to update their practices before the code comes into full effect. The ICO expects this to be by autumn 2021.

This version of the code is the result of wide-ranging consultation and engagement.

The ICO received 450 responses to its initial consultation in April 2019 and followed up with dozens of meetings with individual organisations, trade bodies, industry and sector representatives, and campaigners.

As a result, and in addition to the code itself, the ICO is preparing a significant package of support for organisations.

The code is the first of its kind, but it reflects the global direction of travel with similar reform being considered in the USA, Europe and globally by the Organisation for Economic Co-operation and Development (OECD).

Update: The legals

23rd January 2020. See article from techcrunch.com

Schedule

The code now has to be laid before parliament for approval for a period of 40 sitting days -- with the ICO saying it will come into force 21 days after that, assuming no objections. Then there's a further 12 month transition period after it comes into force.

Obligation or codes of practice?

Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal explained:

This is not, and will not be, 'law'. It is just a code of practice. It shows the direction of the ICO's thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it's not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.

Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today's code of practice. Rather, the code of practice shows the ICO's thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).

Comment: ICO pushes ahead with age gates

23rd January 2020. See article from openrightsgroup.org

The ICO's Age Appropriate Design Code released today includes changes which lessen the risk of widespread age gates, but retains strong incentives towards greater age gating of content.

Over 280 ORG supporters wrote to the ICO about the previous draft code, to express concerns with compulsory age checks for websites, which could lead to restrictions on content.

Under the code, companies must establish the age of users, or restrict their use of data. ORG is concerned that this will mean that adults can only access websites when age verified, creating severe restrictions on access to information.

The ICO's changes to the Code in response to ORG's concerns suggest that different strategies to establish age may be used, attempting to reduce the risk of forcing compulsory age verification of users.

However, the ICO has not published any assessment to understand whether these strategies are practical or what their actual impact would be.

The Code could easily lead to Age Verification through the backdoor as it creates the threat of fines if sites have not established the age of their users.

While the Code has many useful ideas and important protections for children, this should not come at the cost of pushing all websites to undergo age verification of users. Age Verification could extend through social media, games and news publications.

Jim Killock, Executive Director of Open Rights Group said:

The ICO has made some useful changes to their code, which make it clear that age verification is not the only method to determine age.

However, the ICO don't know how their code will change adults' access to content in practice. The new code published today does not include an Impact Assessment. Parliament must produce one and assess implications for free expression before agreeing to the code.

Age Verification demands could become a barrier to adults reaching legal content, including news, opinion and social media. This would severely impact free expression.

The public and Parliament deserve a thorough discussion of the implications, rather than sneaking in a change via parliamentary rubber stamping with potentially huge implications for the way we access Internet content.

 

 

Commented: Floundering...

ICO takes no immediate action against the most blatant examples of people's most personal data being exploited without consent, ie profiled advertising


Link Here 23rd January 2020
Blatant abuse of people's private data has become firmly entrenched in the economic model of the free internet ever since Google recognised the value of analysing what people are searching for.

Now vast swathes of the internet are handsomely funded by the exploitation of people's personal data. But that deep entrenchment clearly makes the issue a bit difficult to put right without bankrupting half of the internet that has come to rely on the process.

The EU hasn't helped with its ludicrous idea of focusing its laws on companies having to obtain people's consent to have their data exploited. A more practical lawmaker would have simply banned the abuse of personal data without bothering with the silly consent games. But the EU seems prone to being lobbied and does not often come up with the most obvious solution.

Anyway enforcement of the EU's law is certainly causing issues for the internet censors at the UK's ICO.

The ICO warned the adtech industry 6 months ago that its approach is illegal, and has now announced that it would not be taking any action against the data abuse yet, as the industry has made a few noises about improving a bit over the coming months.

Simon McDougall, ICO Executive Director of Technology and Innovation has written:

The adtech real time bidding (RTB) industry is complex, involving thousands of companies in the UK alone. Many different actors and service providers sit between the advertisers buying online advertising space, and the publishers selling it.

There is a significant lack of transparency due to the nature of the supply chain and the role different actors play. Our June 2019 report identified a range of issues. We are confident that any organisation that has not properly addressed these issues risks operating in breach of data protection law.

This is a systemic problem that requires organisations to take ownership for their own data processing, and for industry to collectively reform RTB. We gave industry six months to work on the points we raised, and offered to continue to engage with stakeholders. Two key organisations in the industry are starting to make the changes needed.

The Internet Advertising Bureau (IAB) UK has agreed a range of principles that align with our concerns, and is developing its own guidance for organisations on security, data minimisation, and data retention, as well as UK-focused guidance on the content taxonomy. It will also educate the industry on special category data and cookie requirements, and continue work on some specific areas of detail. We will continue to engage with IAB UK to ensure these proposals are executed in a timely manner.

Separately, Google will remove content categories, and improve its process for auditing counterparties. It has also recently proposed improvements to its Chrome browser, including phasing out support for third party cookies within the next two years. We are encouraged by this, and will continue to look at the changes Google has proposed.

Finally, we have also received commitments from other UK advertising trade bodies to produce guidance for their members.

If these measures are fully implemented they will result in real improvements to the handling of personal data within the adtech industry. We will continue to engage with industry where we think engagement will deliver the most effective outcome for data subjects.

Comment: Data regulator ICO fails to enforce the law

18th January 2020. See article from openrightsgroup.org

Responding to ICO's announcement today that the regulator is taking minimal steps to enforce the law against massive data breaches taking place in the online ad industry through Real-Time Bidding, complainants Jim Killock and Michael Veale have called on the regulator to enforce the law.

The complainants are considering taking legal action against the regulator. Legal action could be taken against the ICO for failure to enforce, or against the companies themselves for their breaches of Data Protection law.

The Real-Time Bidding data breach at the heart of the RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination.

As the evidence submitted by the complainants notes, the real-time bidding systems designed by Google and the IAB broadcast what virtually all Internet users read, watch, and listen to online to thousands of companies, without protection of the data once broadcast. Now, sixteen months after the initial complaint, the ICO has failed to act.

Jim Killock, Executive Director of the Open Rights Group said:

The ICO is a regulator, so needs to enforce the law. It appears to be accepting that unlawful and dangerous sharing of personal data can continue, so long as 'improvements' are gradually made, with no actual date for compliance.

Last year the ICO gave a deadline for an industry response to our complaints. Now the ICO is falling into the trap set by industry, of accepting incremental but minimal changes that fail to deliver individuals the control of their personal data that they are legally entitled to.

The ICO must take enforcement action against IAB members.

We are considering our position, including whether to take legal action against the regulator for failing to act, or individual companies for their breach of data protection law.

Dr Michael Veale said:

When an industry is premised on and profiting from clear and entrenched illegality that breaches individuals' fundamental rights, engagement is not a suitable remedy. The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.

Ravi Naik, solicitor acting for the complainants, said:

There is no dispute about the underlying illegality at the heart of RTB that our clients have complained about. The ICO have agreed with those concerns, yet the companies have not taken adequate steps to address those concerns. Nevertheless, the ICO has failed to take the direct enforcement action needed to remedy these breaches.

Regulatory ambivalence cannot continue. The ICO is not a silo but is subject to judicial oversight. Indeed, the ICO's failure to act raises a question about the adequacy of the UK Data Protection Act. Is there proper judicial oversight of the ICO? This is a critical question after Brexit, when the UK needs to agree data transfer arrangements with the EU that cover all industries.

Dr. Johnny Ryan of Brave said:

The RTB system broadcasts what everyone is reading and watching online, hundreds of billions of times a day, to thousands of companies. It is by far the largest data breach ever recorded. The risks are profound. Brave will support ORG to ensure that the ICO discharges its responsibilities.

Jim Killock and Michael Veale complained about the Adtech industry and Real Time Bidding to the UK's ICO in September 2018. Johnny Ryan of Brave submitted a parallel complaint against Google about their Adtech system to the Irish Data Protection Authority.

Update: Advertising industry will introduce a 'gold standard 2.0' for privacy towards the end of 2020

23rd January 2020. See article from campaignlive.co.uk

The Internet Advertising Bureau UK has launched a new version of what it calls its Gold Standard certification process that will be independently audited by a third party.

In a move to address ongoing privacy concerns with the digital supply chain, the IAB's Gold Standard 2.0 will incorporate the Transparency and Consent Framework, a widely promoted industry standard for online advertising.

The new process will be introduced in the fourth quarter after an industry consultation to agree on the compliance criteria for incorporating the TCF.

 

 

Rejection...

Apple TV episode banned in 10 Arabic nations and Russia


Link Here 22nd January 2020
Eleven countries have banned an episode of a new Apple TV Plus series, Little America, which focuses on a gay immigrant from Syria.

The episode was released internationally on Friday, but 10 Arabic nations and Russia are preventing it from being screened in their countries.

The episode, The Son, centers on Rafiq (Haaz Sleiman) as he applies for asylum in the United States after facing violence and family rejection for his sexuality.

The episode was filmed in Canada rather than the US due to the casting of Syrian actors.

In response to the news of Little America's censorship, writer Amrou Al-Kadhi, a drag performer from Iraq, expressed a renewed commitment toward telling stories that reflect these experiences. "We will prevail," he told Pink News.

 

 

Banning news that the government doesn't like...

Qatar announces new law banning 'false news'


Link Here 22nd January 2020
Qatari Emir Tamim bin Hamad al-Thani amended Article 136 of the country's penal code to make the publication or sharing of 'false news' punishable by up to five years in prison or a 100,000 Qatari riyal fine (US$27,500).

 CPJ Senior Middle East and North Africa Researcher Justin Shilad said:

Instead of standing up for press freedom in the Gulf region, where the free flow of information is under threat, Qatari authorities have jumped on the 'false news' bandwagon. Qatar should rescind this repressive law and focus instead on legislation that enshrines press freedom in line with its international human rights law commitments.

 

 

The cost of censorship...

New Zealand Government debates how to apply age ratings to internet TV


Link Here21st January 2020
The New Zealand Government has been debating how to censor internet TV in the country, and it seems to have resulted in the likes of Netflix being able to self-classify their content.

The initial thought was that New Zealand's film censors at the Office of Film and Literature Classification should be given the job, but the likely expense seems to have swayed opinions.

Internal Affairs Minister Tracey Martin has a bill in select committee which will make New Zealand classification labels like R16 mandatory for commercial on-demand video content such as Netflix, Lightbox, and the iTunes movie store. Mandatory classification will require some sort of fee for the providers which is yet to be established. The current fee is more than $1100 for an unrated film.

Officials from the Department of Internal Affairs said in a regulatory impact statement that mandatory classification presented a risk that content providers may withdraw from the market due to an increased compliance burden, should they be required to classify all content via the current process. Officials also noted the risk that content providers would pass on the cost of classification to consumers through higher prices.

Officials noted that an approach that allowed the providers to classify their own content using a method prescribed by the censorship office should mitigate that risk and that no provider had yet threatened to leave the market.

In the end the Government opted to allow providers to self-classify, going against the wishes of the Children's Commissioner and the OFLC which wanted the current process followed.

 

 

Offsite Article: Does this also mean readers are being profiled as terrorists for reading sport reports?...


Link Here21st January 2020
Newspapers miss out on advertising opportunities as internet AI confuses a soccer report mentioning shooting and attack with prohibited terrorist content

See article from theguardian.com

 

 

Parents TV Council recommends...

The Witcher on Netflix


Link Here20th January 2020
Parents TV Council is a US moralist campaign group. The group is clearly impressed by The Witcher on Netflix and is kindly spreading the message. The group writes:

The Parents Television Council is warning families about the graphic content found in Netflix's The Witcher , a new fantasy drama based on a book series and video game that is being compared to HBO's Game of Thrones .

Using filtering data from VidAngel, the PTC found that across eight episodes of the first season of The Witcher, viewers would hear 207 instances of profanity; witness 417 scenes of violence; and be subjected to 271 instances of sex, nudity and other sexual content -- around 100 instances of adult content per one-hour episode.

PTC Program Director Melissa Henson said:

While families might be drawn to a fantasy-themed TV show, The Witcher is decidedly not family-friendly given the new data highlighting the explicit content viewers can see. From frequent nudity to graphic violence, The Witcher is certainly comparable to Game of Thrones with respect to adult content, most of which appears gratuitous. We hope that Netflix and other streaming services come to realize that needless explicit adult content isn't usually what viewers seek.

Netflix should also offer content filtering options for families who might be interested in watching The Witcher -- but without the adult content. That's a win-win solution for families and for Netflix, and crucial to Netflix's long-term growth strategy.

 

 

Playing the EU's Silly Cookie Game...

Google's Chrome browser will ban 3rd party tracking cookies albeit over the course of two years


Link Here16th January 2020
Google is to restrict web pages from loading 3rd party profiling cookies when accessed via its Chrome browser. Many large websites, e.g. major newspapers, make calls to hundreds of 3rd party profilers, allowing them to build up a profile of people's browsing history, which then facilitates personalised advertising.

Now Google has said that it will block these third-party cookies within the next two years.

Tracking cookies are very much in the sights of the EU, which is trying to put an end to the exploitative practice. However the EU is not willing to actually ban such practices, but instead has invented a silly game about websites obtaining consent for tracking cookies.

The issue is of course that a lot of 'free' access websites are funded by advertising and rely on the revenue from the targeted advertising. I have read estimates that if websites were to drop personalised ads and fall back on contextual advertising (eg advertising cars on motoring pages), then they would lose about a third of their income. Surely a fall of that magnitude would lead to many bankrupt or unviable websites.

Now the final position of the EU's cookie consent game is that a website would have to present two easy options before allowing access:

  • Do you want to allow tracking cookies to build up a database of your browsing history?
  • Do you NOT want to allow tracking cookies to build up a database of your browsing history?

The simple outcome will be that virtually no one will opt for tracking, so the website will lose a third of its income. So it is rather unsurprising that websites would rather avoid offering such an easy option that would deprive them of so much of their income.

In reality the notion of consent is not practical. It would be more honest to think of the use of tracking cookies as a price for 'free' access to a website.

Perhaps when the dust has settled, a more honest and practical endgame would be a choice more like:

  • Do you want to allow tracking cookies to build up a database of your browsing history in return for 'free' access?
  • Do you want to pay a fee to enable access to the website without tracking cookies?
  • Sorry, you may not access this website.

The EU has been complaining about companies trying to avoid the revenue destroying official consent options. A study just published observes that nearly all cookie consent pop-ups are flouting EU privacy laws.

Researchers at the Massachusetts Institute of Technology, University College London (UCL) and Aarhus University have conducted a joint study into the use of cookies. They analysed five companies which offer consent management platforms (CMP) for cookies used by the UK's top 10,000 websites.

Despite EU privacy laws stating that consent for cookies must be informed, specific and freely given, the research suggests that only 12% of the sites met the minimal requirements of GDPR (General Data Protection Regulation) law. Instead they were found to bury consent options in complicated site design, such as:

  • pre-ticked boxes
  • decline buttons buried on later pages
  • multiple clicks required to decline tracking
  • tracking users before consent was given, and even after pressing reject

Just over half the sites studied did not offer rejecting all tracking as an option. Of the sites which did, only 13% made it accessible through the same or fewer clicks as the option to accept all.

The researchers estimate it would take, on average, more than half an hour to read through what the third-party companies are doing with your data, and even longer to read all their privacy policies. "It's a joke and there's no actual way you could do this realistically," said Dr Veale.

 

 

Exposed pussies...

Another example about how dangerous it is to provide personal data for age or identity verification related to adult websites


Link Here16th January 2020
Cyber-security researchers claim that highly sensitive personal details about thousands of porn stars have been exposed online by an adult website.

They told BBC News they had found an open folder on PussyCash's Amazon web server that contained 875,000 files.

However the live webcam porn network, which owns the brand ImLive and other adult websites, said there was no evidence anyone else had accessed the folder, and it removed public access as soon as it had been told of the leak.

The researchers are from vpnMentor, which is a VPN comparison site. vpnMentor said in a blog that anyone with the right link could have accessed 19.95GB of data dating back over 15 years, as well as from the past few weeks, including contracts revealing more than 4,000 models' personal details, including:

  • full name, address, social security number, date of birth and phone number
  • height, weight, and hips, bust and waist measurements
  • piercings, tattoos and scars

The files also revealed scans or photographs of passports, driving licences, credit cards and birth certificates.

 

 

Offsite Article Searching for better privacy...


Link Here15th January 2020
Full story: Gooogle Privacy...Google's many run-ins with privacy
Google to strangle user agent strings in its Chrome browser to hamper advertisers from profiling users via fingerprinting

See article from zdnet.com

 

 

Even Facebook is preinstalled to avoid users realising how many access rights it assumes...

50 rights organisations call on Google to ban exploitative apps being pre-installed on phones to work around user privacy settings


Link Here14th January 2020
Privacy International and over 50 other organisations have submitted a letter to Alphabet Inc. CEO Sundar Pichai asking Google to take action against exploitative pre-installed software on Android devices.

Dear Mr. Pichai,

We, the undersigned, agree with you: privacy cannot be a luxury offered only to those people who can afford it.

And yet, Android Partners - who use the Android trademark and branding - are manufacturing devices that contain pre-installed apps that cannot be deleted (often known as "bloatware"), which can leave users vulnerable to their data being collected, shared and exposed without their knowledge or consent.

These phones carry the "Google Play Protect" branding, but research shows that 91% of pre-installed apps do not appear in Google Play -- Google's app store.

These pre-installed apps can have privileged custom permissions that let them operate outside the Android security model. This means permissions can be defined by the app - including access to the microphone, camera and location - without triggering the standard Android security prompts. Users are therefore completely in the dark about these serious intrusions.

We are concerned that this leaves users vulnerable to the exploitative business practices of cheap smartphone manufacturers around the world.

The changes we believe are needed most urgently are as follows:

  • Individuals should be able to permanently uninstall the apps on their phones. This should include any related background services that continue to run even if the apps are disabled.

  • Pre-installed apps should adhere to the same scrutiny as Play Store apps, especially in relation to custom permissions.

  • Pre-installed apps should have some update mechanism, preferably through Google Play and without a user account.

  • Google should refuse to certify a device on privacy grounds, where manufacturers or vendors have attempted to exploit users in this way.

We, the undersigned, believe these fair and reasonable changes would make a huge difference to millions of people around the world who should not have to trade their privacy and security for access to a smartphone.

We urge you to use your position as an influential agent in the ecosystem to protect people and stop manufacturers from exploiting them in a race to the bottom on the pricing of smartphones.

Yours sincerely,

  • American Civil Liberties Union (ACLU)

  • Afghanistan Journalists Center (AFJC)

  • Americans for Democracy and Human Rights in Bahrain (ADHRB)

  • Amnesty International

  • Asociación por los Derechos Civiles (ADC)

  • Association for Progressive Communications (APC)

  • Association for Technology and Internet (ApTI)

  • Association of Caribbean Media Workers

  • Australian Privacy Foundation

  • Center for Digital Democracy

  • Centre for Intellectual Property and Information Technology Law (CIPIT)

  • Citizen D

  • Civil Liberties Union for Europe

  • Coding Rights

  • Consumer Association the Quality of Life-EKPIZO

  • Datos Protegidos

  • Digital Rights Foundation (DRF)

  • Douwe Korff, Emeritus Professor of International Law, London Metropolitan University and Associate of the Oxford Martin School, University of Oxford

  • DuckDuckGo

  • Electronic Frontier Foundation (EFF)

  • Forbrukerrĺdet // Norwegian Consumer Council

  • Foundation for Media Alternatives

  • Free Media Movement (FMM)

  • Freedom Forum

  • Fundación Karisma

  • Gulf Centre for Human Rights (GCHR)

  • Hiperderecho

  • Homo Digitalis

  • IJC Moldova

  • Initiative for Freedom of Expression- Turkey (IFox)

  • Irish Council for Civil Liberties

  • Media Foundation for West Africa

  • Media Institute of Southern Africa (MISA)

  • Media Policy and Democracy Project (University of Johannesburg)

  • Media Policy Institute (MPI)

  • Media Watch

  • Metamorphosis Foundation for Internet and Society

  • Open Rights Group (ORG)

  • Palestinian Center For Development & Media Freedoms (MADA)

  • Panoptykon

  • Paradigm Initiative

  • PEN Canada

  • Philippine Alliance of Human Rights Advocates (PAHRA)

  • Privacy International

  • Public Citizen

  • Red en Defensa de los Derechos Digitales (R3D)

  • Syrian Center for Media and Freedom of Expression (SCM)

  • TEDIC

  • The Danish Consumer Council

  • The Institute for Policy Research and Advocacy (ELSAM)

  • The Tor Project

  • Unwanted Witness

  • Vigilance for Democracy and the Civic State

 

 

The Australian Online Harms Bill...

Australian government consults on its upcoming internet censorship plans


Link Here13th January 2020
Full story: Age Verification for Porn...Endangering porn users for the sake of the children
The Australian government writes:

We are seeking feedback on proposals for a new Online Safety Act to improve Australia's online safety regulatory framework.

The proposed reforms follow a 2018 review of online safety legislation which recommended the replacement of the existing framework with a single Online Safety Act.

Key proposals include:

  • A set of basic online safety expectations for industry (initially social media platforms), clearly stating community expectations, with associated reporting requirements.

  • An enhanced cyberbullying scheme for Australian children to capture a range of online services, not just social media platforms.

  • A new cyber abuse scheme for Australian adults to facilitate the removal of serious online abuse and harassment and introduce a new end user take-down and civil penalty regime.

  • Consistent take-down requirements for image-based abuse, cyber abuse, cyberbullying and seriously harmful online content, requiring online service providers to remove such material within 24 hours of receiving an eSafety Commissioner request.

  • A reformed online content scheme requiring the Australian technology industry to be proactive in addressing access to harmful online content. The scheme would also expand the eSafety Commissioner's powers to address illegal and harmful content on websites hosted overseas.

  • An ancillary service provider scheme to provide the eSafety Commissioner with the capacity to disrupt access to seriously harmful online material made available via search engines, app stores and other ancillary service providers.

  • An additional power for the eSafety Commissioner to respond rapidly to an online crisis event (such as the Christchurch terrorist attacks) by requesting internet service providers block access to sites hosting seriously harmful content.

The consultation runs until 5pm on 19th February 2020.

 

 

No reply...

Twitter considers letting authors of tweets restrict who is allowed to reply


Link Here13th January 2020
Full story: Twitter Censorship...Twitter offers country by country take downs
Speaking at CES in Las Vegas, Twitter's director of product management, Suzanne Xie, unveiled some new changes that are coming to the platform this year, focusing specifically on conversations.

Xie says Twitter is adding a new setting for conversation participants right on the compose screen. It has four options: Global, Group, Panel, and Statement. Global lets anybody reply, Group is for people you follow and mention, Panel is people you specifically mention in the tweet, and Statement simply allows you to post a tweet and receive no replies.

Xie says that Twitter is in the process of doing research on the feature. Twitter is considering the ability to quote tweets as an alternative to replying.

 

 

Offsite Article: Skype audio monitored by workers in China with no security measures...


Link Here 13th January 2020
Former Microsoft contractor says he was emailed a login after minimal vetting

See article from theguardian.com

 

 

Minister for censorship action...

Irish government outlines its own Online Harms bill


Link Here12th January 2020

Ireland's Minister for Communications, Climate Action and Environment Richard Bruton T.D. has published the general scheme of the Online Safety and Media Regulation Bill, to protect children online. Bruton said:

This new law is the start of a new era of accountability. It sets out a clear expectation for online services. They will have to comply with binding online safety codes made by an Online Safety Commissioner, who will have significant powers to sanction companies for non-compliance.

There are already significant regulatory and legal frameworks in place in relation to many online issues, including data protection and criminal justice responses to criminal activities online. However, there is a serious gap both internationally and in Ireland when it comes to addressing harmful online content. This new law will close this legal gap and establish a robust regulatory framework to deal with the spread of harmful online content.

The Online Safety Commissioner will be part of a new Media Commission which will replace the Broadcasting Authority of Ireland and will also take on the role of regulating the audiovisual sector.

The new Online Safety Commissioner will be responsible for designating which online services should be covered under the new law. These designated services will then be required to comply with binding online safety codes made by the Commissioner.

Each Online Safety Code will set out the steps the designated service provider must take to keep their users safe online and will depend on the type of service that is being offered. Codes will address a wide range of matters, including:

  • Combating cyber bullying material and material promoting eating disorders, self-harm and suicide

  • Ensuring that services operate effective complaints procedures where people can request material is taken down

  • Ensuring advertising, sponsorship and product placement are not harmful and uphold minimum standards

  • How companies are mitigating risks relating to the prevalence of harmful content on their platforms.

It is a matter for the Commissioner to design the relevant codes and decide which codes apply to each designated service. Online services will be legally obliged to abide by the codes that apply to them.

The Online Safety Commissioner can:

  • Decide the appropriate reporting requirements of compliance with online safety codes by online services

  • Request information from online services about their compliance with the online safety codes that apply to them

  • Audit the complaints and/or issues handling mechanisms operated by online services

  • Appoint authorised officers to assess compliance and carry out audits

The Online Safety Commissioner will also establish a scheme to receive "super complaints" about systemic issues with online services from nominated bodies, including expert NGOs, and may request information, investigate or audit an online service on the basis of information received through this scheme.

If an online service is not complying with their safety code, the Online Safety Commissioner will, in the first instance, issue a compliance notice setting out what they must do to bring themselves into compliance, including the removal or restoration of content.

If the Online Safety Commissioner is not satisfied with the response and action taken by the online service, the Online Safety Commissioner can issue a warning notice. Warning notices will set out what the online service must do to bring itself into compliance and what steps the Online Safety Commissioner will take if it fails to do so.

If the Online Safety Commissioner is not satisfied with the response and action taken by the online service on foot of a warning notice then the Online Safety Commissioner can seek to impose a sanction on that service.

The Online Safety Commissioner can publish compliance and warning notices.

The Media Commission can only seek to impose a sanction on an online service if the service has failed to comply with a warning notice. The sanctions that the Media Commission can impose include:

  • Financial penalties,

  • Compelling the online service to take certain actions, and,

  • Blocking an offending online service.

The application of each of these sanctions requires court approval.

 

 

Offsite Article: Political correctness is driving comedy...


Link Here12th January 2020
Ricky Gervais: why I'll never apologise for my jokes. Speaking to Andrew Doyle

See article from spectator.co.uk

 

 

Extract: Permitting the growth of monopolies is a form of government censorship...

Cory Doctorow explains how internet monopolies have facilitated the curtailment of free speech


Link Here 7th January 2020

We're often told that it's not censorship when a private actor tells you to shut up on their own private platform -- but when the government decides not to create any public spaces (say, by declining to create publicly owned internet infrastructure) and then allows a handful of private companies to dominate the privately owned world of online communications, then those companies' decisions about who may speak and what they may say become a form of government speech regulation -- albeit one at arm's length.

I don't think that the solution to this is regulating the tech platforms so they have better speech rules -- I think it's breaking them up and forcing them to allow interoperability, so that their speech rules no longer dictate what kind of discourse we're allowed to have.

Imagine two different restaurants: one prohibits any discussion of any subject the management deems political and the other has no such restriction. It's easy to see that we'd say that you have more right to freely express yourself in the Anything Goes Bistro than in the No Politics at the Table Diner across the street.

Now, the house rules at the No Politics at the Table Diner have implications for free speech, but these are softened by the fact that you can always eat at the Anything Goes Bistro, and, of course, you can always talk politics when you're not at a restaurant at all: on the public sidewalk (where the First Amendment shields you from bans on political talk), in your own home, or even in the No Politics Diner, assuming you can text covertly under the tablecloth when the management isn't looking.

See short version from boingboing.net
See full version from locusmag.com

 

 

Offsite Article: Britain's Digital Nanny State...


Link Here7th January 2020
Full story: Online Harms White Paper...UK Government seeks to censor social media
The way in which the UK is approaching the regulation of social media will undermine privacy and freedom of expression and have a chilling effect on Internet use by everyone in Britain. By Bill Dutton

See article from billdutton.me

 

 

Toxic comments...

YouTube has updated its policies about comments under videos


Link Here5th January 2020
Full story: YouTube Censorship...YouTube censor videos by restricting their reach
YouTube has posted on its blog outlining recent changes to the moderation of comments on posted videos. YouTube writes:

Addressing toxic comments

We know that the comment section is an important place for fans to engage with creators and each other. At the same time, we heard feedback that comments are often where creators and viewers encounter harassment. This behavior not only impacts the person targeted by the harassment, but can also have a chilling effect on the entire conversation.

To combat this we remove comments that clearly violate our policies -- over 16 million in the third quarter of this year, specifically due to harassment. The policy updates we've outlined above will also apply to comments, so we expect this number to increase in future quarters.

Beyond comments that we remove, we also empower creators to further shape the conversation on their channels and have a variety of tools that help. When we're not sure a comment violates our policies, but it seems potentially inappropriate, we give creators the option to review it before it's posted on their channel. Results among early adopters were promising -- channels that enabled the feature saw a 75% reduction in user flags on comments. Earlier this year, we began to turn this setting on by default for most creators.

We've continued to fine tune our systems to make sure we catch truly toxic comments, not just anything that's negative or critical, and feedback from creators has been positive. Last week we began turning this feature on by default for YouTube's largest channels with the site's most active comment sections and will roll out to most channels by the end of the year. To be clear, creators can opt-out, and if they choose to leave the feature enabled they still have ultimate control over which held comments can appear on their videos. Alternatively, creators can also ignore held comments altogether if they prefer.

 

 

Prohibiting negative content...

China reformulates its internet censorship rules along the lines of the UK's general Online Harms approach


Link Here4th January 2020
Chinese authorities have approved a new set of comprehensive regulations that expand the scope of online censorship, emphasize the prohibition of 'negative' content and make platforms more liable for content violations.

China previously had very detailed censorship laws laying out exactly what was banned and what part of the internet the rule applied to. The new Provisions on the Governance of the Online Information Content Ecosystem rationalises them into more general rules that apply to the entire internet.

The new rules were approved in mid-December and will take effect in March. They apply to everyone, and the authorities have noted that anyone who posts anything to the internet is to be considered a content producer.

Jeremy Daum, senior fellow at the Yale Law School's Paul Tsai China Center, notes that the new rules on what counts as illegal or, now, 'negative' content are quite vague. The document lays out what constitutes illegal content in sweeping terms. Content that undermines ethnic unity or undermines the nation's policy on religions is forbidden, as is anything that disseminates rumors that disrupt economic or social order or generally harms the nation's honor and interests, among other rules.

The new regulations then go on to dictate that content producers must employ measures to prevent and resist the making, reproduction or publication of negative information. This includes the following:
  • the use of exaggerated titles, gossip,
  • improper comments on natural disasters, major accidents, or other disasters,
  • anything with sexual innuendo or that is readily associated with sex, gore or horror,
  • or things that would push minors towards behaviors that are unsafe or violate social mores.

Platforms are the ones responsible for policing all these restrictions, the rules say, and should establish mechanisms for everything from reviewing content and comments to real-time inspections to the handling of online rumors. They are to designate a manager for such activities and improve related staff training.

 

 

Offsite Article: Twelve Million Phones, One Dataset, Zero Privacy...


Link Here3rd January 2020
An interesting report on how smartphone location data is being compiled and databased in the US. By Stuart A. Thompson and Charlie Warzel

See article from nytimes.com

 

 

Work longer, pay more fuel tax, get maimed by rubber bullets, and now have your porn blocked...

More reforms from Macron as he gives porn websites 6 months to introduce parental control or else legislation will follow


Link Here2nd January 2020
French President Emmanuel Macron has said that he will legislate if necessary to get parental controls in place to block kids from porn. He said in a speech to UNESCO:

We do not take a 13-year-old boy into a sex shop, so it cannot be that anything goes in the digital world.

We will clarify in the penal code that the simple fact of declaring one's age online is not a strong enough protection against access to pornography by minors.

The measure will give the websites a period of six months to set up parental control by default. I know it hurts a lot of platforms, a lot of digital operators, but if in six months we have no solution, we will pass a law for automatic parental control.

Macron's reference to age 13 is not casual, because that is reportedly the average age of access to erotic content for the first time in France.

 

 

The US Army bans Chinese App TikTok from government phones...

Given Chinese dominance of internet infrastructure plus control over ever more apps, how long before the US government changes its stance and mandates strong encryption for all US internet users?


Link Here 2nd January 2020
The US Army has banned the use of popular Chinese social media video app TikTok, with Military.com first reporting it was due to security concerns. The US Navy has followed suit.

It is considered a cyber threat, a US Army spokesperson told Military.com. We do not allow it on government phones.

The ban comes in the wake of Democrat Senator Charles Schumer and Republican Senator Tom Cotton writing a letter to US Director of National Intelligence Joseph Maguire insisting an investigation into TikTok would be necessary to determine whether the Chinese-owned social media video app poses a risk to national security.

Given these concerns, we ask that the Intelligence Community conduct an assessment of the national security risks posed by TikTok and other China-based content platforms operating in the US and brief Congress on these findings, the letter said.

 

 

'He's not the messiah, he's a very naughty boy!'...

Jordan's Royal Film Commission calls on Netflix to ban streaming of its new series Messiah


Link Here2nd January 2020
Full story: Internet Censorship in Jordan...Government push for blocking of internet porn
On 1st January 2020 Netflix started streaming Messiah, a series about a mysterious figure, Al-Masih, played by Belgian actor Mehdi Dehbi. It is not clear whether he is a divine entity ... or simply a charlatan.

But according to an online petition, Al-Masih is, in fact, the Muslim version of the Antichrist. The Royal Film Commission of Jordan has asked Netflix not to stream the drama in the country. The Jordanian government organisation's Managing Director, Mohannad al-Bakr, held a press conference with local media. He said:

While still standing firmly by its principles, notably the respect of creative freedom, the RFC -- as a public and responsible institution -- cannot condone or ignore messages that infringe on the Kingdom's basic laws.

The RFC's announcement represents an about-face for the organisation. Its statement acknowledges that Messiah was partially shot in the Kingdom in 2018, and that, after it had reviewed synopses for the series' episodes, it approved the shoot and granted the show a tax credit.

A spokesperson for Netflix indicated that they have not received a formal legal request to remove the series from the streamer's Jordanian service.

 

 

Do Not Sell My Personal Information...

California leads the way on internet privacy in the US as its CCPA law comes into effect


Link Here 1st January 2020
A new California law has come into effect that seems to have been inspired by the EU's box-ticking nightmare, the GDPR. It gives Californians rights in determining how their data is used by large internet companies.

The law gives consumers the right to know about the personal data that companies have collected about them, to demand that it be deleted, and to prevent it from being sold to third parties.

Although the privacy controls are only required for Californians, it seems likely that large companies will extend the same controls to all Americans.

The California Consumer Privacy Act (CCPA) will only apply to businesses that earn more than $25 million in gross revenue, that collect data on more than 50,000 people, or for which selling consumer data accounts for more than 50% of revenue.

In early December, Twitter rolled out a privacy center where users can learn more about the company's approach to the CCPA and navigate to a dashboard for customising the types of information that the platform is allowed to use for ad targeting. Google has also created a protocol that blocks websites from transmitting data to the company. Facebook, meanwhile, is arguing that it does not need to change anything because it does not technically sell personal information. At a minimum, companies must set up a webpage and a toll-free phone number for fielding data requests.

The personal data covered by the CCPA includes IP addresses, contact info, internet browsing history, biometrics (like facial recognition and fingerprint data), race, gender, purchasing behavior, and locations.

Many sections of the law are quite vague and awaiting further clarification in the final draft regulations, which the California attorney general's office is expected to release later in 2020.

