Germany was a major force behind the EU's disgraceful copyright directive passed last year. It is perhaps no surprise that proposed implementation into German law is even more extreme than the directive.
In particular the link tax has been drafted so
that it is nearly impossible to refer to a press article without an expensive licence to use the newspaper's content.
The former Pirate Party MEP Julia Reda has picked out the main bad ideas on Twitter.
Under the German proposals, now
up for public consultation, only single words or very short extracts of a press article can be quoted without a license. Specifically, free quotation is limited to:
a small-format preview image with a resolution of 128-by-128 pixels
a sequence of sounds, images or videos with a duration of up to three seconds
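Those thresholds are mechanical enough that an upload filter could apply them with a trivial check. A minimal sketch in Python (the function names, and the idea of encoding the limits this way, are my own, purely for illustration):

```python
# Illustrative sketch of how an upload filter might mechanically apply the
# proposed German free-quotation limits. The 128-pixel and 3-second figures
# come from the draft as described above; everything else is invented.

MAX_PREVIEW_PX = 128      # small-format preview image limit (128-by-128)
MAX_CLIP_SECONDS = 3.0    # sounds/images/video duration limit

def preview_within_limit(width_px, height_px):
    """True if a preview image fits inside the 128x128 pixel ceiling."""
    return width_px <= MAX_PREVIEW_PX and height_px <= MAX_PREVIEW_PX

def clip_within_limit(duration_seconds):
    """True if an audio/video excerpt fits the three-second ceiling."""
    return duration_seconds <= MAX_CLIP_SECONDS

print(preview_within_limit(128, 128))  # True: exactly at the ceiling
print(preview_within_limit(320, 180))  # False: a typical video thumbnail
print(clip_within_limit(2.5))          # True
print(clip_within_limit(6.0))          # False: a typical GIF-length clip
```

The point of the sketch is how little it permits: an ordinary video thumbnail or a short GIF already falls outside the free-quotation limits.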
The proposal states that the new ancillary copyright does not apply to hyperlinks, or to private or non-commercial use of press publishers' materials by a single user. However, as we know from the
tortured history of the Creative Commons non-commercial license, it is by no means clear what non-commercial means in practice.
Press publishers are quite likely to insist that posting memes on YouTube, Facebook or Twitter -- all
undoubtedly commercial in nature -- is not covered by the non-commercial exemption, and so is not allowed in general under the EU Copyright Directive.
We won't know until top EU courts rule on the details, which will take years. In the meantime, online services will doubtless prefer
to err on the side of caution, keen to avoid the risk of heavy fines. It is likely they will configure their automated filters to block any use of press publishers' material that goes beyond the extremely restrictive limits listed above. Moreover, this
will probably apply across the EU, not just in Germany, since setting up country-by-country upload filters is more expensive. Far easier to roll out the most restrictive rules across the whole region.
Universities and Science Minister Chris Skidmore has said that the UK will not implement the EU Copyright Directive after the country leaves the EU.
Several companies have criticised the disgraceful EU law, which would hold them accountable for not
removing copyrighted content uploaded by users.
EU member states have until 7 June 2021 to implement the new reforms, but the UK will have left the EU by then.
It was Article 13 which prompted fears over the future of memes and GIFs - stills,
animated or short video clips that go viral - since they mainly rely on copyrighted scenes from TV and film. Critics noted that Article 13 would make it nearly impossible to upload even the tiniest part of a copyrighted work to Facebook, YouTube, or any other platform.
Other articles give the news industry total copyright control over news material that has previously been widely used in people's blogs and posts commenting on the news.
Prime Minister Boris Johnson criticised the law in March,
claiming that it was terrible for the internet.
Google had campaigned fiercely against the changes, arguing they would harm Europe's creative and digital industries and change the web as we know it. YouTube boss Susan Wojcicki had also warned that
users in the EU could be cut off from the video platform.
News websites will have to ask readers to verify their age or comply with a new 15-point code from the Information Commissioner's Office (ICO) designed to protect children's online data, the ICO has confirmed.
Press campaign groups were hoping news
websites would be exempt from the new Age Appropriate Design Code, protecting their vital digital advertising revenues, which are currently enhanced by extensive profiled advertising.
Applying the code as standard will mean websites putting
privacy settings to high and turning off default data profiling. If they want to continue enjoying revenues from behavioural advertising they will need to get adult readers to verify their age.
In its 2019 draft the ICO had previously said such measures
must be robust and that simply asking readers to declare their age would not be enough. But it has now confirmed to Press Gazette that for news websites that adhere to an editorial code, such self-declaration measures are likely to be sufficient. This
could mean news websites asking readers to enter their date of birth or tick a box confirming they are over 18. An ICO spokesperson said sites using these methods might also want to consider some low-level technical measures to discourage false
declarations of age, but anything more privacy-intrusive is unlikely to be appropriate.
But Society of Editors executive director Ian Murray predicted the new demands may prove unpopular even at the simplest level. Asking visitors to confirm
their age [and hence submit to snooping and profiling] -- even a simple yes or no tick box -- could be a barrier to readers.
The ICO has said it will work with the news media industry over a 12-month transition period to enable proportionate and
practical measures to be put in place for either scenario.
In fact ICO produced a separate document alongside the code to explain how it could impact news media, which it said would be allowed to apply the code in a risk-based and proportionate way.
The EU is a bizarre institution. It tries to resolve fairness issues, social ills and international competition rules, all by dreaming up reams of red tape without any consideration of where it will lead.
Well red tape ALWAYS works to the advantage of
the largest players who have the scale and wealth to take the onerous and expensive rules in their stride. The smaller players end up being pushed out of the market.
And in the case of the internet the largest players are American (or more
latterly Chinese) and indeed they have proven to be best able to take advantage of European rules. And of course the smaller players are European and are indeed being pushed out of the market.
Axel Voss is one of the worst examples of European
politicians dreaming up red tape that advantages the USA. His latest effort was to push through an upcoming European law that will require social media companies to pre-censor uploaded content for copyright infringement. Of course the only way this can
be done practically is to have some mega artificial intelligence effort to try and automatically scan all content for video, text and audio that may be copyrighted. And guess who are the only companies in the world that have the technology to perform
such a feat...well the US and Chinese internet giants of course.
Now, in a supreme irony, Voss himself has had a whinge about the American and Chinese domination of the internet.
In a long whinge about the lack of European presence in the
internet industry he commented on Europe's dependence on US companies. If Google decides to switch off all its services tomorrow, I would like to know what will be left in Europe, said Voss, painting a gloomy picture in which there are no search
engines, no browsers and no Google Maps.
The Information Commissioner's Office (ICO) has just published its Age Appropriate Design Code:
The draft was published last year and was opened to a public consultation which came down heavily against ICO's demands that website users should be
age verified so that the websites could tailor data protection to the age of the user.
Well in this final release ICO has backed off from requiring age verification for everything, and instead suggested something less onerous called age
'assurance'. The idea seems to be that age can be ascertained from behaviour, eg if a YouTube user watches Peppa Pig all day then one can assume that they are of primary school age.
However this does seem to lead to a load of contradictions, eg age
can be assessed by profiling users' behaviour on the site, but the site isn't allowed to profile people until they are old enough to agree to this. The ICO recognises this contradiction but doesn't really help much with a solution in practice.
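The kind of behavioural 'age assurance' the code seems to envisage could, at its crudest, look something like the sketch below. The content categories, threshold and function name are all invented for illustration; any real system would be far more involved.

```python
# Toy sketch of behavioural "age assurance": infer "probably a child" when
# most of a user's watch history is children's content. All names, labels
# and the 0.6 threshold are invented purely for illustration.
KIDS_CONTENT = {"peppa pig", "paw patrol", "nursery rhymes"}

def likely_child(watch_history, threshold=0.6):
    """Guess that the user is a child if kids' titles dominate the history."""
    if not watch_history:
        return False  # no behaviour to judge from
    kid_views = sum(title.lower() in KIDS_CONTENT for title in watch_history)
    return kid_views / len(watch_history) >= threshold

print(likely_child(["Peppa Pig", "Peppa Pig", "Paw Patrol"]))   # True
print(likely_child(["News at Ten", "Top Gear", "Peppa Pig"]))   # False
```

Note the obvious catch: even this toy version needs a record of the user's behaviour before it can run, which is exactly the profiling the code restricts until age is known.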
The ICO defines the code as only applying to sites likely to be accessed by children (ie websites appealing to all ages are considered caught up by the code even though they are not specifically for children).
On a wider point the code will pose a serious
challenge to the monetisation methods of general websites. The code requires websites to default to no profiling, no geo-location, no in-game sales etc. It assumes that adults will identify themselves and so enable all these things to happen. However it
may well be that adults will quite like this default setting and end up not opting for more, leaving the websites without income.
Note that these rules are in the UK interpretation of GDPR law and are not actually in the European directive. So they
are covered by statute, but only in the UK. European competitors have no equivalent requirements.
The ICO press release reads:
Today the Information Commissioner's Office has published its final Age Appropriate Design Code
-- a set of 15 standards that online services should meet to protect children's privacy.
The code sets out the standards expected of those responsible for designing, developing or providing online services like apps, connected
toys, social media platforms, online games, educational websites and streaming services. It covers services likely to be accessed by children and which process their data.
The code will require digital services to automatically
provide children with a built-in baseline of data protection whenever they download a new app, game or visit a website.
That means privacy settings should be set to high by default and nudge techniques should not be used to
encourage children to weaken their settings. Location settings that allow the world to see where a child is, should also be switched off by default. Data collection and sharing should be minimised and profiling that can allow children to be served up
targeted content should be switched off by default too.
Elizabeth Denham, Information Commissioner, said:
"Personal data often drives the content that our children are exposed to -- what
they like, what they search for, when they log on and off and even how they are feeling.
"In an age when children learn how to use an iPad before they ride a bike, it is right that organisations designing and developing
online services do so with the best interests of children in mind. Children's privacy must not be traded in the chase for profit."
The code says that the best interests of the child should be a primary
consideration when designing and developing online services. And it gives practical guidance on data protection safeguards that ensure online services are appropriate for use by children.
"One in five internet users in the UK is a child, but they are using an internet that was not designed for them.
"There are laws to protect children in the real world -- film ratings, car seats, age
restrictions on drinking and smoking. We need our laws to protect children in the digital world too.
"In a generation from now, we will look back and find it astonishing that online services weren't always designed with
children in mind."
The standards of the code are rooted in the General Data Protection Regulation (GDPR) and the code was introduced by the Data Protection Act 2018. The ICO submitted the code to the Secretary of
State in November and it must complete a statutory process before it is laid in Parliament for approval. After that, organisations will have 12 months to update their practices before the code comes into full effect. The ICO expects this to be by autumn
This version of the code is the result of wide-ranging consultation and engagement.
The ICO received 450 responses to its initial consultation in April 2019 and followed up with dozens of meetings
with individual organisations, trade bodies, industry and sector representatives, and campaigners.
As a result, and in addition to the code itself, the ICO is preparing a significant package of support for organisations.
The code is the first of its kind, but it reflects the global direction of travel with similar reform being considered in the USA, Europe and globally by the Organisation for Economic Co-operation and Development (OECD).
The code now has to be laid before parliament for approval for a period of 40 sitting days -- with the ICO saying it will come into force 21 days after that, assuming no objections. Then there's a further 12
month transition period after it comes into force.
Obligation or codes of practice?
Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal explained:
This is not, and will not be, 'law'. It
is just a code of practice. It shows the direction of the ICO's thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it's not something with which an organisation needs to comply as such. They need to
comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.
Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other
applicable laws. The obligation to comply with those laws does not change because of today's code of practice. Rather, the code of practice shows the ICO's thinking on what compliance might look like (and, possibly, goldplates some of the requirements of
the law too).
The ICO's Age Appropriate Design Code released today includes changes which lessen the risk of widespread age gates, but retains strong incentives towards greater age gating of content.
Over 280 ORG supporters wrote to the ICO
about the previous draft code, to express concerns with compulsory age checks for websites, which could lead to restrictions on content.
Under the code, companies must establish the age of users, or restrict their use of data. ORG
is concerned that this will mean that adults can only access websites when age verified, creating severe restrictions on access to information.
The ICO's changes to the Code in response to ORG's concerns suggest that different
strategies to establish age may be used, attempting to reduce the risk of forcing compulsory age verification of users.
However, the ICO has not published any assessment to understand whether these strategies are practical or what
their actual impact would be.
The Code could easily lead to Age Verification through the backdoor as it creates the threat of fines if sites have not established the age of their users.
While the Code has
many useful ideas and important protections for children, this should not come at the cost of pushing all websites to undergo age verification of users. Age Verification could extend through social media, games and news publications.
Jim Killock, Executive Director of Open Rights Group said:
The ICO has made some useful changes to their code, which make it clear that age verification is not the only method to determine age.
However, the ICO don't know how their code will change adults' access to content in practice. The new code published today does not include an Impact Assessment. Parliament must produce one and assess implications for free expression
before agreeing to the code.
Age Verification demands could become a barrier to adults reaching legal content, including news, opinion and social media. This would severely impact free expression.
The public and Parliament deserve a thorough discussion of the implications, rather than sneaking in, via parliamentary rubber stamping, a change with potentially huge implications for the way we access Internet content.
Blatant abuse of people's private data has become firmly entrenched in the economic model of the free internet ever since Google recognised the value of analysing what people are searching for.
Now vast swathes of the internet are handsomely
funded by the exploitation of people's personal data. But that deep entrenchment clearly makes the issue a bit difficult to put right without bankrupting half of the internet that has come to rely on the process.
The EU hasn't helped with its
ludicrous idea of focusing its laws on companies having to obtain people's consent to have their data exploited. A more practical lawmaker would have simply banned the abuse of personal data without bothering with the silly consent games. But the EU
seems prone to being lobbied and does not often come up with the most obvious solution.
Anyway enforcement of the EU's law is certainly causing issues for the internet censors at the UK's ICO.
The ICO warned the adtech industry 6 months ago
that its approach is illegal and has now announced that it would not be taking any action against the data abuse yet, as the industry has made a few noises about improving a bit over the coming months.
Simon McDougall, ICO Executive Director of
Technology and Innovation has written:
The adtech real time bidding (RTB) industry is complex, involving thousands of companies in the UK alone. Many different actors and service providers sit between the advertisers
buying online advertising space, and the publishers selling it.
There is a significant lack of transparency due to the nature of the supply chain and the role different actors play. Our June 2019 report identified a range of
issues. We are confident that any organisation that has not properly addressed these issues risks operating in breach of data protection law.
This is a systemic problem that requires organisations to take ownership for their own
data processing, and for industry to collectively reform RTB. We gave industry six months to work on the points we raised, and offered to continue to engage with stakeholders. Two key organisations in the industry are starting to make the changes needed.
The Internet Advertising Bureau (IAB) UK has agreed a range of principles that align with our concerns, and is developing its own guidance for organisations on security, data minimisation, and data retention, as well as UK-focused
guidance on the content taxonomy. It will also educate the industry on special category data and cookie requirements, and continue work on some specific areas of detail. We will continue to engage with IAB UK to ensure these proposals are executed.
Separately, Google will remove content categories, and improve its process for auditing counterparties. It has also recently proposed improvements to its Chrome browser, including phasing out support for third party
cookies within the next two years. We are encouraged by this, and will continue to look at the changes Google has proposed.
Finally, we have also received commitments from other UK advertising trade bodies to produce guidance for their members.
If these measures are fully implemented they will result in real improvements to the handling of personal data within the adtech industry. We will continue to engage with industry where we think engagement will
deliver the most effective outcome for data subjects.
Comment: Data regulator ICO fails to enforce the law
Responding to ICO's announcement today that the regulator is taking minimal steps to enforce the law against massive data breaches taking place in the online ad industry through Real-Time Bidding, complainants Jim Killock and Michael Veale have
called on the regulator to enforce the law.
The complainants are considering taking legal action against the regulator. Legal action could be taken against the ICO for failure to enforce, or against the companies themselves for
their breaches of Data Protection law.
The Real-Time Bidding data breach at the heart of the RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination.
As the evidence submitted by the complainants notes, the real-time bidding systems designed by Google and the IAB broadcast what virtually all Internet users read, watch, and listen to online to thousands of companies, without
protection of the data once broadcast. Now, sixteen months after the initial complaint, the ICO has failed to act.
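The 'broadcast' in question is the bid request sent to prospective advertisers. A heavily simplified, illustrative example is sketched below; the field names are loosely modelled on the OpenRTB style but are not the exact specification, and all values are invented.

```python
import json

# Heavily simplified, illustrative real-time-bidding request. Field names
# are loosely modelled on the OpenRTB style, not the exact specification;
# all values are invented. It shows the kind of data sent to every bidder
# before an ad is served.
bid_request = {
    "id": "auction-8421",
    "site": {
        "domain": "example-news.co.uk",
        "page": "https://example-news.co.uk/politics/article-1",  # what the user is reading
    },
    "device": {
        "ua": "Mozilla/5.0 (X11; Linux x86_64) ...",  # browser fingerprinting material
        "ip": "203.0.113.7",                          # approximate location
    },
    "user": {
        "id": "cookie-matched-id-123",  # pseudonymous ID shared across bidders
    },
}

# Each of potentially hundreds of bidders receives this payload and may
# retain it; that onward retention with no control over the data is the
# complained-of breach.
payload = json.dumps(bid_request)
print("user" in json.loads(payload))  # True
```

Only one bidder wins the auction, but every losing bidder has still received what the user is reading, on what device, and under which profile ID.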
Jim Killock, Executive Director of the Open Rights Group said:
The ICO is a
regulator, so needs to enforce the law. It appears to be accepting that unlawful and dangerous sharing of personal data can continue, so long as 'improvements' are gradually made, with no actual date for compliance.
Last year the
ICO gave a deadline for an industry response to our complaints. Now the ICO is falling into the trap set by industry, of accepting incremental but minimal changes that fail to deliver individuals the control of their personal data that they are legally entitled to.
The ICO must take enforcement action against IAB members.
We are considering our position, including whether to take legal action against the regulator for failing to act, or individual
companies for their breach of data protection law.
Dr Michael Veale said:
When an industry is premised on and profiting from clear and entrenched illegality that breaches individuals'
fundamental rights, engagement is not a suitable remedy. The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.
Ravi Naik, solicitor acting for the complainants, said:
There is no dispute about the underlying illegality at the heart of RTB that our clients have complained about. The ICO have agreed with
those concerns yet the companies have not taken adequate steps to address those concerns. Nevertheless, the ICO has failed to take the direct enforcement action needed to remedy these breaches.
Regulatory ambivalence cannot continue.
The ICO is not a silo but is subject to judicial oversight. Indeed, the ICO's failure to act raises a question about the adequacy of the UK Data Protection Act. Is there proper judicial oversight of the ICO? This is a critical question after Brexit, when
the UK needs to agree data transfer arrangements with the EU that cover all industries.
Dr. Johnny Ryan of Brave said:
The RTB system broadcasts what everyone is reading and
watching online, hundreds of billions of times a day, to thousands of companies. It is by far the largest data breach ever recorded. The risks are profound. Brave will support ORG to ensure that the ICO discharges its responsibilities.
Jim Killock and Michael Veale complained about the Adtech industry and Real Time Bidding to the UK's ICO in September 2018. Johnny Ryan of Brave submitted a parallel complaint against Google about their Adtech system to the Irish Data Protection Commission.
Update: Advertising industry will introduce a 'gold standard 2.0' for privacy towards the end of 2020
The Internet Advertising Bureau UK has launched a new version of what it calls its Gold Standard certification process that will be independently audited by a third party.
In a move to address ongoing privacy concerns with the digital supply chain,
the IAB's Gold Standard 2.0 will incorporate the Transparency and Consent Framework, a widely promoted industry standard for online advertising.
The new process will be introduced in the fourth quarter after an industry consultation to agree on the
compliance criteria for incorporating the TCF.
Eleven countries have banned an episode of a new Apple TV Plus series, Little America, that focuses on a gay immigrant from Syria.
The episode was released internationally on Friday, but 10 Arabic nations and Russia are preventing it from being
screened in their countries.
The episode, The Son, centers on Rafiq (Haaz Sleiman) as he applies for asylum in the United States after facing violence and family rejection for his sexuality.
The episode was filmed in Canada rather than the US due
to the casting of Syrian actors.
In response to the news of Little America's censorship, writer Amrou Al-Kadhi, a drag performer from Iraq, expressed a renewed commitment toward telling stories that reflect these experiences. "We will
prevail," he told Pink News.
Qatari Emir Tamim bin Hamad al-Thani amended Article 136 of the country's penal code to make the publication or sharing of 'false news' punishable by up to five years in prison or a 100,000 Qatari riyal fine (US$27,500).
CPJ Senior Middle East
and North Africa Researcher Justin Shilad said:
Instead of standing up for press freedom in the Gulf region, where the free flow of information is under threat, Qatari authorities have jumped on the 'false news'
bandwagon. Qatar should rescind this repressive law and focus instead on legislation that enshrines press freedom in line with its international human rights law commitments.
New Zealand has been debating how to censor internet TV in the country, and it seems to have resulted in the likes of Netflix being able to self-classify their content.
The initial thought was that New Zealand's film censors at the Office of Film
and Literature Classification should be given the job, but the likely expense seems to have swayed opinions.
Internal Affairs Minister Tracey Martin has a bill in select committee which will make New Zealand classification labels like R16 mandatory for
commercial on-demand video content such as Netflix, Lightbox, and the iTunes movie store. Mandatory classification will require some sort of fee for the providers which is yet to be established. The current fee is more than $1100 for an unrated film.
Officials from the Department of Internal Affairs said in a regulatory impact statement that mandatory classification presented a risk that content providers may withdraw from the market due to an increased compliance burden should they be required to classify all content via the current process. Officials also noted the risk that content providers would pass on the cost of classification to consumers through higher prices.
Officials noted that an approach that allowed the providers to classify their own content using a method prescribed by the censorship office should mitigate that risk and that no provider had yet threatened to leave the market.
In the end the
Government opted to allow providers to self-classify, going against the wishes of the Children's Commissioner and the OFLC which wanted the current process followed.
Newspapers miss out on advertising opportunities as internet AI confuses a soccer report about shooting and attack with prohibited terrorist content. See
article from theguardian.com
The Parents TV Council is a US moralist campaign group. The group is clearly impressed by The Witcher on Netflix and is kindly spreading the message. The group writes:
The Parents Television Council is warning families about the graphic
content found in Netflix's The Witcher, a new fantasy drama based on a book series and video game that is being compared to HBO's Game of Thrones.
Using filtering data from VidAngel, the PTC found that across eight
episodes of the first season of The Witcher, viewers would hear 207 instances of profanity; witness 417 scenes of violence; and be subjected to 271 instances of sex, nudity and other sexual content -- around 100 instances of adult content per one-hour episode.
PTC Program Director Melissa Henson said:
While families might be drawn to a fantasy-themed TV show, The Witcher is decidedly not family-friendly given the new data highlighting the
explicit content viewers can see. From frequent nudity to graphic violence, The Witcher is certainly comparable to Game of Thrones with respect to adult content, most of which appears gratuitous. We hope that Netflix and other streaming services come to
realize that needless explicit adult content isn't usually what viewers seek.
Netflix should also offer content filtering options for families who might be interested in watching The Witcher -- but without the adult
content. That's a win-win solution for families and for Netflix, and crucial to Netflix's long-term growth strategy.
Google is to restrict web pages from loading 3rd party profiling cookies when accessed via its Chrome browser. Many large websites, eg major newspapers, make calls to hundreds of 3rd party profilers to allow them to build up a profile of people's browsing
history, which then facilitates personalised advertising.
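The 'third party' here is mechanical: a cookie counts as third-party when the domain setting it differs from the site the user is visiting. A naive sketch follows; real browsers consult the full Public Suffix List rather than this simplified registrable-domain guess, and all hostnames are invented examples.

```python
def registrable_domain(host):
    """Crude approximation of a registrable domain: the last two labels.
    Real browsers use the Public Suffix List instead; this version would
    misjudge hosts like 'example.co.uk'."""
    return ".".join(host.split(".")[-2:])

def is_third_party(page_host, cookie_host):
    """A cookie is third-party when it belongs to a different site."""
    return registrable_domain(page_host) != registrable_domain(cookie_host)

print(is_third_party("www.example.com", "cdn.example.com"))     # False: same site
print(is_third_party("www.example.com", "tracker.adtech.net"))  # True: cross-site profiler
```

It is only the second kind, the cross-site cookie, that lets a profiler recognise the same person across hundreds of unrelated websites, and only that kind that Chrome proposes to block.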
Now Google has said that it will block these third-party cookies within the next two years.
Tracking cookies are very much in the sights of the EU, which is trying to put an end to the
exploitative practice. However the EU is not willing to actually ban such practices, but instead has invented a silly game about websites obtaining consent for tracking cookies.
The issue is of course that a lot of 'free' access websites are
funded by advertising and rely on the revenue from the targeted advertising. I have read estimates that if websites were to drop personalised ads, and fall back on contextual advertising (eg advertising cars on motoring pages), then they would lose about
a third of their income. Surely a fall of that magnitude would lead to many bankrupt or unviable websites.
Now the final position of the EU's cookie consent game is that a website would have to present two easy options before allowing access to a website:
Do you want to allow tracking cookies to build up a database of your browsing history?
Do you NOT want to allow tracking cookies to build up a database of your browsing history?
The simple outcome will be that virtually no one will opt for tracking, so the website will lose a third of its income. So it is rather unsurprising that websites would rather avoid offering such an easy option that would deprive them of so much of their revenue.
In reality the notion of consent is not practical. It would be more honest to think of the use of tracking cookies as a price for 'free' access to a website.
Perhaps when the dust has settled, a more honest and practical
endgame would be a choice more like:
Do you want to allow tracking cookies to build up a database of your browsing history in return for 'free' access?
Do you want to pay a fee to enable access to the website without tracking cookies?
Sorry, you may not access this website.
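That three-way choice amounts to a simple access gate, sketched below purely for illustration (the option names and messages are mine):

```python
# Sketch of the hypothetical tracking-or-pay gate described above.
# Option names and messages are invented for illustration.
def access_decision(choice):
    if choice == "accept_tracking":
        return "free access, funded by personalised advertising"
    if choice == "pay_fee":
        return "access without tracking cookies"
    return "no access"

print(access_decision("accept_tracking"))
print(access_decision("pay_fee"))
print(access_decision("decline"))
```

The honesty of this version is that the price of 'free' access is stated up front, rather than hidden behind a consent pop-up.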
The EU has been complaining about companies trying to avoid the revenue destroying official consent options. A study just published observes that nearly all cookie consent pop-ups are flouting EU privacy laws.
Researchers at the Massachusetts Institute of Technology, University College London and Aarhus University carried out the study.
Despite EU privacy laws stating that consent for cookies must be informed, specific and freely given, the research suggests that only 12% of the sites met the minimal requirements of GDPR (General Data Protection Regulation) law. Instead
they were found to blanket data consent options in complicated site design, such as:
pre-ticked boxes
decline buttons buried on later pages
multiple clicks required to opt out
tracking users before consent and after pressing reject
Just over half the sites studied did not have rejecting all tracking as an option.
Of the sites which
did, only 13% made it accessible through the same or fewer clicks as the option to accept all.
The researchers estimate it would take, on average, more than half an hour to read through what the third-party companies are doing with your data, and even longer to read all their privacy policies. It's a joke and there's no actual way you could do
this realistically, said Dr Veale.
Cyber-security researchers claim that highly sensitive personal details about thousands of porn stars have been exposed online by an adult website.
They told BBC News they had found an open folder on PussyCash's Amazon web server that contained the exposed files.
However the live webcam porn network, which owns the brand ImLive and other adult websites, said there was no evidence anyone else had accessed the folder. And it removed public access as soon as it had been told of the leak.
The researchers are from vpnMentor, which is a VPN comparison site. vpnMentor said in a blog anyone with the right link could have accessed 19.95GB of data dating back over 15 years as well as from the past few weeks, including contracts revealing more
than 4,000 models' personal details, including:
full name, address, social security number, date of birth, phone number, height, weight, hip, bust and waist measurements, piercings, tattoos and scars. The files also revealed scans or photographs of their passports,
driving licences, credit cards and birth certificates.
Privacy International and over 50 other organisations have submitted a letter to Alphabet Inc. CEO Sundar Pichai asking Google to take action against exploitative pre-installed software on Android devices.
Dear Mr. Pichai,
We, the undersigned, agree with you: privacy cannot be a luxury offered only to those people who can afford it.
And yet, Android Partners - who use the Android trademark and branding - are manufacturing
devices that contain pre-installed apps that cannot be deleted (often known as "bloatware"), which can leave users vulnerable to their data being collected, shared and exposed without their knowledge or consent.
These pre-installed apps can have privileged custom permissions that let them operate outside the Android security model. This means permissions can be defined by the app - including access to the microphone,
camera and location - without triggering the standard Android security prompts. Users are therefore completely in the dark about these serious intrusions.
We are concerned that this leaves users vulnerable to the exploitative
business practices of cheap smartphone manufacturers around the world.
The changes we believe are needed most urgently are as follows:
Individuals should be able to permanently uninstall the apps on their phones. This should include any related background services that continue to run even if the apps are disabled.
Pre-installed apps should adhere to the same scrutiny as Play Store apps, especially in relation to custom permissions.
Pre-installed apps should have some update mechanism, preferably through Google Play and without a user account.
Google should refuse to certify a device on privacy grounds, where manufacturers or vendors have attempted to exploit users in this way.
We, the undersigned, believe these fair and reasonable changes would make a huge difference to millions of people around the world who should not have to trade their privacy and security for access to a smartphone.
We urge you to use your position as an influential agent in the ecosystem to protect people and stop manufacturers from exploiting them in a race to the bottom on the pricing of smartphones.
American Civil Liberties Union (ACLU)
Afghanistan Journalists Center (AFJC)
Americans for Democracy and Human Rights in Bahrain (ADHRB)
Asociación por los Derechos Civiles (ADC)
Association for Progressive Communications (APC)
Association for Technology and Internet (ApTI)
Association of Caribbean Media Workers
Australian Privacy Foundation
Center for Digital Democracy
Centre for Intellectual Property and Information Technology Law (CIPIT)
Civil Liberties Union for Europe
Consumer Association the Quality of Life-EKPIZO
Digital Rights Foundation (DRF)
Douwe Korff, Emeritus Professor of International Law, London Metropolitan University and Associate of the Oxford Martin School, University of Oxford
Electronic Frontier Foundation (EFF)
Forbrukerrådet // Norwegian Consumer Council
Foundation for Media
Free Media Movement (FMM)
Gulf Centre for Human Rights (GCHR)
Initiative for Freedom of Expression- Turkey (IFox)
Irish Council for Civil Liberties
Media Foundation for West Africa
Media Institute of Southern Africa (MISA)
Media Policy and Democracy Project (University of Johannesburg)
Policy Institute (MPI)
Metamorphosis Foundation for Internet and Society
Open Rights Group (ORG)
Palestinian Center for Development & Media Freedoms (MADA)
Philippine Alliance of Human Rights Advocates (PAHRA)
Red en Defensa de los Derechos Digitales (R3D)
Syrian Center for Media and Freedom of Expression
The Danish Consumer Council
The Institute for Policy Research and Advocacy (ELSAM)
We are seeking feedback on proposals for a new Online Safety Act to improve Australia's online safety regulatory framework.
The proposed reforms follow a 2018 review of online
safety legislation which recommended the replacement of the existing framework with a single Online Safety Act.
Key proposals include:
A set of basic online safety expectations for industry (initially social media platforms), clearly stating community expectations, with associated reporting requirements.
An enhanced cyberbullying
scheme for Australian children to capture a range of online services, not just social media platforms.
A new cyber abuse scheme for Australian adults to facilitate the removal of serious online abuse and harassment and
introduce a new end user take-down and civil penalty regime.
Consistent take-down requirements for image-based abuse, cyber abuse, cyberbullying and seriously harmful online content, requiring online service providers to
remove such material within 24 hours of receiving an eSafety Commissioner request.
A reformed online content scheme requiring the Australian technology industry to be proactive in addressing access to harmful online content.
The scheme would also expand the eSafety Commissioner's powers to address illegal and harmful content on websites hosted overseas.
An ancillary service provider scheme to provide the eSafety Commissioner with the capacity to
disrupt access to seriously harmful online material made available via search engines, app stores and other ancillary service providers.
An additional power for the eSafety Commissioner to respond rapidly to an online crisis
event (such as the Christchurch terrorist attacks) by requesting internet service providers block access to sites hosting seriously harmful content.
Speaking at CES in Las Vegas, Twitter's director of product management, Suzanne Xie, unveiled some new changes that are coming to the platform this year, focusing specifically on conversations.
Xie says Twitter is adding a new setting for conversation
participants right on the compose screen. It has four options: Global, Group, Panel, and Statement. Global lets anybody reply, Group is for people you follow and mention, Panel is people you specifically mention in the tweet, and Statement simply allows
you to post a tweet and receive no replies.
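The four settings amount to a simple permission check on who may reply. As a rough illustration only, here is a hypothetical TypeScript sketch of that logic; the type names, fields and function are assumptions for clarity, not Twitter's actual API:

```typescript
// Hypothetical model of the four conversation settings described above.
type ReplySetting = "global" | "group" | "panel" | "statement";

interface Tweet {
  authorId: string;
  setting: ReplySetting;
  mentionedIds: string[]; // users @-mentioned in the tweet
}

// authorFollowsUser: whether the tweet's author follows the would-be replier
function canReply(tweet: Tweet, userId: string, authorFollowsUser: boolean): boolean {
  switch (tweet.setting) {
    case "global":
      return true; // anybody may reply
    case "group":
      return authorFollowsUser || tweet.mentionedIds.includes(userId); // people the author follows or mentions
    case "panel":
      return tweet.mentionedIds.includes(userId); // only people specifically mentioned
    case "statement":
      return false; // no replies at all
  }
}
```

Under this reading, "Statement" is not a new kind of tweet but simply the empty reply set, and "Panel" narrows "Group" by dropping the follow relationship.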
Xie says that Twitter is still researching the feature, and is considering allowing quote tweets as an alternative to replying.