The Irish Communications Minister Richard Bruton has scrapped plans to introduce restrictions on access to porn in a new online safety bill, saying they are not a priority.
The Government said in June it would consider following a UK plan to block pornographic material until an internet user proves they are over 18. However, the British block has run into administrative problems and been delayed until later this year.
Bruton said such a measure in Ireland is not a priority in the Online Safety Bill, a draft of which he said would be published before the end of the year.
"It's not the top priority. We want to do what we committed to do; we want to have the codes of practice," he said at the Fine Gael parliamentary party think-in. "We want to have the online commissioner - those are the priorities we are committed to."
An online safety commissioner will have the power to enforce the online safety code and may in some cases be able to force social media companies to remove content or restrict access to it. The commissioner will have responsibility for ensuring that large
digital media companies play their part in ensuring the code is complied with. It will also be regularly reviewed and updated.
Bruton's bill will allow for a more comprehensive complaint procedure for users and alert the commissioner to any alleged dereliction of duty. The Government has been looking at Australia's pursuit of improved internet safety.
Google has paid a fine for failing to block access to certain websites banned in Russia.
Roskomnadzor, the Russian government's internet and media censor, said that Google paid a fine of 700,000 rubles ($10,900) related to the company's refusal to fully comply with rules imposed under the country's censorship regime.
Search engines are prohibited under Russian law from displaying banned websites in the results shown to users, and companies like Google are required to adhere to a regularly updated blacklist maintained by Roskomnadzor.
Google does not fully comply with the blacklist, however, and more than a third of the websites banned in Russia could be found using its search engine, Roskomnadzor said previously.
No doubt Russia is now working on increased fines for future transgressions.
Russia's powerful internal security agency FSB has enlisted the help of the telecommunications, IT and media censor Roskomnadzor to ask a court to block Mailbox and Scryptmail email providers.
It seems that the services failed to register with the authorities as required by Russian law. Both are marketed as focusing strongly on the privacy segment and offering end-to-end encryption.
News source RBK noted that the process to block the two email providers will in legal terms follow the model applied to the Telegram messaging service -- adding, however, that imperfections in the blocking system are resulting in Telegram's
continued availability in Russia.
On the other hand, some experts argued that it will be easier to block an email service than a messenger like Telegram. In any case, Russia is preparing for a new law to come into effect on November 1 that will see the deployment of Deep Packet
Inspection equipment, which should result in more efficient blocking of services.
A parliamentary committee initiated by the Australian government will investigate how porn websites can verify Australians visiting their websites are over 18, in a move based on the troubled UK age verification system.
The family and social services minister, Anne Ruston, and the minister for communications, Paul Fletcher, referred the matter for inquiry to the House of Representatives standing committee on social policy and legal affairs.
The committee will examine how age verification works for online gambling websites, and see if that can be applied to porn sites. According to the inquiry's terms of reference, the committee will examine whether such a system would push adults
into unregulated markets, whether it would potentially lead to privacy breaches, and whether it would impact freedom of expression.
The committee has specifically been tasked with examining the UK's version of this system, as set out in the UK Digital Economy Act 2017.
Hopefully they will understand better than UK lawmakers that it is of paramount importance that legislation is enacted to keep people's porn browsing information totally safe from snoopers, hackers and those who want to make money selling it.
One of the key learnings from recent events is that there is growing demand for privacy features. The Firefox Private Network is an extension which provides a secure, encrypted path to the web to protect your connection and your personal
information anywhere and everywhere you use your Firefox browser.
There are many ways that your personal information and data are exposed: online threats are everywhere, whether it's through phishing emails or data breaches. You may often find yourself taking advantage of the free WiFi at the doctor's office,
airport or a cafe. There can be dozens of people using the same network -- casually checking the web and getting social media updates. This leaves your personal information vulnerable to those who may be lurking, waiting to take advantage of this
situation to gain access to your personal info. Using the Firefox Private Network helps protect you from hackers lurking in plain sight on public connections. To learn more about Firefox Private Network, its key features and how it works exactly,
please take a look at this blog post.
As a Firefox user and account holder in the US, you can start testing the Firefox Private Network today. A Firefox account allows you to be one of the first to test potential new products and services when we make them available in Europe, so sign up today and stay tuned for further news and the Firefox Private Network coming to your location soon!
China's internet censor has ordered online AI algorithms to promote 'mainstream values':
Systems should direct users to approved material on subjects like Xi Jinping Thought, or which showcase the country's economic and social development, Cyberspace Administration of China says
They should not recommend content that undermines national security, or is sexually suggestive, promotes extravagant lifestyles, or hypes celebrity gossip and scandals
The Cyberspace Administration of China released its draft regulations on managing the cyberspace ecosystem on Tuesday in another sign of how the ruling Communist Party is increasingly turning to technology to cement its ideological control over the internet.
The proposals will be open for public consultation for a month and are expected to go into effect later in the year.
The latest rules point to a strategy to use AI-driven algorithms to expand the reach and depth of the government's propaganda and ideology.
The regulations state that information providers on all manner of platforms -- from news and social media sites, to gaming and e-commerce -- should strengthen the management of recommendation lists, trending topics, hot search lists and push
notifications. The regulations state:
Online information providers that use algorithms to push customised information [to users] should build recommendation systems that promote mainstream values, and establish mechanisms for manual intervention and override.
Today, on World Suicide Prevention Day, we're sharing an update on what we've learned and some of the steps we've taken in the past year, as well as additional actions we're going to take, to keep people safe on our apps, especially those who are most vulnerable.
Earlier this year, we began hosting regular consultations with experts from around the world to discuss some of the more difficult topics associated with suicide and self-injury. These include how we deal with suicide notes, the risks of sad
content online and newsworthy depictions of suicide. Further details of these meetings are available on Facebook's new Suicide Prevention page in our Safety Center.
As a result of these consultations, we've made several changes to improve how we handle this content. We tightened our policy around self-harm to no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm,
even when someone is seeking support or expressing themselves to aid their recovery. On Instagram, we've also made it harder to search for this type of content and kept it from being recommended in Explore. We've also taken steps to address the
complex issue of eating disorder content on our apps by tightening our policy to prohibit additional content that may promote eating disorders. And with these stricter policies, we'll continue to send resources to people who post content
promoting eating disorders or self-harm, even if we take the content down. Lastly, we chose to display a sensitivity screen over healed self-harm cuts to help avoid unintentionally promoting self-harm.
And for the first time, we're also exploring ways to share public data from our platform on how people talk about suicide, beginning with providing academic researchers with access to the social media monitoring tool, CrowdTangle. To date,
CrowdTangle has been available primarily to help newsrooms and media publishers understand what is happening on Facebook. But we are eager to make it available to select researchers who focus on suicide prevention to explore how information
shared on Facebook and Instagram can be used to further advancements in suicide prevention and support.
In addition to all we are doing to find more opportunities and places to surface resources, we're continuing to build new technology to help us find and take action on potentially harmful content, including removing it or adding sensitivity
screens. From April to June of 2019, we took action on more than 1.5 million pieces of suicide and self-injury content on Facebook and found more than 95% of it before it was reported by a user. During that same time period, we took action on
more than 800 thousand pieces of this content on Instagram and found more than 77% of it before it was reported by a user.
To help young people safely discuss topics like suicide, we're enhancing our online resources by including Orygen's #chatsafe guidelines in Facebook's Safety Center and in resources on Instagram when someone searches for suicide or self-injury content.
The #chatsafe guidelines were developed together with young people to provide support to those who might be responding to suicide-related content posted by others or for those who might want to share their own feelings and experiences with
suicidal thoughts, feelings or behaviors.
The New Zealand government has decided to legislate to require Internet TV services to provide age ratings using a self rating scheme overseen by the country's film censor.
Movies and shows available through internet television services such as Netflix and Lightbox will need to display content classifications in a similar way to films and shows released to cinemas and on DVD, Internal Affairs Minister Tracey Martin said.
The law change, which the Government plans to introduce to Parliament in November, would also apply to other companies that sell videos on demand, including Stuff Pix.
The tighter rules won't apply to websites designed to let people upload and share videos, so videos on YouTube's main site won't need to display classifications, but videos that YouTube sells through its rental service will.
In a compromise, internet television and video companies will be able to self-classify their content using a rating tool being developed by the Chief Censor, or use their own systems to do that if they first have them accredited by the Chief Censor.
The Film and Literature Board of Review will be able to review classifications, as they do now for cinema movies and DVDs.
The Government decided against requiring companies to instead submit videos to the film censor for classification, heeding a Cabinet paper warning that this would result in hold-ups.
What's the difference between a child throwing a tantrum and religious groups asking for a ban on something that hurts religious sentiments? Absolutely nothing, except maybe the child can be cajoled into understanding that they might be wrong. Try
doing that with the religious group and you'll be facing trolls, bans, and rape, death or beheading threats. Thankfully, when it comes to the recent call for banning the streaming platform Netflix, those demanding it have taken recourse to the
law and filed a police complaint.
Their concern? According to Shiv Sena committee member Ramesh Solanki, who filed the complaint, Netflix original shows are promoting anti-Hindu propaganda. The shows in question include Sacred Games 2 (a Hindu godman encouraging
terrorism), Leila (depicts a dystopian society divided on the basis of caste) and comedian Hasan Minhaj's Patriot Act (claims how the Lok Sabha elections 2019 disenfranchised minorities).
DNS over HTTPS (DoH) is an encrypted internet protocol that makes it more difficult for ISPs and government censors to block users from being able to access banned websites. It also makes it more difficult for state snoopers like GCHQ to keep tabs
on users' internet browsing history.
Of course this protection from external interference also makes internet browsing much safer from the threat of scammers, identity thieves and malware.
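To make the mechanism concrete, here is a minimal sketch of a DoH lookup using the JSON API that some public resolvers expose (the endpoint shown is Cloudflare's documented dns-query service). Note that full DoH as specified in RFC 8484 exchanges binary DNS wire format over HTTPS; the JSON variant below is just an easier-to-read illustration, and the function names are our own.

```python
import json
import urllib.request

# Cloudflare's public DoH resolver; the JSON API is documented by Cloudflare.
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def doh_query_url(name, rtype="A"):
    """Build the GET URL for a JSON-format DoH query."""
    return f"{DOH_ENDPOINT}?name={name}&type={rtype}"

def parse_doh_answer(body):
    """Extract resolved addresses from a JSON DoH response body."""
    reply = json.loads(body)
    # "Answer" is absent when the name does not resolve (e.g. NXDOMAIN).
    return [rec["data"] for rec in reply.get("Answer", [])]

def resolve(name):
    """Perform the lookup over HTTPS, so an on-path observer sees only
    encrypted traffic to the resolver, not the queried domain name."""
    req = urllib.request.Request(
        doh_query_url(name),
        headers={"accept": "application/dns-json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_doh_answer(resp.read())
```

Because the query travels inside an ordinary HTTPS connection, an ISP performing plain-DNS interception never sees which name was looked up, which is exactly why DoH frustrates DNS-based blocking.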
Google was once considering introducing DoH for its Chrome browser but has recently announced that it will not allow it to be used to bypass state censors.
Mozilla meanwhile has been a bit more reasonable about it and allows users to opt in to using DoH. Now Mozilla is considering enabling DoH by default in the US, with the proviso that DoH will be disabled if the user is relying on parental controls or corporate website blocking.
Mozilla explains in a blog post:
What's next in making Encrypted DNS-over-HTTPS the Default
By Selena Deckelmann,
In 2017, Mozilla began working on the DNS-over-HTTPS (DoH) protocol, and since June 2018 we've been running experiments in Firefox to ensure the performance and user experience are great. We've also been surprised and excited by the more than
70,000 users who have already chosen on their own to explicitly enable DoH in Firefox Release edition. We are close to releasing DoH in the USA, and we have a few updates to share.
After many experiments, we've demonstrated that we have a reliable service whose performance is good, that we can detect and mitigate key deployment problems, and that most of our users will benefit from the greater protections of encrypted DNS
traffic. We feel confident that enabling DoH by default is the right next step. When DoH is enabled, users will be notified and given the opportunity to opt out.
Results of our Latest Experiment
Our latest DoH experiment was designed to help us determine how we could deploy DoH, honor enterprise configuration and respect user choice about parental controls.
We had a few key learnings from the experiment.
We found that OpenDNS' parental controls and Google's safe-search feature were rarely configured by Firefox users in the USA. In total, 4.3% of users in the study used OpenDNS' parental controls or safe-search. Surprisingly, there was little
overlap between users of safe-search and OpenDNS' parental controls. As a result, we're reaching out to parental controls operators to find out more about why this might be happening.
We found 9.2% of users triggered one of our split-horizon heuristics. The heuristics were triggered in two situations: when websites were accessed whose domains had non-public suffixes, and when domain lookups returned both public and private
(RFC 1918) IP addresses. There was also little overlap between users of our split-horizon heuristics, with only 1% of clients triggering both heuristics.
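The second heuristic above can be sketched in a few lines: a single lookup that returns a mix of public and RFC 1918 private addresses suggests a split-horizon DNS setup. This is an illustration of the idea using Python's standard ipaddress module, not Firefox's actual implementation.

```python
import ipaddress

# The three private address blocks reserved by RFC 1918.
RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr):
    """True if the address falls in one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)

def looks_split_horizon(addresses):
    """True if a single lookup yielded both private and public IPs,
    the signature of a split-horizon DNS configuration."""
    private = [a for a in addresses if is_rfc1918(a)]
    public = [a for a in addresses if not is_rfc1918(a)]
    return bool(private) and bool(public)
```

On such networks a public DoH resolver would return only the public addresses and miss the internal ones, which is why a heuristic like this is needed before enabling DoH.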
Now that we have these results, we want to tell you about the approach we have settled on to address managed networks and parental controls. At a high level, our plan is to:
Respect user choice for opt-in parental controls and disable DoH if we detect them;
Respect enterprise configuration and disable DoH unless explicitly enabled by enterprise configuration; and
Fall back to operating system defaults for DNS when split horizon configuration or other DNS issues cause lookup failures.
We're planning to deploy DoH in "fallback" mode; that is, if domain name lookups using DoH fail or if our heuristics are triggered, Firefox will fall back and use the default operating system DNS. This means that for the minority of
users whose DNS lookups might fail because of split horizon configuration, Firefox will attempt to find the correct address through the operating system DNS.
In addition, Firefox already detects that parental controls are enabled in the operating system, and if they are in effect, Firefox will disable DoH. Similarly, Firefox will detect whether enterprise policies have been set on the device and will
disable DoH in those circumstances. If an enterprise policy explicitly enables DoH, which we think would be awesome, we will also respect that. If you're a system administrator interested in how to configure enterprise policies, please find more information in our documentation.
Options for Providers of Parental Controls
We're also working with providers of parental controls, including ISPs, to add a canary domain to their blocklists. This helps us in situations where the parental controls operate on the network rather than an individual computer. If Firefox
determines that our canary domain is blocked, this will indicate that opt-in parental controls are in effect on the network, and Firefox will disable DoH automatically.
This canary domain is intended for use in cases where users have opted in to parental controls. We plan to revisit the use of this heuristic over time, and we will be paying close attention to how the canary domain is adopted. If we find that it
is being abused to disable DoH in situations where users have not explicitly opted in, we will revisit our approach.
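The canary-domain check described above can be sketched as follows. Mozilla's published canary domain is use-application-dns.net; a network-level filter that wants Firefox to disable DoH answers NXDOMAIN (or returns no records) for it. The decision function below is an illustration of that logic, not Firefox code.

```python
import socket

# Mozilla's published canary domain for signalling "disable DoH".
CANARY_DOMAIN = "use-application-dns.net"

def doh_disabled_by_network(resolve=socket.gethostbyname):
    """Return True if the canary domain fails to resolve via the
    network's ordinary DNS, i.e. the local network (e.g. an ISP-level
    parental-controls filter) is signalling that DoH should stay off."""
    try:
        resolve(CANARY_DOMAIN)
    except socket.gaierror:
        return True
    return False
```

Because the check runs over the normal operating-system resolver before DoH is enabled, a filtering network gets one well-defined, auditable way to opt a whole network out, rather than silently breaking encrypted lookups.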
Plans for Enabling DoH Protections by Default
We plan to gradually roll out DoH in the USA starting in late September. Our plan is to start slowly enabling DoH for a small percentage of users while monitoring for any issues before enabling for a larger audience. If this goes well, we will
let you know when we're ready for 100% deployment.
MPs and activists have urged the government to protect women through censorship. They write in a letter:
Women around the world are 27 times more likely to be harassed online than men. In Europe, 9 million girls have experienced some kind of online violence by the time they are 15 years old. In the UK, 21% of women have received threats of physical
or sexual violence online. The basis of this abuse is often, though not exclusively, misogyny.
Misogyny online fuels misogyny offline. Abusive comments online can lead to violent behaviour in real life. Nearly a third of respondents to a Women's Aid survey said where threats had been made online from a partner or ex-partner, they were
carried out. Along with physical abuse, misogyny online has a psychological impact. Half of girls aged 11-21 feel less able to share their views due to fear of online abuse, according to Girlguiding UK .
The government wants to make Britain the safest place in the world to be online, yet in the online harms white paper, abuse towards women online is categorised as harassment, with no clear consequences, whereas similar abuse on the grounds of
race, religion or sexuality would trigger legal protections.
If we are to eradicate online harms, far greater emphasis in the government's efforts should be directed to the protection and empowerment of the internet's single largest victim group: women. That is why we back the campaign group Empower's
calls for the forthcoming codes of practice to include and address the issue of misogyny by name, in the same way as they would address the issue of racism by name. Violence against women and girls online is not harassment. Violence against women
and girls online is violence.
Ali Harris, Chief executive, Equally Ours
Angela Smith MP, Independent
Anne Novis, Activist
Lorely Burt, Liberal Democrat, House of Lords
Ruth Lister, Labour, House of Lords
Barry Sheerman MP, Labour
Caroline Lucas MP, Green
Daniel Zeichner MP, Labour
Darren Jones MP, Labour
Diana Johnson MP, Labour
Flo Clucas, Chair, Liberal Democrat Women
Gay Collins, Ambassador, 30% Club
Hannah Swirsky, Campaigns officer, René Cassin
Joan Ryan MP, Independent Group for Change
Joe Levenson, Director of communications and campaigns, Young Women's Trust
Jonathan Harris, House of Lords, Labour
Luciana Berger MP, Liberal Democrats
Mandu Reid, Leader, Women's Equality Party
Maya Fryer, WebRoots Democracy
Preet Gill MP, Labour
Sarah Mann, Director, Friends, Families and Travellers
Siobhan Freegard, Founder, Channel Mum
Jacqui Smith, Empower
One of the Pentagon's most secretive agencies, the Defense Advanced Research Projects Agency (DARPA), is developing custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips.
DARPA now is developing a semantic analysis program called SemaFor and an image analysis program called MediFor, ostensibly designed to prevent the use of fake images or text. The idea would be to develop these technologies to help private
Internet providers sift through content.
Google have announced potentially far reaching new policies about kids' videos on YouTube. A Google blog post explains:
An update on kids and data protection on YouTube
From its earliest days, YouTube has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We've been taking a hard look at areas
where we can do more to address this, informed by feedback from parents, experts, and regulators, including COPPA concerns raised by the U.S. Federal Trade Commission and the New York Attorney General that we are addressing with a settlement.
New data practices for children's content on YouTube
We are changing how we treat data for children's content on YouTube. Starting in about four months, we will treat data from anyone watching children's content on YouTube as coming from a child, regardless of the age of the user. This means that
we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this content entirely, and some features will no longer be available on
this type of content, like comments and notifications. In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we'll also use machine learning to find videos that clearly
target young audiences, for example those that have an emphasis on kids' characters, themes, toys, or games.
Improvements to YouTube Kids
We continue to recommend parents use YouTube Kids if they plan to allow kids under 13 to watch independently. Tens of millions of people use YouTube Kids every week but we want even more parents to be aware of the app and its benefits. We're
increasing our investments in promoting YouTube Kids to parents with a campaign that will run across YouTube. We're also continuing to improve the product. For example, we recently raised the bar for which channels can be a part of YouTube Kids,
drastically reducing the number of channels on the app. And we're bringing the YouTube Kids experience to the desktop.
Investing in family creators
We know these changes will have a significant business impact on family and kids creators who have been building both wonderful content and thriving businesses, so we've worked to give impacted creators four months to adjust before changes take
effect on YouTube. We recognize this won't be easy for some creators and are committed to working with them through this transition and providing resources to help them better understand these changes.
We are also going to continue investing in the future of quality kids, family and educational content. We are establishing a $100 million fund, disbursed over three years, dedicated to the creation of thoughtful, original children's content on
YouTube and YouTube Kids globally.
Today's changes will allow us to better protect kids and families on YouTube, and this is just the beginning. We'll continue working with lawmakers around the world in this area, including as the FTC seeks comments on COPPA . And in the coming
months, we'll share details on how we're rethinking our overall approach to kids and families, including a dedicated kids experience on YouTube.
The Swiss Lottery and Betting Board has published its first censorship list of foreign gambling websites to be blocked by the country's ISPs.
The censorship follows a change to the law on online gambling intended to preserve a monopoly for Swiss gambling providers.
Over 60 foreign websites have been blocked to Swiss gamblers. Last June, 73% of voters approved the censorship law. The law came into effect in January but blocking of foreign gambling websites only started in August.
Swiss gamblers can bet online only with Swiss casinos and lotteries that pay tax in the country.
Foreign service providers that voluntarily withdraw from the Swiss market with appropriate measures will not be blocked.
35 people in New Zealand have been charged by police with sharing or possessing Brenton Tarrant's Christchurch terrorist attack video.
As of August 21st, 35 people have been charged in relation to the video, according to information released under the Official Information Act. At least 10 of the charges are against minors, which have now been referred to the Youth Court.
Under New Zealand law, knowingly possessing or distributing objectionable material is a serious offence with a maximum jail term of 14 years.
So far, nine people have been issued warnings, while 14 have been prosecuted for their involvement.
After a long introduction about how open and diverse YouTube is, CEO Susan Wojcicki gets down to the nitty gritty of how YouTube censorship works. She writes in a blog:
Problematic content represents a fraction of one percent of the content on YouTube and we're constantly working to reduce this even further. This very small amount has a hugely outsized impact, both in the potential harm for our users and in the loss of faith in the open model that has enabled the rise of your creative community. One assumption we've heard is that we hesitate to take action on problematic content because it benefits our business. This is simply not true -- in fact,
the cost of not taking sufficient action over the long term results in lack of trust from our users, advertisers, and you, our creators. We want to earn that trust. This is why we've been investing significantly over the past few years in the
teams and systems that protect YouTube. Our approach towards responsibility involves four "Rs":
We REMOVE content that violates our policy as quickly as possible. And we're always looking to make our policies clearer and more effective, as we've done with pranks and challenges , child safety , and hate speech just this year. We aim to be
thoughtful when we make these updates and consult a wide variety of experts to inform our thinking, for example we talked to dozens of experts as we developed our updated hate speech policy. We also report on the removals we make in our
quarterly Community Guidelines enforcement report. I also appreciate that when policies aren't working for the creator community, you let us know. One area we've heard loud and clear needs an update is creator-on-creator harassment. I said in
my last letter that we'd be looking at this and we will have more to share in the coming months.
We RAISE UP authoritative voices when people are looking for breaking news and information, especially during breaking news moments. Our breaking and top news shelves are available in 40 countries and we're continuing to expand that number.
We REDUCE the spread of content that brushes right up against our policy line. Already, in the U.S. where we made changes to recommendations earlier this year, we've seen a 50% drop of views from recommendations to this type of content, meaning
quality content has more of a chance to shine. And we've begun experimenting with this change in the UK, Ireland, South Africa and other English-language markets.
And we set a higher bar for what channels can make money on our site, REWARDING trusted, eligible creators. Not all content allowed on YouTube is going to match what advertisers feel is suitable for their brand, so we have to be sure they are
comfortable with where their ads appear. This is also why we're enabling new revenue streams for creators like Super Chat and Memberships. Thousands of channels have more than doubled their total YouTube revenue by using these new tools in
addition to advertising.
Thailand's Ministry of Digital Economy and Society plans to open a 'Fake News' Center by November 1st at the latest. The minister has said that the center will focus on four categories of internet censorship.
Digital Minister Puttipong Punnakanta, said that the coordinating committee of the Fake News Center has set up four subcommittees to screen the various categories of news which might 'disrupt public peace and national security':
natural disasters such as flooding, earthquakes, dam breaks and tsunamis;
economics, the financial and banking sector;
health products, hazardous items and illegal goods;
and of course, government policies.
The Fake News Center will analyse, verify and clarify news items and distribute its findings via its own website, Facebook and Line (a WhatsApp-like messaging service that is dominant in much of Asia).
The committee meeting considered protocols to be used and plans to consult with representatives of major social media platforms and all cellphone service providers. It will encourage them to take part in the delivery of countermeasures to expose fake news.