A bill that would force ISPs in Israel to censor pornographic sites by default has been amended after heavy criticism from lawmakers over privacy concerns.
An earlier version of the bill was unanimously approved by the Ministerial Committee for Legislation in late October, but a new version of the legislation, sponsored by Likud MK Miki Zohar and Jewish Home MK Shuli Moalem-Refaeli, has now been passed. The differences seem subtle, centring on whether customers opt in to, or opt out of, network-level website blocking.
Customers will have to confirm their preferences for website blocking every 3 months but may change their settings at any time.
The bill will incentivize internet companies to actively market existing website blocking software to families. ISPs will receive NIS 0.50 (about $0.13) for every subscriber who opts to block adult sites.
In a refreshing divergence from UK internet censorship, ISPs will be legally required to delete all data related to their users' surfing habits, to prevent creating de facto -- and easily leaked -- black lists of pornography consumers.
In comparison, internet companies are allowed to use or sell UK customer data for any purpose they so desire as long as customers tick a consent box with some woolly text about improving the customer's experience.
Facebook has added a new category of censorship: sexual solicitation. It added the update on 15th October but no one really noticed until recently.
The company has quietly updated its content-moderation policies to censor implicit requests for sex. The expanded policy specifically bans sexual slang, hints of sexual roles, positions or fetish scenarios, and erotic art when mentioned with a sex act. Vague but suggestive statements, such as "looking for a good time tonight", are also no longer allowed when soliciting sex.
The new policy reads:
15. Sexual Solicitation Policy
Do not post:
Content that attempts to coordinate or recruit for adult sexual activities including but not limited to:
Filmed sexual activities
Pornographic activities, strip club shows, live sex performances, erotic dances
Sexual, erotic, or tantric massages
Content that engages in explicit sexual solicitation by, including but not limited to the following, offering or asking for:
Sex or sexual partners
Sex chat or conversations
Nude images
Content that engages in implicit sexual solicitation, which can be identified by offering or asking to engage in a sexual act and/or acts identified by other suggestive elements such as any of the following:
Vague suggestive statements, such as "looking for a good time tonight"
Sexualized slang
Using sexual hints such as mentioning sexual roles, sex positions, fetish scenarios, sexual preference/sexual partner preference, state of arousal, act of sexual intercourse or activity (sexual penetration or self-pleasuring), commonly sexualized areas of the body such as the breasts, groin, or buttocks, state of hygiene of genitalia or buttocks
Content (hand drawn, digital, or real-world art) that may depict explicit sexual activity or suggestively posed person(s).
Content that offers or asks for other adult activities such as:
Commercial pornography
Partners who share fetish or sexual interests
Sexually explicit language that adds details and goes beyond mere naming or mentioning of:
A state of sexual arousal (wetness or erection)
An act of sexual intercourse (sexual penetration, self-pleasuring or exercising fetish scenarios)
Comment: Facebook's Sexual Solicitation Policy is a Honeypot for Trolls
Facebook just quietly adopted a policy that could push thousands of innocent people off of the platform. The new "sexual solicitation" rules forbid pornography and other explicit sexual content (which was already functionally banned under a different statute), but they don't stop there: they also ban "implicit sexual solicitation", including the use of sexual slang, the solicitation of nude images, discussion of "sexual partner preference," and even expressing interest in sex. That's not an exaggeration: the new policy bars "vague suggestive statements, such as 'looking for a good time tonight.'" It wouldn't be a stretch to think that asking "Netflix and chill?" could run afoul of this policy.
The new rules come with a baffling justification, seemingly blurring the line between sexual exploitation and plain old doing it:
[P]eople use Facebook to discuss and draw attention to sexual violence and exploitation. We recognize the importance of and want to allow for this discussion. We draw the line, however, when content facilitates, encourages or coordinates sexual
encounters between adults.
In other words, discussion of sexual exploitation is allowed, but discussion of consensual, adult sex is taboo. That's a classic censorship model: speech about sexuality being permitted only when sex is presented as dangerous and shameful. It's
especially concerning since healthy, non-obscene discussion about sex--even about enjoying or wanting to have sex--has been a component of online communities for as long as the Internet has existed, and has for almost as long been the target of governmental censorship efforts.
Until now, Facebook has been a particularly important place for groups who aren't well represented in mass media to discuss their sexual identities and practices. At the very least, users should get the final say about whether they want to see such
speech in their timelines.
Overly Restrictive Rules Attract Trolls
Is Facebook now a sex-free zone? Should we be afraid of meeting potential partners on the platform or even disclosing our sexual orientations?
Maybe not. For many users, life on Facebook might continue as it always has. But therein lies the problem: the new rules put a substantial portion of Facebook users in danger of violating them. Fundamentally, that's not how platform moderation policies should work--with such broadly sweeping rules, online trolls can take advantage of reporting mechanisms to punish groups they don't like.
Combined with opaque and one-sided flagging and reporting systems, overly restrictive rules can incentivize abuse from bullies and other bad actors. It's not just individual trolls either: state actors have systematically abused Facebook's flagging process to censor political enemies. With these new rules, organizing that type of attack just became a lot easier. A few reports can drag a user into Facebook's labyrinthine enforcement regime, which can result in having a group page deactivated or even being banned from Facebook entirely. This process gives the user no meaningful opportunity to appeal a bad decision.
Given the rules' focus on sexual interests and activities, it's easy to imagine who would be the easiest targets: sex workers (including those who work lawfully), members of the LGBTQ community, and others who congregate online to discuss issues
relating to sex. What makes the policy so dangerous to those communities is that it forbids the very things they gather online to discuss.
Even before the recent changes at Facebook and Tumblr, we'd seen trolls exploit similar policies to target the LGBTQ community and censor sexual health resources. Entire harassment campaigns have organized to use payment processors' reporting systems to cut off sex workers' income. When online platforms adopt moderation policies and reporting processes, it's essential that they consider how those policies and systems might be weaponized against marginalized groups.
A recent Verge article quotes a Facebook representative as saying that people sharing sensitive information in private Facebook groups will be safe, since Facebook relies on reports from users. If there are no tattle-tales in your group, the reasoning goes, then you can speak freely without fear of punishment. But that assurance rings rather hollow: in today's world of online bullying and brigading, there's no question of if your private group will be infiltrated by trolls; it's a question of when.
Did SESTA/FOSTA Inspire Facebook's Policy Change?
The rule change comes a few months after Congress passed the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act (SESTA/FOSTA), and it's hard not to wonder if the policy is the direct result of
the new Internet censorship laws.
SESTA/FOSTA opened online platforms to new criminal and civil liability at the state and federal levels for their users' activities. While ostensibly targeted at online sex trafficking, SESTA/FOSTA also made it a crime for a platform to
"promote or facilitate the prostitution of another person." The law effectively blurred the distinction between adult, consensual sex work and sex trafficking. The bill's supporters argued that forcing platforms to clamp down on all
sex work was the only way to curb trafficking--never mind the growing chorus of trafficking experts arguing the very opposite.
As SESTA/FOSTA was debated in Congress, we repeatedly pointed out that online platforms would have little choice but to over-censor: the fear of liability would force them not just to stop at sex trafficking or even sex work, but to take much more restrictive approaches to sex and sexuality in general, even in the absence of any commercial transaction. In EFF's ongoing legal challenge to SESTA/FOSTA, we argue that the law unconstitutionally silences lawful speech online.
While we don't know if the Facebook policy change came as a response to SESTA/FOSTA, it is a perfect example of what we feared would happen: platforms would decide that the only way to avoid liability is to ban a vast range of discussions of sex.
Wrongheaded as it is, the new rule should come as no surprise. After all, Facebook endorsed SESTA/FOSTA. Whether or not one caused the other, both reflect the same vision of how the Internet should work--a place where certain
topics simply cannot be discussed. Like SESTA/FOSTA, Facebook's rule change might have been made to fight online sexual exploitation. But like SESTA/FOSTA, it will do nothing but push innocent people offline.
Facebook has been fined €10m (£8.9m) by Italian authorities for misleading users over its data practices.
The two fines issued by Italy's competition watchdog are some of the largest levied against the social media company for data misuse.
The Italian regulator found that Facebook had breached the country's consumer code by:
Misleading users in the sign-up process about the extent to which the data they provide would be used for commercial purposes.
Emphasising only the free nature of the service, without informing users of the "profitable ends that underlie the provision of the social network", and so encouraging them to make a decision of a commercial nature that they would not
have taken if they were in full possession of the facts.
Forcing an "aggressive practice" on registered users by transmitting their data from Facebook to third parties, and vice versa, for commercial purposes.
The company was specifically criticised for the default setting of the Facebook Platform services, which in the words of the regulator, prepares the transmission of user data to individual websites/apps without express consent from users.
Although users can disable the platform, the regulator found that its opt-out nature did not provide a fully free choice.
The authority has also directed Facebook to publish an apology to users on its website and on its app.
Image hosting service Tumblr is banning all adult images of sex and nudity from 17th December 2018. This seems to have been sparked by the app being banned from Apple's App Store after a child porn image was detected being hosted by Tumblr. Tumblr
explained the censorship process in a blog post:
Starting Dec 17, adult content will not be allowed on Tumblr, regardless of how old you are. You can read more about what kinds of content are not allowed on Tumblr in our Community Guidelines. If you spot a post that you don't think belongs on
Tumblr, period, you can report it: From the dashboard or in search results, tap or click the share menu (paper airplane) at the bottom of the post, and hit Report.
Adult content primarily includes photos, videos, or GIFs that show real-life human genitals or female-presenting nipples, and any content, including photos, videos, GIFs and illustrations, that depicts sex acts.
Examples of exceptions that are still permitted are exposed female-presenting nipples in connection with breastfeeding, birth or after-birth moments, and health-related situations, such as post-mastectomy or gender confirmation surgery. Written content such as erotica, nudity related to political or newsworthy speech, and nudity found in art, such as sculptures and illustrations, can also still be freely posted on Tumblr.
Any images identified as adult will be set as unviewable by anyone except the poster. There will be an appeals process to contest decisions held to be incorrect.
Inevitably Tumblr algorithms are not exactly accurate when it comes to detecting sex and nudity. The Guardian noted that ballet dancers, superheroes and a picture of Christ have all fallen foul of Tumblr's new pornography ban, after the images
were flagged up as explicit content by the blogging site's artificial intelligence (AI) tools.
The actor and Tumblr user Wil Wheaton posted one example:
An image search for beautiful men kissing, which was flagged as explicit within 30 seconds of me posting it.
These images are not explicit. These pictures show two adults, engaging in consensual kissing. That's it. It isn't violent, it isn't pornographic. It's literally just two adult humans sharing a kiss.
Other users chronicled flagged posts, including historical images of (clothed) women of colour, a photoset of the actor Sebastian Stan wearing a selection of suits with no socks on, an oil painting of Christ wearing a loincloth, a still of ballet
dancers and a drawing of Wonder Woman carrying fellow superhero Harley Quinn. None of the images violate Tumblr's stated policy.
Tumblr, after years of being a space for nsfw artists to reach a community of like-minded individuals to enjoy their work, has decided to close their metaphorical doors to adult content.
Solution: Stop it. Let people post porn, it's 90% of the reason anybody is on the site in the first place. Or, if you really want a non-18+ Tumblr, start a new one with that specific goal in mind. Don't rip down what people have spent years building.
The Free Speech coalition [representing the US adult trade] released the following statement regarding the recent announcement about censorship at Tumblr:
The social media platform Tumblr has announced that on December 17, it will effectively ban all adult content. Tumblr follows the lead of Facebook, Instagram, YouTube and other social media platforms, who over the past few years have meticulously
scrubbed their corners of the internet of adult content, sex, and sexuality, in the name of brand protection and child protection.
While some in the adult industry may cheer the end of Tumblr as a never-ending source of free content, specifically pirated content, it is concerning that of the major social media platforms, only Twitter and Reddit remain in any way tolerant of
adult workers -- and there are doubts as to how much longer that will last.
As legitimate platforms ban or censor adult content -- having initially benefited from traffic that adult content brought them -- illegitimate platforms for distribution take their place. The closure of Tumblr only means more piracy, more
dispersal of community, and more suffering for adult producers and performers.
Free Speech Coalition was founded to fight government censorship -- set raids and FBI entrapment, bank seizures and jail terms. The internet gave us freedom from much that had plagued us, particularly local ordinances and overzealous prosecutors.
But now, when corporate censors suspend your account, the only choice is to abandon the platform -- there is no opportunity for arbitration or appeal.
When companies like Google and Facebook (and subsidiaries like YouTube and Instagram) control over 70% of all web traffic, adult companies are denied a market as effectively as a state-level sex toy ban. And when sites like Tumblr and Twitter can
close an account with millions of followers without warning, the effect is the same on a business -- particularly a small, performer-run one -- as an FBI seizure.
As social media companies become more powerful, we must demand recourse, but we also must look beyond our industry and continue to build alliances -- with women, with LGBTQ groups, with sex workers and sex educators, with artists -- who
implicitly understand the devastating effect of this new form of censorship.
These communities have seen the devastation wreaked when platforms use purges of adult content as a sledgehammer, broadly banning sexual health information, vibrant communities based around non-normative genders and sexualities, resources for sex
workers, and political and cultural commentary that engages with such topics.
The loss of these platforms isn't just about business, it's about the loss of vital communities and education -- and organizing. We use these platforms not only to grow our reach, but to communicate with one another, to rally, to drive awareness
of issues of sex and sexuality. They have become a central source of power. And today, we're one step closer to losing that as well.
Poland stands up to the EU to champion the livelihoods of thousands of Europeans against a disgraceful EU that wants to grant large, mostly American, companies dictatorial copyright control of the internet.
In 2011, Europeans rose up over ACTA , the misleadingly named "Anti-Counterfeiting Trade Agreement," which created broad surveillance and censorship regimes for the internet. They were successful in large part thanks to the Polish
activists who thronged the streets to reject the plan, which had been hatched and exported by the US Trade Representative.
The Poles aren't having any of it:
a broad coalition of Poles from the left and the right have come together to oppose the new Directive, dubbing it "ACTA2," which should give you an idea of how they feel about the matter.
There are now enough national governments opposed to the Directive to constitute a "blocking minority" that could stop it dead. Alas, the opposition is divided on whether to reform the offending parts of the Directive, or eliminate them
outright (this division is why the Directive squeaked through the last vote, in September), and unless they can work together, the Directive still may proceed.
A massive coalition of 15,000 Polish creators whose videos, photos and text are enjoyed by over 20,000,000 Poles have signed an open letter supporting the idea of a strong, creator-focused copyright and rejecting the new Copyright Directive as a
direct path to censoring filters that will deprive them of their livelihoods.
The coalition points out that online media is critical to the lives of everyday Poles for purposes that have nothing to do with the entertainment industry: education, the continuation of Polish culture, and connections to the global Polish diaspora.
Polish civil society and its ruling political party are united in opposing ACTA2; Polish President Andrzej Duda has vowed to oppose it.
Early next month, the Polish Internet Governance Forum will host a roundtable on the question; they have invited proponents of the Directive to attend and publicly debate the issue.
The Daily Mail reports on large-scale harvesting of your data and notes that PayPal has been passing on passport photos used for account verification to Microsoft for its facial recognition database.
Parliament's fake news inquiry has published a cache of seized Facebook documents including internal emails sent between Mark Zuckerberg and the social network's staff. The emails were obtained from the chief of a software firm that is suing the
tech giant. About 250 pages have been published, some of which are marked highly confidential.
Facebook had objected to their release.
Damian Collins MP, the chair of the parliamentary committee involved, highlighted several key issues in an introductory note. He wrote that:
Facebook allowed some companies to maintain "full access" to users' friends data even after announcing changes to its platform in 2014/2015 to limit what developers could see. "It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted," Mr Collins wrote
Facebook had been aware that an update to its Android app that let it collect records of users' calls and texts would be controversial. "To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this
was one of the underlying features," Mr Collins wrote
Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat
There was evidence that Facebook's refusal to share data with some apps caused them to fail
There had been much discussion of the financial value of providing access to friends' data
In response, Facebook has said that the documents had been presented in a very misleading manner and required additional context.
Mastercard and Microsoft are collaborating in an identity management system that promises to remember users' identity verification and passwords between sites and services.
Mastercard highlights four particular areas of use: financial services, commerce, government services, and digital services (eg social media, music streaming services and rideshare apps). This means the system would let users manage their data
across both websites and real-world services.
However, the inclusion of government services is an eyebrow-raising one. Microsoft and Mastercard's system could link personal information including taxes, voting status and criminal record, with consumer services like social media accounts,
online shopping history and bank accounts.
As well as the stifling level of tailored advertising you'd receive if the system knew everything you did, this sets the dangerous precedent for every byte of users' information to be stored under one roof -- perfect for an opportunistic hacker
or businessman. Mastercard mentions it is working closely with players like Microsoft, indicating that many businesses will have access to the data.
Neither Microsoft nor Mastercard has slated a release date for the system, only promising that additional details on these efforts will be shared in the coming months.
Defending equal access to the free and open internet is core to Reddit's ideals, and something that redditors have told us time and again they hold dear too, from the SOPA/PIPA battle to the fight for Net Neutrality. This is why even though
we are an American company with a user base primarily in the United States, we've nevertheless spent a lot of time this year
warning about how an overbroad EU Copyright Directive could restrict Europeans' equal access to the open Internet--and to Reddit.
Despite these warnings, it seems that EU lawmakers still don't fully appreciate the law's potential impact, especially on small and medium-sized companies like Reddit. So we're stepping things up to draw attention to the problem. Users in the EU
will notice that when they access Reddit via desktop, they are greeted by a modal informing them about the Copyright Directive and referring them to
detailed resources on proposed fixes .
The problem with the Directive lies in Articles 11 (link licensing fees) and 13 (copyright filter requirements), which set sweeping, vague requirements that create enormous liability for platforms like ours. These requirements eliminate the
previous safe harbors that allowed us the leeway to give users the benefit of the doubt when they shared content. But under the new Directive, activity that is core to Reddit, like sharing links to news articles, or the use of existing content
for creative new purposes (r/photoshopbattles, anyone?) would suddenly become questionable under the law, and it is not clear right now that there are feasible mitigating actions that we could take while preserving core site functionality. Even
worse, smaller but similar attempts in various countries in Europe in the past have shown that
such efforts have actually harmed publishers and creators .
Accordingly, we hope that today's action will drive the point home that there are grave problems with Articles 11 and 13, and that the current trilogue negotiations will choose to remove both entirely. Barring that, however, we have a number of
suggestions for ways to improve both proposals. Engine and the Copia Institute have compiled them at https://dontwreckthe.net/. We hope you will read them and consider calling your Member of European Parliament (look yours up here). We also hope that EU lawmakers will listen to those who use and understand the internet the most, and reconsider these problematic articles. Protecting rights holders need not come at the cost of silencing Europeans.
Chinese internet companies have started keeping detailed records of their users' personal information and online activity. The new rules from China's internet censor went into effect Friday.
The new requirements apply to any company that provides online services which can influence public opinion or mobilize the public to engage in specific activities, according to a notice posted on the Cyber Administration of China's website.
Citing the need to safeguard national security and social order, the Chinese internet censor said companies must be able to verify users' identities and keep records of key information such as call logs, chat logs, times of activity and network addresses.
Officials will carry out inspections of companies' operations to ensure compliance. But the Cyber Administration didn't make clear under what circumstances the companies might be required to hand over logs to authorities.
Morality in Media (now calling themselves the National Center on Sexual Exploitation) writes:
This Friday, Netflix will begin streaming a new show, Baby .
Based loosely on the account of the Baby Squillo scandal, the show portrays a group of teenagers entering into prostitution as a glamorized coming-of-age story. Under international and U.S. federal law, anyone engaged in commercial sex who is
under 18 years old is by definition a sex trafficking victim. In the real-life scandal that Baby is based on, the mother of one of the teenagers was arrested for sex trafficking.
In January, the National Center on Sexual Exploitation, along with 55 other survivors of sex trafficking and/or subject matter experts, social service providers, and advocates for the abolition of sexual exploitation sent a letter to Netflix
executives to express their deep concern regarding Netflix's forthcoming Italian drama, Baby, which normalizes child sexual abuse and the sex trafficking of minors as prostitution.
Despite being at ground zero of the #MeToo movement, Netflix appears to have gone completely tone-deaf on the realities of sexual exploitation, said Dawn Hawkins, executive director of the National Center on Sexual Exploitation. Despite the
outcry from survivors of sex trafficking, subject matter experts, and social service providers, Netflix promotes sex trafficking by insisting on streaming Baby. Clearly, Netflix is prioritizing profits over victims of abuse.
Erik Barmack, VP of International Originals at Netflix, has previously described the new show as edgy.
There is absolutely nothing edgy about the sexual exploitation of minors. This show glamorizes sexual abuse and trivializes the experience of countless underage women and men who have suffered through sex trafficking.
Banning porn sites with strict terms of service is a disservice to the people of India and will only lead people to go to risky porn sites that may contain illegal content. By Corey Price, VP of Pornhub
We are Google employees. Google must drop Dragonfly.
We are Google employees and we join Amnesty International in calling on Google to cancel project Dragonfly, Google's effort to create a censored search engine for the Chinese market that enables state surveillance.
We are among thousands of employees who have raised our voices for months. International human rights organizations and investigative reporters have also sounded the alarm, emphasizing serious human rights concerns and repeatedly calling on
Google to cancel the project. So far, our leadership's response has been unsatisfactory.
Our opposition to Dragonfly is not about China: we object to technologies that aid the powerful in oppressing the vulnerable, wherever they may be. The Chinese government certainly isn't alone in its readiness to stifle freedom of expression, and
to use surveillance to repress dissent. Dragonfly in China would establish a dangerous precedent at a volatile political moment, one that would make it harder for Google to deny other countries similar concessions.
Our company's decision comes as the Chinese government is openly expanding its surveillance powers and tools of population control. Many of these rely on advanced technologies, and combine online activity, personal records, and mass monitoring to
track and profile citizens. Reports are already showing who bears the cost, including Uyghurs, women's rights advocates, and students. Providing the Chinese government with ready access to user data, as required by Chinese law, would make Google
complicit in oppression and human rights abuses.
Dragonfly would also enable censorship and government-directed disinformation, and destabilize the ground truth on which popular deliberation and dissent rely. Given the Chinese government's reported suppression of dissident voices, such controls
would likely be used to silence marginalized people, and favor information that promotes government interests.
Many of us accepted employment at Google with the company's values in mind, including its previous position on Chinese censorship and surveillance, and an understanding that Google was a company willing to place its values above its profits.
After a year of disappointments including Project Maven, Dragonfly, and Google's support for abusers, we no longer believe this is the case. This is why we're taking a stand.
We join with Amnesty International in demanding that Google cancel Dragonfly. We also demand that leadership commit to transparency, clear communication, and real accountability. Google is too powerful not to be held accountable. We deserve to
know what we're building and we deserve a say in these significant decisions.
The Australian Parliament has passed controversial amendments to copyright law. There will now be a tightened site-blocking regime that will tackle mirrors and proxies more effectively, restrict the appearance of blocked sites in Google
search, and introduce the possibility of blocking dual-use cyberlocker type sites.
Section 115a of Australia's Copyright Act allows copyright holders to apply for injunctions to force ISPs to prevent subscribers from accessing pirate sites. While rightsholders say that it's been effective to a point, they have lobbied hard for loopholes in the regime to be closed.
The resulting Copyright Amendment (Online Infringement) Bill 2018 contained proposals to close the loopholes. After receiving endorsement from the Senate earlier this week, the legislation was today approved by Parliament.
Once the legislation comes into force, proxy and mirror sites that appear after an injunction against a pirate site has been granted can be blocked by ISPs without the parties having to return to court. Assurances have been given, however, that
the court will retain some oversight.
Search engines, such as Google and Bing, will also be affected. Accused of providing backdoor access to sites that have already been blocked, search providers will now have to remove or demote links to overseas-based infringing sites, along with
their proxies and mirrors.
The Australian Government will review the effectiveness of the new amendments in two years' time.
Russia's state censors have formally accused Google of breaking the law by not removing links to websites that are banned in the country.
Roskomnadzor, the state communications censor, said in a statement that the company had not connected to a database of banned sources in the country, leaving it out of compliance.
The potential penalty that Google could face is currently 700,000 roubles, or about $10,000. But Reuters reports that the Russian government has been considering more drastic actions, including fining companies up to 1 percent of annual revenue
for failing to comply with similar laws.
Deadnaming and misgendering could now get you a suspension from Twitter as it looks to shore up its safeguarding policy for people in the protected transgender category.
Twitter's recently updated censorship policy now reads:
Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone
We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.
According to the Oxford English Dictionary, to misgender is to:
Refer to (someone, especially a transgender person) using a word, especially a pronoun or form of address, that does not correctly reflect the gender with which they identify.
According to thegayuk.com:
Deadnaming is when a person refers to someone by a previous name; it could be done with malice or by accident. It mostly affects transgender people who have changed their name during their transition.
The Internet Watch Foundation (IWF) calls on the European Commission to reconsider proposed legislation on E-Privacy. This is important because if the proposal is enshrined in law, it will potentially have a direct impact on the tech companies'
ability to scan their networks for illegal online child sexual abuse images and videos.
Under Article 5 of the proposed E-Privacy legislation, people would have more control over their personal data. As currently drafted, Article 5 proposes that tech companies would require the consent of the end user (for example, the person receiving an email or message) to scan their networks for known child sexual abuse content. Put simply, this would mean that unless an offender agreed for their communications to be scanned, technology companies would no longer be able to do so.
Susie Hargreaves of the IWF says:
At a time when the IWF is taking down more images and videos of child sexual abuse, we are deeply concerned by this move. Essentially, this proposed new law could put the privacy rights of offenders ahead of the rights of children - children who have been unfortunate enough to be the victim of child sexual abuse and who have had the imagery of their suffering shared online.
We believe that tech companies' ability to scan their networks, using PhotoDNA and other forms of technology, for known child sexual abuse content, is vital to the battle to rid the internet of this disturbing material.
It is remarkable that the EU is pursuing this particular detail in new legislation, which would effectively enhance the rights of possible 'offenders', at a time when the UK Home Secretary is calling on tech companies to do more to protect children from these crimes. The only way to stop this ill-considered action is for national governments to call for amendments to the legislation, before it's too late. This is what is in the best interests of the child victims of this crime.