Facebook writes about its first transparency report
Transparency and trust are core values at Facebook. We strive to embody them in all aspects of our services, including our approach to responding to government data requests. We want to make sure that the people who use our
service understand the nature and extent of the requests we receive and the strict policies and processes we have in place to handle them.
We are pleased to release our first Global Government Requests Report, which details the following:
Which countries requested information from Facebook about our users
The number of requests received from each of those countries
The number of users/user accounts specified in those requests
The percentage of these requests in which we were required by law to disclose at least some data
The report covers the first 6 months of 2013, ending June 30.
As we have made clear in recent weeks, we have stringent processes in place to handle all government data requests. We believe this process protects the data of the people who use our service, and requires governments to
meet a very high legal bar with each individual request in order to receive any information about any of our users. We scrutinize each request for legal sufficiency under our terms and the strict letter of the law, and require a detailed
description of the legal and factual bases for each request. We fight many of these requests, pushing back when we find legal deficiencies and narrowing the scope of overly broad or vague requests. When we are required to comply with a particular
request, we frequently share only basic user information, such as name.
[Table: Data Requests (for countries with 50 or more requests), listing the number of requests, the users/user accounts specified, and the proportion actioned at least in part; figures quoted include ranges of 11,000 - 12,000 and 20,000 - 21,000]
Comment: Why do we find out more about British snooping in snippets published by American companies than we do from the British authorities themselves?
It is absurd that we learn more about Government surveillance from Microsoft, Google and Facebook than from our own authorities. These figures were never mentioned during the Parliamentary debate on the draft communications data bill,
nor in the annual report of the Interception of Communications Commissioner.
It is particularly concerning that 32% of requests did not result in any data being provided, yet in theory these requests had been signed off as necessary and proportionate by the police force making the request. This should be addressed
by the Interception Commissioner and we will be writing to him to make this argument. It also highlights the ongoing questions about the skill base within the police to understand the data that is available -- far, far more than ever before.
What we do not know from these figures is how many requests were made through the Mutual Legal Assistance Treaty process (which involves a formal legal request being made through the US legal system) and how many were voluntarily complied with by
Facebook. This is also the case with other companies.
Ultimately, it should not be for US companies to be the ones publishing data on how our own police forces are using these powers. It is impossible to have a realistic debate about capability gaps and how powers are being used if we do not
have the data, and the Government should be far more proactive in publishing information.
Lawmakers in California are currently debating a bill targeting the posting of so-called revenge porn, where compromising pictures are posted after a relationship has broken up.
The bill would make it a crime to post pictures of anyone in a state of full or partial undress, even if the picture was originally taken with that person's consent. But a crime would only be committed if the pictures were posted with the intent to cause serious emotional distress, and the other person actually suffered serious emotional distress.
The bill reads:
This bill would provide that any person who photographs or records by any means the image of another, identifiable person with his or her consent who is in a state of full or partial undress in any area in which the person being
photographed or recorded has a reasonable expectation of privacy, and subsequently distributes the image taken, with the intent to cause serious emotional distress, and the other person suffers serious emotional distress would constitute
disorderly conduct subject to that same punishment.
The law has been passed by the state's Senate and is now under consideration by the State Assembly.
If convicted, offenders could be fined up to $2,000, face a month in prison, or both, according to the BBC. More severe penalties would follow if more offences were proven.
The bill has been opposed by anti-censorship groups who argue its definition is too broad in the context of the US constitution.
Saeed Malekpour, an Iranian programmer residing in Canada, was sentenced to death in Iran for designing porn websites. This death sentence has now been commuted to life imprisonment, with the explanation that Malekpour had repented.
As AVN reported in January of last year, Malekpour was detained by Iranian authorities in 2008 after traveling back home to visit his father, who was ill. He was charged with helping design porn sites in Canada, and despite appeals from the
Canadian government, his family and others that he was innocent of the charges, he was convicted and sentenced to death in December 2010.
Iran had accused Malekpour of being the head of the biggest Persian-language network of pornographic websites. But credible supporters said he had simply worked as a freelance website developer and programmer, creating a program to allow
designers to upload photos to their websites.
The head of Russia's Federal Security Service (FSB) has personally ordered preparations for laws that would block the Tor anonymity network from the entire Russian sector of the Internet.
FSB director Aleksandr Bortnikov announced the initiative at a recent session of the National Anti-Terrorism Committee, saying that his agency would develop the legislative drafts together with other Russian law enforcement and security bodies.
The FSB official said that the agency initiated the move because internet anonymizers are used by weapons traffickers, drug dealers and credit card fraudsters.
At the same time, an unnamed source told the newspaper that not all Russian security specialists welcomed the idea, as various criminals often overestimated the protection provided by the Undernet, acted recklessly and allowed themselves to get
caught. The blocking would require the development of some new methods of search and control in new anonymity networks that would appear soon after the Russian audience loses access to existing ones, the source noted.
Lower House MP Ilya Kostunov noted that the problem was important but doubted that it was technically executable. 'As far as I know, it is impossible to block Tor,' Kostunov said. 'The network re-tunes quickly, switches to different hubs and starts working again.'
The Tor Project administration also said that the blocking of the system was extremely difficult, adding that even Tor's own specialists could not control the information flowing through their servers or identify users.
Egypt's judiciary has turned down a court case calling for a ban on internet pornography websites.
The case was filed by lawyer Ibrahim Atteya, citing article two of the now suspended 2012 constitution. The article stated that the principles of Islamic Sharia are the main source of legislation. Atteya claimed that internet websites which spread indecency
do not comply with Islamic Sharia.
The case was turned down since Atteya failed to abide by proper procedure when filing his original requests to cancel the decision.
Hassan Azhary, lawyer at the Association for Freedom of Thought and Expression (AFTE), had argued that, technically speaking, an internet pornography ban is almost impossible. He explained that the ban would be very costly; it could cost millions of Egyptian pounds. He added that it is very difficult to list the names of all pornography websites. Azhary also said there are some programs which can open banned websites: A ban would be a waste of public money.
A week after we met Naked Breast-feeding Yoga Mom, the hippie-fabulous, clothing-optional earth mama has been censored by Instagram.
Baby Center blogger Sara McGinnis reports that Amy says all of her photos are gone and her account has been locked with no explanation from the image-sharing site.
McGinnis believes Amy's @DaughterOfTheSun account was banned after curious folks worldwide saw the 'breastfeeding yoga mom' photo and objected. Instagram supposedly allows breastfeeding but bans nudity.
There is now a hashtag opposing Instagram censorship, #SaveDaughterOfTheSun.
Groklaw, a respected legal analysis website, has ceased publication out of concern over inadequate privacy for its users due to US email surveillance.
The website shutdown comes just weeks after two providers of secure email, Lavabit and Silent Circle, opted to discontinue their services.
Groklaw founder and editor Pamela Jones said she cannot continue to operate her community-based website, which often relies on confidential tips, without some degree of email privacy.
Jones cited Lavabit founder Ladar Levison's observation that if we knew what he knew about email, we wouldn't use it either. Lavabit previously offered email privacy but appears to have been forced to close by the US authorities.
Surveillance comes with an associated cost: it drives businesses away from the United States. The Information Technology and Innovation Foundation, a technology think tank, estimates that U.S. cloud service providers, unable to assure privacy,
could lose between $22 billion and $35 billion to competitors in Europe over the next three years.
But that rather assumes that Europe doesn't operate equally invasive internet snooping.
Social networking site Ask.fm has unveiled changes to make its site safer after recent online bullying cases.
It said it would view all reports within 24 hours, make the report button more visible, and include bullying and harassment as a category for a report. It said some of the changes would be live on the site by September.
Ask.fm said it would:
Hire more staff, including a safety officer, to moderate comments on the site
Create a bullying/harassment category for reported comments, alongside 'spam or scam', 'hate speech', 'violence' and 'pornographic content'
Raise the visibility of a function to opt out of receiving anonymous questions
Limit the number of features unregistered users are able to access, and
Require an email address upon sign-up for registered users
The UK Safer Internet Centre, which promotes the safe use of technology, said it was delighted by Ask.fm's proposed changes, and added that the increased visibility of the anonymous opt-out option was an important development. 'We strongly advise users, especially children, to switch off anonymous questions, and to report any abuse they see on the site,' the group said.
The web inevitably makes available some content which is unsuitable or inappropriate for children to access. Some of this will be illegal, but much more will not, or may be suitable say for over 13s or over 16s only. A traffic light system may
therefore struggle to distinguish between these and runs the risk of imposing the strictest warning on masses of content by default.
A greater concern however, is how the new system will guard against becoming a tool to enable prejudices of one kind or another to be played out. The system can only operate if it is the crowd's decision which counts - the reason this is even
being considered is because there is too much content for a regulator or platform to consider. Relying on the crowd assumes that a collective consciousness emerges from the great mass of web users and their shared values, rather than a set of
subjective reactions. This is a dangerous assumption. As a recent MIT study reported in Science suggests, the wisdom of the crowd may be a myth, its mentality more akin to that of a mob or herd.
In the UK, rightsholders have the power to demand arbitrary censorship of websites they dislike, and ISPs are required to block those sites. The Premier League carelessly added the IP address of a major web-host to its censorship list, and as a
result blocked The Radio Times, Galaxy Zoo, and many other legitimate sites.
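The mechanism behind this collateral damage is easy to sketch: such blocks are commonly applied at the IP level, and on shared hosting many unrelated sites sit behind one address. A minimal Python illustration follows; all domains and IP addresses are hypothetical (using reserved documentation ranges), not the Premier League's actual list.

```python
# Toy illustration of IP-level blocking on shared hosting.
# All domains and IP addresses below are hypothetical (reserved
# documentation ranges); this is not the Premier League's actual list.

dns = {
    "infringing-streams.example": "203.0.113.7",
    "radiotimes.example":         "203.0.113.7",  # innocent site, same host
    "galaxyzoo.example":          "203.0.113.7",  # innocent site, same host
    "unrelated.example":          "198.51.100.9",
}

blocked_ips = {"203.0.113.7"}  # the single address added to the blocklist

def is_blocked(domain: str) -> bool:
    """A site becomes unreachable if its host's IP is on the blocklist."""
    return dns[domain] in blocked_ips

# Every innocent site sharing the blocked host's address goes down too.
collateral = [d for d in dns
              if is_blocked(d) and d != "infringing-streams.example"]
print(collateral)
```

One carelessly listed address is enough to silence every tenant of the host, which is exactly what happened here.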
People who tried to visit those sites instead saw a warning saying that the sites were devoted to copyright infringement and that anyone visiting them was also infringing copyright.
ISPs were flooded with complaints, and began to unblock the sites themselves.
But the Premier League is outraged at this. They say that even if the Premier League censored the wrong sites, it isn't up to the ISPs to uncensor them; the ISPs are supposed to comply with the lists they get from rightsholders, no questions asked.
People sending email to any of Google's 425 million Gmail users have no reasonable expectation that their communications are confidential, the internet giant has said in a court filing.
The advocacy group Consumer Watchdog uncovered the filing and called the revelation a stunning admission. John Simpson, Consumer Watchdog's privacy project director said:
Google has finally admitted they don't respect privacy. People should take them at their word; if you care about your email correspondents' privacy, don't use Gmail.
Google set out its case last month in an attempt to dismiss a class action lawsuit that accuses the tech giant of breaking wire tap laws when it scans emails sent from non-Google accounts in order to target ads to Gmail users. That suit quotes
Eric Schmidt, Google's executive chairman:
Google policy is to get right up to the creepy line and not cross it.
The filing continues that Google:
Unlawfully opens up, reads, and acquires the content of people's private email messages. Unbeknown to millions of people, on a daily basis and for years, Google has systematically and intentionally crossed the 'creepy line' to read private email
messages containing information you don't want anyone to know, and to acquire, collect, or mine valuable information from that mail.
A man using the British Library's wi-fi network was denied access to an online version of Shakespeare's Hamlet because the text contained 'violent content'.
Author Mark Forsyth was writing his book in the library, and needed to check a line from the famous play. He revealed on his blog that the filter had logged his attempt to access the page.
The British Library said the fault was caused by a newly installed wi-fi service from a third-party provider.
A spokesperson for the British Library said Hamlet had since been made accessible.
Internet filters have recently come under increased scrutiny, after the government announced that pornography will be automatically blocked by UK internet providers unless customers choose otherwise. In general, even a few words alluding to adult content can be enough to trigger a block. The software errs on the side of caution and unfairly blocks many websites.
And of course these companies show little concern about legitimate businesses that suffer as a result.
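To see why keyword-based filters overblock, consider a minimal sketch of naive keyword matching in Python. This is not any vendor's actual algorithm; the keyword list is invented for illustration.

```python
# A minimal sketch of naive keyword filtering, showing why such filters
# overblock. This is not any vendor's actual algorithm; the keyword list
# is invented for illustration.

BLOCKED_KEYWORDS = {"murder", "poison", "revenge"}  # crude 'violence' list

def flags_page(text: str) -> bool:
    """Flag a page if any blocked keyword appears as a word in the text."""
    words = set(text.lower().split())
    return bool(BLOCKED_KEYWORDS & words)

# Hamlet's ghost trips the filter on word matching alone...
print(flags_page("Revenge his foul and most unnatural murder"))  # True

# ...while context is ignored entirely: add 'underwear' to the list and a
# legitimate lingerie retailer is blocked just as readily.
print(flags_page("New season lingerie and underwear collection"))  # False
```

The filter has no notion of context, so Shakespeare and a smut site look identical to it.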
Prof Ross Anderson, a security expert at Cambridge University, told the BBC that internet filters were pointless and that it was completely inappropriate to have one in the British Library. He added:
Everything that is legal should be available over the library's wi-fi network. The only things they should block are the few dozen books against which there are court judgements in the UK. One of the functions of deposit libraries is to keep
everything, including smut.
Meanwhile one filter maker has a bit of a Gerald Ratner moment
Some customers of newly filtered ISPs are finding that porn is still getting through, but bona fide sites are being blocked. That's because filter algorithms struggle to distinguish between porn and legitimate sites, like lingerie retailers.
None of these systems are perfect, says George Anderson from online filtering security firm Webroot:
If you're an underwear site that's pretty close [to a porn site] and you get blocked because of this ban, that's going to cause issues.
Ministers want parents and teenagers themselves to be able to assign age ratings (which don't seem to align with actual ages) to videos and other content uploaded by internet users onto websites.
The government is working with ISPs and internet censors to test how crowd-sourcing such age ratings can work, The Telegraph has learnt.
David Cameron said last month that he lamented the lack of rules on age controls that apply to websites.
The government and industry has formed a working group, led by the British Board of Film Classification, to develop the new system.
Ministers accept that there is far too much material generated by users on their mobile phones or webcams for conventional censors such as the BBFC to be able to monitor.
However, a prototype has been designed, under which website users are asked to complete a simple questionnaire on the depiction of behaviour, drugs, horror, language, sex and violence for videos posted online.
The BBFC and Nicam, the Dutch media regulator, have designed a scheme which can be linked to online filters and which includes an alert feature allowing users to report content to the authorities.
A senior government source said ministers were supporting the developments, which would help protect children from potentially damaging and inappropriate material. The source said:
On YouTube at the moment people put comments but there is no way of crowd sourcing, where you can say whether you think this is an appropriate clip for 12 year olds or 14 year olds.
It is a bit strange therefore that the government is supporting a scheme with no obvious differentiation between 12 and 14 year olds. One option being considered would be a traffic light system. A video could be rated as green if it is safe for
all, amber if it requires parental guidance and red if it is suitable for adults only.
Green for videos suitable for young children and red for adults only are pretty straightforward, but the large gulf in between doesn't seem to make much sense. Most horror films are 15 rated and would be scary for younger children. How does one have a combined 12 and 15 rating? It could be said to be neither a 12 nor a 15. If it is vaguely called 'parental guidance', it then does not convey enough information for parents to know whether it is suitable for their 12 year olds.
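A hypothetical sketch of how such a questionnaire could be collapsed into a traffic-light rating. The categories follow those named in the article; the 0-2 scores, the thresholds and the worst-category rule are assumptions, not the actual BBFC/Nicam design.

```python
# Hypothetical sketch of collapsing the questionnaire into a traffic-light
# rating. The categories follow those named in the article; the 0-2 scores,
# thresholds and worst-category rule are assumptions, not the actual
# BBFC/Nicam design.

def traffic_light(answers: dict) -> str:
    """answers maps each content category to a 0-2 severity score."""
    worst = max(answers.values())
    if worst == 0:
        return "green"   # safe for all
    if worst == 1:
        return "amber"   # parental guidance
    return "red"         # adults only

clip = {"behaviour": 0, "drugs": 0, "horror": 1,
        "language": 1, "sex": 0, "violence": 0}
print(traffic_light(clip))  # amber
```

Any scheme of this shape collapses everything from mild peril to 15-style horror into the same amber band, which is exactly the information loss criticised above.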
Italian viewers will soon be able to make use of this international ratings tool. Italian media giant Mediaset will shortly begin trialling the rating tool for users of its 16MM website and television channel.
We will be monitoring the results of this pilot project closely. What we learn from this trial will help us as we work with other platforms to see how they might apply the tool.
Unfortunately, some of the negative effects of a report abuse button that people have feared seem already to have happened. Shortly after the function became available on certain smartphone apps, the @transphobes account, which retweets the kinds of violent threats and hate that trans women and men get online in order to raise awareness of bigotry against them, was suspended.
Whether it was targeted by a concerted campaign of opponents or automatically suspended without being reviewed by Twitter is unclear. It certainly raises the question: how easy is it to mistake awareness-raising for bigotry?
Within 48 hours the account was reinstated. But other similar accounts have also reported problems, with accounts devoted to revealing bigotry against sex workers and others also facing suspension in the past. However, the groups raising awareness of this kind of bigotry do not as yet attract mainstream notice, so who would notice when they disappear?
Prosecutors have charged Sedat Kapanoglu and 40 account holders with blasphemy against the religious values of a section of society over their entries on Ekşi Sözlük, a popular social media website in Turkey.
Those charged face prison sentences of six to 12 months.
Turkish blasphemy laws are framed with:
The purpose of protecting the sentiments of those who believe in God, religion, prophets, holy writings and sects. Individuals may certainly express their opinions and criticize certain aspects. However, they must do this while not hurting other
people's religious sentiments. Therefore, nobody has the right to damage the respect of others towards their sacred concepts.
Over 100 illegal websites have been shut down by Chinese authorities since early May. Many believe that the crackdown is aimed at independent watchdog sites in mainland China.
According to the State Internet Information Office, the 107 websites were shut down for failing to obtain official permission to establish and run sites, allegedly blackmailing government and corporate officials, and using terms such as 'China' and 'people' in their names.
However the Chinese authorities didn't mention the onerous expense and conditions that make it nearly impossible for small websites to actually obtain such permission.
For individuals or small groups wanting to start their own websites, these regulations create large, often insurmountable obstacles. Many do not have the resources to comply with government requests for content removal and user data, which can
easily become a full-time job for one or more people. Others are unable to obtain the costly business licenses needed to apply for an online content provider license.
To get around these bureaucratic procedures, some choose to affiliate themselves with established institutions or corporations so that they can register as a web-branch of a legitimate entity. Currently, there are many privately-run
websites registered as web-branches of established institutions. A crackdown on these web-branches would be disastrous.
A handful of sites on the crackdown list are indeed linked to corporate extortion. But most of the so-called blackmailing activities are citizen initiatives that uncover corruption of government officials and party members.
Websites that use terms such as 'people', 'China' and 'Chinese' to name themselves are considered fraudulent and thus deemed illegal. The Chinese authorities claimed that websites such as People's Voices, People's Shopping and People's News mislead the public, giving the false impression that these sites are affiliated with the Party's mouthpiece, the People's Daily.
Among the sites recently taken down are several devoted to citizen legal rights and anti-corruption efforts, including China Legal Rights Net, Xiaoxiang Anti-corruption Forum, Legal Rights Defense Net, China Legal System Monitor, People's Rights
Monitor, Legal Report, People's Petition, and many other similar organizations.
Ask.fm is a social networking site with 60 million users. It's a place where kids can gossip without adults around. But lately the playground atmosphere has turned nasty and it has been linked to a spate of teenage suicides.
Police in Thailand have opened investigations into four people for supposedly causing panic by posting rumours of a possible military coup on Facebook.
Such rumours are commonplace in Thailand and it would take more than a few articles on Facebook to create even a credible rumour. But of course authorities are prone to go over the top, and a police chief has threatened to charge anyone who even
liked the postings.
The move comes as Bangkok braces for possible political protests this week coinciding with a reconciliation bill related to the 2006 coup in the country.
Technology Crime Suppression division chief Police Maj. Gen. Pisit Paoin said that the four posted Facebook entries with false information that could damage the country. Among those accused are Sermsuk Kasitipradit, the political editor of public
television channel TPBS, and a local pro-government protest leader. The postings mentioned a possibility of a military coup and urged the public to hoard food and water. Pisit threatened:
Those who 'liked' and 'shared' the posts will also face charges, so we would like to ask the public to contemplate very carefully about the way they use social media.
More than 1,000 anti-government protesters kicked off a rally in Bangkok on Sunday as lawmakers were scheduled to deliberate on the controversial bill on Wednesday. Last week, the government invoked the Internal Security Act in three Bangkok
districts, citing the possibility of protest violence. The act, in effect from Aug. 1 - 10, authorizes officials to seal off roads, take action against security threats, impose curfews and ban the use of electronic devices in designated areas.
Peaceful and unarmed rallies are allowed under the law.
Britain hosts the third biggest volume of internet pornography in the world and is home to more than half a million sites. There are more than 52million pages of pornographic content in the country registered under the national domain .co.uk.
There are no restrictions on pornographers registering their sites under Britain's domain name, for which a private company called Nominet UK is responsible.
John Carr, an anti-porn campaigner acting as an adviser to the government on child internet safety, called on Nominet to ban websites containing certain words like 'rape' and said the free-for-all should end. He said that all porn sites
should be under the domain name .xxx and declared:
The UK should not provide succour and comfort to porn merchants. Nominet should have a policy that websites registered under the national domain name do not contain depraved or disgusting words. People should not be able to register websites
that bring disgrace to this country under the national domain name.
Ed Vaizey, the communications minister, is now writing to Nominet to ask what its plans are to prevent 'abusive behaviour'. He added that he took Carr's complaint 'extremely seriously'.
The evidence that Britain hosts more pornography than any other country apart from the US and Holland will be presented by a web analysis company called MetaCert this week. Apparently Britain hosts six times as many porn web pages as Germany in fourth place and ten times as many as France in fifth place. The US is home to nearly two-thirds of the world's pornography.
The boss of Twitter UK has said sorry to women who have experienced abuse on the social networking site. Tony Wang said the threats were simply not acceptable and pledged to do more to tackle abusive behaviour.
The apology came as Twitter updated its rules and confirmed it would introduce an in-tweet report abuse button on all platforms, including desktops.
In a series of tweets, Twitter UK general manager Wang said:
I personally apologize to the women who have experienced abuse on Twitter and for what they have gone through. The abuse they've received is simply not acceptable. It's not acceptable in the real world, and it's not acceptable on
Twitter. There is more we can and will be doing to protect our users against abuse. That is our commitment.
Twitter has clarified its guidance on abuse and spam, reiterating that users may not engage in 'targeted abuse or harassment'. The report abuse button already available on the iOS Twitter app and mobile site will also be rolled out to the main website and Android app from September, Twitter said. The bosses said in the blog that additional staff were being added to the teams that handle reports of abuse.
David Cameron's plan to protect children from obscene material online has been dismissed as absolutely ridiculous by Wikipedia co-founder Jimmy Wales. He said:
It's an absolutely ridiculous idea. It won't work. The software you would use to implement this doesn't work.
My view is that instead of spending literally billions of pounds, billions of dollars, snooping on ordinary people and gathering up all of this data in an apparently fruitless search for terrorists, we should devote a significant proportion of
that to dealing with the real criminal issues online - people stealing credit card numbers, hacking into websites and things like that.
Unfortunately we're not seeing a lot of that. We see a lot of flash and a lot of snooping. But this is, at the end of the day, going to take an investment in real, solid police work.
Wales said problems like online child abuse, hacking social media sites and abusive or threatening messages could be tackled without the introduction of new legislation.
Wales also spoke of the issue of abusive tweets. He suggested that Twitter should make it easier for users to report abuse, but rejected calls for tighter censorship of the social network. He said:
When you think about rules about verbal threats, human society has a long history of rules and laws around this, and those rules and laws are very well thought-out. They deal with complicated cases.
I do think that Twitter has needed in the past to do more to give people more control of the environment, to allow faster means for people to complain and to have people behaving badly exposed, blocked or arrested as necessary.
But it is not like we don't have a law against threatening people. We do, and people are quite rightly being called up on this.
The DCMS has published an official wide-ranging paper on internet and communications policy. Many of the censorship aspects have already been described by David Cameron in his recent speech. Here are a few paragraphs fleshing out some of the
proposed censorship ideas:
Material Promoting Terrorism
The Prime Minister has convened an Extremism Task Force which will be looking closely, in the coming months, at the role the communications industry can and should play in reducing the availability of material promoting terrorism online.
A watershed for internet TV
We want to ensure that the living room remains a safe space for children.
TV remains central to our lives, with people in the UK watching on average more than four hours of broadcast TV every day. Families still get together to sit around the television and watch the latest period drama, talent competition, or catch
the latest episode of their favourite soap.
But increasingly, set-top boxes and TVs connected to the internet enable programmes and films to be viewed on-demand, to fit viewing around our own schedules. These can fall outside of regulatory frameworks. People tend to consider connected TVs to be a TV-like experience and expect to be more protected than they are from content accessed through PCs and laptops. Yet the technology means that it is easy to flick between regulated and unregulated spaces. Since the distinction is not always clear, this increases the risk of people inadvertently accessing content that may be offensive, inappropriate, or harmful to children.
The technology is already available to enable people to be provided with more information about programmes, and for locks to be put in place to prevent post-watershed programmes from being viewed by children on-demand. But more needs to be done to make sure that these practices are adopted more widely, and to make sure that tools, like pin-protection, are straightforward and easy for people to use.
We also want it to be clear to people when they are watching TV in a protected, regulated space, and when they move with just a few clicks to an unregulated area of the internet. We want industry, broadcasters, manufacturers and platform
providers, to lead the development of consumer tools in this area, working with regulators to consider what mechanisms can be applied to clearly label regulated and unregulated content. One such mechanism may be, for example, using the electronic programme guide itself to define the protected space. We will work with industry to ensure that best practice is developed and can be shared and standardised. Given this is an area where we are seeing rapid developments, we will keep progress under close review, and if necessary, we will consider the case for legislation to ensure that audiences are protected to the level that they choose.
R18 on internet TV
The popularity of video-on-demand (VoD) services has grown dramatically in recent years, giving consumers great new choice about what to watch, and when and where to watch it. But with this new opportunity comes risk, particularly from harmful content that is now more readily available. In hard copy, content rated R18 by the British Board of Film Classification (BBFC) is only available in licensed sex shops, and content stronger still is banned outright. The VoD regulations in this area do not currently provide the same level of certainty and protection as on the high street. As on-demand services become increasingly prevalent, we want to make sure that regulation of on-demand content is as robust as regulation of content on a DVD, bringing the online world into line with the high street.
We will legislate to ensure that material that would be rated R18 by the British Board of Film Classification is put behind access controls on regulated services and we will ban outright content on regulated services that is illegal even in
licensed sex shops.
More Dangerous Pictures
We will also close a loophole in the Criminal Justice and Immigration Act 2008, so that it is a criminal offence to possess extreme pornography that depicts rape.
We are seeing good progress in this area:
Where children could be accessing the internet, we need good filters that are preselected to be on, and we need parents aware and engaged in the setting of those filters. By the end of this year, when someone sets up a new broadband account,
the settings to install family friendly filters will be automatically selected; if you just click next or enter, then the filters are automatically on.
By the end of next year ISPs will have prompted all existing customers to make an unavoidable decision about whether to apply family friendly filters.
Only adult account holders will be able to change these filters once applied.
All mobile phone operators will apply adult filters to their phones. [Does this allow adults to turn off the blocking?]
90% of public Wi-Fi will have family friendly filters applied wherever children are likely to be present.
Ofcom will regularly review the efficacy of these filters.
But we are clear that industry must go further:
We expect the smaller ISPs to follow the lead being set by the larger providers.
We want industry to continue to refine and improve their filters to ensure they do not, even unintentionally, filter out legitimate content.
We want to see mobile network operators develop their child safety services further; for example, filtering by handset rather than by contract would provide greater flexibility for parents as they work to keep their children safe online.
Paying for PC advert censorship
The UK benefits from a healthy and successful advertising sector, underpinned by an exemplar of successful self-regulation, the Advertising Standards Authority (ASA). The ASA administers a system which is flexible and responsive, and is industry funded through a 0.1% levy on non-broadcast advertising spend collected by the Advertising Standards Board of Finance (ASBOF). This levy is voluntary, but is well supported by industry. The relatively recent extension of the ASA's online remit to cover marketing on companies' own websites and on social media demonstrates the increasing importance of online advertising, and advertising spend is likely to focus increasingly on these online markets. Therefore, it will be important to ensure that this self-regulatory, industry-funded model remains sustainable for the future, and that the regulation of online and offline advertising alike can continue to be supported by the industry levy. Some concerns have been raised over the degree to which collection of the levy in the digital world has kept pace with the rate at which advertisers now operate there. We think it is incumbent upon all parts of the industry, including the digital media, to safeguard this continued funding by playing their part in the collection of the levy.