New York State is considering legislation that demands gun licence applicants hand over social media and Google passwords so that these accounts can be checked for political correctness
21st December 2018
See petition from actionnetwork.org
See bill from legislation.nysenate.gov
A bill was recently introduced to the New York State Senate by Senator Kevin Parker and Brooklyn Borough President Eric Adams that would require gun license applicants to hand over social media passwords, three years of social media history, and a year of search history for review by the State. Regardless of how you feel about gun rights, this is a clear violation of privacy; a request like this in any context is completely inappropriate and totally unconstitutional. Background checks are one thing, but the process outlined in this bill goes way too far. This isn't about gun rights; this is about privacy rights. The authorities intend to check that all licence applicants are totally politically correct. The relevant text of the bill reads:
In order to ascertain whether any social media account or search engine history of an applicant presents any good cause for the denial of a license, the investigating officer shall, after obtaining the applicant's consent pursuant to subdivision three of
this section, and obtaining any log-in name, password or other means for accessing a personal account, service, or electronic communications device necessary to review such applicant's social media accounts and search engine history, review an
applicant's social media accounts for the previous three years and search engine history for the previous year and investigate an applicant's posts or searches related to:
(i) commonly known profane slurs or biased language used to describe the race, color, national origin, ancestry, gender, religion, religious practice, age, disability or sexual orientation of a person;
(ii) threatening the health or safety of another person; (iii) an act of terrorism; or (iv) any other issue deemed necessary by the investigating officer.
For the purposes of this subdivision, "social media accounts" shall only include facebook, snapchat, twitter and instagram, and "search engine" shall only include google, yahoo and bing.
Security experts have long warned that it's extremely dangerous to give your password to anyone, including your local police department. It not only exposes you to unreasonably intrusive analysis, but also exposes private details of everyone you have ever communicated with online. If your friend wants to buy a gun, does that mean the police should get to read every message you've ever sent them? The best thing we can do is reject these ideas right now to prevent bad privacy practices from becoming normalized. It makes perfect sense to require background checks and other vetting before allowing someone to purchase a weapon, but setting any precedent that allows the government to demand social media passwords is extremely dangerous. If you care about privacy, and keeping a close eye on overreaching state power, please sign this petition and tell the NY State Senate that you oppose bill S9191. Sign the
petition from actionnetwork.org
19th December 2018
Turning Off Facebook Location Tracking Doesn't Stop It From Tracking Your Location
See article from gizmodo.com.au
14th December 2018
Amazon submits patent application for a doorbell that has a camera and facial recognition system
See article from aclu.org
Italian authorities have fined Facebook for their abuse of people's personal data
8th December 2018
See article from theguardian.com
Facebook has been fined €10m (£8.9m) by Italian authorities for misleading users over its data practices. The two fines issued by Italy's competition watchdog are some of the largest levied against the social media company for data misuse. The Italian regulator found that Facebook had breached the country's consumer code by:
- Misleading users in the sign-up process about the extent to which the data they provide would be used for commercial purposes.
- Emphasising only the free nature of the service, without informing users of the "profitable ends that
underlie the provision of the social network", and so encouraging them to make a decision of a commercial nature that they would not have taken if they were in full possession of the facts.
- Forcing an "aggressive practice" on
registered users by transmitting their data from Facebook to third parties, and vice versa, for commercial purposes.
The company was specifically criticised for the default setting of the Facebook Platform services, which in the words of the regulator, prepares the transmission of user data to individual websites/apps without express consent from users.
Although users can disable the platform, the regulator found that its opt-out nature did not provide a fully free choice. The authority has also directed Facebook to publish an apology to users on its website and on its app.
6th December 2018
The Daily Mail reports on large scale harvesting of your data and notes that PayPal has been passing on passport photos used for account verification to Microsoft for their facial recognition database
See article from dailymail.co.uk
Parliament publishes a set of enlightening emails about Facebook's pursuit of revenue and how it allows people's data to be used by app developers
5th December 2018
See article from bbc.co.uk
See Facebook emails [pdf] from parliament.uk
See Mark Zuckerberg's response on Facebook
Parliament's fake news inquiry has published a cache of seized Facebook documents including internal emails sent between Mark Zuckerberg and the social network's staff. The emails were obtained from the chief of a software firm that is suing the tech
giant. About 250 pages have been published, some of which are marked highly confidential. Facebook had objected to their release. Damian Collins MP, the chair of the parliamentary committee involved, highlighted several key issues in an
introductory note. He wrote that:
- Facebook allowed some companies to maintain "full access" to users' friends data even after announcing changes to its platform in 2014/2015 to limit what developers could see. "It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted," Mr Collins wrote
- Facebook had been aware that an update to its Android app that let it collect records of users' calls and texts would be controversial. "To mitigate any
bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features," Mr Collins wrote
- Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps
were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat
- there was evidence that Facebook's refusal to share data with some apps caused them to fail
- there
had been much discussion of the financial value of providing access to friends' data
In response, Facebook has said that the documents had been presented in a very misleading manner and required additional context. See Mark
Zuckerberg's response on Facebook
Offsite Analysis: New Documents Show That Facebook Has Never Deserved Your Trust 7th December 2018. See article from eff.org by
Bennett Cyphers and Gennie Gebhart
Mastercard and Microsoft get together to pool their data on you in the name of identity verification
5th December 2018
See article from alphr.com
Mastercard and Microsoft are collaborating in an identity management system that promises to remember users' identity verification and passwords between sites and services. Mastercard highlights four particular areas of use: financial services,
commerce, government services, and digital services (eg social media, music streaming services and rideshare apps). This means the system would let users manage their data across both websites and real-world services. However, the inclusion of
government services is an eyebrow-raising one. Microsoft and Mastercard's system could link personal information including taxes, voting status and criminal record, with consumer services like social media accounts, online shopping history and bank
accounts. As well as the stifling level of tailored advertising you'd receive if the system knew everything you did, this sets the dangerous precedent for every byte of users' information to be stored under one roof -- perfect for an opportunistic
hacker or businessman. Mastercard mentions it is working closely with players like Microsoft, showing that many businesses will have access to the data. Neither Microsoft nor Mastercard has slated a release date for the system, only promising that additional details on these efforts will be shared in the coming months.
The EFF is opposing the censorship of film stars' ages
29th November 2018
See article from eff.org
California is still trying to gag websites from sharing true, publicly available, newsworthy information about actors. While this effort is aimed at the admirable goal of fighting age discrimination in Hollywood, the law unconstitutionally punishes
publishers of truthful, newsworthy information and denies the public important information it needs to fully understand the very problem the state is trying to address. So we have once again filed a friend of the court brief opposing that effort.
The case, IMDb v. Becerra, challenges the constitutionality of California Civil Code section 1798.83.5, which requires "commercial online entertainment employment services providers" to remove an actor's date of birth or
other age information from their websites upon request. The purported purpose of the law is to prevent age discrimination by the entertainment industry. The law covers any "provider" that "owns, licenses, or otherwise possesses
computerized information, including, but not limited to, age and date of birth information, about individuals employed in the entertainment industry, including television, films, and video games, and that makes the information available to the public or
potential employers." Under the law, IMDb.com, which meets this definition because of its IMDb Pro service, would be required to delete age information from all of its websites, not just its subscription service. We filed a
brief in the trial court in January 2017, and that court granted IMDb's motion for summary judgment, finding that the law was indeed unconstitutional. The state and the Screen Actors Guild, which intervened in the case to defend the law, appealed the
district court's ruling to the U.S. Court of Appeals for the Ninth Circuit. We have now filed an amicus brief with that court. We were once again joined by First Amendment Coalition, Media Law Resource Center, Wikimedia Foundation, and Center for
Democracy and Technology. As we wrote in our brief, and as we and others urged the California legislature when it was considering the law, the law is clearly unconstitutional. The First Amendment provides very strong protection to
publish truthful information about a matter of public interest. And the rule has extra force when the truthful information is contained in official governmental records, such as a local government's vital records, which contain dates of birth.
This rule, sometimes called the Daily Mail rule after the Supreme Court opinion from which it originates, is an extremely important free speech protection. It gives publishers the confidence to publish important information
even when they know that others want it suppressed. The rule also supports the First Amendment rights of the public to receive newsworthy information. Our brief emphasizes that although IMDb may have a financial interest in
challenging the law, the public too has a strong interest in this information remaining available. Indeed, if age discrimination in Hollywood is really such a compelling issue, and EFF does not doubt that it is, hiding age information from the public
makes it difficult for people to participate in the debate about alleged age discrimination in Hollywood, form their own opinions, and scrutinize their government's response to it.
The EU is proposing new legislation to stop the big internet companies from snooping on our messaging. The IWF is opposing this as they will lose leads about child abuse
25th November 2018
See article from iwf.org.uk
The IWF writes: The Internet Watch Foundation (IWF) calls on the European Commission to reconsider proposed legislation on E-Privacy. This is important because if the proposal is enshrined in law, it will potentially have a direct
impact on the tech companies' ability to scan their networks for illegal online child sexual abuse images and videos. Under Article 5 of the proposed E-Privacy legislation, people would have more control over their personal data.
As currently drafted, Article 5 proposes that tech companies would require the consent of the end user (for example, the person receiving an email or message), to scan their networks for known child sexual abuse content. Put simply, this would mean that
unless an offender agreed for their communications to be scanned, technology companies would no longer be able to do that. Susie Hargreaves of the IWF says: At a time when IWF are taking down
more images and videos of child sexual abuse, we are deeply concerned by this move. Essentially, this proposed new law could put the privacy rights of offenders ahead of the rights of children - children who have been unfortunate enough to be the victim
of child sexual abuse and who have had the imagery of their suffering shared online. We believe that tech companies' ability to scan their networks, using PhotoDNA and other forms of technology, for known child sexual abuse
content, is vital to the battle to rid the internet of this disturbing material. It is remarkable that the EU is pursuing this particular detail in new legislation, which would effectively enhance the rights of possible
'offenders', at a time when the UK Home Secretary is calling on tech companies to do more to protect children from these crimes. The only way to stop this ill-considered action, is for national governments to call for amendments to the legislation,
before it's too late. This is what is in the best interests of the child victims of this abhorrent crime.
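The network scanning Hargreaves refers to is fingerprint matching against a list of known images, not human inspection of messages. PhotoDNA itself is proprietary, so the sketch below stands in an ordinary cryptographic hash for its perceptual hash purely to show the matching flow; the hash list and function names are illustrative, not any real deployment:

```python
import hashlib

# Stand-in for a hotline-supplied list of fingerprints of known abuse
# imagery. Real deployments use PhotoDNA perceptual hashes, which survive
# resizing and re-encoding; SHA-256 here only illustrates the flow.
KNOWN_HASHES = {
    # fingerprint of the illustrative "known bad" bytes below
    hashlib.sha256(b"known-bad-image").hexdigest(),
}

def fingerprint(content: bytes) -> str:
    """Reduce a file to a fixed-length fingerprint; the scanner never
    needs to retain or display the content itself."""
    return hashlib.sha256(content).hexdigest()

def matches_known_content(content: bytes) -> bool:
    """True if the file's fingerprint appears on the known list."""
    return fingerprint(content) in KNOWN_HASHES
```

The point at issue in the draft Article 5 is not the matching itself but access: the scanner needs the unencrypted bytes of a message attachment, which under the proposal would require the end user's consent.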
Google announces the shutdown of Google+ as it is beset by similar privacy failures to Facebook
9th October 2018
See article from dailymail.co.u
The Google+ social network exposed the personal information of hundreds of thousands of people using the site between 2015 and March 2018, according to a report in the Wall Street Journal. But managers at the company chose not to go public with the
failures, because they worried that it would invite scrutiny from regulators, particularly in the wake of Facebook's security failures. Shortly after the report was published, Google announced that it would be shutting down Google+ by August 2019. In
the announcement, Google also announced a raft of new security measures for Android, Gmail and other Google platforms that it is taking as a result of the privacy failures. Google said it had discovered the issues during an internal audit called
Project Strobe. Ben Smith, Google's vice president of engineering, wrote in a blog post: Given these challenges and the very low usage of the consumer version of Google+, we decided to sunset the consumer version of
Google+.
The audit found that Google+ APIs allowed app developers to access the information of Google+ users' friends, even if that data was marked as private by the user. As many as 438 applications had access to the unauthorized Google+ data, according to the Journal. Now, users will be given greater control over what account data they choose to share with each app. Apps will be required to inform users what data they will have access to, and users have to provide explicit permission in order for them to gain access to it. Google is also limiting apps' ability to gain access to users' call log and SMS data on Android devices. Additionally, Google is limiting which apps can seek permission to access users' consumer Gmail data. Only email clients, email backup services and productivity services will be able to access this data. Google will continue to operate Google+ as an enterprise product for companies.
7th October 2018
We believe in privacy...BUT...you can't have it as it stops the IWF and internet companies from snooping on your messages
See article from iwf.org.uk
6th October 2018
As someone who has tracked technology and human rights over the past ten years, I am convinced that digital ID, writ large, poses one of the gravest risks to human rights of any technology that we have encountered. By Brett Solomon
See article from wired.com
5th October 2018
Privacy Badger Now Fights More Sneaky Google Tracking
See article from eff.org
1st October 2018
The inventor of the World Wide Web has unveiled a plan for a new secure internet taking back control of one's data from the likes of Facebook
See article from techspot.com
You Gave Facebook Your Number For Security. They Used It For Ads.
28th September 2018
See article from eff.org
Add a phone number I never gave Facebook for targeted advertising to the list of deceptive and invasive ways Facebook makes money off your personal information. Contrary to user expectations and Facebook representatives' own previous statements, the
company has been using contact information that users explicitly provided for security purposes--or that users never provided at all--for targeted advertising. A group of academic researchers from Northeastern University and Princeton University, along with Gizmodo reporters, have used real-world tests to demonstrate how Facebook's latest deceptive practice works. They found that Facebook harvests user phone numbers for targeted advertising in two disturbing ways:
two-factor authentication (2FA) phone numbers, and shadow contact information.
Two-Factor Authentication Is Not The Problem
First, when a user gives Facebook their number for security purposes--to set up 2FA, or to receive alerts about new logins to their account--that phone number can become fair game for advertisers within weeks. (This is not the first time Facebook has misused 2FA phone numbers.) But the important message for users is: this is not a reason to turn off or avoid 2FA. The problem is not with two-factor authentication. It's not even a problem with the inherent weaknesses of SMS-based 2FA in particular. Instead, this is a problem with how Facebook has handled users' information and violated their reasonable security and privacy expectations. There are many types of 2FA. SMS-based 2FA requires a phone number, so you can receive a text with a second factor code when you log in. Other types of 2FA--like authenticator apps and hardware tokens--do not require a phone number to work. However, until just four months ago, Facebook required users to enter a phone number to turn on any type of 2FA, even though it offers its authenticator as a more secure alternative. Other companies--Google notable among them--also still follow that outdated practice. Even with the welcome move to no longer require phone numbers for 2FA, Facebook still has work to do here. This finding has not
only validated users who are suspicious of Facebook's repeated claims that we have complete control over our own information, but has also seriously damaged users' trust in a foundational security practice. Until Facebook and
other companies do better, users who need privacy and security most--especially those for whom using an authenticator app or hardware key is not feasible--will be forced into a corner.
Shadow Contact Information
Second, Facebook
is also grabbing your contact information from your friends. Kashmir Hill of Gizmodo provides an example: ...if User A, whom we'll call Anna, shares her contacts with Facebook, including a previously unknown phone number for User B,
whom we'll call Ben, advertisers will be able to target Ben with an ad using that phone number, which I call shadow contact information, about a month later. This means that, even if you never directly handed a particular phone
number over to Facebook, advertisers may nevertheless be able to associate it with your account based on your friends' phone books. Even worse, none of this is accessible or transparent to users. You can't find such shadow contact
information in the contact and basic info section of your profile; users in Europe can't even get their hands on it despite explicit requirements under the GDPR that a company give users a right to know what information it has on them.
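Hill's Anna-and-Ben example amounts to a join between uploaded address books and existing accounts. The sketch below is a toy reconstruction of that data flow; the identifiers, and the assumption that contact entries have already been matched to accounts, are ours, since Facebook's actual matching logic is not public:

```python
def attach_shadow_numbers(accounts, uploaded_books):
    """accounts: {user_id: set of numbers the user gave the platform directly}.
    uploaded_books: address books synced by other users, each given here as
    {matched_user_id: phone} after the platform has linked entries to accounts.

    Returns {user_id: set of ad-targetable numbers}. Any number arriving via
    a friend's upload is "shadow contact information": targetable without the
    subject ever having provided it."""
    targetable = {user: set(nums) for user, nums in accounts.items()}
    for book in uploaded_books:
        for matched_user, phone in book.items():
            if matched_user in targetable:
                targetable[matched_user].add(phone)  # Ben never gave this number
    return targetable

# Anna gave Facebook her own number; Ben gave none.
accounts = {"anna": {"+15550001"}, "ben": set()}
# Anna syncs her phone book, which contains a number the platform links to Ben.
books = [{"ben": "+15559999"}]
print(attach_shadow_numbers(accounts, books)["ben"])  # {'+15559999'}
```

Note that nothing in this flow ever surfaces the shadow number to Ben himself, which is exactly the transparency gap the GDPR complaint above concerns.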
As Facebook attempts to salvage its reputation among users in the wake of the Cambridge Analytica scandal, it needs to put its money where its mouth is. Wiping 2FA numbers and shadow contact data from non-essential use would be a
good start.
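For readers wondering what the phone-number-free alternative mentioned above actually looks like: authenticator apps compute each code from a shared secret and the current time, so no phone number ever changes hands. A minimal RFC 6238 (TOTP) sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter,
    followed by the dynamic truncation defined in RFC 4226."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

The verifier holds the same secret and recomputes the code, usually accepting one time step either side of the current clock.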
Facebook's upcoming home portal will feature an AI driven camera that follows you around the room and uses facial recognition to identify who's in the room
28th September 2018
See article from cheddar.com
Facebook plans to unveil its Portal video chat device for the home next week. Facebook originally planned to announce Portal at its annual F8 developer conference in May of this year, but the company's scandals, including the Cambridge Analytica data breach, led executives to shelve the announcement at the last minute. Portal will feature a wide-angle video camera, which uses artificial intelligence to recognize people in the frame and follow them as they move throughout a room. In response to the breakdown in trust in Facebook, the company has recently added a privacy shutter which can physically block the camera.
The EU ePrivacy regulation due in a few months is set to require websites to be more open about tracking cookies and more strict in gaining consent for their use
13th September 2018
See article from smallbusiness.co.uk
The so-called cookie law, a moniker for the proposed new EU ePrivacy regulation due to come into play before the year is out, is expected to severely impact the use of cookies online and across digital marketing. As such, it could pose an even bigger test to businesses than GDPR: a regulation likely to create a deficit in the customer information they collect even post-GDPR. Current cookie banner notifications, where websites inform users of cookie collection,
will make way for cookie request pop-ups that deny cookie collection until a user has opted in or out of different types of cookie collection. Such a pop-up is expected to cause a drop in web traffic as high as 40 per cent. The good news is that it will
only appear should the user not have already set their cookie preferences at browser level. The outcome for businesses whose marketing and advertising lies predominantly online is the inevitable reduction in their ability to
track, re-target and optimise experiences for their visitors. ... For any business with a website and dependent on cookies, the new regulations put them at severe risk of losing this vital source of
consumer data . As a result, businesses must find a practical, effective and legal alternative to alleviate the burden on the shoulders of all teams involved and to offset any drastic shortfall in this crucial data. ....
Putting the power in the hands of consumers when it comes to setting browser-level cookie permissions will limit a business's ability to extensively track the actions users take on company websites and progress targeted cookie-based
advertising. Millions of internet users will have the option to withdraw their dataset from the view of businesses, one of the biggest threats ePrivacy poses. ...Read the full
article from smallbusiness.co.uk
12th September 2018
UK internet domain controller Nominet consults on proposals to ensure that copyright holders and the UK authorities can obtain the identity of website owners even when privacy proxy services are used
See article from surveys.nominet.org.uk
27th August 2018
Under GDPR requirements for data transparency, Facebook is being challenged to reveal what data it holds on people's website browsing from its Facebook Pixel snooping cookie
See article from theregister.co.uk
In light of Facebook's disgraceful disregard for its users' digital wellbeing, Trump's government seems to be stepping in and preparing a GDPR style privacy law
30th July 2018
See article from foxnews.com
The US Federal Government is quietly meeting with top tech company representatives to develop a proposal to protect web users' privacy amid the ongoing global fallout of scandals that have rocked Facebook and other companies. Over the past month,
the Commerce Department has met with representatives from Facebook and Google, along with Internet providers like AT&T and Comcast, and consumer advocates, sources told the Washington Post. The goal of these meetings is to come up with a data
privacy proposal at the federal level that could serve as a blueprint for Congress to pass sweeping legislation in the mode of the European Union GDPR. There are currently no laws that govern how tech companies harness and monetize US users' data.
A total of 22 meetings with more than 80 companies have been held on this topic over the last month. One official at the White House told the Post this week that recent developments have been seismic in the privacy policy world, prompting the
government to discuss what a modern U.S. approach to privacy protection might look like.
26th July 2018
Microsoft comes clean over Windows 10 snooping as part of its GDPR compliance
See article from v3.co.uk
6th June 2018
The only thing worse than getting a bad night's sleep is to subsequently get a report from my smart-bed telling me I got a low score and missed my sleep goal.
See article from gizmodo.com
3rd June 2018
The social media giant collects huge quantities of data to target advertising--and that has implications for our lives, our society, and our democracy. By University of King's College
See article from thewalrus.ca
2nd June 2018
Which? investigation reveals staggering level of smart home surveillance
See article from which.co.uk
Active Shooter, a school shooter video game on Steam
31st May 2018
24th May 2018. See article from bbc.com
Anti-gun campaigners are highlighting a school-shooting simulator video game available on Steam. According to its listing on Steam, the game lets players slaughter as many civilians as possible in a school environment. InferTrust called on Valve, the company behind the Steam games store, to take the title down before it goes on sale on 6 June. The BBC report omits the name of the game but in fact it is titled Active Shooter. The school-shooting game is described as realistic and impressive, and the developer has suggested it will include 3D models of children to shoot at. However, the creator also says: Please do not take any of this seriously. This is only meant to be the simulation and nothing else. A spokeswoman for InferTrust said: It's in very bad taste. There have been 22 school shootings in the US since the beginning of this year. It is horrendous. Why would anybody think it's a good idea to market something violent like that, and be completely insensitive to the deaths of so many children? We're appalled that the game is being marketed.
Update: Deactivated 26th May 2018. See article from variety.com
Active Shooter comes out June 6 and calls itself a dynamic S.W.A.T. simulator where the player can be either a S.W.A.T. team member or the shooter. Developer Revived Games also plans to release a civilian survival mode where the player takes on the role of a civilian during a shooting. Revived Games has responded to the controversy. Due to the criticism the game has received, Revived Games said it will likely remove the shooter's role from the game before launch unless it can be kept as it is right now.
Update: Banned 31st May 2018. See article from bbc.com
Active Shooter has been banned from Steam's online store ahead of release. The title had been criticised by parents of real-life school shooting victims, and an online petition opposing its launch had reached about 180,000 signatures. The PC game's publisher had tried to distance itself from the controversy ahead of Valve's intervention. Although the original listing had explicitly described the title as being a school shooting simulation, the reference was dropped. In addition, a promise that gamers could slaughter as many civilians as possible if they chose to control the attacker rather than a police officer was also removed.
30th May 2018
US internet authority sues EU domain register for breaking contract to publish personal details on WhoIs. But GDPR makes it illegal to publish such details
See article from theregister.co.uk
28th May 2018
Browsing porn in incognito mode isn't nearly as private as you think. By Dylan Curran
See article from theguardian.com
23rd May 2018
Google accused of bypassing default browser Safari's privacy settings to collect a broad range of data and deliver targeted advertising.
See article from alphr.com
17th May 2018
Facebook lets advertisers target users based on sensitive interests by categorising users based on inferred interests such as Islam or homosexuality
See article from theguardian.com
Facebook reports that 200 apps have been suspended in the wake of the Cambridge Analytica data slurp
15th May 2018
See article from newsroom.fb.com
Here is an update on the Facebook app investigation and audit that Mark Zuckerberg promised on March 21. As Mark explained, Facebook will investigate all the apps that had access to large amounts of information before we changed
our platform policies in 2014 -- significantly reducing the data apps could access. He also made clear that where we had concerns about individual apps we would audit them -- and any app that either refused or failed an audit would be banned from
Facebook. The investigation process is in full swing, and it has two phases. First, a comprehensive review to identify every app that had access to this amount of Facebook data. And second, where we have concerns, we will conduct
interviews, make requests for information (RFI) -- which ask a series of detailed questions about the app and the data it has access to -- and perform audits that may include on-site inspections. We have large teams of internal
and external experts working hard to investigate these apps as quickly as possible. To date thousands of apps have been investigated and around 200 have been suspended -- pending a thorough investigation into whether they did in fact misuse any data.
Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before 2015 -- just as we did for Cambridge Analytica.
There is a lot more work to be done to find all the apps that may have misused people's Facebook data -- and it will take time. We are investing heavily to make sure this investigation is as thorough and timely as possible. We
will keep you updated on our progress.
The US media industry can't get its head round the fact that European GDPR privacy laws will prevent the internet whois service from revealing private contact details
|
|
|
 | 26th April 2018
|
|
| See article from theregister.co.uk |
|
| |
WhatsApp set to ask users to say that they are 16+
|
|
|
 | 25th April 2018
|
|
| See article from bbc.com See also
WhatsApp plans to ban under-16s. The mystery is how. from theguardian.com |
Popular messaging service WhatsApp is introducing a minimum age restriction of 16, at least in Europe. The Facebook-owned service is changing the rules ahead of the introduction of new EU data privacy regulations in May. The app will ask
users to confirm their age when prompted to agree to new terms of service in the next few weeks. It has not said if the age limit will be enforced. At present, WhatsApp does not ask users their age when they join, nor does it cross-reference their
Facebook or Instagram accounts to find out. About a third of all UK-based 12- to 15-year-olds active on social media use WhatsApp, according to a 2017 report by the media regulator Ofcom. That made it the fifth most popular social network with the age
group after Facebook, Snapchat, Instagram and YouTube. The EU's General Data Protection Regulation (GDPR) includes specific rules to protect youngsters whose personal data is processed in order to provide them with online services. Such websites
and apps are obliged to make reasonable efforts to verify that a parent or guardian has given consent for their child's data to be handled. The law says this obligation applies to under-16s, although some countries - including the UK - have been allowed
to set the cut-off limit lower, at 13. Facebook, which has also been criticised for its handling of personal data, is taking a different approach to younger users on its main service. To comply with GDPR , the social network is asking those aged
13 to 15 to nominate a parent or guardian to give permission for them to share information on the platform. If they do not, they will not see a fully personalised version of the platform. The policy changes implemented in response to GDPR will
surely have a profound impact on the take-up of social media services. Age restrictions (or the ability to ignore age restrictions) are incredibly important. For some apps, the dominant services are those that connect the most people, whilst others become
dominant because they effectively exclude parents. A messaging app will be diminished for many if the kids are banned from it. And as you start chipping away at the reach of the network, it becomes less attractive to others on the network. Users could
soon drift away to less restrictive alternatives. |
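The GDPR age rules described above amount to a simple per-country check. As a rough sketch only (the country codes, cutoff values beyond the UK's 13, and function names are illustrative assumptions, not any real WhatsApp or Facebook code):

```python
# Illustrative sketch of the GDPR child-consent age gate described above.
# GDPR sets a default cutoff of 16, but member states may lower it to as
# little as 13 -- the UK has done so. All names here are hypothetical.

GDPR_DEFAULT_CUTOFF = 16
COUNTRY_CUTOFFS = {"UK": 13}  # states that have lowered the limit

def needs_parental_consent(age: int, country: str) -> bool:
    """True if handling this child's data requires a parent or guardian's consent."""
    cutoff = COUNTRY_CUTOFFS.get(country, GDPR_DEFAULT_CUTOFF)
    return age < cutoff

# A 15-year-old in a default-cutoff country needs consent; a 14-year-old
# in the UK (cutoff 13) does not.
assert needs_parental_consent(15, "DE") is True
assert needs_parental_consent(14, "UK") is False
```

The point the article makes is exactly this asymmetry: the same user may or may not need a nominated parent depending on where their member state set the cutoff.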
| |
|
|
|
| 30th March 2018
|
|
|
The harvesting of our personal details goes far beyond what many of us could imagine. So I braced myself and had a look. By Dylan Curran See
article from theguardian.com |
| |
People deleting Facebook reveal just how wide the permissions you have granted to Facebook are
|
|
|
 | 27th
March 2018
|
|
| See article from alphr.com |
For years, privacy advocates have been shouting about Facebook, and for years the population as a whole didn't care. Whatever the reason, the ongoing Cambridge Analytica saga seems to have temporarily burst this sense of complacency, and people are
suddenly giving the company a lot more scrutiny. When you delete Facebook, the company provides you with a compressed file with everything it has on you. As well as every photo you've ever uploaded and details of any advert you've ever interacted
with, some users are panicking that Facebook seems to have been tracking all of their calls and texts. Details of who you've called, when and for how long appear in an easily accessible list -- even if you don't use Facebook-owned WhatsApp or Messenger
for texts or calls. It has been put around that Facebook has been logging calls without your permission, but this is not quite the case. In fact Facebook does follow its settings and permissions, and does not track your calls
if you don't give permission. So the issue is people not realising quite how wide the permissions granted are when they tick the permission boxes. Facebook seemed to confirm this in a statement in response: You
may have seen some recent reports that Facebook has been logging people's call and SMS (text) history without their permission. This is not the case. Call and text history logging is part of an opt-in feature for people using Messenger or Facebook Lite
on Android. People have to expressly agree to use this feature. If, at any time, they no longer wish to use this feature they can turn it off.
So there you have it: if you use Messenger or Facebook Lite on Android you have indeed
given the company permission to snoop on ALL your calls, not just those made through Facebook apps. |
| |
7-Eleven convenience stores to snoop on customers using facial recognition technology
|
|
|
 | 17th March 2018
|
|
| See article from uk.businessinsider.com
|
The convenience store 7-Eleven is rolling out artificial intelligence at its 11,000 stores across Thailand. 7-Eleven will use facial-recognition and behavior-analysis technologies for multiple purposes. The ones it has decided to reveal to the public
are to identify loyalty members, analyze in-store traffic, monitor product levels, suggest products to customers, and even measure the emotions of customers as they walk around. The company announced it will be using technology developed by
US-based Remark Holdings, which says its facial-recognition technology has an accuracy rate of more than 96%. Remark, which has data partnerships with Alibaba, Tencent, and Baidu, has a significant presence in China. The rollout at Thailand's
7-Eleven stores remains unique in scope. It could potentially be the largest number of facial-recognition cameras to be adopted by one company. No corporate entity is so entrenched in Thai lives, according to a report from Public Radio International. And
that may be crucial not only to the success of facial recognition in 7-Eleven stores in Thailand, but across the region. |
| |
|
|
|
 | 15th March 2018
|
|
|
The Tails operating system provides privacy and anonymity, and it runs from a memory stick See article
from komando.com |
| |
Facebook is commendably refusing to hand over private Facebook data to researchers who want to see how fake news (and no doubt other politically incorrect content) spreads
|
|
|
 |
12th March 2018
|
|
| See
article from politico.eu |
|
| |
MIT details a new privacy service where web browsers are served with encrypted pages that leave little for trackers and snoopers
|
|
|
 | 27th February 2018
|
|
| See article from news.mit.edu by Larry Hardesty, MIT News Office
|
Today, most web browsers have private-browsing modes, in which they temporarily desist from recording the user's browsing history. But data accessed during private browsing sessions can still end up tucked away in a computer's
memory, where a sufficiently motivated attacker could retrieve it. This week, at the Network and Distributed Systems Security Symposium, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and
Harvard University presented a paper describing a new system, dubbed Veil, that makes private browsing more private. Veil would provide added protections to people using shared computers in offices, hotel business centers, or
university computing centers, and it can be used in conjunction with existing private-browsing systems and with anonymity networks such as Tor, which was designed to protect the identity of web users living under repressive regimes.
"Veil was motivated by all this research that was done previously in the security community that said, 'Private-browsing modes are leaky -- Here are 10 different ways that they leak,'" says Frank Wang, an MIT graduate
student in electrical engineering and computer science and first author on the paper. "We asked, 'What is the fundamental problem?' And the fundamental problem is that [the browser] collects this information, and then the browser does its best
effort to fix it. But at the end of the day, no matter what the browser's best effort is, it still collects it. We might as well not collect that information in the first place." Wang is joined on the paper by his two thesis
advisors: Nickolai Zeldovich, an associate professor of electrical engineering and computer science at MIT, and James Mickens, an associate professor of computer science at Harvard.

Shell game

With
existing private-browsing sessions, Wang explains, a browser will retrieve data much as it always does and load it into memory. When the session is over, it attempts to erase whatever it retrieved. But in today's computers, memory
management is a complex process, with data continuously moving around between different cores (processing units) and caches (local, high-speed memory banks). When memory banks fill up, the operating system might transfer data to the computer's hard
drive, where it could remain for days, even after it's no longer being used. Generally, a browser won't know where the data it downloaded has ended up. Even if it did, it wouldn't necessarily have authorization from the operating
system to delete it. Veil gets around this problem by ensuring that any data the browser loads into memory remains encrypted until it's actually displayed on-screen. Rather than typing a URL into the browser's address bar, the
Veil user goes to the Veil website and enters the URL there. A special server -- which the researchers call a blinding server -- transmits a version of the requested page that's been translated into the Veil format. The Veil page
looks like an ordinary webpage: Any browser can load it. But embedded in the page is a bit of code -- much like the embedded code that would, say, run a video or display a list of recent headlines in an ordinary page -- that executes a decryption
algorithm. The data associated with the page is unintelligible until it passes through that algorithm.

Decoys

Once the data is decrypted, it will need to be loaded in memory for as long as it's
displayed on-screen. That type of temporarily stored data is less likely to be traceable after the browser session is over. But to further confound would-be attackers, Veil includes a few other security features. One is that the
blinding servers randomly add a bunch of meaningless code to every page they serve. That code doesn't affect the way a page looks to the user, but it drastically changes the appearance of the underlying source file. No two transmissions of a page served
by a blinding server look alike, and an adversary who managed to recover a few stray snippets of decrypted code after a Veil session probably wouldn't be able to determine what page the user had visited. If the combination of
run-time decryption and code obfuscation doesn't give the user an adequate sense of security, Veil offers an even harder-to-hack option. With this option, the blinding server opens the requested page itself and takes a picture of it. Only the picture is
sent to the Veil user, so no executable code ever ends up in the user's computer. If the user clicks on some part of the image, the browser records the location of the click and sends it to the blinding server, which processes it and returns an image of
the updated page.

The back end

Veil does, of course, require web developers to create Veil versions of their sites. But Wang and his colleagues have designed a compiler that performs this conversion
automatically. The prototype of the compiler even uploads the converted site to a blinding server. The developer simply feeds the existing content for his or her site to the compiler. A slightly more demanding requirement is the
maintenance of the blinding servers. These could be hosted by either a network of private volunteers or a for-profit company. But site managers may wish to host Veil-enabled versions of their sites themselves. For web services that already emphasize the
privacy protections they afford their customers, the added protections provided by Veil could offer a competitive advantage. "Veil attempts to provide a private browsing mode without relying on browsers," says Taesoo
Kim, an assistant professor of computer science at Georgia Tech, who was not involved in the research. "Even if end users didn't explicitly enable the private browsing mode, they still can get benefits from Veil-enabled websites. Veil aims to be
practical -- it doesn't require any modification on the browser side -- and to be stronger -- taking care of other corner cases that browsers do not have full control of."
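The blinding-server flow the article describes can be sketched in miniature: the server encrypts the page and pads each serve with random decoy markup, and the client-side code only decrypts at display time. This is a toy illustration, not the actual Veil implementation -- the XOR "cipher" stands in for real encryption, and the function names are assumptions for illustration only:

```python
# Toy sketch of the Veil blinding-server idea: pages travel and sit in
# memory encrypted, and each serve carries random decoy content so no two
# transmissions of the same page look alike. Illustration only -- the XOR
# stream below is a stand-in for real encryption.
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Trivially reversible stand-in cipher (NOT real cryptography).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def serve_blinded_page(html: str, key: bytes) -> dict:
    """Blinding server: encrypt the page and attach per-request decoy
    markup, mimicking Veil's source-file mutation between serves."""
    decoy = secrets.token_hex(16)  # random meaningless code per serve
    payload = xor_stream(html.encode(), key)
    return {"payload": payload.hex(), "decoy": f"<!-- {decoy} -->"}

def render_in_browser(blinded: dict, key: bytes) -> str:
    """Client side: content stays encrypted until the embedded code
    decrypts it at display time."""
    return xor_stream(bytes.fromhex(blinded["payload"]), key).decode()

key = secrets.token_bytes(32)
page = "<h1>hello</h1>"
a, b = serve_blinded_page(page, key), serve_blinded_page(page, key)
assert render_in_browser(a, key) == page   # round trip works
assert a["decoy"] != b["decoy"]            # no two serves look alike
```

The real system goes further -- its compiler rewrites whole sites into the Veil format, and the harder-to-hack option replaces the page with a server-rendered image -- but the sketch captures the core idea of keeping content encrypted until the moment of display.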
|
|
|