
The Enemy Within The Browser

At what point do the downsides of Javascript in the browser exceed the upsides? Have we already passed that point?

If you have any concept of security, the idea of downloading code from the Internet and immediately executing it, sight unseen, on your local machine, should give you the screaming heebie-jeebies. A lot of work has gone into sandboxing the browser processes so that Javascript cannot escape the browser itself, and later, the individual web page that it came from. However, this only dealt with the immediate and obvious vulnerability.

These days, the problem with Javascript is that it is used to track users all over the internet and serve them ads for the same products on every site. Quite why this requires 14 MB and 330 HTTP requests for 537 words is not entirely clear.

Actually, no, it is entirely clear: it is because the copro-grammers ("writers of feces") who produce this stuff have no respect for the users. The same utter disrespect underlies the recent bloat in iOS apps:

One Friday I turned off auto-update for apps and let the update queue build up for a week. The results shocked me.
After the first week I had 7.59GB of updates to install, spread across 67 apps – averaging 113MB per app.

Okay, so maybe you say: who cares, you only update apps over wifi - but do you only browse on wifi? 14 MB for a few hundred words adds up fast.

And what else is that Javascript up to, beyond wasting bytes - both over the air, and in local storage?

How about snaffling data entered into a form, regardless of whether it has been submitted?

Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.
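The technique the quote describes takes only a few lines. Here is a minimal sketch; the names (`trackInputs`, `makeFakeForm`) are invented for illustration, and a tiny fake form stands in for the DOM so the snippet is self-contained:

```javascript
// Hypothetical sketch: a script that ships every keystroke in a form to a
// server, before anything is ever submitted.
function trackInputs(form, send) {
  // In a real page, `form` is a DOM node and send() would typically be
  // something like navigator.sendBeacon(trackerUrl, payload).
  form.addEventListener('input', (event) => {
    send({ field: event.target.name, value: event.target.value });
  });
}

// Tiny stand-in for a DOM form, so the sketch runs outside a browser.
function makeFakeForm() {
  const handlers = {};
  return {
    addEventListener(type, fn) { handlers[type] = fn; },
    type(name, value) { handlers.input({ target: { name, value } }); },
  };
}

const captured = [];
const form = makeFakeForm();
trackInputs(form, (payload) => captured.push(payload));
form.type('email', 'a');
form.type('email', 'al'); // user closes the page without submitting...
// ...but `captured` already holds every intermediate value.
```

The user never pressed Submit, yet the tracker has the data anyway - exactly the behaviour described above.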

My house, my rules. I look forward to iOS 11, and enabling every blocking feature I can.

I really want media sites to earn money so that they can continue to exist, but they cannot do it at my expense. A banner ad is fine, but 14 MB of Javascript to serve me the same banner ad everywhere - at my expense! - is beyond the pale.

Javascript delenda est.

How To Lose Friends and Influence People

I am a huge fan of Evernote. I have used their software for many years, and its many features are key parts of my workflow. I take notes on multiple devices, use tagging to sync between those devices, take snapshots of business cards and let the OCR and the access to LinkedIn sort out the details, annotate images and PDFs, and more.

I should say, I used to be a fan of Evernote. They recently made some changes to their privacy policy that have users up in arms. Here is the relevant entry from their changelog:

Privacy Policy

January 23, 2017 updates to the October 4, 2016 version:

We clarified that in building a more personalized Evernote service that can adapt to the way you think and work, a small group of engineers may need to oversee these automated technologies to ensure they are working as intended. Also, we added that we will be using data from other sources to tailor your Evernote experience and explain how you can get more out of your Evernote account. Please see our FAQ for more information on these changes.

Updates to our legal documents | Evernote

This may be fairly inoffensive, but it is worrying to me and to many users. These days, "personalisation" is often code for "gathering data indiscriminately for obscure purposes that may change at any time". This exchange is generally presented as a bargain where users sacrifice (some) privacy to the likes of Google in exchange for free use of their excellent services such as Gmail or Maps.

Evernote's case is different. Because it is a paid app, we users like to assume that we are opting out of that bargain and paying directly for our services - instead of paying indirectly by authorising Evernote to resell our personal data to advertisers.

In addition, we use Evernote to store data that may be personal, sensitive, or both. Evernote have always had some weasel words in their Privacy Policy about their employees having access to our notes:

  • We believe our Terms of Service has been violated and confirmation is required or we otherwise have an obligation to review your account Content as described in our Terms of Service;
  • We need to do so for troubleshooting purposes or to maintain and improve the Service;
  • Where necessary to protect the rights, property or personal safety of Evernote and its users (including to protect against potential spam, malware or other security concerns); or
  • In order to comply with our legal obligations, such as responding to warrants, court orders or other legal process. We vigilantly protect the privacy of your account Content and, whenever we determine it possible, we provide you with notice if we believe we are compelled to comply with a third party’s request for information about your account. Please visit our Information for Authorities page for more information.

So basically, Evernote employees have always had access to our stuff. This part of the privacy policy has not changed substantially, but the changes are worrying (emphasis mine):

  • New: Do Evernote Employees Access or Review My Data?
  • Old: Do Evernote Employees Access or Review My Notes?

  • New: Below are the limited circumstances in which we may need to access or review your account information or Content:

  • Old: As a rule, Evernote employees do not monitor or view your personal information or Content stored in the Service, but we list below the limited circumstances in which our employees may need to access or review your personal information or account Content:

  • New: We need to do so for troubleshooting purposes or to maintain and improve the Service;

  • Old: We need to do so for troubleshooting purposes;

Privacy Policy | Evernote
Privacy Policy - 2017 update

Now, here is why people are all up in arms. We would like service providers to tread extremely carefully when it comes to our personal data, accessing it only when warranted. 2016 has provided plenty of object lessons in why we are so sensitive; just today I received an email from Yahoo detailing their latest hack. Yahoo hack: Should I panic? - BBC News

In this case, Evernote appear to have made two mistakes. First, they designed and built new functionality that requires access to users’ personal data and content in order to do… well, it’s not entirely clear what they want to do, beyond the fact that it involves machine learning.

Secondly, they completely mis-handled the communication of this change. I mean, they even removed the disclaimer that "As a rule, Evernote employees do not monitor or view your personal information or Content stored in the Service"! How tone-deaf can you get?

It’s also very unclear why they even made this change. In their response to the outrage, they say this:

We believe we can make our users even more productive with technologies such as machine learning that will allow you to automate functions you now have to do manually, like creating to-do lists or putting together travel itineraries.

A Note From Chris O’Neill about Evernote’s Privacy Policy

The problem is, users are perfectly capable of managing to-do lists and itineraries, and based on an informal sample of Twitter reactions to this new policy, do not see enough value to want to give unknown Evernote employees access to their data.

An unforced error

This is such a short-sighted decision by Evernote. As one of the few cloud services which is used primarily through fat clients, Evernote is in a privileged position when it comes to pushing processing out to the users’ devices.

Apple have the same advantage, and do the right thing with it: instead of snooping around in my mail and calendar on the server side, my local Mail app can detect dates in messages and offer to create appointments in Calendar. Also, CloudKit’s sync services are encrypted, so nobody at Apple has access to my data - not even if law enforcement asks.

Evernote have chosen not to take that approach, and have not (yet) clarified any benefit that they expect or promise to deliver by doing so. This mis-step has now caused loyal, paying users like me to re-evaluate everything else about the service. At this point, even cancelling the new machine-learning service would not be enough to mollify users; nothing short of a new and explicit commitment to complete encryption of user data - including from Evernote employees! - would suffice.

Evernote's loss will be someone else’s gain

One possible winner from this whole mess is Bear, a new note-taking app that does use CloudKit and is therefore able to provide the encryption at rest that Evernote does not.

Bear - Notes for iPhone, iPad and Mac

The Bear team have even been having some fun on Twitter at Evernote’s expense.

I composed this post in Bear, and I have to say, it is very nice. I copied it over to Evernote to publish here, but that switch is the first crack in my loyalty. Before this mess, I was a vocal advocate of Evernote. Now? I am actively evaluating alternatives.

Respect your users, yo.

One more time

I have worked all my career in enterprise IT, either as a sysadmin, or for vendors of enterprise IT tools. There are many annoyances in big-company IT, but one of the most frustrating is when people miss key aspects of what makes corporate IT tick.

One area is the difference between a brand-new startup whose entire IT estate consists of a handful of bestickered MacBooks, and a decades-old corporation with the legacy IT that history brings. Case in point: Google is stealing away Microsoft’s future corporate customers.

Basically, it turns out that - to absolutely nobody's surprise - startups overwhelmingly use Google's email services. Guess what? Running your own email server for just a few users is not a highly differentiating activity, so it makes sense to hand it off to Google. Big companies on the other hand have a legacy that means it makes sense to stick with what they have and know, which generally means Microsoft Exchange.

So far, so good. The key factor that is missing in this analysis is time. What happens when those startups grow to become mid-size companies or even join the Fortune 50 themselves? Do they stick with Google's relatively simple services, or do they need to transition at some point to an "enterprise" email solution?

It is now clear that Google does deep inspection of email contents. So far, this appears to be done for good: Paedophile snared as Google scans Gmail for images of child abuse. However, if I were in a business that competes with Google - and these days, that could be anything - I would feel distinctly uncomfortable about that.

There are also problems of corporate policy and compliance that apply to proper grown-up companies. At the simplest level, people often have their own personal Gmail accounts as well, and with Google's decision to use that login for all their services, there is enormous potential for bleed-over between the two domains. At a more complex level, certain types of data may be required to be stored in such a way that no third parties (such as Google) have access to them. Gmail would not work for that requirement either.

Simply put, startups have different needs from big established corporations. The cost of hiring full-time IT staff is huge for a small startup. The alternative of doing your own support doesn't work either, because every hour spent setting up, maintaining or troubleshooting IT infrastructure is an hour not spent working on your actual product. For a big corporation with thousands of employees, on the other hand, it makes a lot of sense to dedicate a few of them to in-house IT support, especially if the alternatives include major fines or even seeing managers go to jail. The trend Quartz identified is interesting, but it's a snapshot of a point in time. What would be more interesting would be to see the trend as those companies grow and change from one category to another.

Corollary to this is that business IT is not consumer IT. Trying to mix the two is a recipe for disaster. Big B2B vendors end up looking very silly when they try to copy Apple, and journalists look just as silly when they fail to understand key facts about the differences between B2B and consumer IT, and between small-company IT and big-company IT.


Image by Philipp Henzler via Unsplash

Privacy? on the Internet?

Periodically something happens that gets everyone very worked up about privacy online. Of course anyone who has ever administered a mail server has to leave the room when that conversation starts, because our mocking laughter apparently upsets people.1

The latest outrage is that Facebook has apparently been messing with people's feeds. No, I don't mean the stuff about filtering out updates from pages that aren't paying for placement.

No, I don't mean the auto-playing videos either. Yes, they annoy me too.

No, it seems that Facebook manipulated the posts that showed up in certain users' feeds, sending them more negative information to see whether this would affect their mood - as revealed, naturally, through their Facebook postings.

Now, it has long been a truism that online, and especially when it comes to Facebook, privacy is dead. The simplistic response is of course "if you wanted it to be a secret, then why did you share it on Facebook?". This is a valid point as far as it goes. The problem is that the early assumptions about Facebook no longer hold true.

Time was, Facebook knew about what you did on Facebook, but once you left the site, you were free to get up to things you might not want to share with everybody. Then those "Like" buttons started proliferating everywhere. Brands and website operators wanted to garner "likes" from users to prove their popularity, or at least the effectiveness of their latest marketing gimmick ("like our site for the chance to win an iPad!").

It turns out that on top of tracking what you actually "like", Facebook can track any page you look at that has a Like button embedded. Given that the things are absolutely everywhere, that gives them probably the most complete picture of any ad network out there.
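The mechanism is worth spelling out: every embedded Like button is fetched from Facebook's servers, and the browser attaches the user's Facebook cookie plus the embedding page's URL (the Referer header) to that request. A sketch of what the receiving side could log - the request and log shapes here are invented for illustration, not Facebook's actual internals:

```javascript
// Sketch: what loading a single Like button reveals to the button's host.
// No click is required; the plain HTTP request for the button is enough.
function logButtonRequest(request, visitLog) {
  visitLog.push({
    user: request.cookies.fb_user,   // who you are, from the FB cookie
    page: request.headers.referer,   // which page embedded the button
  });
}

const visits = [];
logButtonRequest(
  {
    cookies: { fb_user: 'u123' },
    headers: { referer: 'https://shoes.example/red-boots' },
  },
  visits
);
// Merely rendering the page produced a (user, page) record.
```

Multiply that by every page with a Like button on it, and you get the near-complete browsing picture described above.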

Then Facebook changed their news delivery options. It used to be that "liking" a page meant that you would see all their updates. Now, it means that about 2% of the people who "like" the page see the updates - unless the page operators choose to pay to amplify their reach... Note that these pages do not necessarily belong to brands and advertisers. If your old school has a page that you "like", in the expectation that you will now receive their updates, you're out of luck. Guess you'd better arrange a fundraiser at your next reunion to gather cash to pay Facebook. On the plus side, you have a built-in excuse for poor attendance at the reunion: "ah, I guess they were in the 98% that Facebook didn't deliver the notifications to".

And now Facebook have gone whole-hog, not just preventing information from reaching users' feeds, but actively changing the contents of the users' feeds - in the name of Science, sure.

This is far beyond what people think they have signed up for. There is a big difference between being tracked on Facebook, and being tracked by Facebook, everywhere you go. The difference is not just moral, but commercial. After all, tracking users across multiple websites has been standard operating procedure for ad networks for a long time now. If you've ever shopped online for something and then seen nothing but ads for that one thing for a month thereafter, you have experienced this first-hand. It's mildly creepy, but at this point everyone is pretty well inured to this level of tracking.

Being tracked by ad networks is different from being tracked by Facebook in one very important way. So far, nobody seems to have figured out a good way to make money with content on the internet. A few people do okay with subscriptions, but it tends to be a niche thing. Otherwise, pretty much everything is ad-funded in some way. Now, banner ads can be annoying, and the tracking can get creepy, but at least the money from the ad impressions is going to the site operator, who provides the content that keeps us all coming back.

The "like" button subverts this mechanism, because it's just as creepy and Big-Brotherish, but none of the money goes to the site's operator. All the money and data go only to Facebook, who are even now trying to figure out how to modify your feed to make you want to buy things. Making you feel bad was only step 1, but not everyone goes straight to retail therapy as a remedy. Step 2 is hacking our exocortices (hosted on Facebook) to manipulate the "buy now!" instinct directly.

If you enjoyed this article, please like it on Facebook.


  1. If you don't know what I'm talking about, let's just say I really, really know what I'm talking about when I say you shouldn't send credit card numbers in the clear, and leave it at that. 

Why can't Big Brother ever lift a finger to help out?

It strikes me that the NSA and their counterparts missed a trick.

Maybe it's because I'm in the throes of moving house, with all the associated change-of-address shenanigans, but it strikes me that it would be very useful if the government actually operated a single information repository. I mean, that's what they already do, right?

So, why not do it in a way that serves the public? The government sets up an IDDB which has all of everyone's information in it; so far, so icky. But here's the thing: set it up so that individuals can grant access to specific data in that DB - such as the address. Instead of telling various credit card companies, utilities, magazine companies, Amazon, and everyone else my new address, I just update it in the IDDB, and bam, those companies' tokens automatically update too - assuming I don't revoke access in the mean time.

This could also be useful for all sorts of other things, like marital status, insurance, healthcare, and so on. Segregated, granular access to the information is the name of the game. Instead of letting government agencies and private companies read all the data, users each get access only to those data they need to do their jobs.
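To make the idea concrete, here is a toy sketch of the grant model described above. Every name in it (`IDDB`, `grant`, `revoke`) is invented; no such government API exists:

```javascript
// Hypothetical IDDB: one canonical record per person, with field-scoped,
// revocable tokens handed out to companies instead of copies of the data.
class IDDB {
  constructor() {
    this.records = new Map(); // person -> { field: value }
    this.grants = new Map();  // token  -> { person, fields }
  }
  put(person, fields) {
    this.records.set(person, { ...(this.records.get(person) || {}), ...fields });
  }
  grant(person, fields, token) {
    // A real system would issue unguessable random tokens.
    this.grants.set(token, { person, fields: new Set(fields) });
    return token;
  }
  read(token) {
    const g = this.grants.get(token);
    if (!g) throw new Error('access revoked or never granted');
    const record = this.records.get(g.person) || {};
    const out = {};
    for (const f of g.fields) if (f in record) out[f] = record[f];
    return out; // only the granted fields, nothing else
  }
  revoke(token) { this.grants.delete(token); }
}

// A credit card company holds a token scoped to the address only:
const db = new IDDB();
db.put('alice', { address: '1 Old Street', maritalStatus: 'single' });
db.grant('alice', ['address'], 'tok-cardco');
db.put('alice', { address: '2 New Street' }); // one change-of-address update...
// ...and the company's token now resolves to the new address automatically,
// while marital status stays invisible to it.
```

The two properties that matter are visible in the sketch: updates propagate through the token without re-notifying anyone, and a revoked token yields nothing at all.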

People have a problem with any Tom, Dick, or Harry being able to read all their information: the objection isn't intrinsically to the gathering of the information, it's to the unrestricted access - the idea that any busybody can log in and see whatever they care to.

Offering an IDDB service would go a long way to solving the PR problem of programmes like PRISM and its ilk. Of course there are enormous issues with abuse of such a system, but since it seems governments cannot be prevented from building (and abusing) these systems anyway, couldn't we at least get some convenience out of it?

Clouded Prism

One of the questions raised as a part of the PRISM discussion has been the impact on the internet and specifically cloud computing industries. For instance, Julie Craig of EMA wrote a post titled "PRISM: The End of the Cloud?"

I think these fears are a bit overblown. While there will probably be some blowback, most of the people who care about this sort of thing were already worried enough about the Patriot Act without needing to know more about PRISM. I think the number of people who will start to care about privacy and data protection as a result of PRISM will be fairly small. All Things D's Joy of Tech cartoon nailed it, as usual.

The same kind of thing applies in business. Many companies don't really care very much either way about the government reading their files. They might get more exercised about their competitors having access, but apart from perhaps some financial information, the government is low down the list of potential threats.

Of course, most analysis focuses on the question of US citizens and corporations using US services. What happens in the case of foreign users, whether private or corporate, using US services? There has been some overheated rhetoric on this point as well, but I don't think it's a huge factor. Much like Americans, people in the rest of the world already knew about the Patriot Act, and most of them voted with their feet by staying put, showing that they did not care. As for corporations, most countries have their own restrictions on what data can be stored, processed or accessed across borders, quite possibly to make it easier to run their own versions of PRISM, so companies are already pretty constrained in terms of what they could even put into these US services.

For companies already using the public cloud or looking into doing so, this is a timely reminder that not all resources are created equal, and that there are factors beyond the purely technical and financial ones that need to be considered. The PRISM story might provide a boost for service providers outside the US, who can carve out a niche for themselves as giving the advantages of public cloud, but in a local or known jurisdiction. This could mean within a specific country, within a wider region such as the EU, or completely offshore. Sealand may have been ahead of its time, but soon enough there must emerge a "Switzerland of the cloud". The argument that only unsavoury types would use such a service doesn't really hold water, given that criminals already have the Russian Business Network and its ilk.

Bottom line, PRISM is nothing new, and it doesn't really bring any new facts. Given the Patriot Act, the sensible assumption had to be that the US government was doing something like this - and so were other technologically sophisticated governments around the world. The only impact it might have is in perception, if it blows up into a big enough story and stays around for long enough. In terms of actual rational grounds for decision-making, my personal expectation is for impact to be extremely limited.

Through a Prism, Darkly

Because everyone must have an opinion!


First of all, let me just say that as a non-US citizen, I try not to comment in public on US politics. It's kind of hard, because it's a bit like trying not to comment on Roman politics in the first century AD, but there it is. Therefore, while the whole PRISM debacle is what prompted this post, what I have to say is not specific to PRISM.

Back in the day, three blogs ago and lost in the mists of Internet time, there was Total Information Awareness. This was a DARPA project from ten years ago, which ended up being defunded by Congress after a massive public outcry, not least about its totally creepy name. Basically the idea was to sift through all communications, or as many as was feasible, looking for patterns that indicated terrorist activity. The problem people had with TIA is much the same as the problem they have with PRISM: the idea that the government will look through everything, and then decide what is important.

On the one hand, this is actually a positive development. No, wait, let me finish! The old way of doing surveillance and Information Awareness that was less than Total was to let humans access all those communications. This method has problems with scale; even enthusiastic adopters of surveillance like East Germany and North Korea only succeeded to the degree they did because East Germany didn't have the internet and North Korea keeps it out. The human approach also guarantees abuse, if only because humans can't be told to ignore or forget information. The agent listening to the take from the microphones set up to catch subversive planning can't help also hearing intimate details of the subjects' lives that are not relevant to any investigation.

An automated system is preferable, then, since it doesn't "listen" the way a human does, and discards any data that do not match its patterns. Privacy is actually invaded less by an automated system than by human agents, given the same inputs.

That last clause is kind of important, though. Once you have the automated system set up, it is no longer constrained by scale. Governments around the world already have huge datacenters of their own, and could also take advantage of public cloud resources at a pinch, so such a system is guaranteed to expand, rapidly and endlessly, unless actively checked. A system that just looks for correlation clusters around terrorist organisations, subversive literature, and bulk purchases of fertiliser and ball-bearings will quickly be expanded to look for people behind on their student debt or parking tickets. Think this is an exaggerated slippery-slope argument? The US (sorry) Department of Education conducts SWAT raids over unpaid loans.

As with all tools, the problem is the uses that the tool might be put to. Let's say you trust the current government absolutely not to do anything remotely shady, so you approve all these powers. Then next election, the Other Guys get in. What might they get up to? Are you sure you want them to have this sort of power available?

It is already very difficult to avoid interacting with the government or breaking any laws. Today, I briefly drove over twice the speed limit. Put like that it sounds terrible, doesn't it? But what actually happened was that the speed limit suddenly dropped by more than half. This is a very familiar route for me, I could see clear road ahead, and it was a lovely sunny day, so instead of jamming on my brakes right at the sign I exercised my judgment and braked more gradually. A human policeman, unless in a very bad mood, would have had nothing much to say about this. A black box in my car, though, might have revoked my license before I had finished braking (exceeding the speed limit by over 40 km/hour).

This is the zero-tolerance future of automated surveillance and enforcement. Laws and policies designed to be applied and enforced with judgment and common sense will show their weaknesses if they are to be applied by unthinking, unfeeling machines. I haven't even gone into the translation from fuzzy human language to deterministic computer language. The only solution would be to require lawmakers to submit a reference implementation of their law, which would have the advantage of allowing for debugging against test cases in silico instead of in the real world, with actual human beings. The ancillary benefit of massively slowing down the production of new laws is merely a fortunate side effect.
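To see what such a "reference implementation" would look like in practice, here is a toy version of the speed rule from the anecdote above. The 40 km/h threshold comes from the text; the function and variable names are invented:

```javascript
// Toy automated-enforcement rule: zero tolerance, zero judgment.
// Revocation triggers at more than 40 km/h over the limit, as in the
// anecdote; a human officer would instead weigh the context.
function blackBoxVerdict(speedKmh, limitKmh) {
  return speedKmh > limitKmh + 40 ? 'revoke' : 'ok';
}

// The limit suddenly drops from 100 to 50 while the driver brakes gradually:
const samples = [100, 95, 85, 70, 55];
const verdicts = samples.map((s) => blackBoxVerdict(s, 50));
// The machine returns 'revoke' on the first samples after the sign, even
// though the overall behaviour is sensible, controlled braking.
```

The bug is not in the code - the code faithfully implements the rule. The bug is in applying a rule written for human judgment without any.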

To recap: as usual, the weakest link is the human, not the machine. Systems like PRISM are probably inevitable and may even be desirable, but they need some VERY tight safeguards on them, which to date have not been in evidence. The problem is of course that discussing such systems in public risks disclosing information about how to evade them, but as we have seen in infosec, security by obscurity doesn't work nearly as well as full disclosure. If instead of feeling Big Brother watching over them, citizens felt that they and their government were working together to ensure common security, all of us would feel much happier about working to strengthen and improve such systems. Wouldn't you rather have guys like Bruce Schneier inside the tent, as the saying goes?