Amazon's Private Cloud

Last week was AWS re:Invent, and I’m still dealing with the email hangover.[1] AWS always announce a thousand and one new offerings and services at their show, and this year was no exception. Of the many announcements made during the week, however, there is one that I want to reflect upon briefly.

AWS Outposts are billed as letting users "Run AWS infrastructure on-premises for a truly consistent hybrid experience". This, of course, provoked a certain amount of hilarity in the parts of Twitter that have been earnestly debating the existence of hybrid cloud ever since the term was first coined.

On the surface, it might indeed seem somewhat strange for AWS, the archetypal public cloud in most people’s minds, to start offering hardware to be deployed on customers’ premises. However, to me it makes perfect sense.

Pace some ten-year-old marketing slogans which have not aged well, most companies do not start out with a hybrid cloud strategy. Instead, they find themselves forced by circumstances to formulate one in order to deal with all of the various departments that are out there doing their own thing. In this situation, the hybrid cloud strategy is simply a recognition that different teams have different requirements and have made their own decisions based on those. All that corporate IT can do is gain overall visibility and try to ensure that all the various flavours of compute infrastructure are at least being used in ways which are sane, secure, and fiscally responsible (the order of the priorities may change, but that’s the list).

Some of the more wild-eyed predictions around hybrid cloud instead expected that workloads would be easily moved, not only between on- and off-premises compute infrastructure, but even between different cloud providers. In fact, it would be so easy that it would be possible to make minute-by-minute assessments of the cost of running workloads with different providers, and move them from one to another in order to take advantage of lower prices.

Obviously, that did not happen.

For the cloud broker model to work, several laws of both economics and physics would have to be suspended or circumvented, and nobody seems to have made the requisite breakthroughs.
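To make the objection concrete, here is roughly what the broker dream looks like when written down. This is a deliberately naive sketch: price_feed() and migrate() are hypothetical stand-ins for APIs that do not exist in any comparable, cross-provider form, which is rather the point.

```python
# The broker dream, written down as a naive control loop. Everything here is
# hypothetical: real clouds expose neither comparable live price feeds nor a
# free, instant migrate() operation.

import random
import time

PROVIDERS = ["aws", "azure", "gcp", "on-prem"]

def price_feed(provider: str) -> float:
    """Stand-in for a live, comparable $/hour quote (no such API exists)."""
    return random.uniform(0.10, 0.30)

def migrate(workload: str, target: str) -> None:
    """Stand-in for an instant, free, risk-free workload move (ditto)."""
    print(f"moving {workload} to {target}")

def broker_loop(workload: str, current: str) -> None:
    while True:
        cheapest = min(PROVIDERS, key=price_feed)
        if cheapest != current:
            # Ignores transfer time, egress fees, re-testing, compliance...
            migrate(workload, cheapest)
            current = cheapest
        time.sleep(60)  # "minute-by-minute assessments"
```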

To take just a few of the more obvious objections:

The Speed Of Light

Moving any meaningful amount of data around the public internet still takes time. If you are used to your local 100 GbE LAN, it can be easy to forget this, but it is going to be a factor out there in the wild wild Web. This objection was obvious when we were talking about moving monolithic VMs around, but even if you assume truly immutable infrastructure, you are still going to have to shift at least some snapshot of the application state, and that adds up fast – let alone the rate of configuration drift of your "immutable" infrastructure with each new micro-release.
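A quick back-of-the-envelope calculation shows the scale of the problem. The snapshot size, link speeds, and efficiency factor below are all illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: how long does it take to move application state
# over a given link? All figures below are illustrative assumptions.

def transfer_hours(data_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move data_gb gigabytes over a link_gbps link.

    efficiency is an assumed factor for protocol overhead and contention.
    """
    gigabits = data_gb * 8
    return gigabits / (link_gbps * efficiency) / 3600  # seconds -> hours

# A (hypothetical) 2 TB application snapshot:
for label, gbps in [("100 GbE LAN", 100), ("10 Gbps WAN", 10), ("1 Gbps uplink", 1)]:
    print(f"{label}: {transfer_hours(2000, gbps):.1f} h")
# 100 GbE LAN: 0.1 h
# 10 Gbps WAN: 0.6 h
# 1 Gbps uplink: 6.3 h
```

Minutes on the LAN become most of a working day on a typical internet uplink, before you even consider egress charges.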

Transparent Pricing

The units of measure of different cloud providers are not easily comparable. How does the performance of an AWS M5 instance compare to an Azure Dv2-series? Well, you’d better know before you move production over there… And AWS has 24 instance types, whereas Azure has 7 different series, each with sub-types and options – and let’s not even talk about all the weird and wonderful single-use configurations in your local VMware or OpenStack service catalogue! How portable is your workload, really?
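Even getting to an apples-to-apples price is non-trivial. Here is a minimal sketch of normalising catalogue entries to a common unit; the instance names, shapes, and prices are made-up placeholders, not real quotes:

```python
# Sketch: normalising instance prices to a common unit ($/vCPU-hour).
# Shapes and prices below are hypothetical placeholders; a real comparison
# also has to weigh memory, network, storage, and per-vCPU performance.

from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpus: int
    mem_gb: int
    usd_per_hour: float

    @property
    def usd_per_vcpu_hour(self) -> float:
        return self.usd_per_hour / self.vcpus

catalogue = [
    InstanceType("aws-m5-ish", 4, 16, 0.20),    # hypothetical
    InstanceType("azure-dv2-ish", 4, 14, 0.22), # hypothetical
]

for it in sorted(catalogue, key=lambda i: i.usd_per_vcpu_hour):
    print(f"{it.name}: ${it.usd_per_vcpu_hour:.3f}/vCPU-hour, {it.mem_gb} GB RAM")

# Even this naive table dodges the real question: is a vCPU on one
# platform actually equivalent to a vCPU on the other?
```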

Leaving Money On The Table

Or let’s take it from the other side: assume you have carefully architected your thing to use only minimum-common-denominator components that are, if not identical, at least similar enough across all of the various substrates they might find themselves running on. By definition, this means that you are not taking full advantage of the more advanced capabilities of each of those platforms. This limitation is not only at the ingredient level; you also have to make worst-case assumptions about the sorts of network bandwidth and latency you might have access to, or the sort of regulatory and policy compliance environment that you might find yourself operating within.

For all of these reasons and more, the dream of real-time cloud pricing arbitrage died a quick death, regardless of whether individual companies might use different cloud providers in various parts of their business.

AWS Outposts is not that. For a start, despite running physically on the customer’s premises, it is driven entirely from the (remote) AWS control plane. Instead, it has the potential to address concerns about physical location, together with the associated concerns about latency and legal jurisdiction. Being AWS (with some help from VMware), it avoids the concern about different units of measure. For now, it only goes part of the way to resolving the final question about minimum-common-denominator ingredients, since at launch it only supports EC2. Additional features, notably various storage options, are expected shortly, however.

So yes, hybrid cloud. Turns out, it’s not only still a thing, but you can even get it from AWS. Who’d have thunk it?


  1. I managed to avoid any hangovers of the alcoholic variety; staying well hydrated in Las Vegas is good for multiple purposes. My inbox, however, is a mess.

Enterprise IT on the shelf

Cross-posted to my work blog and to LinkedIn


If there has been one overarching theme of the last few years in IT, it has been the changing relationship between enterprise IT departments and the users that they support.

Users have always wanted more IT, faster, and that demand has driven advances in the field. Minicomputers were the shadow IT of their day, democratising access to computing that had previously been locked up in mainframes. (By the way, did you know that the mainframe is fifty years young and still going strong?)

Departments would purchase their own minicomputers to avoid having to share time on the big corporate machines with others. This new breed of machine introduced application compatibility for the first time. In other words, it was no longer necessary to program for a specific machine. Higher-level languages also made the task of programming much easier.

Microcomputers and personal desktop computers were the next step in that evolution. At this stage it became feasible for people to have their own personal machine and run their own tasks in their own time, and for a while IT departments lost much of their control. The arrival of computer networks swung the balance the other way, until the widespread adoption of mobile devices started the swing back again.

Seen in this way, cloud computing is just the latest move in a long dance. The tempo is increasing, however, and it becomes more critical to make the right moves.

One make-or-break move is the very first public one, when a company decides to shift at least some of its workloads to the public cloud. It’s important to remember that Amazon was not designed to be traditional IT, and trying to treat it that way is a route to failure.


To get an idea of the sort of problems we want to avoid, here’s an example from a completely different domain. If you have ever furnished a house or a flat, the odds are good that you have wandered around IKEA, feeling lost and disoriented, and possibly having a furious argument with your significant other as well.

Assuming the shopping trip didn’t end in mayhem and disaster – and personally I always count it as a success when I get out of IKEA without either of those – you may well have bought an Expedit shelving unit. The things are ubiquitous, together with their cousins, the Billy bookcases. I should know; I own both.

The bad news is, IKEA is discontinuing the Expedit and replacing it with a slightly different unit, the Kallax. This has infuriated customers who liked being able to replace or extend their existing furniture with additional bits.

What has this got to do with IT? What IKEA has done is break backwards compatibility in their products: you can no longer just get “more of the same”, and unless you are furnishing an entire new home, you will probably have to deal with both the old and the new model at the same time.

Enterprise IT departments are facing the same problem with cloud computing. They want to take advantage of the fantastic capabilities of this new model, but they need to do it without breaking the things that are working for their users today. They don’t have the luxury that startups do of engineering their entire operation from the ground up for cloud. They have a history, and all sorts of things that are built on top of that history.

On the other hand, they can’t just treat a virtual server in the public cloud as being the same as the physical blade server humming away in their datacenter. For a start, much of the advantage of the public cloud is based around a fundamentally different operating model. It has been said that servers used to be pets, given individual names, pampered and hand-reared, while in the cloud we treat them like cattle, giving them numbers and putting them down as soon as it’s convenient.
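As a minimal sketch of the cattle mindset, here is what "putting them down" can look like in practice with boto3, assuming the instances belong to an Auto Scaling group that will launch replacements on its own; the region and health criteria are illustrative:

```python
# Cattle, not pets: terminate impaired instances rather than nursing them.
# Assumes an Auto Scaling group (not shown) replaces anything we cull;
# the region and the instance IDs passed in are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def cull_unhealthy(instance_ids: list[str]) -> None:
    """Terminate any impaired instance instead of hand-rearing it back to health."""
    statuses = ec2.describe_instance_status(
        InstanceIds=instance_ids, IncludeAllInstances=True
    )["InstanceStatuses"]
    sick = [
        s["InstanceId"]
        for s in statuses
        if s["InstanceStatus"]["Status"] == "impaired"
    ]
    if sick:
        # No pampering: put them down and let the herd replenish itself.
        ec2.terminate_instances(InstanceIds=sick)
```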

The public cloud is great, but it works best for certain workloads. On the other hand, there are plenty of workloads that are still better off running on-premises, or even (gasp!) directly on physical hardware. The trick is knowing the difference, and managing your entire IT estate that way.

This is part and parcel of BMC’s New IT: make it easy for users to get what they need, when they need it. To find out more about what BMC can do to make your cloud initiative successful, please visit www.bmc.com/cloud.