
AWS re:Invent 2022

At this time of year, with the nights drawing in, thoughts turn inevitably to… AWS' annual Las Vegas extravaganza, re:Invent. This year I'm attending remotely again, like it's 2020 or something, which is probably better for my liver, although I am definitely feeling the FOMO.

Day One: Adam Selipsky Keynote

I skipped Monday Night Live due to time zones, but as usual, this first big rock on the re:Invent calendar is a barrage of technical updates, with few hints of broader strategy. That sort of thing comes in the big Tuesday morning keynote with Adam Selipsky.

Last year was his first outing after taking over from Andy Jassy, who ascended to running the whole of Amazon, not just AWS. This year’s delivery was more polished, and it looks like we have seen the last of the re:Invent House Band. Adam Selipsky himself, though, was still playing the classics, talking up the benefits of cloud computing for cost savings and using examples such as Carrier and Airbnb to allude to companies' desire to be agile with fewer resources.

Still, it's a bit of a double-take to hear AWS still talking about cloud migration in 2022 — even if, elsewhere in Vegas, there was a memorable endorsement of migration to the cloud from Ukraine's Minister for Digital Transformation. Few AWS customers have to contend with the sorts of stress and time pressure that Mykhailo Fedorov did!

In the keynote, the focus was mostly on exhortations to continue investing in the cloud. I didn't see Andy Jassy's signature move of presenting a slide that shows cloud penetration as still being a tiny proportion of the market, but that was definitely the spirit: no reason to slow down, despite economic headwinds; there's lots more to do.

Murdering the Metaphors

We then got to the first of various metaphors that would be laboriously and at length tortured to breaking point and beyond. The first was space exploration, and admittedly there were some very pretty visuals to go with the point being belaboured: namely, that just like images captured in different wavelengths show different data to astronomers, different techniques used to explore data can deliver additional results.

There were some good customer examples in this segment: Expedia Group making 600B predictions on 70 petabytes of data, and Pinterest storing 1 exabyte of data on S3¹. That sort of scale is admittedly impressive, but this was the first hint that the tempo of this presentation would be slower, with a worse ratio of content to time than we had been used to in the Jassy years.

Tools, Integration, Governance, Insights

This led to a segment on the right tools, integration, and governance for working with data, and the insights that would be possible. The variety of tools is something I had focused on in my report from re:Invent 2021, in which I called out AWS' "one database engine for each use case" approach and questioned whether this was what developers actually wanted.

Initially, it seemed that we were getting more of the same, with Amazon Aurora getting top billing. The metrics in particular were very much down in the weeds, mentioning that Aurora offered 1/10th the cost of a commercial DBMS, while also having up to 3x the performance of PostgreSQL and 5x the performance of MySQL².

We then heard about how customers also need analytics tools, not just transactional ones, such as EMR, MSK, and Redshift for high performance on structured data - 5x better price performance than "other cloud data warehouses" (a not-particularly-veiled dig at Snowflake, here — more of a Jassy move, I felt).

The big announcement in this section was OpenSearch Serverless. This launch means that AWS offers serverless options for all of its analytics services. According to Selipsky, "no-one else can say that". However, it is worth checking the fine print. In common with many "serverless" offerings, OpenSearch Serverless has a minimum spend of 4 OCUs — or $700 in real money. Scaling to zero is a key requirement and expectation of serverless, so it is disappointing to see so many offerings like this one that devolve to elastic scalability on top of a fixed base. Valuable, to be sure, but not quite so revolutionary.
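To put that floor in context, here is the back-of-the-envelope arithmetic. I am assuming the $700 figure is a monthly number and that a month is roughly 730 hours; the implied per-OCU-hour rate below is derived from those two assumptions rather than taken from a price list.

```python
# Back-of-the-envelope arithmetic on the OpenSearch Serverless minimum footprint.
# Assumptions: the $700 quoted is per month, and a month is ~730 hours.
MIN_OCUS = 4
MONTHLY_FLOOR_USD = 700
HOURS_PER_MONTH = 730

implied_ocu_hour = MONTHLY_FLOOR_USD / (MIN_OCUS * HOURS_PER_MONTH)
print(f"Implied price per OCU-hour: ${implied_ocu_hour:.3f}")   # ~$0.24
print(f"Annual floor before storing a single document: ${MONTHLY_FLOOR_USD * 12:,}")
```

However you slice it, that is a fixed base you pay whether or not a single query runs, which is exactly what "serverless" was supposed to spare you.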

ETL Phone Home

Then things got interesting.

Adam Selipsky gave the example of a retail company running its operations on DynamoDB and Aurora and needing to move data to Redshift for analysis. This is exactly the sort of situation I decried in last year's report for The New Stack: too many single-purpose databases, leaving users trying to copy data back and forth, with the attendant risk of losing control over their data.

It seems that AWS product managers had been hearing the same feedback that I had, but instead of committing to one general-purpose database, they are doubling down on their best-of-breed approach, enabling federated query in Redshift and Athena to reach other services — including third-party ones.
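For the curious, here is a minimal sketch of what driving a federated query from code looks like, using boto3's Athena client. The data source name (hypothetical_dynamo), database, table, and S3 output location are all hypothetical placeholders, and the query is just an illustration of joining an external source to data in the regular Glue/S3 catalog; it is not AWS's example, only the shape of the thing.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Illustrative federated query: join a table exposed through a (hypothetical)
# DynamoDB data source connector to a table in the default Glue/S3 catalog.
QUERY = """
SELECT o.order_id, o.total, c.segment
FROM "hypothetical_dynamo"."default"."orders" AS o
JOIN "awsdatacatalog"."analytics"."customers" AS c
  ON o.customer_id = c.customer_id
LIMIT 100
"""

resp = athena.start_query_execution(
    QueryString=QUERY,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
qid = resp["QueryExecutionId"]

# Poll until the query finishes (fine for a sketch; production code would back off).
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Query finished with state:", state)
```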

The big announcement was zero-ETL integration between Aurora and Redshift. This was advertised as being "near real time", with latency measured in seconds — good enough for most use cases, although something to be aware of for more demanding situations. The integration also works with multiple Aurora instances all feeding into one Redshift instance, which is what you want. Finally, the integration was advertised as being "all serverless", scaling up and down in response to data volume.

Take Back Control

So that's the integration — but that only addresses questions of technical complexity and maybe cost of storage. What about governance? Removing the need for ETL from one system into another does eliminate one big issue: the creation of a second copy of the data without the access controls and policy enforcement applied to the original. However, there is still a need to track metadata — data about the data itself.

Enter Amazon DataZone, which enables users to discover, catalog, share, and govern data across organisations. What this means in practice is that there is a catalog of available data, with metadata, labels, and descriptions. Authorised consumers of the data can search, browse, and request access, using existing tools: Redshift, Athena, and Quicksight. There is also a partner API for third-party tools; Snowflake and Tableau were mentioned specifically.

The Obligatory AI & ML Segment

I was not the only attendee to note that AWS spent an inordinate amount of time on AI & ML, given AWS' relatively weak position in that market.

Adam Selipsky talked up the "most complete set of machine learning and AI services", as well as claiming that Sagemaker is the most popular IDE for ML. A somewhat-interesting example is ML-powered forecasting: take a metric on a dashboard and extend it into the future, using ML to include seasonal fluctuations and so on. Of course this is only slightly more realistic than just using a ruler to extend the line, but at least it saves the time needed to make the line look credibly irregular.
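If you want to approximate the "ruler plus seasonal wiggles" version at home, a seasonal-naive forecast gets you surprisingly far: repeat the last observed season and add the recent trend. This is emphatically not what AWS's ML forecasting does under the hood; it is just a sketch of the baseline any fancier model has to beat.

```python
import numpy as np

def seasonal_naive_forecast(series: np.ndarray, season: int, horizon: int) -> np.ndarray:
    """Forecast by repeating the last full season, shifted by a crude linear trend.

    A deliberately dumb baseline: a ruler, plus seasonal wiggles.
    """
    last_season = series[-season:]
    # Average per-step change between the last two seasons as a trend estimate.
    trend = (series[-season:].mean() - series[-2 * season:-season].mean()) / season
    steps = np.arange(1, horizon + 1)
    return np.resize(last_season, horizon) + trend * steps

# Example: two years of fake monthly data with seasonality and a mild upward drift.
rng = np.random.default_rng(42)
months = np.arange(24)
history = 100 + 2 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3, 24)

print(seasonal_naive_forecast(history, season=12, horizon=6).round(1))
```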

More Metaphors

Then we got another beautiful video segment, which Adam Selipsky used to bridge somehow from underwater exploration to secure global infrastructure and GuardDuty. The main interesting announcement in this segment was Amazon Security Lake, a "dedicated data lake to combine security data at petabyte scale". Data in the lake can be queried with Athena, OpenSearch, and Sagemaker, as well as third-party tools.

It didn’t sound like there was massive commitment to this offering, so the whole segment ended up sounding opportunistic. The whole thing reminded me of Tim Bray's recent tale of how AWS never did get into blockchain stuff: as long as people are going to do something, you might as well make it easy.

In this case, what people are doing is dumping all their logs into one place in the hope that they can find the right algorithm to sift them and surface interesting patterns that map to security issues. The most interesting aspect of Security Lake is that it is the first tool to support the new Open Cybersecurity Schema Framework (OCSF) format. This is a nominally open format (Cisco and Splunk were mentioned as contributors), but it is notable that the examples in the OCSF white paper are all drawn from AWS services. OCSF is a new format, only launched in August 2022, so industry-wide adoption is still unclear.

Trekking Towards The End

By this point in the presentation I was definitely flagging, but there was another metaphor to torture, this time about polar exploration. Adam Selipsky contrasted the Scott and Amundsen expeditions, which seemed in remarkably poor taste, what with all the ponies and people dying — although the anecdote about Amundsen bringing a tin-smith to make sure his cans of fuel stayed sealed was admittedly a good one, and the only non-morbid part of the whole segment. Anyway, all of this starvation and death — of the explorers, I mean, not the keynote audience, although if I had gone before breakfast I would have been regretting it by this point — was in service of making the point that specific tools are better than general ones.

We got a tour of what felt like a large proportion of AWS' 600+ instance types, with shade thrown at would-be Graviton competitors that have not yet appeared, more ML references with Inferentia chips, and various stories about HPC. It was noticeable here that the customer HPC example ran on Intel Xeon chips, despite all of those earlier Graviton references.

One More Metaphor

There was one more very pretty video on imagination, but it was completely wasted on supply chains and call centres.

There was one last interesting offering, though, building on that earlier point about governance and access. This was AWS Clean Rooms, a solution to enable secure collaboration on datasets without sharing access to the underlying data itself. This is useful when working across organisational boundaries, because instead of copying data (which means losing control over the copy), it reads data in place, and thereby maintains restrictions on that data. Quicksight, Sagemaker, and Redshift all integrate with this service at launch.

There was one issue hanging over this whole segment, though. The Clean Rooms example was from advertising, which leads to a potential (perception of) conflict of interest with Amazon's own burgeoning advertising business. Like another new service, AWS Supply Chain, it's easy to imagine this offering being a non-starter simply because of the competitive aspect, much like retailers prefer to work with other cloud providers than AWS.

Turn It To Eleven

All in all, nothing earth-shattering — certainly nothing like Andy Jassy's cavalcade of product announcements, upending client and vendor roadmaps every minute or so. Maybe that is as it should be, though, for an event which is in its eleventh year, and it may well be why Adam Selipsky steered away from the old "the cloud is still in its infancy" line for a market that is so clearly maturing fast. In particular, we are seeing a maturation in the treatment of data, from a purely technical focus on specific tasks to a more holistic lifecycle view. This shift is very much in line with the expectations of the market; however, at least based on this keynote, AWS is playing catch-up rather than defining the field of competition. Notably, all of the new governance tools work only with analytical (OLAP) services, not with real-time transactional (OLTP) ones. Extending governance to the OLTP side would be a truly transformative move, especially if it could be accomplished without too much of a performance penalty.

The other thing that is maturing is AWS' own approach, moving inexorably up the stack from simple technical building blocks to full-on turnkey business applications. This shift does imply a change in target buyers, though; AWS' old IT audience may have been happy to swipe a credit card, read the docs, and start building, but the new audience they are courting with Supply Chain and Clean Rooms certainly will not. It will be interesting to watch this transformation take place.


  1. It was not clarified how much of that data is used to poison image search engines. 

  2. Relevant because Aurora (which is delivered through RDS) is compatible with PostgreSQL and MySQL, with custom storage enhancements providing that speed improvement. 

Piercing The Clouds

Now here’s an interesting document: "A Measurement Study of Server Utilization in Public Clouds". Okay, it’s from 2011, but otherwise seems legit.

Basically it’s a study of total CPU utilisation in both AWS and Azure (plus a brief reference to GoGrid, a now-defunct provider acquired by Datapipe, who in turn were acquired by Rackspace). The problem is that very few people out there are doing actual studies like this one; it’s mostly comparisons between on-prem and remote clouds, or between different cloud providers, rather than absolute utilisation. However, it’s interesting because it appears to undermine one of the biggest rationales for a move to the cloud: higher server utilisation.

Note the Y-axis on the study's utilisation chart: it peaks at 16%.

The study’s conclusion is as follows:

Apparently, the cost of a cloud VM is so low that some users choose to keep the VM on rather than having to worry about saving/restoring the state.

I wonder whether this study would produce substantially different results if repeated in 2019, with all the talk of serverless and other models that are much less dependent on maintaining state. It is plausible that in 2011 most workloads, even in public clouds, were the result of "lifting and shifting" older architectures onto new infrastructure. The interesting question is how many of those are still around today, and how many production workloads have been rearchitected to take advantage of these new approaches.

This is not just an idle question, although there is plenty of scope for snarkily comparing monolithic VMs to mainframes. Cloud computing, especially public cloud, has been able to claim the mantle of Green IT, in large part because of claims of increased utilisation – more business value per watt consumed. If that is not the case, many organisations may want to re-evaluate how they distribute their workloads. Measuring processor cycles per dollar is important and cannot be ignored, but these days the big public cloud providers are within shouting distance of one another on price, so other factors start to enter into the equation – such as environmental impacts.


Image by Samuel Zeller via Unsplash

Amazon's Private Cloud

Last week was AWS re:Invent, and I’m still dealing with the email hangover.¹ AWS always announce a thousand and one new offerings and services at their show, and this year was no exception. However, there is one announcement that I wanted to reflect upon briefly, out of however many there were during the week.

AWS Outposts are billed as letting users "Run AWS infrastructure on-premises for a truly consistent hybrid experience". This of course provoked a certain amount of hilarity in the parts of Twitter that have been earnestly debating the existence of hybrid cloud since the term was first coined.

On the surface, it might indeed seem somewhat strange for AWS, the archetypal public cloud in most people’s minds, to start offering hardware to be deployed on customers’ premises. However, to me it makes perfect sense.

Pace some ten-year-old marketing slogans which have not aged well, most companies do not start out with a hybrid cloud strategy. Instead, they find themselves forced by circumstances to formulate one in order to deal with all of the various departments that are out there doing their own thing. In this situation, the hybrid cloud strategy is simply recognition that different teams have different requirements and have made their own decisions based on those. All that corporate IT can do is try to gain overall visibility and attempt to ensure that all the various flavours of compute infrastructure are at least being used in ways which are sane, secure, and fiscally responsible (the order of the priorities may change, but that’s the list).

Some of the more wild-eyed predictions around hybrid cloud instead expected that workloads would be easily moved, not only between on- and off-premises compute infrastructure, but even between different cloud providers. In fact, it would be so easy that it would be possible to make minute-by-minute assessments of the cost of running workloads with different providers, and move them from one to another in order to take advantage of lower prices.

Obviously, that did not happen.

For the cloud broker model to work, several laws of both economics and physics would have to be suspended or circumvented, and nobody seems to have made the requisite breakthroughs.

To take just a few of the more obvious objections:

The Speed Of Light

Moving any meaningful amount of data around the public internet still takes time. If you are used to your local 100GbE LAN, it can be easy to forget this, but it is going to be a factor out there in the wild wild Web. This objection was obvious when we were talking about moving monolithic VMs around, but even if you assume truly immutable infrastructure, you are still going to have to shift at least some snapshot of the application state, and that adds up fast – let alone the rate of configuration drift of your "immutable" infrastructure with each new micro-release.
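To make the physics concrete, here is a rough calculation with made-up but plausible numbers: a modest snapshot, a few link speeds, and an assumed effective throughput well below the nominal line rate (which is usually the case once contention and protocol overhead are accounted for).

```python
# Rough transfer-time arithmetic for moving a workload snapshot.
# All figures are illustrative assumptions, not measurements.

def transfer_hours(size_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move size_tb terabytes over a link of link_gbps gigabits/second,
    assuming we only achieve `efficiency` of the nominal line rate."""
    size_bits = size_tb * 1e12 * 8          # terabytes -> bits (decimal TB)
    effective_bps = link_gbps * 1e9 * efficiency
    return size_bits / effective_bps / 3600

snapshot_tb = 5  # a modest application snapshot, state included

for label, gbps in [("100GbE LAN", 100), ("10 Gbps WAN", 10), ("1 Gbps internet", 1)]:
    print(f"{label:>16}: {transfer_hours(snapshot_tb, gbps):6.1f} hours")
```

Minutes on the LAN, the better part of a day over a 1 Gbps internet link: that gap is the arbitrage window closing before you ever get to move the workload.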

Transparent Pricing

The units of measure of different cloud providers are not easily comparable. How does the performance of an AWS M5 instance compare to an Azure Dv2-series? Well, you’d better know before you move production over there… And AWS has 24 instance types, whereas Azure has 7 different series, each with sub-types and options – and let’s not even talk about all the weird and wonderful single-use configurations in your local VMware or OpenStack service catalogue! How portable is your workload, really?
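Here is a sketch of the homework that comparison actually implies, with entirely hypothetical prices and benchmark scores: before any arbitrage is possible you need a common denominator, such as cost per unit of benchmark throughput, and the benchmark numbers are the part nobody hands you.

```python
# Normalising price-performance across providers, with hypothetical numbers.
# Both the hourly prices and the benchmark scores below are placeholders;
# in reality you would have to produce the benchmark figures yourself,
# per workload, which is precisely the hard part.

offerings = {
    "aws_m5.xlarge":    {"usd_per_hour": 0.20, "benchmark_score": 100},
    "azure_D4_v2":      {"usd_per_hour": 0.22, "benchmark_score": 105},
    "onprem_vmware_4c": {"usd_per_hour": 0.15, "benchmark_score": 80},
}

for name, o in sorted(offerings.items(),
                      key=lambda kv: kv[1]["usd_per_hour"] / kv[1]["benchmark_score"]):
    cost_per_unit = o["usd_per_hour"] / o["benchmark_score"]
    print(f"{name:18} ${cost_per_unit * 1000:.2f} per 1000 benchmark units per hour")
```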

Leaving Money On The Table

Or let’s take it from the other side: assume you have carefully architected your thing to use only minimum-common-denominator components that are, if not identical, at least similar enough across all of the various substrates they might find themselves running on. By definition, this means that you are not taking full advantage of the more advanced capabilities of each of those platforms. This limitation is not only at the ingredient level; you also have to make worst-case assumptions about the sorts of network bandwidth and latency you might have access to, or the sort of regulatory and policy compliance environment that you might find yourself operating within.

For all of these reasons and more, the dream of real-time cloud pricing arbitrage died a quick death, regardless of whether individual companies might use different cloud providers in various parts of their business.

AWS Outposts is not that. For a start, despite running physically on the customer’s premises, it is driven entirely from the (remote) AWS control plane. Instead, it has the potential to address concerns about physical location, together with associated concerns about latency and legal jurisdiction. Being AWS (with some help from VMware), it avoids the concern about different units of measure. For now, it only goes part of the way to resolving the final question about minimum-common-denominator ingredients, since at launch it only supports EC2. Additional features are expected shortly, however, including various storage options.

So yes, hybrid cloud. Turns out, it’s not only still a thing, but you can even get it from AWS. Who’d have thunk it?


  1. I managed to avoid any hangovers of the alcoholic variety; staying well hydrated in Las Vegas is good for multiple purposes. My inbox, however, is a mess.

Cloud Adoption Is Still Not A Done Deal

I have some thoughts on this new piece from 451 Research about IT provisioning. The report is all about how organisations that are slow to deliver IT resources will struggle to achieve their other goals. As business becomes more and more reliant on IT, the performance of IT becomes a key controlling factor for the overall performance of the entire business.

This connection between business and IT is fast becoming a truism; very few businesses could exist without IT, and most activities are now IT-enabled to some extent. If you’re selling something, you’ll have a website. People need to be able to access that website, and you need to make regular changes as you roll out new products, run sales promotions, or whatever. All of that requires IT support.

Where things get interesting is in the diagnosis of why some organisations succeed and others do not:

Just as internal IT culture and practices have an impact on provisioning time, they can also severely impact acceptance of technologies. Although the promise of machine learning and artificial intelligence (AI) is emerging among IT managers who took early steps toward machine-enabled infrastructure control, much work remains in convincing organizations of the technologies' benefits. In fact, the more manual the processes are for IT infrastructure management, the less likely that IT managers believe that machine learning and AI capabilities in vendor products will simplify IT management. Conversely, most managers in highly automated environments are convinced that these technologies will improve IT management.

If the IT team is still putting hands on keyboards for routine activities, that’s a symptom of some deeper rot.

It may appear easy to regard perpetual efforts of organizations to modernize their on-premises IT environments as temporary measures to extract any remaining value from company-owned datacenters before complete public cloud migration occurs. However, the rate of IT evolution via automation technologies is accelerating at a pace that allows organizations to ultimately transform their on-premises IT into cloudlike models that operate relatively seamlessly through hybrid cloud deployments.

The benefits of private cloud are something I have been writing about for a long time:

The reason this type of organisation might want to look at private cloud is that there’s a good chance that a substantial proportion of that legacy infrastructure is under- or even entirely un-used. Some studies I’ve seen even show average utilisation below 10%! This is where they get their elasticity: between the measured service and the resource pooling, they get a much better handle on what that infrastructure is currently used for. Over time, private cloud users can then bring their average utilisation way up, while also increasing customer satisfaction.

The bottom line is, if you already own infrastructure, and if you have relatively stable and predictable workloads, your best bet is to figure out ways to use what you already have more efficiently. If you just blindly jump into the public cloud, without addressing those cultural challenges, all you will end up with is a massive bill from your public cloud provider.

Large organisations have turning circles that battleships would be embarrassed by, and their radius is largely determined by culture, not by technology. Figuring out new ways to use internal resources more efficiently (private cloud), perhaps in combination with new types of infrastructure (public cloud), will get you where you need to be.

That cultural shift is the do-or-die, though. The agility of a 21st century business is determined largely by the agility of its IT support. Whatever sorts of resources the IT department is managing, they need to be doing so in a way which delivers the kinds of speed and agility that the business requires. If internal IT becomes a bottleneck, that’s when it gets bypassed in favour of that old bugbear of shadow IT.

IT is becoming more and more of a differentiator between companies, and it is also a signifier of which companies will make it in the long term – and which will not. It may already be too late to change the culture at organisations still mired in hands-on, artisanal provisioning of IT resources; for everyone else, completing that transition should be a priority.


Photo by Amy Skyer on Unsplash

When is a Cloud not a Cloud?

Further thoughts on yesterday’s post, prompted by some of the conversations about it


I realised that in yesterday’s post I implied that the key difference was between different markets; that in some markets, a full-on enterprise sales push is required, while in others you can rely on word of mouth to allow good products to emerge.

I do believe that there are macro divisions like that, but even within a more circumscribed group of products, you can still see big differences that are driven at least in part by sales activity.

Let's talk about cloud.

The conventional wisdom is that Amazon’s AWS dominates because of under-the-radar adoption by developers. In this view, teams who wish to move faster than corporate IT’s procurement and delivery cycles use their company credit cards to get resources directly from AWS and release their code directly to the cloud. By the time the suits realise this has happened, the developers have enough users on their side that it’s easier just to let them keep on doing what they’re doing.

There is a fair amount of truth to this story, and I have seen it play out more or less this way many times. What this simple scenario neglects is the other cloud vendors. There was a while back there when it wasn’t obvious that AWS was going to be the big winner. Google Compute Engine seemed like a much better bet; developers already had a high comfort level with using Google services. In addition, AWS offered low-level infrastructure building blocks, while Google also had a full-stack PaaS in App Engine. Conventional wisdom was that developers would prefer the way a PaaS abstracts away all the messy details of the infrastructure.

Of course it didn’t work out that way. Today GCE is an also-ran in this market, and even that only thanks to a pivot in its strategy. AWS dominates, but right behind them and growing fast we see Microsoft’s Azure.


Data from Synergy Research Group

And look who’s right behind Microsoft: IBM! Microsoft and IBM of course have huge traditional sales forces - but what some commentators seem to miss is that the bulk of AWS’ success is driven by its own big enterprise and channel sales force. A developer getting a dozen AMIs on the company AmEx might get usage off the ground, but it doesn’t drive much volume growth. Getting an enterprise contract for several thousand machines, plus a bunch of ancillary services? Now we’re talking.

Also note who’s missing from this list - anything driven by OpenStack. There are as many opinions on OpenStack technology as there are people working on it - which seems to be part of the problem. The one thing that seems clear is that it has not (yet?) achieved widespread adoption. I am seeing some interest on the SDN/NFV side, but most of those projects are still exploratory, so it remains to be seen how that market shakes out - especially with competition from commercial offerings from Cisco and VMware ramping up.

A good sales force won’t be able to push a terrible product, not least because sales people will jump ship in order to have a better product to sell, which makes their job easier. However, a good sales force can make the difference between a good product emerging from the churn, or not.

Underestimate sales at your peril.


Image by Jan Schulz via Unsplash

The curve points the way to our future


Just a few days ago, I wrote a post about how technology and services do not stand still. Whatever model we can come up with based on how things are right now, it will soon be obsolete, unless our model can accommodate change.

One of the places where we can see that is with the adoption curve of Docker and other container architectures. Anyone who thought that there might be time to relax, having weathered the virtualisation and cloud storms, is in for a rude awakening.

Who is using Docker?

Sure, the latest Docker adoption survey still shows that most adoption is in development, with 47% of respondents classifying themselves as "Developer or Dev Mgr", and a further 15% as "DevOps or Release Eng". In comparison, only 12% of respondents were in "SysAdmin / Ops / SRE" roles.

Also, 56% of respondents are from companies with fewer than 100 employees. This makes sense: long-established companies have too much history to be able to adopt the hot new thing in a hurry, no matter what benefits it might promise.

What does happen is that small teams within those big companies start using the new cool tech in the lab or for skunkworks projects. Corporate IT can maybe ignore these science experiments for a while, but eventually, between the pressure of those research projects going into production, and new hires coming in from smaller startups that have been working with the new technology stack for some time, they will have to figure out how they are going to support it in production.

Shipping containers

If the teams in charge of production operations have not been paying attention, this can turn into "Good news for Dev, bad news for Ops", as my colleague Sahil wrote on the official Moogsoft blog. When it comes to Docker specifically, one important factor for Ops is that containers tend to be very short-lived, continuing and accelerating the trend that VMs introduced. Where physical servers had a lifespan of years, VMs might last for months - but containers have been reported to have a lifespan four times shorter than VMs.

That’s a huge change in operational tempo. Given that shorter release cycles and faster scaling (up and down) in response to demand are among the main benefits that people are looking for from Docker adoption, this rapid churn of containers is likely to continue and even accelerate.

VMs were sometimes used for short-duration tasks, but far more often they were actually forklifted physical servers, and shoe-horned into that operational model. This meant that VMs could sometimes have a longer lifespan than physical servers, as it was possible for them simply to be forgotten.

Container-based architectures are sufficiently different that there is far less risk of this happening. Also, the combination of experience and generational turnover mean that IT people are far more comfortable with the cloud as an operational model, so there is less risk of backsliding.

The Bow Wave

The legacy enterprise IT departments that do not keep up with the new operational tempo will find themselves in the position of the military, struggling to adapt to new realities because of its organisational structure. Armed forces set up for Cold War battles of tanks, fighters and missiles struggle to deal with insurgents armed with cheap AK-47s and repurposed consumer technology such as mobile phones and drones.

In this analogy, shadow IT is the insurgency, able to pop up from nowhere and be just as effective as - if not more so than - the big, expensive technological solutions adopted by corporate. On top of that, the spiralling costs of supporting that technological legacy will force changes sooner or later. This is known as the "bow wave" of technological renewal:

"A modernization bow wave typically forms as the overall defense budget declines and modernization programs are delayed or stretched in the future," writes Todd Harrison of the Center for Strategic and International Studies. He continues: "As this happens the underlying assumption is that funding will become available to cover these deferred costs." These delays push costs into the future, like a ship’s bow pushes a wave forward at sea.

(from here)

What do we do?

The solution is not to throw out everything in the data centre, starting from the mainframe. Judiciously adapted, upgraded, and integrated, old tech can last a very long time. There are B-52 bombers that have been flown by three generations of the same family. In the same way, ancient systems like SABRE have been running since the 1960s, and still (eventually) underpin every modern Web 3.0 travel-planning web site you care to name.

What is required is actually something much harder: thought and consideration.

Change is going to happen. It’s better to make plans up front that allow for change, so that we can surf the wave of change. Organisations that wipe out trying to handle (or worse, resist) change that they had not planned for may never surface again.

Lowering the Barrier to Cloud

The 451 Group might not have the name recognition of some of the bigger analyst firms, with their magic quadrants and what-not, but there is a lot of value in their approach to the business. In particular, they have the only "cloud economist" I know, in the person of Dr Owen Rogers. Dr Rogers actually did his PhD on the economics of cloud computing, so he knows what he is talking about.

Dr Rogers also defies the stereotype of economists by being fun to talk to. He's also good on his personal blog - see this recent post for instance. I'll let you read the setup yourself - it's worth it - but I just wanted to comment on the closing paragraph:

Moving to the cloud might make cost-savings. But actually, it might just mean you consume more IT resources than you might have otherwise. This isn’t a bad thing in itself - just make sure you’re prepared, and that this extra consumption is deriving something real in return.

This is something that I have seen in action time and time again - although not so much recently. Certainly in the early days of cloud computing, when it was still widely seen as "virtualisation 2.0", many people jumped in thinking that cloud would substantially lower the cost of IT, by keeping the volume constant - or even shrinking it by controlling virtualisation sprawl - while lowering costs.

Unfortunately for people who built their business cases around this model, it didn't quite work out that way. Done right, cloud computing certainly lowers the unit cost of IT - the cost to deliver a certain quantum of IT service. Note that the unit here is not "a server", otherwise straight virtualisation would have been sufficient to deliver the expected benefits. People outside of IT cannot consume "a server" directly; they need a lot more to be done before it is useful to them:

  • Install and configure database
  • Install and configure middleware
  • Deploy application code
  • Reserve storage
  • Set up networking (routing, load balancer, firewall/NAT access, …)
  • Security hardening
  • Compliance checks
  • And so on and so forth

Doing all of this in a pre-cloud way was expensive. Even if all the IT infrastructure was in-house, it was expensive in opportunity costs - all the other tasks that those various teams had on their plates - and in the simple time necessary to deliver all of those different parts. Worse, it wasn't just a one-off cost, but an ongoing cost. This is where another term from economics gets introduced: technical debt, or the future work that IT commits itself to in order to maintain what they deliver.

All of this translated to a high barrier to access IT services. The only applications (in the business sense, not the App Store sense) that could be implemented were ones that could clear the high hurdle of being able to justify not only the initial outlay and delay, but all the future maintenance costs.

Cloud computing changes that equation by lowering the barrier to entry. The most expensive component of IT delivery, both in resources and in time, is manual human action. By automating that away, the unit cost of IT drops dramatically.
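As a very rough illustration of what "automating that away" looks like in practice, here is a sketch that collapses several of the checklist items from earlier into one repeatable API call using boto3. The AMI ID, security group, and bootstrap script are placeholders of my own, and a real deployment would use something like CloudFormation or Terraform rather than a bare script; the point is only that the manual steps become code that runs in minutes.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Bootstrap script standing in for the 'install and configure' checklist items.
# Everything here is illustrative; IDs and package names are placeholders.
USER_DATA = """#!/bin/bash
yum install -y postgresql-server nginx
systemctl enable --now postgresql nginx
# ... deploy application code, apply hardening baseline, register with compliance scanner ...
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder hardened base image
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder: pre-approved network rules
    UserData=USER_DATA,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "cost-centre", "Value": "marketing-campaign-site"}],
    }],
)

print("Launched:", response["Instances"][0]["InstanceId"])
```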

This is where Jevons' Paradox comes in. Instead of lowering the total cost of IT, this reduction in the unit cost unlocks all sorts of applications that were previously unthinkable. The result is that instead of delivering the same amount of IT for less money, companies end up delivering much more IT for the same budget.
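A toy calculation makes the point, with illustrative numbers of my own rather than anything from the article: if automation cuts the unit cost of an application to a fifth, and the lower barrier means five times as many applications now clear the business-case hurdle, the budget does not shrink at all; it simply buys far more IT.

```python
# Toy Jevons' Paradox arithmetic. All numbers are illustrative assumptions.
unit_cost_before = 10_000   # cost to stand up and run one application, pre-cloud
apps_before = 20            # applications that could justify that cost

unit_cost_after = unit_cost_before // 5    # automation cuts the unit cost to a fifth
apps_after = apps_before * 5               # the lower barrier unlocks five times as many apps

print(f"Before: {apps_before} apps at ${unit_cost_before:,} each "
      f"= ${unit_cost_before * apps_before:,}")
print(f"After:  {apps_after} apps at ${unit_cost_after:,} each "
      f"= ${unit_cost_after * apps_after:,}")
# Same total budget, five times as much IT delivered.
```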

How to ensure that this flowering of IT delivers business value? In yet another intersection of IT and economics, let us turn to the Financial Times and an article entitled "Big service providers turn to the cloud":

According to Forrester Research, technologies with a direct impact on a company’s business, such as customer relationship management services and analytics, eat up only about 20 per cent of IT spending.

That is where the value of cloud computing comes from: the good old 80/20 rule. Done right, cloud computing acts on both parts of the rule, making it easy to increase the 20% of IT that actually delivers value - by lowering the barrier to entry - while automating or outsourcing the keep-the-lights-on activity that consumes the other 80% of the IT budget.

So much for the dismal science!

Dark Security

Brian Krebs reports a spike in payment card fraud following the now-confirmed Home Depot security breach.

This is actually good news.

Wait, what?

Bear with me. There has always been a concern that many security breaches in the cloud are not being reported or disclosed. The fact that there are no other unexplained spikes in card fraud would tend to indicate that there are no huge breaches that have not been reported, frantic stories about billions of stolen accounts notwithstanding.

The day we should really start to worry is when we see spikes in card fraud that are not related to reported breaches.

Cloud Elephant

There are fashions in IT (and don't let anyone tell you us nerds are all perfectly rational actors). That goes double in IT marketing, where metaphors get adopted rapidly and even more rapidly perverted. If you need an example, look no further than the infamous "cloud computing" itself.

There is a new trend I am seeing, of calling cloud "the elephant in the room". I heard this the other day and went off into a little dwam, thinking of the cloud as an actual elephant.

There's an old story about six blind men who are asked by a king to determine what an elephant looked like by feeling different parts of the elephant's body. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe.

The king explains to them: "All of you are right. The reason every one of you is telling it differently is that each of you touched a different part of the elephant. So, actually, the elephant has all the features you mentioned."

Cloud is much the same. All the rival "cloud experts" are blind men feeling up different parts of the cloud elephant and describing radically different animals. Here is my little taxonomy of the Blind People¹ of Cloud.

Public Cloud Purists

There is no such thing as a private cloud!

If I had a euro for every tweet expressing that sentiment… well, I could buy my own Instagram for sure. I have already set out my own view of why they have got hold of the wrong end of the elephant (TL;DR: Must be nice to start from a clean sheet, but most people have datacenters full of legacy, and private cloud at least lets them use what they have more efficiently).

SaaSholes

Servers are out, platforms are in! Point and laugh at the server huggers!

Alright, clever-clogs: what do you think your "platforms" run on? Just because you are choosing to run at a far remove from the infrastructure doesn't mean it's not there. SaaS is great for getting stuff done fast with fairly standard business processes, but beyond a certain point you'll need to roll your own.

Cloud FUDdy Duddies

The cloud is insecure by definition! You can't use it for anything! The NSA and the Chinese military are vying with each other and with Romanian teenagers to be the first to take your business down!

Well, yes and no. I'd hazard that many traditional datacenters are quite a bit less secure than the big public clouds. Mostly this is a result of complexity stemming from legacy services, not to mention lack of sufficient dedicated resources for security - but does that matter for every single service? I'm going to go out on a limb here and say no.

Cloud Washers

Email with a Web UI in front of it - that's cloud, right? Can I have some money now?

Thankfully this trend seems to be dying down a bit. It's been a while since I have seen any truly egregious examples of cloud-washing.

Cloudframers

I was doing the same thing on my Model 317 Mark XXV, only with vacuum tubes! Now get off my lawn!

Sorry, mainframe folks - this is a little bit unfair, because the mainframe did indeed introduce many concepts that we in the open world are only adopting now. However, denying that cloud is significantly different from the mainframe is not helpful.

Flat Clouders

My IT people tell me all the servers are virtualised and that means we have cloud, right? When I send them an email asking them for something, the response I get a couple of weeks later says "cloud" right in the subject line…

Cloud is not just an IT project, and if it's treated as such, it'll fail, and fail badly. However, I still hear CIOs planning out a cloud without involving or even consulting the business, or allowing for any self-service capabilities at all.


This elephant is pretty big, though, and I am sure there are more examples out there. Why don't you share your own?


  1. Because it's the twenty-first century, and we believe in giving women equal opportunities to make fools of themselves. Somehow they mostly manage to resist taking that particular opportunity, though… 

Signalling

I've been blogging a lot about messaging lately, which I suppose is to be expected from someone in marketing. In particular, I have been focusing on how messaging can go wrong.

The process I outlined in "SMAC My Pitch Up" went something like this:

  • Thought Leaders (spit) come up with a cool new concept
  • Thought Leaders discuss the concept amongst themselves, coming up with jargon, abbreviations, and acronyms (oh my!)
  • Thought Leaders launch the concept on an unsuspecting world, forgetting to translate from jargon, abbreviations and acronyms
  • Followers regurgitate half-understood jargon, abbreviations and acronyms
  • Much clarity is lost

Now the cynical take is that the Followers are doing this in an effort to be perceived as Thought Leaders themselves - and there is certainly some of that going on. However, my new corollary to the theory is that many Followers are not interested in the concept at all. They are name-checking the concept to signal to their audience that they are aware of it and gain credibility for other initiatives, not to jump on the bandwagon of the original concept. This isn't the same thing as "cloudwashing", because that is at least about cloud. This is about using the cloud language to justify doing something completely different.

This is how we end up with actual printed books purporting to explain what is happening in the world of mobile and social. By the time the text is finalised it's already obsolete, never mind printed and distributed - but that's not the point. The point is to be seen as someone knowledgeable about up-to-date topics so that other, more traditional recommendations gain some reflected shine from the new concept.

The audience is in on this too. There will always be rubes taken in by a silver-tongued visionary with a high-concept presentation, but a significant part of the audience is signalling - to other audience members and to outsiders who are aware of their presence in that audience - that they too are aware of the new shiny concept.

It's cover - a way of saying "it's not that I don't know what the kids are up to, it's that I have decided to do something different". This is how I explain the difficulties in adoption of new concepts such as cloud computing¹ or DevOps. It's not the operational difficulties - breaking down the silos, interrupting the blamestorms, reconciling all the differing priorities; it's that many of the people talking about those topics are using them as cover for something different.


Images from Morguefile, which I am using as an experiment.


  1. Which my fingers insist on typing as "clod computing", something that is far more widespread but not really what we should be encouraging as an industry.