
When is a Cloud not a Cloud?

Further thoughts on yesterday’s post, prompted by some of the conversations about it


I realised that in yesterday’s post I implied that the key difference was between different markets; that in some markets, a full-on enterprise sales push is required, while in others you can rely on word of mouth to allow good products to emerge.

I do believe that there are macro divisions like that, but even within a more circumscribed group of products, you can still see big differences that are driven at least in part by sales activity.

Let's talk about cloud.

The conventional wisdom is that Amazon’s AWS dominates because of under-the-radar adoption by developers. In this view, teams who wish to move faster than corporate IT’s procurement and delivery cycles use their company credit cards to get resources directly from AWS and release their code directly to the cloud. By the time the suits realise this has happened, the developers have enough users on their side that it’s easier just to let them keep on doing what they’re doing.

There is a fair amount of truth to this story, and I have seen it play out more or less this way many times. What this simple scenario neglects is the other cloud vendors. There was a time, not so long ago, when it wasn’t obvious that AWS was going to be the big winner. Google seemed like a much better bet; developers already had a high comfort level with Google services. In addition, AWS initially offered bare virtual machines, while Google led with App Engine, a full-stack PaaS. Conventional wisdom was that developers would prefer the way a PaaS abstracts away all the messy details of the infrastructure.

Of course it didn’t work out that way. Today GCE is an also-ran in this market, and even that only thanks to a pivot in its strategy. AWS dominates, but right behind them and growing fast we see Microsoft’s Azure.


Data from Synergy Research Group

And look who’s right behind Microsoft: IBM! Microsoft and IBM of course have huge traditional sales forces - but what some commentators seem to miss is that the bulk of AWS’s success is driven by its own substantial enterprise and channel sales force. A developer spinning up a dozen EC2 instances on the company AmEx might get usage off the ground, but it doesn’t drive much volume growth. An enterprise contract for several thousand machines, plus a bunch of ancillary services? Now we’re talking.

Also note who’s missing from this list - anything driven by OpenStack. There are as many opinions on OpenStack technology as there are people working on it - which seems to be part of the problem. The one thing that seems clear is that it has not (yet?) achieved widespread adoption. I am seeing some interest on the SDN/NFV side, but most of those projects are still exploratory, so it remains to be seen how that market shakes out - especially with competition from commercial offerings from Cisco and VMware ramping up.

A good sales force won’t be able to push a terrible product, not least because sales people will jump ship in order to have a better product to sell, which makes their job easier. However, a good sales force can make the difference between a good product emerging from the churn, or not.

Underestimate sales at your peril.


Image by Jan Schulz via Unsplash

The curve points the way to our future


Just a few days ago, I wrote a post about how technology and services do not stand still. Whatever model we come up with based on how things are right now will soon be obsolete, unless that model can accommodate change.

One of the places where we can see that is with the adoption curve of Docker and other container architectures. Anyone who thought that there might be time to relax, having weathered the virtualisation and cloud storms, is in for a rude awakening.

Who is using Docker?

Sure, the latest Docker adoption survey still shows that most adoption is in development, with 47% of respondents classifying themselves as "Developer or Dev Mgr", and a further 15% as "DevOps or Release Eng". In comparison, only 12% of respondents were in "SysAdmin / Ops / SRE" roles.

Also, 56% of respondents are from companies with fewer than 100 employees. This makes sense: long-established companies have too much history to be able to adopt the hot new thing in a hurry, no matter what benefits it might promise.

What does happen is that small teams within those big companies start using the new cool tech in the lab or for skunkworks projects. Corporate IT can maybe ignore these science experiments for a while, but eventually, between the pressure of those research projects going into production, and new hires coming in from smaller startups that have been working with the new technology stack for some time, they will have to figure out how they are going to support it in production.

Shipping containers

If the teams in charge of production operations have not been paying attention, this can turn into Good news for Dev, bad news for Ops, as my colleague Sahil wrote on the official Moogsoft blog. When it comes to Docker specifically, one important factor for Ops is that containers tend to be very short-lived, continuing and accelerating the trend that VMs introduced. Where physical servers had a lifespan of years, VMs might last for months - but containers have been reported to have a lifespan four times shorter than VMs.

That’s a huge change in operational tempo. Given that shorter release cycles and faster scaling (up and down) in response to demand are among the main benefits that people are looking for from Docker adoption, this rapid churn of containers is likely to continue and even accelerate.
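To put rough numbers on that tempo shift, here is a back-of-the-envelope sketch. All the lifespans and the estate size below are assumed figures for illustration, not survey data - the only input taken from the reports above is the "containers live about four times shorter than VMs" ratio.

```python
# Illustrative churn arithmetic: if a container lives a quarter as long
# as a VM, an estate of constant size generates four times as many
# create/destroy events for Ops to track.

HOURS_PER_YEAR = 24 * 365

# Assumed average lifespans, roughly in line with the trend described above.
lifespan_hours = {
    "physical server": 3 * HOURS_PER_YEAR,  # years
    "VM": 30 * 24,                          # ~a month
    "container": 30 * 24 / 4,               # reported ~4x shorter than VMs
}

ESTATE_SIZE = 1000  # concurrently running instances, held constant

for kind, hours in lifespan_hours.items():
    # With estate size constant, lifecycle events per year scale
    # inversely with lifespan.
    events_per_year = ESTATE_SIZE * HOURS_PER_YEAR / hours
    print(f"{kind}: ~{events_per_year:,.0f} lifecycle events/year")
```

The absolute numbers don't matter; the point is that each step down in lifespan multiplies the operational event rate that monitoring and ops tooling has to absorb.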

VMs were sometimes used for short-duration tasks, but far more often they were forklifted physical servers, shoe-horned into the same operational model. This meant that VMs could sometimes have an even longer lifespan than physical servers, as it was possible for them simply to be forgotten.

Container-based architectures are sufficiently different that there is far less risk of this happening. Also, the combination of experience and generational turnover mean that IT people are far more comfortable with the cloud as an operational model, so there is less risk of backsliding.

The Bow Wave

The legacy enterprise IT departments that do not keep up with the new operational tempo will find themselves in the position of the military, struggling to adapt to new realities because of its organisational structure. Armed forces set up for Cold War battles of tanks, fighters and missiles struggle to deal with insurgents armed with cheap AK-47s and repurposed consumer technology such as mobile phones and drones.

In this analogy, shadow IT is the insurgency, able to pop up from nowhere and be just as effective as - if not more so than - the big, expensive technological solutions adopted by corporate. On top of that, the spiralling costs of supporting that technological legacy will force changes sooner or later. This is known as the "bow wave" of technological renewal:

"A modernization bow wave typically forms as the overall defense budget declines and modernization programs are delayed or stretched in the future," writes Todd Harrison of the Center for Strategic and International Studies. He continues: "As this happens the underlying assumption is that funding will become available to cover these deferred costs." These delays push costs into the future, like a ship’s bow pushes a wave forward at sea.

(from here)

What do we do?

The solution is not to throw out everything in the data centre, starting with the mainframe. Judiciously adapted, upgraded, and integrated, old tech can last a very long time. There are B-52 bombers that have been flown by three generations of the same family. In the same way, ancient systems like SABRE have been running since the 1960s, and still (eventually) underpin every modern Web 3.0 travel-planning web site you care to name.

What is required is actually something much harder: thought and consideration.

Change is going to happen. It’s better to make plans up front that allow for change, so that we can surf the wave of change. Organisations that wipe out trying to handle (or worse, resist) change that they had not planned for may never surface again.

Lowering the Barrier to Cloud

The 451 Group might not have the name recognition of some of the bigger analyst firms, with their magic quadrants and what-not, but there is a lot of value in their approach to the business. In particular, they have the only "cloud economist" I know, in the person of Dr Owen Rogers. Dr Rogers actually did his PhD on the economics of cloud computing, so he knows what he is talking about.

Dr Rogers also defies the stereotype of economists by being fun to talk to. He's also good on his personal blog - see this recent post for instance. I'll let you read the setup yourself - it's worth it - but I just wanted to comment on the closing paragraph:

Moving to the cloud might make cost-savings. But actually, it might just mean you consume IT resources that you might not have otherwise. This isn’t a bad thing in itself - just make sure you’re prepared, and that this extra consumption is deriving something real in return.

This is something that I have seen in action time and time again - although not so much recently. Certainly in the early days of cloud computing, when it was still widely seen as "virtualisation 2.0", many people jumped in thinking that cloud would substantially lower the total cost of IT: the volume of systems would stay constant - or even shrink, by reining in virtualisation sprawl - while costs fell.

Unfortunately for people who built their business cases around this model, it didn't quite work out that way. Done right, cloud computing certainly lowers the unit cost of IT - the cost to deliver a certain quantum of IT service. Note that the unit here is not "a server", otherwise straight virtualisation would have been sufficient to deliver the expected benefits. People outside of IT cannot consume "a server" directly; they need a lot more to be done before it is useful to them:

  • Install and configure database

  • Install and configure middleware

  • Deploy application code

  • Reserve storage

  • Set up networking (routing, load balancer, firewall/NAT access, …)

  • Security hardening

  • Compliance checks

  • And so on and so forth

Doing all of this in a pre-cloud way was expensive. Even if all the IT infrastructure was in-house, it was expensive in opportunity costs - all the other tasks that those various teams had on their plates - and in the simple time necessary to deliver all of those different parts. Worse, it wasn't just a one-off cost, but an ongoing one. This is where another borrowed term gets introduced: technical debt, the future work that IT commits itself to in order to maintain what it delivers.
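Summing the elapsed time of that checklist makes the expense concrete. The per-step durations below are purely assumed for illustration - every organisation's numbers will differ - but the shape of the result is familiar to anyone who has waited on a pre-cloud provisioning queue.

```python
# A rough sketch of why manual delivery was expensive: each step in the
# checklist above is handled by a different team, so elapsed times add up.
# All durations are assumed, illustrative working days.

manual_days = {
    "database install/config": 3,
    "middleware install/config": 2,
    "application deployment": 1,
    "storage reservation": 2,
    "network setup (routing, LB, firewall)": 5,
    "security hardening": 3,
    "compliance checks": 5,
}

total_manual = sum(manual_days.values())
print(f"Manual lead time: ~{total_manual} working days per application")
# With every step automated into a template, the same chain runs in
# minutes - effectively zero days of elapsed time.
```

Three to four working weeks of lead time per application, before any ongoing maintenance, is the hurdle every business idea had to clear.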

All of this translated to a high barrier to access IT services. The only applications (in the business sense, not the App Store sense) that could be implemented were ones that could clear the high hurdle of being able to justify not only the initial outlay and delay, but all the future maintenance costs.

Cloud computing changes that equation by lowering the barrier to entry. The most expensive component of IT delivery, both in resources and in time, is manual human action. By automating that away, the unit cost of IT drops dramatically.

This is where Jevons' Paradox comes in. Instead of lowering the total cost of IT, this reduction in the unit cost unlocks all sorts of applications that were previously unthinkable. The result is that instead of delivering the same amount of IT for less money, companies end up delivering much more IT for the same budget.
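The paradox is easy to see with a couple of lines of arithmetic. All the figures below are assumed for illustration - the point is the shape of the result, not the specific numbers.

```python
# Jevons' Paradox in miniature (all numbers assumed): automation cuts the
# unit cost of an IT service, demand more than compensates, and total
# spend rises even as each unit gets cheaper.

unit_cost_before = 10_000   # cost to deliver one "quantum" of IT service
unit_cost_after = 2_000     # after automating the manual steps away

units_before = 100          # applications that cleared the old hurdle
units_after = 700           # newly viable applications at the lower price

spend_before = unit_cost_before * units_before
spend_after = unit_cost_after * units_after

assert unit_cost_after < unit_cost_before   # each unit is cheaper...
assert spend_after > spend_before           # ...but total spend goes up
print(f"total IT spend: {spend_before:,} -> {spend_after:,}")
```

Seven times the IT delivered, at 40% more total spend: the budget doesn't shrink, the output explodes.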

How to ensure that this flowering of IT delivers business value? In yet another intersection of IT and economics, let us turn to the Financial Times and an article entitled Big service providers turn to the cloud:

According to Forrester Research, technologies with a direct impact on a company’s business, such as customer relationship management services and analytics, eat up only about 20 per cent of IT spending.

That is where the value of cloud computing comes from: the good old 80/20 rule. Done right, cloud computing acts on both parts of the rule, making it easy to increase the 20% of IT that actually delivers value - by lowering the barrier to entry - while automating or outsourcing the keep-the-lights-on activity that consumes the other 80% of the IT budget.
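A quick sketch of that double effect, with assumed numbers (the 80/20 split comes from the Forrester figure above; the size of the automation saving is purely illustrative):

```python
# The 80/20 effect: hold the budget fixed, shrink the keep-the-lights-on
# share through automation and outsourcing, and the value-delivering
# share grows without any new money.

budget = 100.0
lights_on_before = 0.80 * budget   # run-the-business spend
value_before = 0.20 * budget       # CRM, analytics, etc.

# Suppose automation trims lights-on costs by a quarter (assumed figure).
lights_on_after = lights_on_before * 0.75
value_after = budget - lights_on_after

print(f"value-delivering spend: {value_before:.0f} -> {value_after:.0f}")
# the value-delivering share doubles, from 20 to 40, on the same budget
```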

So much for the dismal science!

Dark Security

Brian Krebs reports a spike in payment card fraud following the now-confirmed Home Depot security breach.

This is actually good news.

Wait, what?

Bear with me. There has always been a concern that many security breaches in the cloud are not being reported or disclosed. The fact that there are no other unexplained spikes in card fraud would tend to indicate that there are no huge breaches that have not been reported, frantic stories about billions of stolen accounts notwithstanding.

The day we should really start to worry is when we see spikes in card fraud that are not related to reported breaches.

Cloud Elephant

There are fashions in IT (and don't let anyone tell you us nerds are all perfectly rational actors). That goes double in IT marketing, where metaphors get adopted rapidly and even more rapidly perverted. If you need an example, look no further than the infamous "cloud computing" itself.

There is a new trend I am seeing, of calling cloud "the elephant in the room". I heard this the other day and went off into a little dwam, thinking of the cloud as an actual elephant.

There's an old story about six blind men who are asked by a king to determine what an elephant looked like by feeling different parts of the elephant's body. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe.

The king explains to them: "All of you are right. The reason every one of you is telling it differently is that each of you touched a different part of the elephant. So, actually, the elephant has all the features you mentioned."

Cloud is much the same. All the rival "cloud experts" are blind men feeling up different parts of the cloud elephant and describing radically different animals. Here is my little taxonomy of the Blind People1 of Cloud.

Public Cloud Purists

There is no such thing as a private cloud!

If I had a euro for every tweet expressing that sentiment… well, I could buy my own Instagram for sure. I have already set out my own view of why they have got hold of the wrong end of the elephant (TL;DR: Must be nice to start from a clean sheet, but most people have datacenters full of legacy, and private cloud at least lets them use what they have more efficiently).

SaaSholes

Servers are out, platforms are in! Point and laugh at the server huggers!

Alright, clever-clogs: what do you think your "platforms" run on? Just because you are choosing to run at a far remove from the infrastructure doesn't mean it's not there. SaaS is great for getting stuff done fast with fairly standard business processes, but beyond a certain point you'll need to roll your own.

Cloud FUDdy Duddies

The cloud is insecure by definition! You can't use it for anything! The NSA and the Chinese military are vying with each other and with Romanian teenagers to be the first to take your business down!

Well, yes and no. I'd hazard that many traditional datacenters are quite a bit less secure than the big public clouds. Mostly this is a result of complexity stemming from legacy services, not to mention lack of sufficient dedicated resources for security - but does that matter for every single service? I'm going to go out on a limb here and say no.

Cloud Washers

Email with a Web UI in front of it - that's cloud, right? Can I have some money now?

Thankfully this trend seems to be dying down a bit. It's been a while since I have seen any truly egregious examples of cloud-washing.

Cloudframers

I was doing the same thing on my Model 317 Mark XXV, only with vacuum tubes! Now get off my lawn!

Sorry, mainframe folks - this is a little bit unfair, because the mainframe did indeed introduce many concepts that we in the open world are only adopting now. However, denying that cloud is significantly different from the mainframe is not helpful.

Flat Clouders

My IT people tell me all the servers are virtualised and that means we have cloud, right? When I send them an email asking them for something, the response I get a couple of weeks later says "cloud" right in the subject line…

Cloud is not just an IT project, and if it's treated as such, it'll fail, and fail badly. However, I still hear CIOs planning out a cloud without involving or even consulting the business, or allowing for any self-service capabilities at all.


This elephant is pretty big, though, and I am sure there are more examples out there. Why don't you share your own?


  1. Because it's the twenty-first century, and we believe in giving women equal opportunities to make fools of themselves. Somehow they mostly manage to resist taking that particular opportunity, though… 

Signalling

I've been blogging a lot about messaging lately, which I suppose is to be expected from someone in marketing. In particular, I have been focusing on how messaging can go wrong.

The process I outlined in "SMAC my pitch up" went something like this:

  • Thought Leaders (spit) come up with a cool new concept
  • Thought Leaders discuss the concept amongst themselves, coming up with jargon, abbreviations, and acronyms (oh my!)
  • Thought Leaders launch the concept on an unsuspecting world, forgetting to translate from jargon, abbreviations and acronyms
  • Followers regurgitate half-understood jargon, abbreviations and acronyms
  • Much clarity is lost

Now the cynical take is that the Followers are doing this in an effort to be perceived as Thought Leaders themselves - and there is certainly some of that going on. However, my new corollary to the theory is that many Followers are not interested in the concept at all. They are name-checking the concept to signal to their audience that they are aware of it and gain credibility for other initiatives, not to jump on the bandwagon of the original concept. This isn't the same thing as "cloudwashing", because that is at least about cloud. This is about using the cloud language to justify doing something completely different.

This is how we end up with actual printed books purporting to explain what is happening in the world of mobile and social. By the time the text is finalised it's already obsolete, never mind printed and distributed - but that's not the point. The point is to be seen as someone knowledgeable about up-to-date topics so that other, more traditional recommendations gain some reflected shine from the new concept.

The audience is in on this too. There will always be rubes taken in by a silver-tongued visionary with a high-concept presentation, but a significant part of the audience is signalling - to other audience members and to outsiders who are aware of their presence in that audience - that they too are aware of the new shiny concept.

It's cover - a way of saying "it's not that I don't know what the kids are up to, it's that I have decided to do something different". This is how I explain the difficulties in adoption of new concepts such as cloud computing1 or DevOps. It's not the operational difficulties - breaking down the silos, interrupting the blamestorms, reconciling all the differing priorities; it's that many of the people talking about those topics are using them as cover for something different.


Images from Morguefile, which I am using as an experiment.


  1. Which my fingers insist on typing as "clod computing", something that is far more widespread but not really what we should be encouraging as an industry. 

They put me in the Zoo

On Friday I had the chance to sit down with Alf, of Alf’s Zoo fame. We had a great chat about automation, cloud and… cheese? You’ll just have to watch the show!

If you’re wondering about the art behind me, here are the two prints: Ski Pluto and Visit Mars. Both are by Steve Thomas. I had seen them linked ages ago and filed the bookmark, and when I was furnishing my home office in the new house I finally had somewhere to put them.


On a related note, every time I try to do something with Google Hangouts, I gain a better understanding of why WebEx has been so successful. Recording a ten-minute show took half an hour of futzing around. It’s one thing to do this if guest and host know each other already, but this would make a terrible first impression.

Cloud as utility

People keep talking about cloud as being, or needing to become, like a utility. The analogy is that users don't want to own a power station, they want to close a switch and have the light come on.

I love analogies, and I especially love following them to their logical conclusions - so that's what I'm going to do.

Let's look at an existing utility like electricity. At least in developed countries, it's true that users don't spend a lot of time worrying about the generation and transmission of electricity; they just turn on the light.

Businesses, however, can't afford to do that. The potential consequences of anything disrupting the electricity supply are just too drastic, so businesses mitigate that risk with batteries and generators. Serious businesses test this equipment regularly to make sure their IT can keep operating for a while and shut down gracefully if the electricity supply is ever interrupted.

The fact you have a contract for electricity to be delivered over the grid doesn't mean you don't need UPS and gensets on site, and Schneider Electric, Rolls-Royce, and many others are doing very well selling that sort of kit despite the fact that the electricity grid has been a reliable reality for decades now.

The same applies to cloud: even if you have a public cloud that is as reliable as the electricity grid - a high bar indeed! - you will still need some amount of private cloud for the services that absolutely cannot go down or be disrupted in any way.

The Efficiency of Inefficiency

Yesterday I wrote about how the value of private cloud is enabled by past inefficiencies: Hunting the Elusive Private Cloud. There is another side to that coin that's worth looking at. Vendors - especially, but not only, hardware vendors - made handsome profits catering to that inefficiency. If your datacenter utilisation rate is below 10%, then 90%+ of your hardware spending is… well, not quite wasted, but there are possible improvements there.

A decade or so ago, all the buzz was about virtualisation. Instead of running one physical server with low utilisation, we would put a number of virtual servers on the same bit of physical kit, and save money! Of course that’s not how it worked out, because the ease of creating new virtual servers meant that the things started multiplying like bunnies, and the poor sysadmins found themselves worse off, with even more servers to manage instead of fewer!

Now the promise of private cloud is that all the spare capacity can finally be put to good use. But what does this mean for the vendors who were relying on pushing all that extra tin?

Well, we don’t know yet. Most private cloud projects are, frankly, still at a very low level of maturity, so the impact on hardware sales is limited so far. One interesting indicator, though, is what happens as public cloud adoption ramps up.

Michael Coté, of 451 Research, flagged this (emphasis mine):

Buried in this piece on Cisco doing some public cloud stuff is this little description about how the shift to public cloud creates a strategic threat to incumbent vendors:

Cloud computing represented an interesting opportunity to equipment companies like Cisco, as it aggregated the market down to fewer buyers. There are approximately 1,500 to 2,000 infrastructure providers worldwide versus millions of businesses; reducing the buyers to a handful would lower the cost of sales. And, as cloud sales picked up, reducing on-premises equipment spending, those providers would represent an increasing share of sales and revenue.

The problem with this strategy, as companies like Cisco Systems and Juniper Networks discovered, is the exchange of on-premises buyers to cloud buyers is not one to one. Cloud providers can scale investments further than any individual enterprise or business buyer, resulting in a lower need for continually adding equipment. This phenomenon is seen in server sales, which saw unit shipments fall 6 percent last year and value fall nearly twice as fast.

Even if we assume that a company offloading some servers to the public cloud instead of buying or replacing them in its own datacenter is doing so on a 1:1 basis - one in-house physical server replaced by one virtual server in the public cloud - the economics mean that the replacement will be less profitable for the equipment vendor. The public cloud provider will be able to negotiate a much better price per server because of their extremely high purchasing volume - and this doesn’t even consider the mega-players in cloud, who build their own kit from scratch.
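The vendor's problem can be sketched with two assumed price points - both numbers are purely illustrative, but the volume-discount gap between an enterprise buyer and a hyperscale buyer is the whole story:

```python
# Why even 1:1 replacement hurts the equipment vendor (assumed prices):
# the cloud provider's purchasing volume buys servers far below what an
# enterprise pays, so each server that moves shrinks vendor revenue.

enterprise_price = 8_000    # assumed street price per server, enterprise buyer
hyperscaler_price = 4_500   # assumed negotiated price at huge volume

servers_moved = 1_000       # enterprise servers replaced 1:1 in the cloud

revenue_before = servers_moved * enterprise_price
revenue_after = servers_moved * hyperscaler_price

print(f"vendor revenue on the same workload: "
      f"{revenue_before:,} -> {revenue_after:,}")
```

And that is the optimistic case: in practice the replacement is not 1:1, and the biggest cloud players build their own kit and buy nothing from the vendor at all.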

Since I mentioned Cisco, though, I should point out that they seem to be weathering the transition better than most. According to Forrester’s Richard Fichera, Cisco UCS at five years is doing just fine:

HP is still number one in blade server units and revenue, but Cisco appears to be now number two in blades, and closing in on number three world-wide in server sales as well. The numbers are impressive:

  • 32,000 net new customers in five years, with 14,000 repeat customers
  • Claimed $2 Billion+ annual run-rate
  • Order growth rate claimed in “mid-30s” range, probably about three times the growth rate of any competing product line.

To me, it looks like the UCS server approach of very high memory density works very well for customers who aren’t at the level of rolling their own servers, but have outgrown traditional architectures. Let’s see what the next five years bring.

Hunting the Elusive Private Cloud

While I work for a cloud management vendor, the following represents my personal opinion - which is why it’s published here and not at my work blog.

It seems that in IT we spend a lot of time re-fighting the same battles. The current example is “private cloud is not a cloud”.

Some might expect me to disagree, but in fact I think there’s more than a grain of truth in that assertion. The problem is in the definition of what is a cloud in the first place.

If I may quote the NIST definition yet again: (revs up motorcycle, lines up on shark tank)

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

The item most people point to when making the claim that "private cloud is not a cloud" is the fourth in that list: elasticity. Public clouds have effectively infinite elasticity for any single tenant: even Amazon itself cannot saturate AWS. By definition, private cloud does not have infinite elasticity, being constrained to whatever the capacity of the existing datacenter is.

So it’s proved then? Private cloud is indeed not a cloud?

Not so fast. There are two very different types of cloud user. If you and your buddies founded a startup last week, and your entire IT estate is made up of bestickered MacBooks, there is very little point in building a private cloud from scratch. At least while you are getting started and figuring out your usage patterns, public cloud is perfect.

However, what if you are, say, a big bank, with half a century’s worth of legacy IT sitting around? It’s all very well to say “shut down your data centre, move it all to the cloud”, but these customers still have mainframes. They’re not shuttering their data centres any time soon, even if all the compliance questions can be answered.

The reason this type of organisation might want to look at private cloud is that there’s a good chance that a substantial proportion of that legacy infrastructure is under- or even entirely un-used. Some studies I’ve seen even show average utilisation below 10%! This is where they get their elasticity: between the measured service and the resource pooling, they get a much better handle on what that infrastructure is currently used for. Over time, private cloud users can then bring their average utilisation way up, while also increasing customer satisfaction.
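Here is where the elasticity comes from, in numbers. The 10% figure echoes the studies mentioned above; the 60% target is an assumed, realistic ceiling rather than anything from those studies:

```python
# Where private cloud elasticity comes from: at 10% average utilisation,
# the existing estate already contains large headroom before any new
# hardware is needed. Target utilisation is an assumed figure.

installed_capacity = 1.0     # normalise total datacenter capacity to 1
avg_utilisation = 0.10       # the kind of figure the studies report
target_utilisation = 0.60    # realistic ceiling, well short of 100%

headroom = (target_utilisation - avg_utilisation) * installed_capacity
growth_factor = target_utilisation / avg_utilisation

print(f"usable headroom: {headroom:.0%} of capacity; "
      f"workloads can grow {growth_factor:.0f}x before new spend")
```

In other words, for this kind of organisation the "finite" private cloud behaves elastically for years, simply because so much of what they already own was sitting idle.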

Each organisation will have its own utilisation target, although 100% utilisation is unlikely for a number of reasons. In the same way, each organisation will have its own answer as to what to do next: whether to invest in additional data centre capacity for their private cloud, or to add public cloud resources to the mix in a hybrid model.

The point remains though that private cloud is unquestionably “real” and a viable option for these types of customers. Having holy wars about it among the clouderati is entertaining, but ultimately unhelpful.