Showing all posts tagged cloud:

Cloud Adoption Is Still Not A Done Deal

I have some thoughts on this new piece from 451 Research about IT provisioning. The report is all about how organisations that are slow to deliver IT resources will struggle to achieve their other goals. As business becomes more and more reliant on IT, the performance of IT becomes a key controlling factor for the overall performance of the entire business.

This connection between business and IT is fast becoming a truism; very few businesses could exist without IT, and most activities are now IT-enabled to some extent. If you’re selling something, you’ll have a website. People need to be able to access that website, and you need to make regular changes as you roll out new products, run sales promotions, or whatever. All of that requires IT support.

Where things get interesting is in the diagnosis of why some organisations succeed and others do not:

Just as internal IT culture and practices have an impact on provisioning time, they can also severely impact acceptance of technologies. Although the promise of machine learning and artificial intelligence (AI) is emerging among IT managers who took early steps toward machine-enabled infrastructure control, much work remains in convincing organizations of the technologies' benefits. In fact, the more manual the processes are for IT infrastructure management, the less likely that IT managers believe that machine learning and AI capabilities in vendor products will simplify IT management. Conversely, most managers in highly automated environments are convinced that these technologies will improve IT management.

If the IT team is still putting hands on keyboards for routine activities, that’s a symptom of some deeper rot.

It may appear easy to regard perpetual efforts of organizations to modernize their on-premises IT environments as temporary measures to extract any remaining value from company-owned datacenters before complete public cloud migration occurs. However, the rate of IT evolution via automation technologies is accelerating at a pace that allows organizations to ultimately transform their on-premises IT into cloudlike models that operate relatively seamlessly through hybrid cloud deployments.

The benefits of private cloud are something I have been writing about for a long time:

The reason this type of organisation might want to look at private cloud is that there’s a good chance that a substantial proportion of that legacy infrastructure is under- or even entirely un-used. Some studies I’ve seen even show average utilisation below 10%! This is where they get their elasticity: between the measured service and the resource pooling, they get a much better handle on what that infrastructure is currently used for. Over time, private cloud users can then bring their average utilisation way up, while also increasing customer satisfaction.

The bottom line is, if you already own infrastructure, and if you have relatively stable and predictable workloads, your best bet is to figure out ways to use what you already have more efficiently. If you just blindly jump into the public cloud, without addressing those cultural challenges, all you will end up with is a massive bill from your public cloud provider.

Large organisations have turning circles that battleships would be embarrassed by, and their radius is largely determined by culture, not by technology. Figuring out new ways to use internal resources more efficiently (private cloud), perhaps in combination with new types of infrastructure (public cloud), will get you where you need to be.

That cultural shift is the do-or-die, though. The agility of a 21st century business is determined largely by the agility of its IT support. Whatever sorts of resources the IT department is managing, they need to be doing so in a way which delivers the kinds of speed and agility that the business requires. If internal IT becomes a bottleneck, that’s when it gets bypassed in favour of that old bugbear of shadow IT.

IT is becoming more and more of a differentiator between companies, and a signifier of which companies will make it in the long term – and which will not. For organisations still mired in hands-on, artisanal provisioning of IT resources, it may already be too late to change the culture; for everyone else, completing that transition should be a top priority.


Photo by Amy Skyer on Unsplash

When is a Cloud not a Cloud?

Further thoughts on yesterday’s post, prompted by some of the conversations about it

I realised that in yesterday’s post I implied that the key difference was between different markets; that in some markets, a full-on enterprise sales push is required, while in others you can rely on word of mouth to allow good products to emerge.

I do believe that there are macro divisions like that, but even within a more circumscribed group of products, you can still see big differences that are driven at least in part by sales activity.

Let's talk about cloud.

The conventional wisdom is that Amazon’s AWS dominates because of under-the-radar adoption by developers. In this view, teams who wish to move faster than corporate IT’s procurement and delivery cycles use their company credit cards to get resources directly from AWS and release their code directly to the cloud. By the time the suits realise this has happened, the developers have enough users on their side that it’s easier just to let them keep on doing what they’re doing.

There is a fair amount of truth to this story, and I have seen it play out more or less in this way many times. What this simple scenario neglects is the other cloud vendors. There was a time when it wasn’t obvious that AWS was going to be the big winner. Google seemed like a much better bet: developers already had a high comfort level with Google services. In addition, AWS initially offered only raw virtual machines, while Google led with App Engine, a full-stack PaaS. Conventional wisdom was that developers would prefer the way a PaaS abstracts away all the messy details of the infrastructure.

Of course it didn’t work out that way. Today GCE is an also-ran in this market, and even that only thanks to a pivot in its strategy. AWS dominates, but right behind them and growing fast we see Microsoft’s Azure.

[Chart: data from Synergy Research Group]

And look who’s right behind Microsoft: IBM! Microsoft and IBM of course have huge traditional sales forces - but what some commentators seem to miss is that the bulk of AWS’ success is driven by its own big enterprise and channel sales force. A developer spinning up a dozen instances on the company AmEx might get usage off the ground, but it doesn’t drive much volume growth. Getting an enterprise contract for several thousand machines, plus a bunch of ancillary services? Now we’re talking.

Also note who’s missing from this list - anything driven by OpenStack. There are as many opinions on OpenStack technology as there are people working on it - which seems to be part of the problem. The one thing that seems clear is that it has not (yet?) achieved widespread adoption. I am seeing some interest on the SDN/NFV side, but most of those projects are still exploratory, so it remains to be seen how that market shakes out - especially with competition from commercial offerings from Cisco and VMware ramping up.

A good sales force won’t be able to push a terrible product, not least because sales people will jump ship in order to have a better product to sell, which makes their job easier. However, a good sales force can make the difference between a good product emerging from the churn, or not.

Underestimate sales at your peril.


Image by Jan Schulz via Unsplash

The curve points the way to our future

Just a few days ago, I wrote a post about how technology and services do not stand still. Whatever model we come up with based on how things are right now will soon be obsolete, unless that model can accommodate change.

One of the places where we can see that is with the adoption curve of Docker and other container architectures. Anyone who thought that there might be time to relax, having weathered the virtualisation and cloud storms, is in for a rude awakening.

Who is using Docker?

Sure, the latest Docker adoption survey still shows that most adoption is in development, with 47% of respondents classifying themselves as "Developer or Dev Mgr", and a further 15% as "DevOps or Release Eng". In comparison, only 12% of respondents were in "SysAdmin / Ops / SRE" roles.

Also, 56% of respondents are from companies with fewer than 100 employees. This makes sense: long-established companies have too much history to be able to adopt the hot new thing in a hurry, no matter what benefits it might promise.

What does happen is that small teams within those big companies start using the new cool tech in the lab or for skunkworks projects. Corporate IT can maybe ignore these science experiments for a while, but eventually, between the pressure of those research projects going into production, and new hires coming in from smaller startups that have been working with the new technology stack for some time, they will have to figure out how they are going to support it in production.

Shipping containers

If the teams in charge of production operations have not been paying attention, this can turn into Good news for Dev, bad news for Ops, as my colleague Sahil wrote on the official Moogsoft blog. When it comes to Docker specifically, one important factor for Ops is that containers tend to be very short-lived, continuing and accelerating the trend that VMs introduced. Where physical servers had a lifespan of years and VMs might last for months, containers have been reported to live only a quarter as long as VMs.

That’s a huge change in operational tempo. Given that shorter release cycles and faster scaling (up and down) in response to demand are among the main benefits that people are looking for from Docker adoption, this rapid churn of containers is likely to continue and even accelerate.
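
To get a feel for the scale of that shift, here is a rough back-of-the-envelope calculation in Python. The lifespans are illustrative assumptions on my part; only the four-to-one VM-to-container ratio comes from the reports mentioned above:

    # Back-of-the-envelope: provision/retire events per year for 1,000
    # concurrently running workloads. Lifespans are illustrative
    # assumptions, not survey data.
    DAYS_PER_YEAR = 365

    lifespan_days = {
        "physical server": 3 * DAYS_PER_YEAR,  # assume ~3 years
        "virtual machine": 90,                 # assume ~3 months
        "container": 90 / 4,                   # reported ~4x shorter than a VM
    }

    WORKLOADS = 1_000

    for kind, lifespan in lifespan_days.items():
        # In steady state, each workload is replaced once per lifespan.
        events_per_year = WORKLOADS * DAYS_PER_YEAR / lifespan
        print(f"{kind:16}: ~{events_per_year:,.0f} lifecycle events/year")

On those assumptions, the same thousand workloads generate roughly 300 lifecycle events a year as physical servers, about 4,000 as VMs, and over 16,000 as containers - a tempo that hands-on-keyboard operations simply cannot sustain.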

VMs were sometimes used for short-duration tasks, but far more often they were actually forklifted physical servers, and shoe-horned into that operational model. This meant that VMs could sometimes have a longer lifespan than physical servers, as it was possible for them simply to be forgotten.

Container-based architectures are sufficiently different that there is far less risk of this happening. Also, the combination of experience and generational turnover mean that IT people are far more comfortable with the cloud as an operational model, so there is less risk of backsliding.

The Bow Wave

The legacy enterprise IT departments that do not keep up with the new operational tempo will find themselves in the position of the military, struggling to adapt to new realities because of its organisational structure. Armed forces set up for Cold War battles of tanks, fighters and missiles struggle to deal with insurgents armed with cheap AK-47s and repurposed consumer technology such as mobile phones and drones.

In this analogy, shadow IT is the insurgency, able to pop up from nowhere and be just as effective as - if not more so than - the big, expensive technological solutions adopted by corporate. On top of that, the spiralling costs of supporting that technological legacy will force changes sooner or later. This is known as the "bow wave" of technological renewal:

"A modernization bow wave typically forms as the overall defense budget declines and modernization programs are delayed or stretched in the future," writes Todd Harrison of the Center for Strategic and International Studies. He continues: "As this happens the underlying assumption is that funding will become available to cover these deferred costs." These delays push costs into the future, like a ship’s bow pushes a wave forward at sea.

(from here)

What do we do?

The solution is not to throw out everything in the data centre, starting with the mainframe. Judiciously adapted, upgraded, and integrated, old tech can last a very long time. There are B-52 bombers that have been flown by three generations of the same family. In the same way, ancient systems like SABRE have been running since the 1960s, and still (eventually) underpin every modern Web 3.0 travel-planning web site you care to name.

What is required is actually something much harder: thought and consideration.

Change is going to happen. It’s better to make plans up front that allow for change, so that we can surf the wave of change. Organisations that wipe out trying to handle (or worse, resist) change that they had not planned for may never surface again.

Lowering the Barrier to Cloud

The 451 Group might not have the name recognition of some of the bigger analyst firms, with their magic quadrants and what-not, but there is a lot of value in their approach to the business. In particular, they have the only "cloud economist" I know, in the person of Dr Owen Rogers. Dr Rogers actually did his PhD on the economics of cloud computing, so he knows what he is talking about.

Dr Rogers also defies the stereotype of economists by being fun to talk to. He's also good on his personal blog - see this recent post for instance. I'll let you read the setup yourself - it's worth it - but I just wanted to comment on the closing paragraph:

Moving to the cloud might make cost-savings. But actually, it might just mean you consume IT resources that you might not have otherwise. This isn’t a bad thing in itself - just make sure you’re prepared, and that this extra consumption is deriving something real in return.

This is something that I have seen in action time and time again - although not so much recently. Certainly in the early days of cloud computing, when it was still widely seen as "virtualisation 2.0", many people jumped in expecting cloud to lower the total cost of IT: keep the volume of IT constant - or even shrink it by reining in virtualisation sprawl - while paying less for it.

Unfortunately for people who built their business cases around this model, it didn't quite work out that way. Done right, cloud computing certainly lowers the unit cost of IT - the cost to deliver a certain quantum of IT service. Note that the unit here is not "a server", otherwise straight virtualisation would have been sufficient to deliver the expected benefits. People outside of IT cannot consume "a server" directly; they need a lot more to be done before it is useful to them:

  • Install and configure database

  • Install and configure middleware

  • Deploy application code

  • Reserve storage

  • Set up networking (routing, load balancer, firewall/NAT access, …)

  • Security hardening

  • Compliance checks

  • And so on and so forth

Doing all of this in a pre-cloud way was expensive. Even if all the IT infrastructure was in-house, it was expensive in opportunity costs - all the other tasks that those various teams had on their plates - and in the simple time necessary to deliver all of those different parts. Worse, it wasn't just a one-off cost, but an ongoing one. This is where another term from economics gets introduced: technical debt, the future work that IT commits itself to in order to maintain what it delivers.

All of this translated to a high barrier to access IT services. The only applications (in the business sense, not the App Store sense) that could be implemented were ones that could clear the high hurdle of being able to justify not only the initial outlay and delay, but all the future maintenance costs.

Cloud computing changes that equation by lowering the barrier to entry. The most expensive component of IT delivery, both in resources and in time, is manual human action. By automating that away, the unit cost of IT drops dramatically.
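
As a minimal sketch of what "automating that away" can look like, here is a hypothetical example using boto3, the AWS SDK for Python. The region, AMI ID, names, and sizing are all placeholders, and a real pipeline would also cover the database, middleware, hardening, and compliance steps from the list above:

    # Minimal sketch: a few of the manual steps above (compute, storage,
    # firewall access, tagging) collapsed into one repeatable script.
    # Assumes AWS credentials are configured; all IDs are placeholders.
    import boto3

    REGION = "eu-west-1"
    ec2 = boto3.resource("ec2", region_name=REGION)
    client = boto3.client("ec2", region_name=REGION)

    # "Set up networking": a security group standing in for firewall/NAT rules.
    sg = client.create_security_group(
        GroupName="app-web", Description="Web tier access")
    client.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

    # "Reserve storage" and provision compute in a single call.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image
        InstanceType="t3.small",
        MinCount=1, MaxCount=1,
        SecurityGroupIds=[sg["GroupId"]],
        BlockDeviceMappings=[{"DeviceName": "/dev/xvda",
                              "Ebs": {"VolumeSize": 20, "VolumeType": "gp3"}}],
        TagSpecifications=[{"ResourceType": "instance",
                            "Tags": [{"Key": "compliance",
                                      "Value": "baseline"}]}])
    print("Launched:", instances[0].id)

Run once, a script like this saves an afternoon of ticket-passing; wrapped in a self-service portal and run thousands of times, it is what actually changes the unit cost.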

This is where Jevons' Paradox comes in. Instead of lowering the total cost of IT, this reduction in the unit cost unlocks all sorts of applications that were previously unthinkable. The result is that instead of delivering the same amount of IT for less money, companies end up delivering much more IT for the same budget.
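
To put toy numbers on that (the figures below are invented purely for illustration):

    # Toy numbers for Jevons' Paradox: a lower unit cost expands
    # consumption rather than shrinking the bill. Figures are invented.
    budget = 1_000_000        # annual IT budget, EUR

    unit_cost_manual = 5_000  # one hand-built application environment
    unit_cost_cloud = 1_000   # the same environment, delivered by automation

    print(budget // unit_cost_manual, "environments per year, before")  # 200
    print(budget // unit_cost_cloud, "environments per year, after")    # 1000

Same budget, five times the IT delivered: the money does not go away, it goes further.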

How to ensure that this flowering of IT delivers business value? In yet another intersection of IT and economics, let us turn to the Financial Times and an article entitled Big service providers turn to the cloud:

According to Forrester Research, technologies with a direct impact on a company’s business, such as customer relationship management services and analytics, eat up only about 20 per cent of IT spending.

That is where the value of cloud computing comes from: the good old 80/20 rule. Done right, cloud computing acts on both parts of the rule, making it easy to increase the 20% of IT that actually delivers value - by lowering the barrier to entry - while automating or outsourcing the keep-the-lights-on activity that consumes the other 80% of the IT budget.

So much for the dismal science!

Dark Security

Brian Krebs reports a spike in payment card fraud following the now-confirmed Home Depot security breach.

This is actually good news.

Wait, what?

Bear with me. There has always been a concern that many security breaches in the cloud are not being reported or disclosed. The fact that there are no other unexplained spikes in card fraud would tend to indicate that there are no huge breaches that have not been reported, frantic stories about billions of stolen accounts notwithstanding.

The day we should really start to worry is when we see spikes in card fraud that are not related to reported breaches.

Cloud Elephant

There are fashions in IT (and don't let anyone tell you us nerds are all perfectly rational actors). That goes double in IT marketing, where metaphors get adopted rapidly and even more rapidly perverted. If you need an example, look no further than the infamous "cloud computing" itself.

There is a new trend I am seeing, of calling cloud "the elephant in the room". I heard this the other day and went off into a little dwam, thinking of the cloud as an actual elephant.

There's an old story about six blind men who are asked by a king to determine what an elephant looked like by feeling different parts of the elephant's body. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe.

The king explains to them: "All of you are right. The reason every one of you is telling it differently is because each one of you touched the different part of the elephant. So, actually the elephant has all the features you mentioned."

Cloud is much the same. All the rival "cloud experts" are blind men feeling up different parts of the cloud elephant and describing radically different animals. Here is my little taxonomy of the Blind People1 of Cloud.

Public Cloud Purists

There is no such thing as a private cloud!

If I had a euro for every tweet expressing that sentiment… well, I could buy my own Instagram for sure. I have already set out my own view of why they have got hold of the wrong end of the elephant (TL;DR: Must be nice to start from a clean sheet, but most people have datacenters full of legacy, and private cloud at least lets them use what they have more efficiently).

SaaSholes

Servers are out, platforms are in! Point and laugh at the server huggers!

Alright, clever-clogs: what do you think your "platforms" run on? Just because you are choosing to run at a far remove from the infrastructure doesn't mean it's not there. SaaS is great for getting stuff done fast with fairly standard business processes, but beyond a certain point you'll need to roll your own.

Cloud FUDdy Duddies

The cloud is insecure by definition! You can't use it for anything! The NSA and the Chinese military are vying with each other and with Romanian teenagers to be the first to take your business down!

Well, yes and no. I'd hazard that many traditional datacenters are quite a bit less secure than the big public clouds. Mostly this is a result of complexity stemming from legacy services, not to mention lack of sufficient dedicated resources for security - but does that matter for every single service? I'm going to go out on a limb here and say no.

Cloud Washers

Email with a Web UI in front of it - that's cloud, right? Can I have some money now?

Thankfully this trend seems to be dying down a bit. It's been a while since I have seen any truly egregious examples of cloud-washing.

Cloudframers

I was doing the same thing on my Model 317 Mark XXV, only with vacuum tubes! Now get off my lawn!

Sorry, mainframe folks - this is a little bit unfair, because the mainframe did indeed introduce many concepts that we in the open world are only adopting now. However, denying that cloud is significantly different from the mainframe is not helpful.

Flat Clouders

My IT people tell me all the servers are virtualised and that means we have cloud, right? When I send them an email asking them for something, the response I get a couple of weeks later says "cloud" right in the subject line…

Cloud is not just an IT project, and if it's treated as such, it'll fail, and fail badly. However I still hear CIOs planning out a cloud without involving or even consulting the business, or allowing for any self-service capabilities at all.


This elephant is pretty big, though, and I am sure there are more examples out there. Why don't you share your own?


  1. Because it's the twenty-first century, and we believe in giving women equal opportunities to make fools of themselves. Somehow they mostly manage to resist taking that particular opportunity, though… 

Signalling

I've been blogging a lot about messaging lately, which I suppose is to be expected from someone in marketing. In particular, I have been focusing on how messaging can go wrong.

The process I outlined in "SMAC My Pitch Up" went something like this:

  • Thought Leaders (spit) come up with a cool new concept
  • Thought Leaders discuss the concept amongst themselves, coming up with jargon, abbreviations, and acronyms (oh my!)
  • Thought Leaders launch the concept on an unsuspecting world, forgetting to translate from jargon, abbreviations and acronyms
  • Followers regurgitate half-understood jargon, abbreviations and acronyms
  • Much clarity is lost

Now the cynical take is that the Followers are doing this in an effort to be perceived as Thought Leaders themselves - and there is certainly some of that going on. However, my new corollary to the theory is that many Followers are not interested in the concept at all. They are name-checking the concept to signal to their audience that they are aware of it and gain credibility for other initiatives, not to jump on the bandwagon of the original concept. This isn't the same thing as "cloudwashing", because that is at least about cloud. This is about using the cloud language to justify doing something completely different.

This is how we end up with actual printed books purporting to explain what is happening in the world of mobile and social. By the time the text is finalised it's already obsolete, never mind printed and distributed - but that's not the point. The point is to be seen as someone knowledgeable about up-to-date topics so that other, more traditional recommendations gain some reflected shine from the new concept.

The audience is in on this too. There will always be rubes taken in by a silver-tongued visionary with a high-concept presentation, but a significant part of the audience is signalling - to other audience members and to outsiders who are aware of their presence in that audience - that they too are aware of the new shiny concept.

It's cover - a way of saying "it's not that I don't know what the kids are up to, it's that I have decided to do something different". This is how I explain the difficulties in adoption of new concepts such as cloud computing1 or DevOps. It's not the operational difficulties - breaking down the silos, interrupting the blamestorms, reconciling all the differing priorities; it's that many of the people talking about those topics are using them as cover for something different.


Images from Morguefile, which I am using as an experiment.


  1. Which my fingers insist on typing as "clod computing", something that is far more widespread but not really what we should be encouraging as an industry. 

They put me in the Zoo

On Friday I had the chance to sit down with Alf, of Alf’s Zoo fame. We had a great chat about automation, cloud and… cheese? You’ll just have to watch the show!

If you’re wondering about the art behind me, here are the two prints: Ski Pluto and Visit Mars. Both are by Steve Thomas. I had seen them linked ages ago and filed the bookmark, and when I was furnishing my home office in the new house I finally had somewhere to put them.


On a related note, every time I try to do something with Google Hangouts, I gain a better understanding of why WebEx has been so successful. Recording a ten-minute show took half an hour of futzing around. It’s one thing to do this if guest and host know each other already, but this would make a terrible first impression.

Cloud as utility

People keep talking about cloud as being, or needing to become, like a utility. The analogy is that users don't want to own a power station, they want to close a switch and have the light come on.

I love analogies, and I especially love following them to their logical conclusions - so that's what I'm going to do.

Let's look at an existing utility like electricity. At least in developed countries, it's true that users don't spend a lot of time worrying about the generation and transmission of electricity; they just turn on the light.

Businesses, however, can't afford to do that. The potential consequences of anything happening to disrupt the electricity supply are just too drastic, so businesses mitigate that risk with batteries and generators. Serious businesses test their equipment regularly to make sure their IT can keep operating for a while and shut down gracefully if the electricity supply is ever interrupted.

The fact you have a contract for electricity to be delivered over the grid doesn't mean you don't need UPS and gensets on site, and Schneider Electric, Rolls-Royce, and many others are doing very well selling that sort of kit despite the fact that the electricity grid has been a reliable reality for decades now.

The same applies to cloud: even if you have a public cloud that is as reliable as the electricity grid - a high bar indeed! - you will still need some amount of private cloud for the services that absolutely cannot go down or be disrupted in any way.