Important, But Not Exciting

I don’t usually do breaking news here, but this story pushes a whole lot of my buttons. Today, VMware announced their intent to acquire SaltStack.

I have been following the automation market closely, at least since my time at BladeLogic. With BladeLogic acquired by BMC, and arch-rival Opsware by HP, much of the action moved to the open-source realm, with Puppet, Chef, Ansible, and SaltStack. Of those four, only Puppet remains as an independent player¹; Ansible was bought by Red Hat back in 2015, and of course Red Hat were themselves snapped up by IBM a few years later.

There was a gap after that, but just this month Chef was bought by Progress (who?), and now there is this SaltStack news.

While merging automation functionality in the shape of Ansible into Red Hat made a lot of sense, the reaction to the Chef acquisition was more one of bemusement. We discussed the acquisition on a recent episode of the Roll for Enterprise podcast, and the only strategic rationale any of us could see for the acquisition was a possible integration with WhatsUp Gold, as part of some sort of integrated detection and remediation play. I haven’t seen any further news from that direction, but it’s only been three weeks, so based on my own experience during acquisitions, I wouldn’t necessarily expect anything for a while yet.

The Action Moves Up The Stack

Automation’s role in the modern infrastructure stack explains both why automation specialists no longer have the sorts of growth prospects (and valuations) that they did fifteen years ago, at the time of the BladeLogic and Opsware acquisitions, and why they are being bought up now.

As the interface to software stacks moves further and further away from the bare metal, adding more and more layers of abstraction, the role of automation becomes that of plumbing: it’s important, perhaps crucial, but it’s invisible unless it breaks or fails. Arguably, this is a positive development, signifying the maturity of the automation market. Technology that is visible is cutting-edge and unreliable. There is a reason it’s called the bleeding edge; given the choice, I’d rather it be someone else’s blood getting spilled, while I hold back and learn from their mistakes.

Once that exciting technology settles down and becomes better understood, it disappears from our attention. We don’t think about what happens when we flip a switch, because we simply expect the light to come on. Intellectually we understand that there are all sorts of systems in place to make that light come on, that specialists work hard around the clock to look after those systems, and that there is a whole world of complexity around the generation and transmission of electricity, but ultimately all we care about is that we can reach out and say "let there be light".

Automation is getting to that point: it’s a must-have, and because it’s a must-have, it’s no longer tenable for everyone to have to roll their own. At the dawn of personal computing, it was reasonable to expect every computer owner to bring their own soldering iron. That was obviously not a setup that could drive mass adoption, and these days, our computers are sealed shut, with no moving parts, let alone user-serviceable ones.

In the same way, back in the dog days of the last millennium, it was reasonable and even expected of me, as a junior sysadmin in training, to bang out a script that would let an Apache web server running on HP-UX authenticate users from a Windows NT domain — because there was no off-the-shelf way to do it. When I had to add single sign-on to a project of mine last year, the SSO part took me one line of config, and I was done with that task and could move on to something more interesting and value-additive.

Automation is no longer something the CIO will care about. It’s expected and built-in, and the action has moved elsewhere. This is a victory: it’s not every software category that lasts long enough to become legacy!



  1. VMware had previously joined a funding round for Puppet; that round was led by Cisco, so it may be that Puppet’s new home is somewhere in Cisco’s Unified Computing division.

What if…

While it may seem obvious to those of us who have been around this market for a while, it was interesting to read that at the recent PuppetConf 2016 event, Puppet still felt the need to state that "In the future code is going to be managed and deployed by other code". If you’re surprised that this sentiment still needs to be articulated explicitly in 2016, you have not been paying attention.

It is certainly true that the leading edge is all "cattle, not pets" and "automate all the things", but there’s a pretty long tail behind that head. Only 2% of workloads are currently running in "the cloud" - although the precise figure depends on how you define that nebulous term. Everything else? Still running on premises, or at best in a colo.

The same goes for automation: for every fully-automated containerised full-stack deployment, there are fifty that are not automated.

Nevertheless, Puppet has built a $100M business on automation. I know a bit about this space, having worked at BladeLogic, one of the pioneers of automation. While BladeLogic and Puppet have a history, today I find myself wondering whether things might have gone differently.

Luke Kanies, the founder of Puppet, was a BladeLogic employee, although he left before I ever joined. From what I gather, he was a proponent of extending BladeLogic’s foundation in Network Shell, or NSH, into a free open-source platform, on which a commercial product could be built. Instead, BladeLogic’s management preferred to shut down the open-source NSH project and just use the technology inside the commercial BladeLogic product.

For those in the know, NSH was a fantastic tool. At its root it was a shell based on ZSH, but with network awareness layered on top. What this meant was that you could do things like this:

 host $ cp /etc/hosts //host1/etc/hosts               # copy a local file to a remote host
 host $ cd //host2/home                               # cd onto another machine entirely...
 host2 $ ps -ef | grep inetd                          # ...and subsequent commands run there
 host2 $ diff //host3/etc/passwd //host4/etc/passwd   # compare files across two further hosts
 host2 $ iostat 2 5                                   # familiar tools run against the current (remote) host
 host2 $ vi //nthost/c/AUTOEXEC.BAT                   # edit a file on a Windows box, in place
 host2 $ nexec nthost reboot "Let's reboot NT"        # execute a command on the remote host via nexec

You could copy files between systems, compare them or even edit them in place, and generally do all sorts of good things - including developing scripts to automate those tasks. For me at least, this was the first hint of the new world in which systems are no longer managed one by one, with admins ssh’ing into them individually, but in bulk, deploying a single config to many systems in one action. Best of all, it was multi-platform, abstracting away the differences between UNIX variants, and even working on Windows. ZSH on NT? That’s a major selling point right there!
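Because NSH was a real shell, those interactive commands composed naturally into scripts. As a purely illustrative sketch (the hostnames and file name here are invented for the example, not taken from any real deployment), pushing a single config file out to a set of hosts could be as simple as:

 for h in host1 host2 host3
 do
   cp ./ntp.conf //$h/etc/ntp.conf   # copy the same local file to each remote host
 done

Point that loop at a hundred hosts instead of three and you are administering a fleet, not a machine - which is exactly the shift described above.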

However, even among BladeLogic employees and users, the interactive mode of NSH was a well-kept secret, with most people working exclusively within the BladeLogic GUI. What might the combination of NSH and BladeLogic have become if it had been allowed to flourish? Could a free NSH have taken the place in sysadmins’ hearts that is currently occupied by Puppet? Would this have prevented the long, quiet death of BladeLogic?

20/20 Hindsight

Of course hindsight is a wonderful thing, and what is a fairly uncontroversial strategy to propose in 2016 was not so obvious fifteen years ago. Back then, there were vanishingly few successful hybrid business models that combined an open-source platform with a commercial component. It would not be fair to criticise BladeLogic’s management at the time for not taking that route - especially since they were outstandingly successful with the strategy that they did choose. The hybrid model would have been a major strategic choice, and there is no guarantee that VCs and other investors would have gone along with it.

I just wonder sometimes - what might have been, in a world where a free download of NSH was gaining mindshare in the data center at the same time as high-powered, PTC-trained sales people were gaining the trust of the C-suite?

Today, in 2016, Robert Stroud, a Forrester analyst speaking at the Puppet event, says the following:

Business services now involve infrastructure, middleware, and applications, said Stroud. "Moving forward, to be a complete automation environment, the successful player in the space will have a role in all three," he said.

At BladeLogic, we were saying that ten years ago. Regardless of the commercials, this market of automated server configuration management is arguably ten years behind where it should be. Sure, we can deploy things at scale, but managing them at scale is still a challenge - although the challenge is as much one of process as of tools. The cloud has enabled all sorts of new businesses and even entire new business models, but it is still constrained by the complexity and consequent fragility of the underlying infrastructure.

What might be possible if we had solved that problem ten years ago? What new possibilities might have been enabled, that we will only find out about years from now?