The Bigger Picture

Or, Why BladeLogic Isn’t Puppet

Disclaimer: In case you don’t know already, I work for BMC Software, having joined with the acquisition of BladeLogic. My current role is in marketing for BMC’s cloud computing and data center automation products, including BladeLogic. In other words, if you are looking for a 100% objective take, look elsewhere. The reason this piece is here rather than on a bmc.com site is to make it clear that this is personal opinion, not official corporate communication.

At the turn of the last century, life as a sysadmin was nasty and brutish. Your choice was either to spend a lot of time doing the same few things by hand, generally at the command line, or to spend a lot of time writing scripts to automate those tasks, and then even more time maintaining the scripts, all the while being interrupted to do those same things.

Then along came two automation frameworks, Opsware and BladeLogic, that promised to make everything better. The two tools had somewhat similar origin stories: both were based around cores that came out of the hosting world, and both then built substantial frameworks around those cores. In the end both were bought by larger players, Opsware by HP, and BladeLogic by BMC.

BladeLogic and Opsware both let sysadmins automate common tasks without scripting, execute actions against multiple servers at once, and assemble sub-tasks together to do full-stack provisioning. There were (and are) any number of differences in approach, many of which can still be traced back to the very beginnings of the two companies, but it’s safe to say that they are the two most similar tools in today’s automation marketplace.

Yes, marketplace. Because in the last decade or so a huge number of automation tools have emerged (or matured to the point of usefulness), mainly in the open-source arena. If you are managing tons of OSS Linux boxes, running an OSS application stack, it makes sense to have an OSS config management tool as well.

So far, so good. For a while the OSS tools flew under the radar of the big vendors, since the sorts of people willing to download a free tool and hack Ruby to do anything with it tended not to be the same sorts of people with the six- or seven-figure budgets for the big-vendor tools. As is the way of such things, though, the two markets started to overlap, and people started to ask why one tool was free and the other was expensive. This all came to a head when Puppet Labs published a document entitled "Puppet Enterprise vs BMC BladeLogic". Matthew Zito from BMC responded with An Open Letter to PuppetLabs on BMC’s site, which led to an exchange on Twitter, storified here.

This is the longer response I promised Luke Kanies.

When I was at BladeLogic, I was in pre-sales, and one of my jobs (on top of the usual things like demos, proof-of-concept activities, and RFI/RFP responses) was to help the sales people assemble the business case for buying BladeLogic. This usually meant identifying a particular activity, measuring how much time and effort it took to complete without BladeLogic, and then proposing savings through the use of BladeLogic. Because for some unknown reason prospective customers don’t take vendors’ word for these sorts of claims, we would then arrange to prove our estimates on some mutually-agreed subset of the measured activities.

We would typically begin by talking to the sysadmins and people in related teams. They had generally spent a lot of time scripting, automating and streamlining, and were eager to tell us that there was no reason to buy what we were selling, because there were no further savings to be made. Any requests could be delivered in minutes.

The interesting thing is, we would then go around the corner to the users and ask them how long they typically had to wait for requested services to be delivered. The answers varied, but they were generally measured not in minutes but in weeks - two to eight of them.

Weeks, not minutes

Where does that huge discrepancy come from? The question still matters, because, depressingly, the answer is the same today, a decade later.

The delay experienced by users is not caused by the fact that sysadmins are frantically flailing away at the keyboard for a full month to deliver something. Credit us sysadmins (for I am still one at heart) with a bit more sense than that. No, the problem is that there are many, many different people, functions and teams involved in delivering something to users. Even if each individual step is automated and streamlined and standardised to within epsilon of perfection, the overall process is delayed by the hand-offs between the steps, and even more so when the hand-off isn’t clean and something needs to be reworked, or worse, discussed and argued about.
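The arithmetic behind this is easy to sketch. Here is a toy model with entirely invented numbers: every team is individually fast, but the elapsed time is dominated by the queues between the hand-offs.

```python
# Toy model: a provisioning request passing through several teams.
# Each step's hands-on work is quick; the wait before each hand-off is not.
# All teams, durations and queue times are invented for illustration.

steps = [
    # (team, work_minutes, days spent queued waiting for the hand-off)
    ("network", 15, 3),
    ("storage", 10, 4),
    ("os_build", 30, 2),
    ("middleware", 20, 5),
    ("security_review", 10, 7),
    ("app_deploy", 25, 3),
]

work_hours = sum(work for _, work, _ in steps) / 60
wait_days = sum(queue for _, _, queue in steps)

print(f"hands-on work: {work_hours:.1f} hours")
print(f"waiting in queues: {wait_days} days")
```

Run it and the hands-on work comes to under two hours, while the queuing adds up to several weeks - which is roughly the discrepancy the users reported. Each sysadmin is telling the truth about the minutes; the users are telling the truth about the weeks.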

That is the difference between Puppet and BladeLogic. Puppet is trying to address one - or, in all fairness, several - of those steps, but BladeLogic is trying to address the entire process.

In a wider sense, this is what BMC is trying to do for all of enterprise IT. "Consumerisation of IT" has become a cliché, but it’s true that in the same way that Puppet has moved from a hobbyist market to the IT mainstream, Dropbox has moved from a home user market to a corporate one, AWS has eaten the cloud, and so on. We are living in a Cambrian explosion of tools and services.

Enterprise IT departments and the vendors that serve them cannot compete with these tools, and nor should they. The models of the new entrants give them economies - of scale, of attention, of access - that the traditional model cannot touch. The role for enterprise IT is to provide governance across the top of this extremely diverse and rapidly evolving ecosystem, and fill in the gaps between those tools so that we deliver the correct end product on time, on spec and on budget[1].

Sure, use Puppet - and Chef, and Ansible, and SaltStack, and CFEngine, and your home-grown scripts, and maybe even BladeLogic’s BLpackages. Just make sure that you are using them in a way that makes sense, and that meets the users’ needs. At the end of the day, that’s what we are all here for.


  1. Yes, all three. The point of automation is to resolve that dilemma. 

Amateur Hour

So I’m running a little webinar next week (sorry, internal only). It was supposed to have attendance somewhere between ten and twenty people, but the topic is hot, and the invite got forwarded around - a lot. One of the biggest cheeses in the company is attending, or at least, his exec assistant accepted the invite.

On the one hand: yay! An opportunity to shine! Okay, I need to put a lot more thought into my slides and delivery, but this is a chance to put all those presentation techniques into practice.

On the other hand, this has given me an unwelcome insight into what a nightmare it is to run events with large numbers of attendees with "normal" software. Since I have completely lost track of who is attending, I want to at least get a feel for how many people will be in the audience. There doesn’t seem to be an obvious way of doing this in Outlook, so I started googling - and that is when I came across this gem:

If you are the meeting organizer and you want to include each attendee's response to your meeting request, click the Tracking tab, press ALT+PRINT SCREEN, and then paste the image into a Microsoft Office program file.

That is from Microsoft’s own official Office support site, not some random "This One Weird Old Trick Will Help You Get The Most Out Of Outlook". O tempora, o mores...

Apple opens up OS X Beta Seed Program

Apple has always made beta versions of its operating systems (both MacOS and iOS) available to registered developers. What was not widely known is that there was also an invitation-only programme for non-developers to get access to pre-release versions of the OSen. This programme has now been opened up for anyone to join.

Here is the link - but I hope you won’t sign up.

Why?

Remember iOS 7? Before the thing was even out, it was being lambasted in the press - including the mainstream press - for being buggy and even bricking people’s phones. It turned out that the "bricking" was simply the built-in auto-expiry of the beta versions. Non-developers who had somehow got hold of an early beta but had not kept up with newer versions found out the hard way that betas expire after some time. Also, being beta versions, the quality of the software was - guess what? - not up to release standard yet.

In light of that experience, I do wonder whether opening up OS X even further is a wise move on Apple’s part. I really hope that I don’t have to read on the BBC next week that OS X 10.9.9 is really buggy and unstable, or something equally inane.

DevOps is killing us

I came across this interesting article about the changes that DevOps brings to the developer role. Because of my sysadmin background, I had tended to focus on the Ops side of DevOps. I had simply not realised that developers might object to DevOps!

I knew sysadmins often didn’t like DevOps, of course. Generalising wildly, sysadmins are not happy with DevOps because it means they have to give non-sysadmins access to the systems. This is not just jealousy (although there is often some of that), but a very real awareness that incentives are not necessarily aligned. Developers want change, sysadmins want stability.

Actually, that point is important. Let me emphasise it some more.

Developers want change, sysadmins want stability

Typical pre-DevOps scenario: developers code up an application, and it works. It passes all the testing: functional, performance, and user-acceptance. Now it’s time to deploy it in production - and suddenly the sysadmins are being difficult, complaining about processes running as root and world-writable directories, or talking about maintenance windows for the deployment. Developers just want the code that they have spent all this time working on to get out there, and the sysadmins are in the way.

From the point of view of the sysadmins, it’s a bit different. They just got all the systems how they like them, and now developers are asking for the keys? Not only that, but their stuff is all messy, with processes running as root, world-writable directories, and goodness knows what. When the sysadmins point out these issues and propose reasonable corrections, the devs get all huffy, and before you know it, the meeting has turned into a blamestorm.

The DevOps movement attempts to address this by getting developers more involved in operations: instead of throwing their code over the proverbial wall between Dev and Ops, they become responsible not just for deployment but also for the ongoing support and maintenance of that code. In other words, developers have to start carrying pagers.

The default sysadmin assumption is that developers can’t wait to get the root password and go joy-riding in their carefully maintained datacenter - and because I have a sysadmin background, sell to sysadmins, and hang out with sysadmin types, I had unconsciously bought into that. However, now that someone has pointed it out, it does make sense that developers would not want to take up that pager…

When is a Wearable not a Wearable

CNET reports that Nike are getting out of the wearable market.

But seriously.

The tech commentariat is going crazy, passing around the conspiracy theory that Tim Cook, who sits on Nike’s board, killed the FuelBand effort.

M. G. Siegler:

I’ve been saying this for a while. Tim Cook remaining on Nike’s board while Apple readies its own health/fitness-focused device was awkward at best.

[John Gruber](http://daringfireball.net/linked/2014/04/18/nike-fuelband):

Interesting, particularly when you consider that Tim Cook sits on the Nike board.

Nick Heer:

It’s worth remembering that Tim Cook is on Nike’s board, and that Nike and Apple have long collaborated on fitness.

I don’t think that Tim Cook strong-armed Nike into dropping the FuelBand to favour Apple’s own iWatch. It’s simply that "wearable tech" is not a discrete device. I wore a Jawbone Up! band for more than a year, but when I somehow ripped off the button end against a door frame, I couldn’t be bothered to replace it, and I don’t miss it. The only thing that class of wearables - Fitbit, FuelBand, Up!, they’re all interchangeable for the purposes of this discussion - does is generate moderately interesting stats on your everyday level of activity. Sure, it was mildly amusing to get back to the hotel at the end of a long day wandering around Manhattan and upload that I had walked thirty thousand steps, but I knew already that I had done a ton of walking simply by the feel of my legs and feet! When I took actual exercise, the Up! didn’t track it very well, because a wrist-mounted sensor isn’t very good at working out how hard you are cycling or snowboarding.

Instead, I use an app on my iPhone, which does GPS tracking. I still have an ancient - I mean, vintage - 4S, so I don’t have any of the fancy-schmancy M7 sensors in the 5S, but even so, it’s much better at actually tracking exercise than the dedicated devices.

Sure, I could go all in and get one of those heartbeat monitors and what-not, but quite frankly I can’t be bothered. I don’t exercise to beat some abstract number, although I admit to keeping an eye on my average speed on the bicycle. Given the low frequency of my outings (surprise! two kids take up a whole bunch of your free time), I’m quite happy with my 30 km/h average, without needing to plot heartbeat, hydration, etc.

It is looking more and more like Apple is not building a watch at all, and I think that’s exactly the right move. We have spent the last twenty years or so reducing the number of devices we carry. Why reverse that trend now?

Nike just saw which way the wind was blowing - maybe with a little help from Tim Cook.

Tech in Layers

This post is the follow-up to the earlier post Caveat Vendor.

It’s not easy being an enterprise IT buyer these days. Time was, the IBM sales rep would show up once a year, you would cover your eyes and sign the cheque, and that would be it for another year. Then things got complicated.

Nowadays there are dozens of vendors at each level of your stack, and more every day. Any hope of controlling the Cambrian explosion of technologies in the enterprise went out of the window with the advent of cloud computing. Large companies used to maintain an Office of Vendor Relations, or words to that effect. Their job was to try to keep the number of vendors the company dealt with to a minimum. The rationale was simple: if we have an existing enterprise licensing agreement with BigCo, introducing PluckyStartupCo just adds risk, not to mention complicating our contract negotiations. It doesn’t matter if PluckyStartupCo has better tech than BigCo, we get good enough tech from BigCo as part of our Enterprise Licensing Agreement (all hail the ELA!). On top of that we are pretty sure BigCo is going to be around for the long haul, while PluckyStartupCo is untested and will either go bust or get bought by someone else, either BigCo or one of their competitors. Job done, knock off at five on the dot.

The dependency of business on technology is too close for that approach to work any longer. The performance of IT is the performance of the business, to all intents and purposes. If you don’t believe me, try visiting an office when the power is down or the net connection has gone out. Not much business gets done without IT.

If companies effectively hobble themselves with antiquated approaches to procurement, they leave themselves wide open to being outmanoeuvred by their competitors. When the first non-tech companies built websites, plenty of their competitors thought it was a fad, but those early adopters stole a march on their slow-moving erstwhile peers.

All of this does not even count shadow IT. Famously, Gartner predicted that "By 2015, 35 percent of enterprise IT expenditures for most organizations will be managed outside the IT department's budget." People are bringing their own services, not just the techie example of devs[1] spinning up servers in AWS, but business users - Muggles - using Dropbox, Basecamp, Google Hangouts and Docs, Slideshare, Prezi, and so on and on.

What is a CIO or CTO to do?

First of all, don’t try to stop the train - you’ll just get run down. The only thing you can do is to jump on board and help drive it. Note that I said help: IT can no longer lead from high up an ivory tower. IT leaders need to engage with their business counterparts, or those business users will vote with their feet and go elsewhere for their IT services.

IT leaders can help the business by building a policy framework to encompass all of these various technologies. Most importantly, this framework has to be flexible and assume that new technologies will appear and gain adoption. Users won’t listen if you say "we’ll review doing a pilot of that cool new tech in six months, and if that goes well we can maybe roll it out a year from now". By the time you’ve finished speaking, they’ve already signed up online and invited all their team-mates.

Fortunately, there is a technique that can be used to build these frameworks. It’s called pace layering, and was introduced by Gartner in 2012.

Pace Layering

Pace layering divides IT into three layers:

  • Systems of Record — Established packaged applications or legacy homegrown systems that support core transaction processing and manage the organization's critical master data. The rate of change is low, because the processes are well-established and common to most organizations, and often are subject to regulatory requirements.

  • Systems of Differentiation — Applications that enable unique company processes or industry-specific capabilities. They have a medium life cycle (one to three years), but need to be reconfigured frequently to accommodate changing business practices or customer requirements.

  • Systems of Innovation — New applications that are built on an ad hoc basis to address new business requirements or opportunities. These are typically short life cycle projects (zero to 12 months) using departmental or outside resources and consumer-grade technologies.

The idea is that there are parts of the business where the emphasis is on absolute stability, reliability and predictability - the Systems of Record, which are basically the boring stuff that has to work but isn’t particularly interesting. Other areas need to move fast and respond with agility to changing conditions - the Systems of Innovation, the cool high-tech leading-edge stuff. In between there are the Systems of Differentiation, which are about what the company actually does. They need to move fast enough to be relevant, but still be reliable enough to use - often a tough balancing act.

If we overlay two common IT methodologies, we start to understand many of the ongoing arguments of the last few years, where it seems that practitioners are talking past each other:

DevOps, Agile, and so on are approaches that work well for Systems of Innovation. Here it is appropriate to "move fast and break stuff", to fail fast, to A/B test things on the live environment. Run with what works; the goal is speed and quickly figuring out the Next Big Thing.

ITIL is the opposite: it’s designed for a cautious approach to mature and predictable systems, with the ultimate goal of maintaining stability. Here, the absolute goal is not breaking stuff; the whole moving fast part is completely subordinate to that goal.

I hear a lot of complaints along the lines of "ITIL is a bottleneck on IT", "ITIL is the anti-Agile", and so on. In the same vein, ITIL sages throw up their hands in horror at some of what the new crowd are getting up to. The thing is, they’re both right.

Use ITIL where it’s appropriate, and be agile where that is appropriate. Try to figure out the demarcation points, the hand-offs, and where you can, by all means take the best of both worlds. You don’t want to have to wait for the weekly Change Advisory Board meeting to make a minor website change, but when something goes wrong, you’ll be thankful for having some sort of an audit trail in place.
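That demarcation can be made concrete as a simple routing rule: classify each system into a pace layer, and let the layer pick the process. A minimal sketch - the system names, layer assignments and policies here are all invented for illustration, not Gartner's:

```python
# Map each pace layer to an illustrative change-management policy.
LAYER_POLICY = {
    "record":          {"method": "ITIL",   "approval": "Change Advisory Board"},
    "differentiation": {"method": "hybrid", "approval": "lightweight review"},
    "innovation":      {"method": "Agile",  "approval": "peer review, auto-deploy"},
}

# Which layer does each system live in? (Hypothetical examples.)
SYSTEM_LAYER = {
    "general_ledger": "record",
    "pricing_engine": "differentiation",
    "campaign_microsite": "innovation",
}

def change_process(system: str) -> str:
    """Route a change through the process appropriate to its layer."""
    policy = LAYER_POLICY[SYSTEM_LAYER[system]]
    return f"{system}: {policy['method']} process, approval via {policy['approval']}"

print(change_process("general_ledger"))
print(change_process("campaign_microsite"))
```

The point of writing it down like this is that the policy lives in one place: when a system migrates between layers - as successful Systems of Innovation tend to do - you change its classification, not the whole governance framework.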

From operations to planning

So much for operations - but the same applies to planning. The Systems of Record might have a roadmap planned out years in advance, with little reason to deviate from it. The motto here is "if it ain’t broke, DON’T TOUCH IT!". This is the part of the company where mainframes still lurk. Why? Because they work. It’s as simple as that.

On the other hand, the Systems of Innovation are where you want to let a thousand clouds bloom (to coin a phrase). Let people try all those wonderful services I mentioned earlier. The ones that are useful and safe will gradually get adopted further back from the bleeding edge. If something doesn’t make the cut, no matter - you didn’t bet the company on it.

To return to one of my pet arguments, the Systems of Record are virtualised, the Systems of Differentiation are on a private cloud, and the Systems of Innovation are in the public cloud. This way, the strengths of each model fit nicely with each layer’s requirements.

The problems arise if you get your layers mixed up - but that’s outside the scope of this post.


  1. Not "debs", whatever autocorrect might think. Although the image is amusing. 

The Bus Number

Everyone in the trenches of IT knows that Dilbert is drawn from real life - and [today’s strip](http://dilbert.com/fast/2014-04-15/) is no exception. Does your IT organisation rely on knowledge that is held by only a few people, or maybe even one person? This is known as the bus number or bus factor - basically, the number of people who would have to be hit by a bus for the organisation to be severely affected. With slightly less black humour, let’s say they win the lottery, get a dream job elsewhere, or simply feel sick and don’t come in to work one day. Regardless of the details, most organisations have a bus number of one.

That’s right: if a single person is missing, the organisation is unable to operate normally. Note that it’s rarely just one person, old Bob who’s been there since the beginning and knows everything. It’s more likely to be Alice the database whisperer, Carol who knows which arcane options to give the batch jobs so they’ll all go through in sequence, Dave who has the admin password to the core routers, and so on.
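One way to make this risk visible is a quick knowledge inventory: map each critical task to the people who can do it, and look for tasks that depend on a single person. A minimal sketch, with hypothetical names and tasks echoing the examples above:

```python
# Which critical tasks can be done by which people? (All hypothetical.)
knowledge = {
    "tune the database": {"alice"},
    "run the batch jobs in sequence": {"carol"},
    "core router admin password": {"dave"},
    "restore from backup": {"alice", "bob"},
}

def bus_factor(knowledge):
    # If any task is covered by exactly one person, the bus factor is 1.
    return min(len(people) for people in knowledge.values())

def single_points_of_failure(knowledge):
    # Tasks that stall completely if one specific person is absent.
    return {task: next(iter(people))
            for task, people in knowledge.items() if len(people) == 1}

print("bus factor:", bus_factor(knowledge))
print("at risk:", single_points_of_failure(knowledge))
```

This takes the simple view that the bus factor is the head-count behind the most thinly-covered task; the stricter definition (the smallest group of people whose combined absence blocks something) needs a hitting-set calculation, but the simple view is usually enough to find the Alices, Carols and Daves.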

If you’re not in IT, this may seem crazy, but trust me, this is exactly how most teams work. Everyone knows they should get around to documenting this stuff, if not outright automating it, but there are always more fires to put out than there are hours in the day… This sort of thing is a serious business risk, too, but it’s invisible to management unless they go looking for it, and few managers are inclined to look for additional problems.

Everything muddles along - until Bob wins the lottery...

Marketing is a four-letter word

To techies, "marketing" has always been a four-letter word. My own first exposure was in the Browser Wars of the Nineties, when Microsoft was widely held to have won by "marketing" (pronounced with extreme scorn). That attitude is alive and well today:

Luckily, this time around there are people calling out that attitude as misguided: What Heartbleed Can Teach The OSS Community About Marketing:

Remember CVE-2013-0156? Man, those were dark days, right?

Of course you don’t remember CVE-2013-0156.

[…]

Compare "Heartbleed" to CVE-2014-0160, which is apparently the official classification for the bug. (I say "apparently" because I cannot bring myself to care enough to spend a minute verifying that.) Crikey, what a great name that is.

The open-source community has always had a bit of a hair-shirt attitude to it: if you can’t hand-code your own YAML config files at the command-line and recompile your entire toolchain at least once a month, you are not worthy. That’s all well and good, but at some point you have to be able to talk to other people, especially when what you do has become critical infrastructure. This may - shock, horror - require you to engage with marketing.

Guess what? It’s not that bad. The sort of "marketing" that offends OSS purists is generally bad marketing. It’s mis-targeted, content-free, and exaggerated - and none of those things are goals of good marketing. I can say that, since I have the word "marketing" right there on my business card, and also patched my home Linux server against Heartbleed.

Better marketing, and communications in general, is the only way we are going to solve the problem of poorly-funded and -managed open-source software becoming critical infrastructure. From the WSJ (emphasis mine):

Matthew Green, an encryption expert at Johns Hopkins University, said OpenSSL Project is relatively neglected, given how critical of a role it plays in the Internet. Last year, the foundation took in less than $1 million from donations and consulting contracts.

Donations have picked up since Monday, Mr. Marquess said. This week, it had raised $841.70 as of Wednesday afternoon.

Guess what? Eight hundred bucks doesn’t buy much code review. "I think I’m going to audit some code for buffer overflows this Saturday night", said no-one ever. The way to get more attention to the problem… is marketing.

Caveat Vendor

The world of IT is changing fast, and the rate of change is itself increasing. This insight is almost a tautology by now, I admit, but what I want to explore here is what this means for enterprise software customers and vendors.

A recent newsletter from Ben Kepes, of Diversity fame, includes this aside in the introduction (emphasis mine):

One theme that I kept coming back to was the risk that IT vendors run in continuing to communicate under the false expectation that enterprises are all at the same level of adoption. It's easy to sit in a conference room and think that everyone "gets it", but the reality is that organizations are complex beasts and sometimes it's hard for IT practitioners to look beyond simply "keeping the lights on". IT vendors have a responsibility to articulate their solutions in a way that helps them plot a progressive journey from where they are today to a better future.

This is basically a reformulation of Clayton Christensen’s Innovator’s Dilemma. If you innovate beyond your customers’ needs, your position is at risk of being undermined by less sophisticated offerings that match those customers’ current needs. The insidious part is that for a while this feels good: the customers you lose first are the ones who were a stretch for your product, and where the return on investment was weakest. Dropping them raises the average among your remaining customers - for a while.

The really insidious part is what Ben Kepes points out: not all customers are at the same point along that journey. Vendors have to strike a balance between the Scylla of out-innovating their less sophisticated customers and the Charybdis of not keeping up with their more sophisticated customers’ requirements. This dilemma has been articulated already by Massimo Re Ferré, so I will just point to his blog for the full treatment.

For vendors, the trick is finding that sweet spot in the market. You don’t want to chase every will-o’-the-wisp of a promising technology - nobody has the development dollars to do that. You also can’t afford to get left behind by your customers’ adoption rate. You have to surf that wave constantly, and never fall off.

Sticking with the surfing metaphor for a moment, surfers like smooth, predictable waves. The worst thing for surfers is chop - but chop is exactly what we have in the enterprise software market. The pace of technology churn is accelerating.

It used to take years, sometimes even decades, for new technologies to be widely adopted in the enterprise. Sure, there might be testbeds experimenting with crazy notions such as relational databases or object-oriented programming, but they remained isolated.

This gave vendors the time to adapt their own offerings, whether that meant using the New Hotness themselves, integrating with it, or managing it - or buying a smaller player who had worked it out faster. Once they had built an offering, they could also count on getting revenue from it for a good few years, as their customers kept on using the now mature and widely adopted tech.

So what changed? The pace of adoption of new technologies in the enterprise has accelerated enormously, and indeed is still accelerating. The plateau of productivity has also shortened, since there is a new technology wave coming right behind the current one, and another one even closer behind that.

Meanwhile, vendors have not been able to accelerate the pace of development, distribution and adoption of their offerings to match this heightened tempo. In other words, the rate of churn within the enterprise is now, or will soon be, inside vendors' OODA loops.

What does this mean? Is it the end of the commercial software vendors, as some argue?

I don't think so, but it is the end of what has been business as usual, and some vendors will not survive the transition. To survive and prosper, vendors need to let go of their old engagement models.

Agile is not just a development buzzword, it needs to be adopted as part of the DNA of vendors. Multi-year product roadmaps are as out of date as the Soviet Union’s infamous Five-Year Plans. By the time the roadmap reaches its first milestone, it’s already obsolete. If vendors - or customer architects[1] - try to stick to their roadmaps, they will find themselves wildly out of step with their customers after a few rounds, which is hardly a recipe for long-term success.

So, both customers and vendors need to build flexibility into their plans. No more huge monolithic projects that will show return on investment only after twelve, eighteen or even twenty-four months. Instead, modular projects with loosely-coupled milestones, with each milestone able to stand alone in terms of its own RoI. In this model, milestones can be rearranged, cancelled or replaced with others as the project develops and its goals and usage evolve.

This new model also requires a different type of sales process.

Traditionally, vendors start engaging with customers only once a sales process begins, with activity really ramping in the delivery phase. Once implementation is complete, the vendor generally disengages, handing the implemented solution over to the customer's IT team to manage in production. As I posted yesterday, customers don’t see a huge amount of value in this approach, especially nowadays.

The New Normal requires a much more constant engagement between vendors and users. This contact begins long before an official sales or procurement cycle, as part of what is known as the Zero Moment Of Truth or ZMOT. The ZMOT requires a constant exchange between vendor and user. This conversation will cover topics such as:

  • New technology developments

  • Changing user requirements

  • Level of satisfaction with existing technologies

  • Constraints on adoption of new technologies

  • Expected benefits from new technologies

This conversation has obvious benefits for the vendor in enabling them to prepare their solution for the user, reducing the lag time between when users want to adopt a new technology and the vendor being able to support it. The benefit is also for the user, because the resulting solution will be much more closely matched to their actual requirements, rather than to the vendor's theory of what those requirements might be at some point in the future.

The gap between the user's requirements and the vendor's projections is often large, and the reason is that lag time. Vendors must divine not simply what users want today, but what they will want a year from now, when their solution will actually be ready - a much more difficult task.

Vendors talk about building a "trusted advisor" relationship with customers. Sometimes this is no more than code for "persuading customers to buy whatever we are selling, sight unseen", but when it is done right, this relationship is a two-way one. The vendor-adviser needs to understand the customer's needs in depth to provide good advice.

Good advisers do not hand out their advice and then disappear, they stick around for the long haul and are available to give advice at any juncture. The rapid churn of technologies means that advice is needed regularly, not just every twelve months or so, when it’s time to set the budget for next year or renew the maintenance contract.


Next up: what can customers do in this brave new world? Follow Mum’s advice: Tech in Layers.


Serendipity: Seth Godin’s post for today has an example of a company failing to engage in this way. His example is more consumer-oriented - an inkjet printer - but the general point about continuous engagement holds. Vendors that sell something, then disappear until it’s time for them to sell something again, are actively pushing customers away. What do you call a vendor who doesn’t vend?


  1. If you think it’s only vendors who have inflexible roadmaps, I have a bridge here that is going cheap to a good home.