
That Old Enterprise Software Business

This is an interesting time in the enterprise software market. The shift to the cloud is causing massive disruption, with storied old names struggling to reinvent themselves, and scrappy startups taking over the world.

One interesting story is that Hewlett Packard Enterprise, or HPE - one of the units that the old HP split itself into - is looking into selling off some of its software assets. I am especially interested in one of these, namely Mercury, because I worked there for several years.

To recap, Mercury (née Mercury Interactive) was a leader in automated software testing. Its products covered functional testing (XRunner, WinRunner, QuickTest Professional), load testing (LoadRunner), and test management (TestDirector, later renamed Quality Center).

Basically what these tools let you do is to record a user interacting with an application, and then parameterise the recording - i.e. turn it into a little programme that you can replay, so that you can select different menu options and make sure that they all work, or simulate ten thousand users all hitting the app simultaneously and make sure it doesn’t fall down, or whatever.
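To make that concrete, here is a loose sketch in Python of what "parameterising a recording" means. This is purely illustrative - the Mercury tools had their own scripting languages - and the app object and its methods are hypothetical stand-ins for the steps captured in a recording.

    # Purely illustrative sketch - not Mercury code. The `app` object and its
    # methods are hypothetical stand-ins for the steps captured in a recording.
    from concurrent.futures import ThreadPoolExecutor

    def replay_order_flow(app, menu_option, quantity):
        """The recorded session, with its hard-coded values turned into parameters."""
        app.open_menu("Orders")
        app.select_option(menu_option)       # was a fixed literal in the raw recording
        app.set_field("quantity", quantity)
        return app.submit()

    def functional_test(make_app, menu_options):
        """Functional testing: drive every menu option through the same recorded flow."""
        return {opt: replay_order_flow(make_app(), opt, quantity=1) for opt in menu_options}

    def load_test(make_app, n_users=10_000):
        """Load testing: replay the same flow as thousands of simulated concurrent users."""
        with ThreadPoolExecutor(max_workers=500) as pool:
            return list(pool.map(lambda _: replay_order_flow(make_app(), "Standard", 1),
                                 range(n_users)))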

LoadRunner in particular was the default standard at the time, dominating its market segment. I worked on the functional test products, but because of language coverage, I had at least basic familiarity with the whole product set.

Out Of The Blue

In 2006, Mercury was trying to bridge that difficult chasm from $1B to $2B in revenue, but was caught up in a wider stock option backdating scandal. As its founder was exiled and the stock price cratered, HP swooped in and bought up the whole shop in a fire sale.


UPDATE: Christopher Lochhead interviewed Dr Giora Yaron on his excellent Legends & Losers podcast about this history. Dr Yaron was on the board of Mercury at the time, while Chris himself was the CMO there. It was fascinating to hear the inside account of what happened during that tumultuous time.


What happened after that is fairly typical of such acquisitions. Despite some big talk and high expectations, I think it is fair to say that the Mercury products languished within HP - or at the very least failed to evolve with any urgency.

After The Acquisition

This is unfortunately a pattern with technology acquisitions. There is often a honeymoon period, where increased funding enables delivery of long-awaited functionality, but the releases after that get hollowed out into maintenance releases, and even those start coming further and further apart, frustrating customers and insiders alike.

In the case of HP and Mercury, the slow-down was particularly unfortunate because the acquisition came just as enterprise application development was moving from proprietary protocols and GUIs to web applications talking HTTP. Mercury’s powerful and extremely customisable products were arguably overkill for simpler web applications, and a new generation of tools dedicated to that purpose was beginning to emerge. Given its singular focus on testing, and based on what I know of the company culture pre-acquisition, I am quite certain that an independent Mercury would have addressed the challenge head-on and remade itself for that new world. After all, Mercury was fully aware of web applications, offering services that simulated user access from locations around the world to give a continuous view of sites’ performance as real users experienced it.

Unfortunately, that’s not what happened under HP stewardship. The Mercury products languished in the Software group, which itself represented only around 2% of HP revenues. As often happens in such cases, much of the original talent left, creating a flourishing "alumni" network. I was part of that diaspora, so I can’t talk about the quality of their replacements, but there was certainly a discontinuity, and the Mercury tools never recovered their previous dominance.

Looking On The Bright Side

None of this is to say that the acquisition was not a success by its own lights. HP still uses the Mercury technology in all sorts of places. Many enterprise HP customers did not move to the new technologies with any urgency, and therefore continued to have a business need for Mercury’s powerful tools. This means that the products still throw off enough stable and predictable revenue to make a private equity purchase potentially attractive.

HP also adopted the Mercury notion of Business Technology Optimization, or BTO. This acted as a framework for many of HP’s other software initiatives, although it seems to have been abandoned more recently.

The failure of this acquisition is a failure of potential. What might an independent Mercury have become if the M2B project had been successful in taking it to $2B in revenue and beyond? What might Mercury have built in the world of the web and the cloud? As is often the case with these acquisitions, there is no way to know.

We do know roughly what the conditions are under which acquisitions succeed or fail. Arguably, the Mercury acquisition was more successful than most in no small part because HP kept the Mercury R&D centre in Israel, somewhat isolated from the rest of the company. This enabled the ex-Mercury staff to keep some sense of their own distinct identity, and keep developing their technology even after the acquisition.

There is an alternative view: that while isolation and even benign neglect may allow for survival of the startup within the acquiring company, they will not build true success. That requires a deeper integration of the startup's mentality into the acquiring company’s culture. Very few company cultures have the strength to be able to integrate a challenging outside vision without triggering an immune reaction of sorts.

The only way to integrate acquired companies - their technology and their culture - successfully, is to have strong executive guidance over a period of years. This has been a long-time failing at HP, to the despair of its longer-serving employees. In the absence of that guidance, benign neglect is maybe all that can be hoped for.

Pity the Vendor


Dear users: It’s not easy, being on the vendor side. Let’s assume, for the sake of argument, that you work for a reputable, non-scammy vendor. Let’s also take it as a given that you have done your homework, so you are not spamming people indiscriminately, but trying to reach people whom you genuinely believe to have a need for your product.

How do you go about reaching those people?

Most people are understandably very reluctant to publish their contact details everywhere, because less principled sales people have already saturated their tolerance for randos showing up in their inbox out of the blue. This means that there is a (justifiably) high barrier to getting their attention.

There is also something of a tragedy of the commons effect, as all the vendors converge on those people who have been less diligent about scrubbing their personal details off the Internet.

Here’s the deal: when I contact someone, it’s because I genuinely believe that they might have a need for what I’m selling. It’s a pretty niche market - which is why it makes sense to hire human sales people to build and maintain a small number of customer relationships in the first place. This means I take the time to do my homework, and my approach is as specific as I can make it, based on public information.

If I’m working on selling into BigCorp and I get a number for Alice or Bob who work there, my first step is not to pick up the phone. Rather, I go off to research what they do at BigCorp, what they personally care about, and so on. I use all of this to build a pitch that might go something like this:

Hi, sorry to contact you uninvited, but I know you are working on A, B, and C as part of an initiative at BigCorp. I have worked with other companies in your position such as WidgetTicklers, who were able to complete their own similar project under budget and ahead of schedule. They did this thanks to key capabilities enabled by our technology: …

You get the idea: it’s not a form letter I’m blasting out, it’s carefully targeted and as specific as I can make it with information at hand.

So what’s the problem? The problem is that the hit rate on doing this is still terrible. It's not mis-targeting, because often when I do finally manage to make contact by some other means, it turns out that I was right, there really was a need - but that was the wrong channel to connect with the person.

Here’s my question: what is a good way to contact you? Assume I have something you want, but not something that would show up in your normal reading. Maybe it’s launched since the last time you went looking for this sort of thing. I’ve done the prep work of identifying a potential interest you might have for this product; how should I bring it to your attention?

Because seriously, this stuff is great, and everyone needs to know about it - not just because I get paid (that too, of course!), but because I think it can really help a whole bunch of people. That’s the definition of win-win.


Image by Anton Repponen via Unsplash

Emails, Meetings, and Slides, Oh My!

In the vein of Coté’s White Collar Survival Guides, here are some suggestions of my own for knowledge workers.

The Three Horsemen of the Productivity Apocalypse - and how to slay them

There are a few constants of corporate life that are pretty much universal, and those are email[1], meetings, and slides. All three are widely hated, and someone is always trying to kill one or other of them. I have even heard people say that these three factors have ruined their lives.

I would say that the truth is a bit more nuanced. "With great power comes great responsibility", as they say, and certainly all three are powerful tools.

Email

Ah, email. If I had a Euro for every time something promised to kill email… well, I could start by filling out my dream garage, but I’d still have plenty left over after that. We should probably amend the old saw about only cockroaches surviving nuclear war to include Sendmail running somewhere. Despite everything, email is still the best tool for the job.

The problem is that most of the time we use email wrong. Email chains devolve into endless back and forth, with unhelpful subject lines like "Re: Re: Fwd: RE: Re: …". This is why, amid all the froth about the latest would-be email killer, I was interested to spot an article discussing how to do email right. In particular, the counter-intuitive recommendation is to write longer emails.

A key rule for e-mail is to keep it brief. The recipients are pressed for time – and perhaps, on their mobiles, cramped for visual space – so keep it to a sentence or two.

Wrong, says productivity expert Cal Newport.

His recommendation is something called "process-centric email":

  • When sending or replying to an email, identify the goal this emerging email thread is trying to achieve. For example, perhaps its goal is to synchronize a plan for an upcoming meeting with a collaborator or to agree on a time to grab coffee.
  • Next, come up with a process that gets you and your correspondent to this goal while minimizing the number of back and forth messages required.
  • Explain this process in the email so that you and your recipient are on the same page.

The desired result is to spend a little more time on each individual message or thread, but reduce the number of visits you need to make to your inbox over time.

This is not that dissimilar to the time-tested format of the VITO letter: grab your correspondent’s attention up front, then articulate your message and the reasons why they should care, and close with concrete actions. Of course you will take more time when writing to an actual VITO than to a run-of-the-mill correspondent, but it’s still an effective tool.

The only problem with both these techniques is that they work well one-to-one, but fall down with those huge sprawling email threads that we all know and love to hate. As more and more people get added to the conversation, skirting Dunbar’s Number, any chance of useful communication breaks down.

Modern tools like Slack can supposedly make this situation better, or at least more tolerable, but the problem there is the requirement that everyone adopt the new tool. As long as not everyone is on the new channel, it’s more trouble than it’s worth to either verify someone is on board or to bring them on board. In time-honoured fashion, people default to just adding more and more people to the same old email chain.

The problem is compounded by the perfect confusion which reigns over email etiquette, with no agreement over who goes in To: and who goes in Cc:, let alone anything about hierarchical ordering of participants.

Still, email is better than all the alternatives, for one simple reason: it works, almost always and almost everywhere. That is a high bar for any new offering to clear, as I have written before.

Meetings

Meetings share many of the same problems as the big multi-user email chains that I was just complaining about. Sure, the attendee list for an in-person meeting is limited by the size of the available meeting rooms - still the most in-demand commodity in any office. Online meetings and conference calls, of course, do not share this limitation.

In either case, though, some attendees may be uninterested, others may be there mainly to be seen, and some may be actively negative. Unless the agenda is enforced ruthlessly, the discussion will move off-topic very rapidly - which is probably for the best anyway; otherwise, why have the meeting in the first place?

One way to optimise the use of meetings comes from Amazon.

In senior executive Amazon meetings, before any conversation or discussion begins, everyone sits for 30 minutes in total silence, carefully reading six-page printed memos.

What makes this management trick work is how the medium of the written word forces the author of the memo to really think through what he or she wants to present.

I have criticised some Amazon quick-fix management practices before, but I think this one makes a lot of sense. In the typical meeting agenda, quite a lot of time is spent level-setting, making sure everyone agrees on the situation to be analysed before proposals can be put forward. Inevitably, people who are already up to speed - or who think they are - will hijack this process by asking questions, and there are only so many times you can promise to "get to that in just a couple more slides", especially with senior people, before you start losing your audience.

Amazon’s two-stage approach, with the author clarifying their thinking by setting down their analysis and proposal in writing, and the other participants absorbing that message in full before starting to discuss and question it, seems like a really productive way of avoiding the problem.

Sure, it takes a long time to write six-page documents, so maybe save those for the big strategic meetings - but if there is time for a meeting, there should be time for at least a one-page recap of the situation to date, some high-level proposals, and desired outcomes of the meeting. If there is no time to either write or read such a succinct summation, is the meeting really a valuable use of anyone’s time?

Slides

What can I say? I’m a fan of PowerPoint. There, I said it. Much like email, PowerPoint can be (and often is) used wrong, putting audiences at risk of death by PowerPoint, but it’s very effective when used well. Not to blow my own horn, but I get a lot of compliments on my presentations. Partly of course this is because people are used to such a low standard that it doesn’t take much to stand out - and partly it’s because I put thought, preparation, and the results of formal training into my slides. Sometimes this takes a bit more effort than it should, but the results are well worth it.

Invest in a couple of books - I like Slide:ology and Resonate by Nancy Duarte, and of course Presentation Zen by Garr Reynolds. If you can get to a training session, so much the better; talking through this sort of material with an instructor is really effective.

See you at the meeting

I’ll drop you an email about it, and send you my slides afterwards.


Image by Nirzar Pangarkar via Unsplash


  1. I’ve given up on hyphenating e-mail since I realised that apart from bills and greeting cards, I receive no physical mail whatsoever, and have not done for some time now. In fact, we could pretty much just go ahead and call it "mail", if it were not for the fact that then you would need a term to describe old-style mail, and "snail mail" is just a bit too precious and insider-y to catch on. 

The curve points the way to our future


Just a few days ago, I wrote a post about how technology and services do not stand still. Whatever model we come up with based on how things are right now will soon be obsolete, unless it can accommodate change.

One of the places where we can see that is with the adoption curve of Docker and other container architectures. Anyone who thought that there might be time to relax, having weathered the virtualisation and cloud storms, is in for a rude awakening.

Who is using Docker?

Sure, the latest Docker adoption survey still shows that most adoption is in development, with 47% of respondents classifying themselves as "Developer or Dev Mgr", and a further 15% as "DevOps or Release Eng". In comparison, only 12% of respondents were in "SysAdmin / Ops / SRE" roles.

Also, 56% of respondents are from companies with fewer than 100 employees. This makes sense: long-established companies have too much history to be able to adopt the hot new thing in a hurry, no matter what benefits it might promise.

What does happen is that small teams within those big companies start using the new cool tech in the lab or for skunkworks projects. Corporate IT can maybe ignore these science experiments for a while, but eventually, between the pressure of those research projects going into production, and new hires coming in from smaller startups that have been working with the new technology stack for some time, they will have to figure out how they are going to support it in production.

Shipping containers

If the teams in charge of production operations have not been paying attention, this can turn into "Good news for Dev, bad news for Ops", as my colleague Sahil wrote on the official Moogsoft blog. When it comes to Docker specifically, one important factor for Ops is that containers tend to be very short-lived, continuing and accelerating the trend that VMs introduced. Where physical servers had a lifespan of years, VMs might last for months - but containers have been reported to have a lifespan four times shorter than VMs.

That’s a huge change in operational tempo. Given that shorter release cycles and faster scaling (up and down) in response to demand are among the main benefits that people are looking for from Docker adoption, this rapid churn of containers is likely to continue and even accelerate.
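As a rough illustration of that operational tempo, here is a minimal sketch using the Docker SDK for Python (the docker package). The image name and the demand-handling logic are invented for the example.

    # Sketch only: one short-lived container per batch of work, torn down on completion.
    # The image name and batching logic are invented for illustration.
    import docker

    client = docker.from_env()

    def handle_burst(n_requests, batch_size=100):
        """Scale up by launching disposable worker containers, one per batch of requests."""
        workers = []
        for batch in range(0, n_requests, batch_size):
            workers.append(client.containers.run(
                "example/worker:latest",               # hypothetical image
                command=["process-batch", str(batch)],
                detach=True,
                auto_remove=True,   # container is deleted the moment its job finishes -
                                    # a lifespan of seconds, not the months of a VM
            ))
        return workers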

VMs were sometimes used for short-duration tasks, but far more often they were actually forklifted physical servers, and shoe-horned into that operational model. This meant that VMs could sometimes have a longer lifespan than physical servers, as it was possible for them simply to be forgotten.

Container-based architectures are sufficiently different that there is far less risk of this happening. Also, the combination of experience and generational turnover mean that IT people are far more comfortable with the cloud as an operational model, so there is less risk of backsliding.

The Bow Wave

The legacy enterprise IT departments that do not keep up with the new operational tempo will find themselves in the position of the military, struggling to adapt to new realities because of its organisational structure. Armed forces set up for Cold War battles of tanks, fighters and missiles struggle to deal with insurgents armed with cheap AK-47s and repurposed consumer technology such as mobile phones and drones.

In this analogy, shadow IT is the insurgency, able to pop up from nowhere and be just as effective as - if not more so than - the big, expensive technological solutions adopted by corporate. On top of that, the spiralling costs of supporting that technological legacy will force changes sooner or later. This is known as the "bow wave" of technological renewal:

"A modernization bow wave typically forms as the overall defense budget declines and modernization programs are delayed or stretched in the future," writes Todd Harrison of the Center for Strategic and International Studies. He continues: "As this happens the underlying assumption is that funding will become available to cover these deferred costs." These delays push costs into the future, like a ship’s bow pushes a wave forward at sea.

(from here)

What do we do?

The solution is not to throw out everything in the data centre, starting with the mainframe. Judiciously adapted, upgraded, and integrated, old tech can last a very long time. There are B-52 bombers that have been flown by three generations of the same family. In the same way, ancient systems like SABRE have been running since the 1960s, and still (eventually) underpin every modern Web 3.0 travel-planning web site you care to name.

What is required is actually something much harder: thought and consideration.

Change is going to happen. It’s better to make plans up front that allow for change, so that we can surf the wave of change. Organisations that wipe out trying to handle (or worse, resist) change that they had not planned for may never surface again.

Multimodal IT


The current big debate in Enterprise IT (now that we have mostly moved on from arguing about hybrid cloud) is Bimodal IT.

To recap: the idea is that enterprises will have both stable and predictable services, sometimes unkindly referred to as "legacy", and new, unpredictable services just being developed. These two types of services have different priorities and expectations: the former focus above all on reliability and availability, while the latter instead need to deliver quickly on new requests, evolving rapidly by definition. These differing requirements are sufficiently at odds that - so the theory goes - they should be split off and operated by two different teams.

So far, so good. The objection is that few people will want to work on the "legacy" services, preferring to hone their skills on more cutting-edge, trendy technology. Over time, this will hollow out the legacy support team, undermining its mission of quality and reliability.


Why it’s not that simple

Personally, I think both positions are over-simplifications. First of all, mainframe people aren't dinosaurs; they're just solving for a different set of variables (Simon Wardley’s Town Planners). I've been in the room when someone was presenting version 56 (!) of a mainframe product, and people were excited to hear about what it could do for them and how it could improve their jobs. Also, the idea that there are no jobs to be had on the mainframe side of the house is rubbish: it's a niche, sure, so absolute numbers are low, but if you specialise in that niche, it can be very profitable. The first person in my graduating class to get a job - in the teeth of the post-bubble, post-9/11 complete lack of IT jobs - was a friend of mine who had been one of the few to take the COBOL class.

Secondly, the Bimodal IT picture is a snapshot of a moment in time. To extend bimodal to the classic three modes of Pace Layering: right now, mainframes are systems of record, classic ERP middleware represents systems of differentiation, and the new DevOps-Agile-Web 2.0-whatever brings the systems of innovation. Over time, though, things flow down the stack and sediment at lower layers. SQL used to be the neat new thing that had all the promise. Now? Now everyone scoffs at relational databases and the cool kids at the conferences are all about the NoSQL. Meanwhile, all of those relational databases are still humming away, keeping everything up and running.

This looks a little like fossilisation: the dynamism of a freshly-introduced service dries up, because it's generally not a good idea to change the foundations too much once you have started building on top of them. Messing around with foundations is a major effort, and the sort of thing that makes the news.


As a rule, you want to put your efforts into making those foundational systems as stable and reliable as possible. This will ensure that the layers further out are free to innovate on top of those solid underpinnings. The whole point is that nobody worries about the database being up, they focus on how to query it and what to do with the results.


Innovation trickles down

What all this means is that instead of treating this conversation about Bimodal IT as an argument between Old and New, we need to pull back and focus instead on the lifecycle of individual services. Once we have an idea of which systems of innovation are catching on and making the transition to systems of engagement and record, we need to start taking decisions about how we treat those systems and allow for their dependencies. Today’s fast-moving top of the stack is tomorrow’s mid-stack connective tissue and next week’s bottom-of-stack foundation element.

The good news is that none of this should be new. Interestingly, the two extremes have more in common than the centre, if you squint a little. Both mainframes and modern distributed services share a focus on small units of work, defined interfaces and checkpoints, and no assumption of reliability on the part of other components in the pipeline. Arguably, the intervening generation of relatively monolithic and rapidly-obsolescent enterprise IT is the aberration.

Jez Humble articulated this point in a widely-shared piece:

Gartner’s model rests on a false assumption that is still pervasive in our industry: that we must trade off responsiveness against reliability. The conventional wisdom is that if we make changes to our products and services faster and more frequently, we will reduce their stability, increase our costs, and compromise on quality.

This assumption is wrong.

The responsiveness-reliability dichotomy is a product of fragility. The attitude of "if it ain’t broke, for pity’s sake, don’t touch it!" comes from knowing that the whole thing is a massive house of cards:

Right now someone who works for Facebook is getting tens of thousands of error messages and frantically trying to find the problem before the whole charade collapses. There's a team at a Google office that hasn't slept in three days. Somewhere there's a database programmer surrounded by empty Mountain Dew bottles whose husband thinks she's dead. And if these people stop, the world burns. Most people don't even know what sysadmins do, but trust me, if they all took a lunch break at the same time they wouldn't make it to the deli before you ran out of bullets protecting your canned goods from roving bands of mutants.

Fortunately, this situation is now the base assumption of any sane IT architect. IT used to work to grandiose Soviet-style Five Year Plans:

IT will firmly implement the CIO's new strategy, new concept and new mission, promote structure adjustment, layout optimisation, and transformation and upgrading, as well as insist on green power strategy, overall lean management, staff innovation and creation, harmonious development, profit increase and strengthen the Party enterprise construction in an all-round way in order to further solidify production safety foundation, speed up the development method and mechanism adapting to the "new normal", and go all out to launch IT’s innovative and balanced development of the "13th Five Year Plan to update the CMDB" successfully.[1]

Today, we work with what we have, and assume that will change - often in unpredictable ways, since the IT department is no longer the gatekeeper of IT adoption in the company.


People, Process, Technology

Technological solutions are emerging to help with this transition. My own employer, Moogsoft, is part of this trend, helping to bring monitoring data from all the different tools and systems in the enterprise together in a way that is comprehensible and actionable for humans.

As usual, though, the technology is the easy part. The hard parts are the people using the technology, and the process according to which they do so.

I already hinted at one people problem: if you divide IT into the Cool Kidz and the Old Fuddy-Duddies, pretty soon you’re going to run out of people to take care of the mainframes (or SAP, or Oracle, or whatever tech is actually running big chunks of your business). Also, you’ll find that suddenly consultants in that area have added a zero to their hourly rates, and are booked up for months in advance.

A process problem is that you simply cannot freeze even the systems of record, as the faster-moving systems still need to interface with them to get their job done, and will require changes. Maybe you can’t do two-week sprints, especially if your release process itself takes a fortnight, but you still need to allow for that continuous delivery process. Otherwise, the change you can’t accommodate in-band will happen out-of-band when management starts screaming, and it doesn’t take too many of those before something breaks.

To quote Antoine Lavoisier:

Dans la nature rien ne se crée, rien ne se perd, tout change.

Or in our world:

In IT, nothing is created, nothing is lost, everything changes.

Change is in the nature of the beast. Let’s sit down together and figure out how to accommodate it.


Images by Vladimir Chuchadeev, Patrick Tomasso, Erol Ahmed and Gabriel Garcia Marengo via Unsplash


  1. Only very lightly paraphrased from here.