Bimodal or Bi-Model

I just attended[1] the Gartner Data Center, Infrastructure & Operations Management Summit in London. It was an interesting event, as ever; Gartner events are expensive for both exhibitors and attendees, but they are also the highest-value events around.

With all the excitement there has been about Gartner’s "bimodal IT" concept, I was curious to see whether they would double down, or whether the original idea would be modified in any way.

Given that this was the sixth slide of the opening keynote, I think it’s safe to say that they are not giving up on bimodal:

As is often the case with these high-level structures, hearing it explained by its practitioners helps with reaching a deeper understanding of what bimodal IT actually means. The concern that many people have with the soundbite version of bimodal IT is that it will devolve into "two-speed" IT, with one well-funded group forging ahead with new technologies while the other scrounges around for scrap to keep old creaking systems alive. Of course nobody with any sense would want to be in the latter group, and therefore there will be substantial brain drain between the two.

After spending two days listening to Gartner analysts, whether during formal presentations, in our briefings, or just hanging out during the evening reception, I think I can safely say that there is a bit more subtlety going on here.

What does the business need?

Just a few slides further into the keynote, we have this one:

It is clear that Gartner are trying to move the conversation beyond the details of which technology fits into which category. Once you move the focus to the business outcome, talking about Mode 1 and Mode 2 technologies is a short-hand for some of their characteristics.

Throughout the conference, Gartner people used the terms in this way, talking about the varying levels of agility of the different approaches, but always in terms of the business goals that are being enabled.

It’s unfortunate to my mind that "bimodal" has become such a hot-button topic, because that is absolutely a conversation that needs to be had. If you start "moving fast and breaking things" where those "things" support critical aspects of the business, pretty soon you’re going to find an angry mob of front-line people bearing down on the IT department, waving torches and pitchforks and calling for the head of the idiot who broke something they relied on, just so it could be refactored into something more technically pleasing.

Instead, the public conversation has pretty much stopped at this binary opposition between boring, uncool Mode 1 that gets starved of resources, and cool, shiny Mode 2 that gets all the love.

As I have pointed out before, this misses the time-based aspect. Just because something is all the rage now, and genuinely needs to evolve quickly to respond to rapidly changing business needs, does not mean that will always be the case.

Don’t get stuck in the past

Where I do encounter visibly unappreciated IT teams, lacking resources and standing, it is usually because somebody somewhere missed that key transition. Instead of their job being "supporting the business", in their minds the job description has become "provisioning servers" or "deploying patches" or whatever.

The Gartner people do call out this attitude - see this slide, from that same keynote:

I encounter this type of push-back all the time when pitching a new solution that disrupts an existing process. IT people need to be persuaded of the value of changing something that is known to work. The mantra is "if it ain’t broke, don’t mess with it" - and for very good reason! There is a point, though, where that precautionary attitude crosses over and turns full-blown reactionary.

A good sanity check is to go through this deck from a presentation by Casey West, and see if you recognise yourself or your routine activities.

I get it, it’s highly technical work, it takes time, you’ll notice it if it goes wrong - but ultimately, the main metric - the only metric that matters - is whether IT delivered the outcome that the business required.

Behind the scenes, we IT people need to figure out all sorts of things, but we need to recognise that, to the business, this is about as interesting as facilities maintenance. Sure, if the bins don’t get emptied or the lift plummets down its shaft, that would be bad, but as long as everything is humming along nicely, it’s invisible. To the business, IT is the same: as long as business outcomes are being achieved, all is well, and if the business result isn’t there, nothing else matters.

Arguing about what is Mode 1 and what is Mode 2 misses the point

The ongoing argument about bimodal IT is a mirror image of the ITIL conversation. ITIL is a useful roadmap, but it goes bad when people are so focused on the map that they start trying to adapt reality to fit the map. In the case of bimodal IT, nobody is actually suggesting that organisations split their IT into two and starve Peter to pay Paula. Instead, the notion of bimodal IT is a useful short-hand to talk about existing realities within IT.

Once the time factor is added, bimodal IT is not that different from pace layering, but that model never really seemed to catch on - perhaps because it was overly complex and dynamic. Instead, we are (still) arguing about how bimodal is too binary and static. After spending time actually talking to Gartner people about this stuff, I recognise it as a description of a spectrum, and one that is dynamic. A particular technology will move along the spectrum over time, and that movement needs to be recognised in the processes that deal with that particular technology.

Now that’s taken care of, we can all go back to arguing about private cloud.


  1. Normally people say "I got back from" whatever event, but being me, I’m still on the road, and won’t get home for some time yet. 

Multimodal IT

The current big debate in Enterprise IT (now that we have mostly moved on from arguing about hybrid cloud) is Bimodal IT.

To recap: the idea is that enterprises will have both stable and predictable services, sometimes unkindly referred to as "legacy", and new, unpredictable services just being developed. These two types of services have different priorities and expectations: the former focus above all on reliability and availability, while the latter instead need to deliver quickly on new requests, evolving rapidly by definition. These differing requirements are sufficiently at odds that - so the theory goes - they should be split off and operated by two different teams.

So far, so good. The objection is that few people will want to work on the "legacy" services, preferring to hone their skills on more cutting-edge, trendy technology. Over time, this will hollow out the legacy support team, undermining its mission of quality and reliability.

Why it’s not that simple

Personally, I think both positions are over-simplifications. First of all, mainframe people aren't dinosaurs; they're just solving for a different set of variables (Simon Wardley’s Town Planners). I've been in the room when someone was presenting version 56 (!) of a mainframe product, and people were excited to hear about what it could do for them and how it could improve their jobs. Also, the idea that there are no jobs to be had on the mainframe side of the house is rubbish: it's a niche, sure, so absolute numbers are low, but if you specialise in that niche, it can be very profitable. The first person in my graduating class to get a job - in the teeth of the post-bubble, post-9/11 complete lack of IT jobs - was a friend of mine who had been one of the few to take the COBOL class.

Secondly, the Bimodal IT picture is a snapshot of a moment in time. To extend bimodal to the classic three modes of Pace Layering: right now, mainframes are systems of record, classic ERP middleware represents systems of differentiation, and the new DevOps-Agile-Web 2.0-whatever brings the systems of innovation. Over time, though, things flow down the stack and sediment at lower layers. SQL used to be the neat new thing that had all the promise. Now? Now everyone scoffs at relational databases and the cool kids at the conferences are all about the NoSQL. Meanwhile, all of those relational databases are still humming away, keeping everything up and running.

This looks a little like fossilisation: the dynamism of a freshly-introduced service dries up, because it's generally not a good idea to change the foundations too much once you have started building on top of them. Messing around with foundations is a major effort, and the sort of thing that makes the news.

As a rule, you want to put your efforts into making those foundational systems as stable and reliable as possible. This will ensure that the layers further out are free to innovate on top of those solid underpinnings. The whole point is that nobody worries about the database being up, they focus on how to query it and what to do with the results.

Innovation trickles down

What all this means is that instead of treating this conversation about Bimodal IT as an argument between Old and New, we need to pull back and focus instead on the lifecycle of individual services. Once we have an idea of which systems of innovation are catching on and making the transition to systems of engagement and record, we need to start taking decisions about how we treat those systems and allow for their dependencies. Today’s fast-moving top of the stack is tomorrow’s mid-stack connective tissue and next week’s bottom-of-stack foundation element.

The good news is that none of this should be new. Interestingly, the two extremes have more in common than the centre, if you squint a little. Both mainframes and modern distributed services share a focus on small units of work, defined interfaces and checkpoints, and no assumption of reliability on the part of other components in the pipeline. Arguably, the intervening generation of relatively monolithic and rapidly-obsolescent enterprise IT is the aberration.

Jez Humble articulated this point in a widely-shared piece:

Gartner’s model rests on a false assumption that is still pervasive in our industry: that we must trade off responsiveness against reliability. The conventional wisdom is that if we make changes to our products and services faster and more frequently, we will reduce their stability, increase our costs, and compromise on quality.

This assumption is wrong.

The responsiveness-reliability dichotomy is a product of fragility. The attitude of "if it ain’t broke, for pity’s sake, don’t touch it!" comes from knowing that the whole thing is a massive house of cards:

Right now someone who works for Facebook is getting tens of thousands of error messages and frantically trying to find the problem before the whole charade collapses. There's a team at a Google office that hasn't slept in three days. Somewhere there's a database programmer surrounded by empty Mountain Dew bottles whose husband thinks she's dead. And if these people stop, the world burns. Most people don't even know what sysadmins do, but trust me, if they all took a lunch break at the same time they wouldn't make it to the deli before you ran out of bullets protecting your canned goods from roving bands of mutants.

Fortunately, this situation is now the base assumption of any sane IT architect. IT used to work to grandiose Soviet-style Five Year Plans:

IT will firmly implement the CIO's new strategy, new concept and new mission, promote structure adjustment, layout optimisation, and transformation and upgrading, as well as insist on green power strategy, overall lean management, staff innovation and creation, harmonious development, profit increase and strengthen the Party enterprise construction in an all-round way in order to further solidify production safety foundation, speed up the development method and mechanism adapting to the "new normal", and go all out to launch IT’s innovative and balanced development of the "13th Five Year Plan to update the CMDB" successfully.[1]

Today, we work with what we have, and assume that will change - often in unpredictable ways, since the IT department is no longer the gatekeeper of IT adoption in the company.

People, Process, Technology

Technological solutions are emerging to help with this transition. My own employer, Moogsoft, is part of this trend, helping to bring monitoring data from all the different tools and systems in the enterprise together in a way that is comprehensible and actionable for humans.

As usual, though, the technology is the easy part. The hard parts are the people using the technology, and the process according to which they do so.

I already hinted at one people problem: if you divide IT into the Cool Kidz and the Old Fuddy-Duddies, pretty soon you’re going to run out of people to take care of the mainframes (or SAP, or Oracle, or whatever tech is actually running big chunks of your business). Also, you’ll find that suddenly consultants in that area have added a zero to their hourly rates, and are booked up for months in advance.

A process problem is that you simply cannot freeze even the systems of record, as the faster-moving systems still need to interface with them to get their job done, and will require changes. Maybe you can’t do two-week sprints, especially if your release process itself takes a fortnight, but you still need to allow for some form of continuous delivery. Otherwise, the change you can’t accommodate in-band will happen out-of-band when management starts screaming, and it doesn’t take too many of those before something breaks.

To quote Antoine Lavoisier:

Dans la nature rien ne se crée, rien ne se perd, tout change.

Or in our world:

In IT, as in nature, nothing is created, nothing is lost, everything changes.

Change is in the nature of the beast. Let’s sit down together and figure out how to accommodate it.


Images by Vladimir Chuchadeev, Patrick Tomasso, Erol Ahmed and Gabriel Garcia Marengo via Unsplash


  1. Only very lightly paraphrased from here