
Thinking Two Steps Behind

In my day job, I spend a lot of my time building business cases to help understand whether our technology is a good fit for a customer. When you are building a startup business, this is the expected trajectory: in the very early days, you have to make the technology work, translating the original interesting idea into an actual product that people can use in the real world. Once you have a working product, though, it's all about who can use it, and what they can do with it.

In this phase, you stop pitching the technology. Instead, you ask questions and try to understand what ultimate goals your prospective customer has. Only once you have those down do you start talking about what your technology can do to satisfy those goals. If you do not do this, you find yourself running lots of "kick the tyres" evaluations that never go anywhere. You might have lots of activity, but you won’t have many significant results to show for it.

This discipline of analysing goals and identifying a technology fit is very useful for examining other fields too, and it helps to identify when others may be missing some important aspect of a story.

Let’s think about driverless cars

Limited forms of self-driving technology already exist, from radar cruise-control to more complete approaches such as Tesla's Autopilot. None of these are quite ready for prime time, and there are fairly regular stories about their failures, with consequences ranging from the comic to the tragic.


Because of these issues, Tesla and others require that drivers keep their hands on the wheel even when the car is in Autopilot mode. This brings its own problems, falling into an "uncanny valley" of attention: the driver is neither fully engaged, nor can they fully disengage. Basically it's the worst of both worlds: drivers are no longer involved in the driving, but still cannot relax and read a book or watch a film.

These limitations have not stopped much of the commentary from assuming self-driving car technology to be, if not a problem that is already solved, at least one that is solvable. Extrapolations from that point lead to car ownership becoming a thing of the past as people simply summon self-driving pods to their location. That in turn causes massive transformations in both the labour force (human drivers, whether truckers or Uber drivers, are no longer required) and the physical make-up of cities (enormous increases in the utilisation rate for cars mean that large permanent parking structures are no longer required) - let alone the consequences for automotive manufacturers, faced with a secular transformation in their market.

Okay, maybe not cars

Self-driving technology is not nearly capable (yet) of navigating busy city streets, full of unpredictable pedestrians, cyclists, and so on, so near-term projections focus on what is perceived as a more easily solvable problem: long-distance trucking.

The idea is that existing self-driving tech is already just about capable of navigating the constrained, more predictable environment of the highways between cities. Given some linear improvement, it does not seem far-fetched to assume that a few more years of development would give us software capable of staying in lane and avoiding obstacles reliably enough to navigate a motorway in formation with other trucks.

Extrapolating this capability to the wholesale replacement of truckers with autonomous robot trucks, however, is a big reach - and not so much for technical reasons as for less tractable external ones.

Assume for the sake of argument that Otto (or whoever) successfully develops its technology and builds an autonomous truck that can navigate between cities, but not enter the actual city itself. Otto or its customers would then need to build warehouses right at the motorway junctions in areas where they wish to operate, to function as local hubs. From these locations, smaller, human-operated vehicles would make the last-mile deliveries to homes and businesses inside the city streets, which are still not accessible to the robot trucks.

This is all starting to sound very familiar. We already have a network optimised for long-distance freight between local distribution hubs. It is very predictable by design, allowing only limited variables in its environment, and it is already highly instrumented and very closely monitored. Even better, it has been in operation at massive scale for more than a century, and has a whole set of industry best practices and commercial relationships already in place.

I am of course talking about railways.

Get on the train

Let’s do something unusual for high-tech, and try to learn something from history for once. What can the example of railways teach us about the potential for self-driving technology on the road?

The reason for the shift from rail freight to road freight was to avoid trans-shipment costs. It’s somewhat inefficient to load your goods onto one vehicle, drive it to a warehouse, unload them, wait for many other shipments to be assembled together, load all of them onto another vehicle, drive that vehicle to another warehouse, unload everything, load your goods onto yet another vehicle, and finally drive that third vehicle to your final destination. It’s only really worthwhile to do this for bulk freight that is not time-sensitive. For anything else, it’s much easier to just back a truck up to your own warehouse, load up the goods, and drive them straight to their final destination.

Containerisation helped somewhat, but railways are still limited to existing routes; a new rail spur is an expensive proposition, and even maintenance of existing rail spurs to factories is now seen as unnecessary overhead, given the convenience of road transport’s flexibility and ability to deliver directly to the final destination.

In light of this, a network of self-driving trucks that are limited to predictable, pre-mapped routes on major highways can be expected to run into many of the same issues.

Don’t forget those pesky humans

Another interesting lesson that we can take from railways is the actual uptake of driverless technology. As noted above, railways are a far more predictable environment than roads: trains don't have to manoeuvre, they just move forwards along the rails, stopping at predetermined locations. Changes of direction are handled by switching points in the rails, not by the operator steering the train around obstacles. Intersections with other forms of transport are rare, as other traffic generally uses bridges and underpasses. Where this separation is not possible, level crossings are still far more controlled than road intersections. Finally, there are sensors everywhere on railways; controllers know exactly where a given train is, what its destination and speed are, and what the state of the network around it is.

So why don’t we have self-driving trains?

The technology exists, and has done for years - it's a much simpler problem than self-driving cars - and it is in use in a few locations around the world (e.g. London and Milan); but still, human-operated trains are the norm. Partly, it's a labour problem: those human drivers don't want to be out of a job, and have been known to go on strike at the mere possibility of driverless trains being introduced. Partly, it's a perception problem: trains are massive, heavy, powerful things, and most people simply feel more comfortable knowing that a human is in charge, rather than potentially buggy software. And partly, of course, it's the economics: human train drivers are a known quantity, and any technology that wants to replace them is not.

This means that the added convenience of end-to-end transportation limits the uptake of rail transport, and human factors limit the adoption of driverless technology even when it is perfectly feasible - something that has not yet been proven in the case of road transport.

A more familiar example?

In Silicon Valley, people are often too busy moving fast and breaking things that work to learn from other industries, let alone from one that is over a hundred years old[1], but there is a relevant example that is closer to home - literally.

When the Internet first opened to the public late last century, the way most people connected was through a dial-up modem over an analogue telephone line. We all became experts in arcane incantations in the Hayes AT command language, and we learned to recognise the weird squeals and hisses emitted by our modems and use them to debug the handshake with our ISP's modem at the far end. Modem speeds did accelerate pretty rapidly, going from the initial 9.6 kbits per second to 14.4, to 28.8, to the oddball 33.6, to a screamingly fast 56k (if the sun was shining and the wind was in the right quarter) in a matter of years.
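
For anyone who missed that era, the "arcane incantations" looked something like the sketch below. This is purely illustrative, assuming the pyserial library and a modem on /dev/ttyS0; the device path, speed, and phone number are all made up.

    import serial  # pyserial: pip install pyserial

    # Open the serial port the modem is attached to (hypothetical device path).
    modem = serial.Serial("/dev/ttyS0", baudrate=57600, timeout=2)

    def send(command: bytes) -> bytes:
        """Send a Hayes AT command and return whatever the modem replies."""
        modem.write(command + b"\r")
        return modem.read(64)  # read up to 64 bytes, or until the timeout

    print(send(b"ATZ"))          # reset to the stored profile -> b"OK"
    print(send(b"ATM1L2"))       # speaker on while dialling, medium volume
    print(send(b"ATDT5551234"))  # tone-dial the ISP (invented number)
    # On success the modem reports the negotiated speed, e.g. "CONNECT 33600"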

However, this was still nowhere near fast enough. These days, if our mobile phones drop to EDGE - roughly equivalent to a 56k modem on a good day - we consider the network basically unusable. Therefore, there was a lot of angst about how to achieve higher speeds. Getting faster network speeds in general was not a problem - 10 Mbps Ethernet was widely available at the time. The issue was the last mile from the trunk line to subscribers' homes. Various schemes were mooted to get fast internet to the kerb - or curb, for Americans. Motivated individuals could sign up for ISDN lines, or more exotic connectivity depending on their location, but very few did. When we finally got widespread consumer broadband, it was in the form of ADSL over the existing copper telephone lines.

So where does this leave us?

Driverless vehicles will follow the same development roadmap[2]: until they can deliver the whole journey end to end, they are not delivering what people need, and uptake will be limited.

More generally, to achieve any specific goals, it is usually better to work with existing systems and processes. That status quo came to be over time, and generally for good reason. Looking at something now, without the historical context, and deciding that it is wrong and needs to be disrupted, is the sort of Silicon Valley hubris that ends in tears.

Right now, with my business analyst hat on, driverless vehicles look like a cool idea (albeit one that is still unproven) that is being shoe-horned into a situation that it is not a good match for. If I were looking at a situation like this one in my day job, I would advise everyone to take a step back, re-evaluate what the actual goals are, and see whether a better approach might be possible. Until then, no matter how good the technology gets, it won’t actually deliver on the requirements.

But that doesn’t get as many visionary thinkpieces and TED talks.


Images by Nabeel Syed and Darren Bockman via Unsplash, and by ronnieb via Morguefile


  1. The old saw is that "In Europe, a hundred miles is a long way; in the US, a hundred years is a long time". In Silicon Valley, which was all groves of fruit trees fifty years ago, that time frame is shorter still. 

  2. Sorry - not sorry. 

Talkin' Bout a Revolution

Once again, the seemingly unkillable idea of modular phones rears its misshapen head.

The first offender is VentureBeat, with a breathless piece entitled The dream of Ara: Inside the rise and fall of the world’s most revolutionary phone.

record scratch

Let me stop you right there, VentureBeat. Ara is not a "revolutionary phone" at all, let alone "the world's most revolutionary phone", for the very good and sufficient reason that Project Ara never got around to shipping an actual phone before it was ignominiously shut down.

"Most ambitious phone design", maybe. I’d also settle for "most misguided", but that would be a different article. Whatever Ara was, it was not "revolutionary", because otherwise we would all be using modular phones. Even the most watered-down version of that idea, LG’s expandable G5 phone design, is now dead - although in their defence, at least LG did actually ship a product somewhat successfully.

Now Andy Rubin, creator of Android, is back in the news, with plans for a new phone… which sounds like it may well be modular:

It's expected to include […] the ability to gain new hardware features over time

This is a bold bet, and Andy Rubin certainly knows more about the mobile phone market than I do - but here’s why I don’t think a modular phone is the way to go.

Take a Step Back - No, Further Back

The reason I was sceptical about Project Ara's chances from the beginning goes back to Clayton Christensen's disruption theory. I have written about disruption theory before, so I won't go into too much length here, but basically it states that in a fast-developing market, integrated products win because they can take advantage of rapid advances in the field. Conversely, in a mature market, products win by modularising, providing specific features with specific benefits or at a lower cost than the integrated solutions can deliver.

Disruption happens when innovation slows down because further innovation requires more resources than consumers are willing to invest. In this scenario, incumbent vendors continue to chase diminishing returns at the top of the market, only to find themselves undercut by modular competitors delivering "good enough" products. Over time, the modular products eat up the bulk of the market, leaving the ex-incumbents high and dry.

If you assume that the mobile phone market is mature and all development is just mopping up at the edges, then maybe a modular strategy makes sense, allowing consumers to start with a "good enough" basic phone and pick and choose the features most important to them, upgrading individual functionality over time. However, if the mobile phone market is still advancing rapidly and consumers still see the benefit from each round of improvements, then fundamental upgrades will happen frequently enough that integrated solutions will still have the advantage.

Some of the tech press seem to be convinced that we have reached the End of History in mobile technology. Last year's iPhone 7 launch was the epitome of this view, with the consensus being that because the outside of the phone had not changed significantly compared to the previous generation, there was no significant change to talk about.

The actual benchmarks tell a different story. The iPhone 7 is not only nearly a third faster than the previous generation of iPhone across the board, it also compares favourably to a 2013 MacBook Pro.
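
To put that rate of change in perspective, here is a back-of-the-envelope sketch using the roughly one-third year-over-year gain mentioned above (the 30% figure is an assumption for illustration, not a measured constant):

    import math

    # Assume ~30% year-over-year performance improvement, as reported
    # for the iPhone 7 generation.
    rate = 0.30

    doubling_time = math.log(2) / math.log(1 + rate)
    print(f"Performance doubles every {doubling_time:.1f} years")  # ~2.6

    five_year_gain = (1 + rate) ** 5
    print(f"Gain over five years: {five_year_gain:.1f}x")          # ~3.7x

At that pace, a whole-handset upgrade every two or three years captures gains that no single swappable module could match.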

That type of year-over-year improvement is not the mark of a market that is ripe for modular disruption.

What Do Users Say?

The other question, beyond technical suitability, is whether users would consider a product like Project Ara, or LG’s expandable architecture. The answer, at least according to LG’s experience, is a resounding NO:

An LG spokesperson commented that consumers aren’t interested in modular phones. The company instead is planning to focus on functionality and design aspects

Consumers do not see significant benefits from the increase in complication that modularisation brings, preferring instead to upgrade the entire handset every couple of years, at which point every single component will be substantially better.

And that is why the mobile phone market is not ready for a modular product, instead preferring integrated ones. If every component in the phone needs to be upgraded anyway, modularisation brings no benefit; it’s an overhead at best, and a liability at worst, if modules can become unseated and get lost or cause software instability.

At some point the mobile phone market will probably be disrupted - but I doubt it will be done through a modularised hardware solution in the vein of Project Ara. Instead, I would expect modularisation to take place with more and more functionality being handed off to a cloud-based back-end. In this model, the handset will lose many of its independent capabilities, and revert to being what the telephone has been for most of its history: a dumb terminal connected to a smart network.

But we’re not there yet.


Images by Pavan Trikutam and Ian Robinson via Unsplash

Faster disruption

There are theories which seem just intrinsically right when you hear them. Clayton Christensen's famous "disruption theory" is one of these. A friend of mine who had previously worked directly with Professor Christensen recommended "The Innovator's Solution" to me, and it definitely shaped my thinking about the technology business.

When the core business approaches maturity and investors demand new growth, executives develop seemingly sensible strategies to generate it. Although they invest aggressively, their plans fail to create the needed growth fast enough; investors hammer the stock; management is sacked; and Wall Street rewards the new executive team for simply restoring the status quo ante: a profitable but low-growth core business.


In sustaining circumstances—when the race entails making better products that can be sold for more money to attractive customers—we found that incumbents almost always prevail. In disruptive circumstances—when the challenge is to commercialise a simpler, more convenient product that sells for less money and appeals to a new or unattractive customer set—the entrants are likely to beat the incumbents.

Disruption theory explains a lot about many markets, although it is not without its critics. In particular, Jill Lepore caused a minor furore with a piece in the New Yorker entitled The Disruption Machine, in which she accused Professor Christensen of cherry-picking his evidence.

There is one famous exception and objection to disruption theory, and that is Apple. According to the orthodox version of the theory, Apple should have been displaced by now by smaller, more agile modularised competitors. Every year seems to bring a new candidate as the disruptor of Apple: whether it's the Nexus, or Samsung, or Xiaomi, or whoever. Apple somehow survives them all - and not just survives, but goes from strength to strength.

Why is this?

Ben Thompson wrote about how classic disruption theory applies mainly to enterprise products, where the buyer is not the user, and so the purchasing process is ruled by "feeds & speeds" that can be mapped onto a checklist. The buyer is looking for a product that can satisfy some simplified criteria. Beyond that binary fit, the decision is primarily based on price.

Apple emphatically does not fit this model, with its focus on design and the user experience. The buyer is the user, and once their basic criteria are met, they can still be swayed by a different user experience and personal preferences - up to, of course, the budget they have available or are willing to assign to a piece of electronics. Therefore, Apple continues to be able to command much higher prices for devices that are - on paper - comparable to their modularised competitors. Users return again and again for newer versions of their device, every year or two, and Apple's profit margins are legendary.

Enterprise software had always seemed to be on a much more classic track to disruption, with procurement departments working from Request for Proposal (RfP) documents that generally allow yes/no answers, or at most grading on a short scale, typically from one to four. Recently, though, the market has been changing, and incumbent vendors are being disrupted by offerings which bypass the traditional buyer in Procurement and appeal directly to the end user. One model is open source, where sufficiently technical users have been able to download and use free software without support from central IT for at least the last fifteen years; the main roadblock to this avenue of disruption was users' limited willingness to futz around with graphics drivers or whatever. More recently, a new avenue has opened up, namely software as a service. SaaS offerings require much less technical acumen, and much less effort even when the technical acumen is available. Even users who are able and willing to get their hands dirty only have a limited amount of time available to do so, and are quite happy to hand the responsibility off to someone else.

This is how you get the infamous "shadow IT" - enterprise IT's particular incarnation of disruption theory. However, I do wonder whether enterprise software might not also have exceptions to classic disruption theory. Buyer inertia may prevent modularisation, or at least complete modularisation, from taking hold, or delay it for a long time.

From integrated to modular to orchestrated

Apple cannot easily be replaced by modular competitors, even when those competitors offer lower prices and nominally higher performance, because the overall user experience delivered by those competing devices is inferior. There is an equivalent mechanism in enterprise software - although, unfortunately, it does not often manifest in attractive user interfaces and satisfying interactions. Rather, it is the experience of the buyer which is important.

Many mature, established companies have a "vendor rationalisation" initiative of some sort. Some may even go so far as to have an "Office of Vendor Management" or equivalent. One way of looking at this is the "one throat to choke" school of thought taken to its extreme, but there is something else going on here.

As software becomes more complex, and user requirements more varied, there are fewer and fewer one-stop software packages. Even within a single vendor's offerings, users will need to select multiple packages, many of which will have been developed by different teams or even by different companies acquired by the vendor over the years. Customers therefore face a trade-off: best-of-breed solutions from different vendors or open-source tools that require substantial work to integrate with each other, versus vertically integrated solutions from a single vendor that may not excel in any one area but can deliver on the whole task.

The variable that will drive choice in one direction or the other is the rate of change. If the integration between the best-of-breed packages remains valid, once developed, for a significant length of time, then the modularised, disrupting solutions - whether commercial on-premise, open-source, or SaaS - will win. If on the other hand integration is a constant effort that never fully stabilises, requiring never-ending development to chase a constantly moving target, then the benefits of the pre-integrated solution become more attractive in their turn.
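
To make that trade-off concrete, here is a toy model with entirely invented numbers (the licence costs and integration effort are placeholders, not data):

    # Total cost of ownership: licence spend plus re-integration effort.
    def total_cost(licence, integration_cost, changes_per_year, years):
        """Licence cost per year, plus the cost of re-doing the
        integration every time an interface changes."""
        return licence * years + integration_cost * changes_per_year * years

    # Stable interfaces: re-integrate once a year -> modular wins.
    modular_stable = total_cost(licence=80, integration_cost=50,
                                changes_per_year=1, years=5)   # 650
    integrated     = total_cost(licence=150, integration_cost=0,
                                changes_per_year=0, years=5)   # 750

    # Fast-moving interfaces: re-integrate quarterly -> integrated wins.
    modular_churn  = total_cost(licence=80, integration_cost=50,
                                changes_per_year=4, years=5)   # 1400

    print(modular_stable, integrated, modular_churn)

The crossover point is entirely a function of how often the interfaces between components change - which is exactly the rate-of-change argument above.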

Timing is everything

The twist is in the incentives. Developers of the modularised solutions are in a race with other modularised solutions, hired to do the same job, in Christensen's terminology. The way they keep ahead in the race is by evolving faster, adding more functionality sooner than their competitors. They have no direct incentive to stabilise their solutions. For the same reason, they have no particular incentive to stabilise the interfaces to their solutions, as this makes them more easily replaceable by their competitors (less sticky).

The upshot of all this is that a vertically-integrated company can stay ahead of the curve of disruption by innovating just enough to maintain stability for its users, while supporting a certain speed of evolution. This is the job that their customers hire them to do.

The commercial Unix platforms were displaced by Linux because both followed standards (GNU, POSIX, and so on) that made them largely fungible from the point of view of their buyers, once Linux had developed beyond its beginnings. iPhones were not displaced by Android phones because they were not fungible to their buyers.

How not to be fungible

What are the characteristics of enterprise software that can make it non-fungible? Simply put, it comes pre-integrated, both with itself (or rather, between its different components) and with everything else. This is why content is so important. An API is not enough to avoid being disrupted; any open-source project worth its salt comes with an API - probably RESTful these days, but the principle is independent of the technology. What prevents disruption, making the software "sticky", is content: pre-built integrations, workflows, best practices, and data transformations that make the software work seamlessly for customers' needs.

Enterprise software needs enterprise-grade content that takes advantage of those integrations. Relying on technical capabilities alone leaves a vendor enormously vulnerable to motivated developers and agile start-ups. A would-be enterprise vendor must focus on what it can do to prevent disruption. Agility - chasing the bleeding edge - is not the job that buyers hire it to do.

However, there ain't no such thing as a free lunch: those integrations have to keep up with those other fast-moving targets. There is no "we support these two products, and we will add the third-placed platform with the next release of our software in a year's time"[1]. The integrations themselves have to evolve, and do so in a way that is both backwards-compatible (you can't break everything your users have built every time you upgrade) and fast-moving (you have to keep up with where your users are going, whether to new platforms, or to new versions of existing platforms[2]).
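
As a sketch of what that constraint looks like in code - with every name here invented for illustration - the trick is to keep the public surface that customers build against stable, while new platform integrations are added behind it:

    from typing import Callable, Dict

    # Registry of platform integrations, hidden behind the stable API.
    _backends: Dict[str, Callable[[str], None]] = {}

    def register_backend(name: str, send: Callable[[str], None]) -> None:
        """New platforms are added by registration, not by changing callers."""
        _backends[name] = send

    def notify(message: str, backend: str = "email") -> None:
        """Stable public API: the signature never changes between releases,
        even as backends come and go."""
        _backends[backend](message)

    # Release 1 shipped with email; release 2 adds a chat platform without
    # touching any code the customer wrote against notify().
    register_backend("email", lambda m: print(f"[email] {m}"))
    register_backend("chat",  lambda m: print(f"[chat] {m}"))

    notify("build finished")                  # old callers keep working
    notify("build finished", backend="chat")  # new platform, same interface

The vendor absorbs the churn of the fast-moving targets inside the registry, so that the users' side of the contract stays backwards-compatible.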

The bottom line

All of this represents yet another level of abstraction. The competition moves to a different layer of the stack: the content and integrations. Enterprise vendors who refuse to follow along are simply ceding the field to their competition; users - and importantly, buyers - are there already.

Hardware got commoditised, then operating systems got commoditised, and now it's the next layer up. It's not what you have under the hood, it's what you do with it - and both people and enterprises will buy the tool that enables them to get the most done.


  1. Please note the small print around any roadmap estimates. 

  2. Note that saying "nobody is using X in production yet" doesn't cut it. Users are most certainly using X in testing as a prelude to putting it into production, and as a part of that process they need to test that everything else integrates with X. Missing that wave is the first step to hearing "we went into production with your competitor on all new projects because they were able to support us in our move to X".