
Serving Two Causes

My current gig involves making comparisons between different products. It’s mostly interesting, and helps me understand better what the use cases are for all of these products — not just how they are built, but why. That double layer of analysis is what takes most of the work, because those two aspects — the how and the why — are never, ever documented in the same place. I spend a lot of time trawling through product docs and cross-comparing them to blogs, press reports, and most annoying of all, recorded videos.

Fundamentally, most reviews or evaluations tend to focus either on how well a product delivers its promised capabilities, or on how well it serves its users’ purposes. Those are not at all the same thing! A mismatch here is how you get products that are known to be powerful, but not user-friendly.

One frequent dichotomy is between designing to serve power users versus novices. It is very, very hard to serve both groups well! The handholding and signposting that people require when they are new to a product, and perhaps even to the entire domain, is liable to be perceived as clutter that gets in the way of what more experienced users already know how to achieve. The starkest example is the contrast between graphical and command-line interfaces: the sorts of pithy incantations that can be condensed into a dozen keystrokes at a shell prompt might take whole minutes of clicking, dragging, and selecting from menus in the GUI. On the other hand, if you don’t know those commands already, the windows, menus, and breadcrumbs, perhaps combined with judicious use of contextual help, give a much more discoverable route to the functionality.

Ideally, what you want to do is serve both user populations equally well: surprise and delight new users at first contact, but keep on making them happy and productive once they’re on board.

Two Tribes Go To War: Dev and Ops

There are two forces that are always tugging IT in different directions: stability, maintainability, and solid foundations (historically, this has been called Ops), versus ease of building and agility (this one is Dev). Business needs both, which is why Gartner came up with the idea of "bimodal IT" to describe the difference between these two approaches.

For more on this topic, listen to Episode Ten of Roll For Enterprise, which digs into how "legacy" became a dirty word in IT.

The problem, and the reason for a lot of the pushback on the mere idea of bimodal IT, is that most measurement frameworks only evaluate against one of the two axes. This is how we get to the failure modes of bimodal: the cool kids playing with new tech, and boring, uncool legacy tech in a forgotten corner.

Technologies can be successful while concentrating only on stability or only on agility, depending on target market; think of how successful Windows was in the 95-98-ME era despite tragic levels of instability, or for a more positive example, look at something like QNX quietly running nuclear reactors. However, in most cases it’s best to deliver at least some modicum of both stability and agility.

It’s About Time

Another factor to bear in mind is the different timescales that are in play. The lifetime of services can be short, in which case it’s okay to iterate rapidly & obsolesce equally rapidly — or more pithily, "move fast and break things", without slowing down to update the docs. On the other hand, the lifecycle of users can be long, if you treat them right at first contact. This is why it’s so important to strike a balance between onboarding new users quickly and serving existing ones. If you turn off new users to please existing power users, you’ll regret it quickly; by definition, there are more potential users out there than existing ones. The precise balance point to aim for depends on what percentage of the total addressable user base has already adopted your service, but there will always be new or more casual users who can benefit from helpful features, as long as they don’t get in the way of people who already know what they’re doing.

There’s no easy way to resolve the tension between agility and stability, speed of development and ease of maintainability, or power-user features and helpful onboarding. All I can suggest is to try to keep both in mind, because I have seen entire products sunk by an excessive focus on one or the other. If you do decide to focus your efforts on one side of the balance, that may still be fine — just as long as you’re conscious of what you’re giving up.


🖼️ Photos by MILKOVÍ and Aron Visuals on Unsplash

Foundations

What Rome is mainly built on is Rome.1

This is a view of the foundations of the Roman hotel I stayed in last night. Luckily, being a fairly modern structure, they built it in a way that exposes what lies underneath.

There’s an enterprise software architecture metaphor in there, and in fact I already wrote about it:

People, including many Italians, bemoan the lack of infrastructure around here. The problem is that as soon as you sink a shovel in the ground, you hit a priceless historical artefact. Then you have to spend months digging it out with toothbrushes and tweezers, and then you get to shovel a few more loads before there is another clunk and everything has to stop again. I shudder to think of how many times things have just been quickly reburied so as not to delay work... Here's an example from my home town: this monstrosity was built over the city amphitheatre, and it looks like the replacement will still not allow the ruins to be visited.

This is very much like what happens in established corporations. You can't just sink a shaft or dig a trench, because there are very good chances you will interfere with something that's already there. You also wind up with your brand new shiny thing still relying on thousand-year-old culverts for its drainage. You have to allow for all of these things when you plan what you want to build next.

Remember this when you bemoan how change-averse large companies are: if you don’t plan changes carefully, you risk breaking something critical somewhere else. This doesn’t mean you can’t change things, just that you have to go about it carefully.

Here is an example from London of major change, carefully planned, and done right.


  1. With apologies to Sir Pterry: "But then, practically everywhere in Ankh-Morpork had cellars that were once the first or even second or third floors of ancient buildings, built at the time of one of the city’s empires when men thought that the future was going to last for ever. And then the river had flooded and brought mud with it, and walls had gone higher and, now, what Ankh-Morpork was built on was mostly Ankh-Morpork. People said that anyone with a good sense of direction and a pickaxe could cross the city underground by simply knocking holes in walls." Excerpt from The Truth

Cloud Adoption Is Still Not A Done Deal

I have some thoughts on this new piece from 451 Research about IT provisioning. The report is all about how organisations that are slow to deliver IT resources will struggle to achieve their other goals. As business becomes more and more reliant on IT, the performance of IT becomes a key controlling factor for the overall performance of the entire business.

This connection between business and IT is fast becoming a truism; very few businesses could exist without IT, and most activities are now IT-enabled to some extent. If you’re selling something, you’ll have a website. People need to be able to access that website, and you need to make regular changes as you roll out new products, run sales promotions, or whatever. All of that requires IT support.

Where things get interesting is in the diagnosis of why some organisations succeed and others do not:

Just as internal IT culture and practices have an impact on provisioning time, they can also severely impact acceptance of technologies. Although the promise of machine learning and artificial intelligence (AI) is emerging among IT managers who took early steps toward machine-enabled infrastructure control, much work remains in convincing organizations of the technologies' benefits. In fact, the more manual the processes are for IT infrastructure management, the less likely that IT managers believe that machine learning and AI capabilities in vendor products will simplify IT management. Conversely, most managers in highly automated environments are convinced that these technologies will improve IT management.

If the IT team is still putting hands on keyboards for routine activities, that’s a symptom of some deeper rot.

It may appear easy to regard perpetual efforts of organizations to modernize their on-premises IT environments as temporary measures to extract any remaining value from company-owned datacenters before complete public cloud migration occurs. However, the rate of IT evolution via automation technologies is accelerating at a pace that allows organizations to ultimately transform their on-premises IT into cloudlike models that operate relatively seamlessly through hybrid cloud deployments.

The benefits of private cloud are something I have been writing about for a long time:

The reason this type of organisation might want to look at private cloud is that there’s a good chance that a substantial proportion of that legacy infrastructure is under- or even entirely un-used. Some studies I’ve seen even show average utilisation below 10%! This is where they get their elasticity: between the measured service and the resource pooling, they get a much better handle on what that infrastructure is currently used for. Over time, private cloud users can then bring their average utilisation way up, while also increasing customer satisfaction.

The bottom line is, if you already own infrastructure, and if you have relatively stable and predictable workloads, your best bet is to figure out ways to use what you already have more efficiently. If you just blindly jump into the public cloud, without addressing those cultural challenges, all you will end up with is a massive bill from your public cloud provider.

Large organisations have turning circles that battleships would be embarrassed by, and their radius is largely determined by culture, not by technology. Figuring out new ways to use internal resources more efficiently (private cloud), perhaps in combination with new types of infrastructure (public cloud), will get you where you need to be.

That cultural shift is the do-or-die, though. The agility of a 21st century business is determined largely by the agility of its IT support. Whatever sorts of resources the IT department is managing, they need to be doing so in a way which delivers the kinds of speed and agility that the business requires. If internal IT becomes a bottleneck, that’s when it gets bypassed in favour of that old bugbear of shadow IT.

IT is becoming more and more of a differentiator between companies, and it is also a signifier of which companies will make it in the long term – and which will not. It may already be too late to change the culture at organisations still mired in hands-on, artisanal provisioning of IT resources; for everyone else, completing that transition needs to be a priority.


Photo by Amy Skyer on Unsplash

More of Me

I have not been posting here nearly as much as I mean to, and I need to figure out a way to fix that.

In my defence, the reason is that I have been writing a lot lately, just not here. I have monthly columns at DevOps.com and IT Chronicles, as well as what I publish over at the Moogsoft blog. I aim for weekly blog posts, but that’s already three of the four weekly slots in each month taken up right there - plus I do a ton of other writing (white papers, web site copy, other collateral) that doesn’t get associated so directly with me.

As it happens, though, I am quite proud of my latest three pieces, so I’m going to link them here in case you’re interested. None of these are product pitches, not even the one on the company blog; they are more reflections on the IT industry and where it is going.

Do We Still Need the Datacenter? - a deliberately provocative title, I grant you, but it was itself provoked by a moment of cognitive dissonance when I was planning for the Gartner Data Center show while talking to IT practitioners who are busily getting rid of their data centers. Gartner themselves have recognised this shift, renaming the event to "IT Infrastructure, Operations Management & Data Center Summit" - a bit of a mouthful, but more descriptive.

Measure What’s Important: DevFinOps - a utopian piece, suggesting that we should embed financial data (cost and value) directly in IT infrastructure, to simplify impact calculation and rationalise decision making. I doubt this will ever come to pass, at least not like this, but it’s interesting to think about.

Is Premature Automation Holding IT Operations Back? - IT at some level is all about automation. The trick is knowing when to automate a task. At what point does automating too early stop being merely wasteful and become actively harmful?


Photos by Patrick Perkins on Unsplash

Not Biting My Tongue

I spend a lot of time explaining the behaviour of enterprise buyers and vendors. There are often perfectly good reasons for doing something in a way that is now considered old-fashioned or uncool. Especially for vendors, the argument of "people still buy X! for money!" is a powerful incentive to continue making X.

Where things go wrong is when stodgy enterprise vendors put on their dad-jeans and go down to the skate park.

Case in point: BMC trying to jump on the AIOps bandwagon. The whole thing is a pretty spectacular case study in missing the point, but I think this paragraph is the nadir:

As mentioned above, AIOps platforms should encompass the IT disciplines of Performance Management, Service Management, Automation, and Process Improvement, along with technologies such as monitoring, service desk, capacity management, cloud computing, SaaS, mobility, IoT and more.

If you’re not familiar with AIOps, it’s a model that Gartner came up with (paid link, unless you’re a Gartner subscriber) to describe some shifts in the IT operations market. The old category of ITOA (IT Operations Analytics) had been broadened to the point that it was effectively meaningless, and AIOps recognises a new approach to the topic.

The first thing to know about AIOps is that the "AI" bit did not originally stand for Artificial Intelligence. When the term was originally coined, AIOps actually stood for Algorithmic IT Operations. However, in these fallen times when everyone and their dog claims AI, Machine Learning, or other poorly-understood snake-oil, everyone assumed that AIOps was something to do with AI. Even Gartner have now given up, and retconned it to "Artificial Intelligence for IT Operations".

Anyway, AIOps solutions sit at the intersection of monitoring, service desk, and automation. The idea is that they ingest monitoring data, apply algorithms to help operators find valuable needles in the haystack of alerts, sync with service desk systems to plug in to existing processes, and trigger automated diagnostic and resolution activities.
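To make that concrete, here is a deliberately toy sketch of the "needles in the haystack" step - not any vendor’s actual algorithm, just the shape of the pipeline: many raw alerts in, a handful of grouped incidents out, ready to be handed to the service desk. All the names and the grouping heuristic are invented.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g. "db-03" -- the thing that raised the alert
    check: str        # e.g. "cpu", "latency"
    timestamp: float  # seconds since epoch

def correlate(alerts, window=300):
    """Group raw alerts into candidate incidents.

    Toy heuristic: alerts from the same source within the same
    five-minute window belong to the same incident. Real AIOps
    products use far richer signals (topology, text similarity,
    learned patterns), but the contract is the same: many alerts
    in, few actionable incidents out.
    """
    incidents = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        bucket = int(alert.timestamp // window)
        incidents[(alert.source, bucket)].append(alert)
    return list(incidents.values())

def open_tickets(incidents):
    # Stand-in for the service-desk integration step: in practice this
    # would call an ITSM API rather than print.
    for n, incident in enumerate(incidents, start=1):
        checks = sorted({a.check for a in incident})
        print(f"INC-{n:04d}: {incident[0].source} ({', '.join(checks)})")

if __name__ == "__main__":
    raw = [
        Alert("db-03", "cpu", 1000.0),
        Alert("db-03", "latency", 1050.0),
        Alert("web-17", "http-5xx", 1100.0),
    ]
    open_tickets(correlate(raw))  # three alerts become two incidents
```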

So far so good - but here’s why it’s so laughable for BMC to claim AIOps.

BMC’s whole model is BSM - Business Service Management. Where the centre of AIOps is the algorithms, the centre of BSM is the CMDB, the Configuration Management Database.

The model for applying BSM goes something like this:

  1. Fully populate CMDB: define service models & document infrastructure
  2. When an alert comes in, determine which infrastructure element it came from, then walk the service model to determine what the cause and effect are (see the sketch after this list)
  3. Create a ticket in the ITSM suite to track resolution
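As promised above, here is a toy sketch of step 2, the service-model walk - invented CI names, grossly simplified, but enough to show where the approach leans on its assumptions:

```python
# Toy CMDB: each configuration item (CI) maps to the items that depend on it.
# In a real CMDB this graph is discovered and maintained continuously; all
# the names here are made up for illustration.
DEPENDENTS = {
    "storage-array-7": ["db-server-3"],
    "db-server-3": ["order-service"],
    "web-server-12": ["order-service"],
    "order-service": ["online-store"],  # the business service at the top
    "online-store": [],
}

def impacted_services(ci, graph=DEPENDENTS):
    """Walk the service model upwards from an alerting CI and return
    everything that could be affected. This only works if the alerting
    CI and all of its dependency edges are already in the graph."""
    seen = set()
    stack = [ci]
    while stack:
        for dependent in graph.get(stack.pop(), []):
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return seen

print(impacted_services("storage-array-7"))
# {'db-server-3', 'order-service', 'online-store'}

# An alert from anything the CMDB does not know about maps to nothing:
print(impacted_services("ephemeral-k8s-node-42"))  # set()
```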

Note the hidden assumptions, even in this grossly over-simplified version:

  1. The CMDB can be fully populated given finite time and effort
  2. All alerts relate to known elements, and all elements have known dependencies
  3. Every failure has one cause and falls within one group’s area of responsibility

In today’s IT, precisely none of these assumptions hold true. No matter how much effort and how many auto-discovery tools are thrown at the task, the CMDB will always be a snapshot in time1. Jorge Luis Borges famously documented the logical endpoint of this progression:

... In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

purportedly from Suárez Miranda, Travels of Prudent Men, Book Four, Ch. XLV, Lérida, 1658

There is also a timing factor: what happens if an alert comes in between a change occurring and being documented? Another question is, what happens if operators simply don’t have visibility into part of the infrastructure - say, managed hosting, or outside telco networks? And finally, the big one: what if there is no one root cause? Modern architectures are sufficiently robust and resilient that it’s quite rare for any one macro-event to take them out. What gets you is usually a combination of a number of smaller issues, all occurring together in some unforeseen way.

The whole architecture of BSM is built around assumptions that are less and less true. This is not to say that individual products within that suite don’t have value, but the old BSM model is no longer fit for purpose. The result is an example of "shipping the org chart": the CMDB is at the core and Remedy is the interface, because that is what the organisation demands. However, you can’t just drape AIOps over the old suite and call it good! Radical changes are required, not weak attempts to shoe-horn existing "IT disciplines" into the new mold.

AIOps represents the algorithmic convergence of ITOM & ITSM. BSM, in contrast, treats these as discrete steps in a sequential process. This is Waterfall thinking applied to IT Ops, where today’s IT infrastructures demand Agile thinking.

The most relevant question for users is, of course, "do I trust a legacy vendor to deliver a new model that is so radically different from what it has built its entire strategy around?"

The answer is simple, because it’s determined by the entire structure and market position of all the Big Four vendors. Like its peers, BMC makes its revenue in the old model of IT. As long as there is money to be made by doing the same things it has always done, there is enormous inertia to work against (the Innovator’s Dilemma in action). It takes an existential threat to disturb that sort of equilibrium. It was not until ServiceNow was seriously threatening the Remedy user base that BMC started to offer SaaS options and subscription pricing. It will take an equivalent upheaval in its business for any legacy vendor to adopt a radically new strategy like AIOps. These days, customers can’t wait for one vendor to see the writing on the wall; they need to move at the speed their customers require.

Much as I would like to believe that we have got BMC running scared, I don’t think that’s the case - so they will continue along their very profitable way. This is of course exactly how it should be! If they were to jump on every new bandwagon, their shareholders would be rightly furious. They absolutely should focus on doing what they do well.

But that does not include doing AIOps. If you’re a practitioner looking at this, I hope it’s obvious who you want to go with: the people who created the new model and are steeped in what is required to deliver and adopt it - or the ones who see a keyword trending on Google and write a quick ambulance-chasing blog post - or claim that Remedy is a key part of AIOps - or even that mainframes are.


  1. Which is why BMC’s own automation products have their separate real-time operational data stores, which sync with the CMDB on a schedule. 

Talk Softly

With the advent of always-on devices that are equipped with sensitive microphones and a permanent connection to the Internet, new security concerns are emerging.

Virtual assistants like Apple’s Siri, Microsoft’s Cortana and Google Now have the potential to make enterprise workers more productive. But do "always listening" assistants pose a serious threat to security and privacy, too?

Betteridge’s Law is in effect here. Sure enough, the second paragraph of the article discloses its sources:

Nineteen percent of organizations are already using intelligent digital assistants, such as Siri and Cortana, for work-related tasks, according to Spiceworks’ October 2016 survey of 566 IT professionals in North America, Europe, the Middle East and Africa.

A whole 566 respondents, you say? From a survey run by a help desk software company? One suspects that the article is over-reaching a bit - and indeed, if we click through to the actual survey, we find this:

Intelligent assistants (e.g., Cortana, Siri, Alexa) used for work-related tasks on company-owned devices had the highest usage rate (19%) of AI technologies

That is a little bit different from what the CSO Online article is claiming. Basically, anyone with a company-issued iPhone who has ever used Siri to create an appointment, set a reminder, or send a message about anything work-related would fall into this category.

Instead, the article makes the leap from that limited claim to extrapolating that people will be bringing their Alexa device to work and connecting it to the corporate network. Leaving aside for a moment the particular vision of hell that is an open-plan office where everyone is talking into the air all the time, what does that mean for the specific recommendations in the article?

  1. Focus on user privacy
  2. Develop a policy
  3. Treat virtual assistant devices like any IoT device
  4. Decide on BYO or company-owned
  5. Plan to protect

These are actually not bad recommendations - but they are so generic as to be useless. Worse, when they do get into specifics, they are almost laughably paranoid:

Assume all devices with a microphone are always listening. Even if the device has a button to turn off the microphone, if it has a power source it’s still possible it could be recording audio.

This is drug-dealer level of paranoia. Worrying that Alexa might be broadcasting your super secret and valuable office conversations does not even make the top ten list of concerns companies should have about introducing such devices into their networks.

The most serious threat you can get from Siri at work is co-workers pranking you if you enable access from the lock screen. In that case, anyone can grab your unattended iPhone and instruct Siri to call you by some ridiculous name. Of course I would never sabotage a colleague’s phone by renaming him "Sweet Cakes". Ahem. Interestingly, it turns out that the hypothetical renaming also extends to the entry in the Contacts…

The real concern is that these misguided recommendations take the focus off advice that would actually be useful in the real world. For instance, if you must have IoT devices in the office for some reason, this is good advice:

One way to segment IoT devices from the corporate network is to connect them to a guest Wi-Fi network, which doesn’t provide access to internal network resources.

This recommendation applies to any device that needs Internet access but does not require access to resources on the internal network. This will avoid issues where, by compromising a device (or its enabling cloud service), intruders are able to access your internal network in what is known as a "traversal attack". If administrators restrict the device’s access to the network, that will also restrict the amount of damage an intruder can do.
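To make the idea concrete, here is a toy model - invented addresses, first-match packet-filter semantics, nothing vendor-specific - of what "no access to internal network resources" means for a device sitting on the guest segment:

```python
import ipaddress

# Invented address plan: 192.168.50.0/24 is the guest Wi-Fi segment the IoT
# devices sit on, 192.168.10.0/24 is the internal network they must not reach.
RULES = [
    # (source,           destination,       action) -- first match wins
    ("192.168.50.0/24",  "192.168.10.0/24", "deny"),   # guest -> internal
    ("192.168.50.0/24",  "0.0.0.0/0",       "allow"),  # guest -> Internet
    ("192.168.10.0/24",  "0.0.0.0/0",       "allow"),  # internal -> anywhere
]

def is_allowed(src_ip, dst_ip, rules=RULES):
    """Grossly simplified first-match packet filter."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for rule_src, rule_dst, action in rules:
        if src in ipaddress.ip_network(rule_src) and dst in ipaddress.ip_network(rule_dst):
            return action == "allow"
    return False  # default deny

# A smart speaker on the guest segment can reach its cloud service...
print(is_allowed("192.168.50.23", "203.0.113.10"))  # True
# ...but even if compromised, it cannot traverse into the internal file server.
print(is_allowed("192.168.50.23", "192.168.10.5"))  # False
```

The real enforcement lives in the router or firewall configuration, of course; the point is simply that the guest segment’s only path is outwards.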

Thinking about access to data is a good idea in general, not just for voice assistants or IoT devices:

Since personal virtual assistants "rely on the cloud to comprehend complex commands, fetch data or assign complex computing tasks to more resources," their use in the enterprise raises issues about data ownership, data retention, data and IP theft, and data privacy enforcement that CISOs and CIOs will need to address.

Any time companies choose to adopt a service that relies on the cloud, their attack surface is not limited to the device itself, but also extends to that back-end service - which is almost certainly outside their visibility and control. Worse, in a BYOD scenario, users may introduce new devices and services to the corporate network that are not designed or configured for compliance with organisations’ security and privacy rules.

Security is important - but let’s focus on getting the basics right, without getting distracted by overly-specific cybersecurity fantasy role-playing game scenarios involving Jason Bourne hacking your Alexa to steal your secrets.

Replace or Augment?

One of the topics that currently exercise the more forward-looking among us is the potential negative impact of automation on the jobs market and the future of work in general. Comparisons are frequently made with the Industrial Age and its consequent widespread social disruption - including violent reactions, most famously the Luddite and saboteur movements.

Some cynics have pointed out that there was less concern when it was only blue-collar jobs that were being displaced, and that what made the chattering classes sit up and pay attention was the prospect of the disruption coming for their jobs too. I could not possibly comment on this view - but I can comment on what I have seen in years of selling automation software into large companies.

For more than a decade, I have been involved in pitching software that promised to automate manual tasks. My customers have always been large enterprises, usually the Global 2000 or their immediate followers. Companies like this do not buy software on a whim; rather, they build out extensive business cases and validate their assumptions in detail before committing themselves1. There are generally three different ways of building a business case for this kind of software:

  • Support a growth in demand without increasing staff levels (as much);
  • Support static demand with decreasing staff;
  • Quality improvement (along various different axes) and its mirror image, risk avoidance.

The first one is pretty self-evident - if you need to do more than you can manage with the existing team, you need to hire more people, and that costs money. There are some interesting second-order consequences, though. Depending on the specifics of the job to be done, it will take a certain amount of time to identify a new hire and train them up to be productive. Six months is a sensible rule of thumb, but I know of places where it takes years. If the rate of growth gets fast enough, that lag time starts to be a major issue. You can't just hire yourself out of the hole, even with endless money. The hole may also be getting deeper if other companies in the same industry and/or region are all going through the same transformation at the same time, and all competing for the same talent.

If instead you can adopt tooling that will make your existing people more efficient and let you keep up with demand, then it is worth investing some resources in doing so.

That second business case is the nasty one. In this scenario, the software will pay for itself by automating people's jobs, thus enabling the company to fire people - or in corporate talk, "reduce FTE2 count". The fear of this sort of initiative is what makes rank and file employees often reflexively suspicious of new automation tools - over and above their natural suspicion that a vendor might be pitching snake-oil.

Personally I try not to build business cases around taking away people's jobs, mainly because I like being able to look myself in the mirror in the mornings (it's hard to shave any other way, for one thing). There is also a more pragmatic reason not to build a business case this way, though, and I think it is worth exploring for its wider implications.

Where Are The Results?

The thing is, in my experience business cases for automation built around FTE reduction have never been delivered successfully - at least not when they focus on automating existing tasks. That is an important caveat, and I will come back to it.

Sure, the business case might look very persuasive - "we execute this task roughly a dozen times a day, it takes half an hour each time, and if you add that up, it's the equivalent of a full-time employee (an FTE), so we can fire one person". When you look at the details, though, it's not quite so simple.
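Running those (hypothetical) numbers through the FTE arithmetic from footnote 2 shows why the pitch sounds so persuasive:

```python
# The arithmetic behind the "automate it and fire someone" pitch, using the
# FTE definition from footnote 2 (roughly 210 working days of 8 hours, i.e.
# about 1,680 hours per year). All the inputs are illustrative.
times_per_day = 12                       # "roughly a dozen times a day"
hours_per_run = 0.5                      # "half an hour each time"
working_days = 210
fte_hours_per_year = working_days * 8    # 1,680 hours

annual_hours = times_per_day * hours_per_run * working_days  # 1,260 hours
fte_cost = annual_hours / fte_hours_per_year

print(f"{annual_hours:.0f} hours/year = {fte_cost:.2f} FTE")
# 1260 hours/year = 0.75 FTE -- close enough to "one person" for a slide, but
# the saving only materialises if that person's whole job was this one task.
```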

The fact is that people rarely work at discrete tasks. Instead, they spend their time on a variety of different tasks, more or less integrated into a whole process. There is a tension between the two extremes: at the one end you have workers on a repetitive assembly line, while at the other you have people jumping around so much they can never get anything done. Most organisational functions are somewhere in between those two poles.

If automation is focused on addressing those discrete tasks, it absolutely will bring benefits, but those benefits will add up to freeing up existing employees to catch up with other tasks that were being neglected. Every IT department I have ever seen has a long tail of to-dos that keep getting pushed down the stack by higher-priority items. Automation is the force multiplier that promises to let IT catch up with its to-do list.

This sort of benefit is highly tactical, and is generally the domain of point solutions that do one thing and do it well. This will enable the first kind of business case, delivering on new requirements faster. It will not deliver the second kind of business case. The FTEs freed up through automation get redeployed, not fired, and while the organisation is receiving benefit from that, it is not what was built into the assumptions of the project, which will cause problems for its sponsors. Simply put, if someone ever checks the return on the investment (an all too rare occurrence in my experience), the expected savings will not be there.

Strategic benefits of automation, on the other hand, are delivered by bundling many of these discrete tactical tasks together into a new whole.

Realising those strategic benefits is not as straightforward as dropping a new tool into an existing process. Actually achieving the projected returns will require wholesale transformation of the process itself. This is not the sort of project that can be completed in a quarter or two (although earlier milestones should already show improvement). It should also not be confused with a technology implementation project. Rather, it is a business transformation project, and must be approached as such.

Where does this leave us?

Go Away Or I Will Replace You With A Very Small Shell Script

In my experience in the field, while tactical benefits of automation are achievable, true strategic improvement through automation can only be delivered by bundling together disparate technical tasks into a new whole. The result is that it is not skilled workers that are replaced, but rather the sorts of undifferentiated discrete tasks that many if not most large enterprises have already outsourced.

This shows who the losers of automation will be: it is the arbitrageurs and rent-seekers, the body-rental shops who provide no added value beyond cheap labour costs. The jobs that are replaced are those of operators, what used to be known as tape jockeys; people who perform repetitive tasks over and over.

The jobs that will survive and even benefit from the wave of automation are those that require interaction with other humans in order to determine how to direct the automation, plus of course the specialists required to operate the automation tools themselves. The greatest value, however, will accrue to those who can successfully navigate the interface between the two worlds. This is why it is so important to own those interfaces.

What might change is the nature of the employment contracts for those new roles. While larger organisations will continue to retain in-house skills, smaller organisations for which such capabilities are not core requirements may prefer to bring them in on a consultative basis. This will mean that many specialists will need to string together sequences of temporary contracts to replace long-duration full-time employment.

This is its own scary scenario, of course. The so-called gig economy has not been a win so far, despite its much-trumpeted potential. Perhaps the missing part to making this model work is some sort of universal basic income to provide a base and a safety net between consulting jobs? As more and more of the economy moves in this direction, at least in part due to the potential of automation, UBI or something similar will be required to bridge the gap between the assumptions of the old economy and the harsh realities of the new one.

So, the robots are not going to take our jobs - but they are going to change them, in some cases into something unrecognisable. The best thing humans can do is to plan to take care of one another.


Images by Annie Spratt, Janko Ferlic, and Jayphen Simpson via Unsplash


  1. Well, in theory. Sometimes you lose a deal because the other vendor's CEO took your prospect's entire management team for a golfing weekend in the corporate jet. But we don't talk about that. 

  2. An FTE is a Full-Time Equivalent: the amount of work expected of one employee, typically over a year, allowing for holidays and so on. Typically that means somewhere between 200 and 220 working days of 8 hours each, so 1600 to 1760 hours in a year. The "FTE cost" of an activity is calculated by taking the time required to perform the activity once, multiplying that by the number of times the activity needs to be performed in a year, and dividing by the number of hours in an FTE-year. 

Own Your Interfaces

The greatest benefit of the Internet is the democratisation of technology. Development of customised high-tech solutions is no longer required for success, as ubiquitous commodity technology makes it easy to bring new product offerings to market.

Together with the ongoing move from one-time to recurring purchases, this process of commoditisation moves the basis of the competition to the customer experience. For most companies, the potential lifetime value of a new customer is now many times the profit from their initial purchase. This hoped-for future revenue makes it imperative to control the customer's experience at every point.

As an illustration, let us consider two scenarios involving outsourcing of products that are literally right in front of their users for substantial parts of the day.

Google Takes Its Eye Off the Watch

The first is Android Wear, Google’s answer to the Apple Watch. As is (usually) their way, Google have not released their own smartwatch product. Instead, they have released the Android Wear software platform, and left it to their manufacturing partners to build the actual physical products.

Results have been less than entirely positive:

If Android Wear is to be taken as seriously as the Apple Watch, we actually need an Android version of the Apple Watch. And these LG watches simply aren't up to the task.

Lacking the sort of singular focus and vertical integration between hardware and software that Apple brings to bear, these watches fail to persuade, and not by a little:

I think Google and LG missed the mark on every level with the Style, and on the basis of features alone that it is simply a bad product.

So is the answer simply to follow Apple's every move?

It is certainly true Google have shown with their Nexus and Pixel phones just how much better a first-party Android phone can be, and it is tempting to extrapolate that success to a first-party Google Watch. However, smartwatches are still very much a developing category, and it is not at all clear whether they can go beyond the current fitness-focused market. In fact, I would not be surprised to see a contraction in the size of the overall smartwatch market. Many people who bought a first-generation device out of curiosity and general technophilia may well opt not to replace that device.

Apple Displays Rare Clumsiness

In that case, let us look at an example outside the smartwatch market - and one where the fumble was Apple's.

Ever since Retina displays became standard first on MacBooks1 and then on iMacs, Mac users have clamoured for a large external display from Apple, to replace the non-Retina Apple Thunderbolt Display that still graces many desks. Bandwidth constraints meant that this was not easy to do until a new generation of hardware came to market, but Apple fans were disappointed when, instead of their long-awaited Apple Retina 5K Display, they were recommended to buy a pretty generic-looking offering from LG.

Insult was added to injury when it became known that the monitor was extremely sensitive to interference, and in fact became unusable if placed anywhere near a wifi router:

the hardware can become unusable when located within 2 meters of a router.

Two metres is not actually that close; it's over six feet, if you're not comfortable with metric units. Many home office setups would struggle with that constraint - I know mine would.

Many have pointed out that one of the reasons for preferring expensive Apple solutions is that they are known to be not only beautifully designed, but obsessively over-engineered. It beggars belief that perfectionist, nit-picking Apple would have let a product go to market with such a basic flaw - and yet, today, if an Apple fan spends a few thousand dollars on a new MacBook Pro and a monitor in an Apple Store, they will end up looking at a generic-looking LG monitor all day - if, that is, they can use the display at all.

Google and Apple both ceded control of a vitally important part of the customer experience to a third party, and both are now paying the price in terms of dissatisfied users. There are lessons here that also apply outside of manufacturing and product development.

Many companies, for instance, outsource functions that are seen as ancillary to third parties. A frequent candidate for these arrangements is support - but to view support this way is a mistake. It is a critical component of the user experience, and all the more so because it is typically encountered at times of difficulty. A positive support experience can turn a customer into a long-term fan, while a negative one can put them off for good.

Anecdata Time

A long time ago and far far away, I did a stint in technical support. During my time there, my employer initiated a contract with a big overseas outsourcing firm. The objective was to add a "tier zero" level of support, which could deal with routine queries - the ones where the answer was a polite invitation to Read The Fine Manual, basically - and escalate "real" issues to the in-house support team.

The performance of the outsourcer was so bad that my employer paid a termination fee to end the contract early, after less than one year. Without going into the specifics, the problem was that the support experience was so awful that it was putting off our customers. Given that we sold mainly into the large enterprise space, where there is a relatively limited number of customers in the first place, and that we aimed to cross-sell our integrated products to existing customers, a sudden increase in the number of unhappy customers was a potential disaster.

We went back to answering the RTFM queries ourselves, customer sat went back up into the green, and everyone was happy - well, except for the outsourcer, presumably. The company had taken back control of an important interface with its customers.

Interface to Differentiate

There are only a few of these interfaces and touch-points where a company has an opportunity to interact with its customers. Each interaction is an opportunity to differentiate against the competition, which is why it is so vitally important to make these interactions as streamlined and pleasant as possible.

This requirement is doubly important for companies who sell subscription offerings, as they are even more vulnerable to customer flight. In traditional software sales, the worst that can happen is that you lose the 20% (or whatever) maintenance, as well as a cross-sell or up-sell opportunity that may or may not materialise. A cancelled subscription leaves you with nothing.

A customer who buys an Android Wear smartwatch and has a bad experience will not remember that the watch was manufactured by LG; they will remember that their Android Wear device was not satisfactory. In the same way, someone who spends their day looking at an LG monitor running full-screen third-party applications - say, Microsoft Word - will be more open to considering a non-Apple laptop, or not fighting so hard to get a MacBook from work next time around. Both companies ceded control of their interface with their customers.

Usually companies are very eager to copy Apple and Google's every move. This is one situation where instead there is an opportunity to learn from their mistakes. Interfaces with customers are not costs to be trimmed; instead, they can be a point of differentiation. Treat them as such.


Image by Austin Neill via Unsplash


  1. Yes yes, except for the Air. 

Thinking Two Steps Behind

In my day job, I spend a lot of my time building business cases to help understand whether our technology is a good fit for a customer. When you are building a startup business, this is the expected trajectory: in the very early days, you have to make the technology work, translating the original interesting idea into an actual product that people can use in the real world. Once you have a working product, though, it’s all about who can use it, and what they can do with it.

In this phase, you stop pitching the technology. Instead, you ask questions and try to understand what ultimate goals your prospective customer has. Only once you have those down do you start talking about what your technology can do to satisfy those goals. If you do not do this, you find yourself running lots of "kick the tyres" evaluations that never go anywhere. You might have lots of activity, but you won’t have many significant results to show for it.

This discipline of analysing goals and identifying a technology fit is very useful in analysing other fields too, and it helps to identify when others may be missing some important aspect of a story.

Let’s think about driverless cars

Limited forms of self-driving technology already exist, from radar cruise-control to more complete approaches such as Tesla’s Autopilot. None of these are quite ready for prime time, and there are fairly regular stories about their failures, with consequences from the comic to the tragic.


Because of these issues, Tesla and others require that drivers keep their hands on the wheel even when the car is in Autopilot mode. This brings its own problems, falling into an "uncanny valley" of attention: the driver is neither fully engaged, nor can they fully disengage. Basically it’s the worst of both worlds, as drivers are no longer involved in the driving, but still cannot relax and read a book or watch a film.

These limitations have not stopped much of the commentary from assuming self-driving car technology to be, if not a problem that is already solved, at least one that is solvable. Extrapolations from that point lead to car ownership becoming a thing of the past as people simply summon self-driving pods to their location, which in turn causes massive transformations in both labour force (human drivers, whether truckers or Uber drivers, are no longer required) and the physical make-up of cities (enormous increases in the utilisation rate for cars mean that large permanent parking structures are no longer required) - let alone the consequences for automotive manufacturers, faced with a secular transformation in their market.

Okay, maybe not cars

Self-driving technology is not nearly capable (yet) of navigating busy city streets, full of unpredictable pedestrians, cyclists, and so on, so near-term projections focus on what is perceived as a more easily solvable problem: long-distance trucking.

The idea is that currently existing self-driving tech is already just about capable of navigating the constrained, more predictable environment of the highways between cities. Given some linear improvement, it does not seem that far-fetched to assume that a few more years of development would give us software capable of staying in lane and avoiding obstacles reliably enough to navigate a motorway in formation with other trucks.

Extrapolating this capability to the wholesale replacement of truckers with autonomous robot trucks, however, is a big reach - and not so much for technical reasons, as for less easily tractable external reasons.

Assume for the sake of argument that Otto (or whoever) successfully develop their technology and build an autonomous truck that can navigate between cities, but not enter the actual city itself. This means that Otto or its customers would need to build warehouses right at the motorway junctions in areas where they wish to operate, to function as local hubs. From these locations, smaller, human-operated vehicles would make the last-mile deliveries to homes and businesses inside the city streets, which are still not accessible to the robot trucks.

This is all starting to sound very familiar. We already have a network optimised for long-distance freight between local distribution hubs. It is very predictable by design, allowing only limited variables in its environment, and it is already highly instrumented and very closely monitored. Even better, it has been in operation at massive scale for more than a century, and has a whole set of industry best practices and commercial relationships already in place.

I am of course talking about railways.

Get on the train

Let’s do something unusual for high-tech, and try to learn something from history for once. What can the example of railways teach us about the potential for self-driving technology on the road?

The reason for the shift from rail freight to road freight was to avoid trans-shipment costs. It’s somewhat inefficient to load your goods onto one vehicle, drive it to a warehouse, unload them, wait for many other shipments to be assembled together, load all of them onto another vehicle, drive that vehicle to another warehouse, unload everything, load your goods onto yet another vehicle, and finally drive that third vehicle to your final destination. It’s only really worthwhile to do this for bulk freight that is not time-sensitive. For anything else, it’s much easier to just back a truck up to your own warehouse, load up the goods, and drive them straight to their final destination.

Containerisation helped somewhat, but railways are still limited to existing routes; a new rail spur is an expensive proposition, and even maintenance of existing rail spurs to factories is now seen as unnecessary overhead, given the convenience of road transport’s flexibility and ability to deliver directly to the final destination.

In light of this, a network of self-driving trucks that are limited to predictable, pre-mapped routes on major highways can be expected to run into many of the same issues.

Don’t forget those pesky humans

Another interesting lesson that we can take from railways is the actual uptake of driverless technology. As noted above, railways are a far more predictable environment than roads: trains don’t have to manoeuvre, they just move forwards along the rails, stopping at locations that are predetermined. Changes of directions are handled by switching points in the rails, not by the operator needing to steer the train around obstacles. Intersections with other forms of transport are rare, as other traffic generally uses bridges and underpasses. Where this separation is not possible, level crossings are still far more controlled than road intersections. Finally, there are sensors everywhere on railways; controllers know exactly where a certain train is, what its destination and speed are, and what is the state of the network around it.

So why don’t we have self-driving trains?

The technology exists, and has done so for years - it’s a much simpler problem than self-driving cars - and it is in use in a few locations around the world (e.g. London and Milan); but still, human-operated trains are the norm. Partly, it’s a labour problem; those human drivers don’t want to be out of a job, and have been known to go on strike against even the possibility of the introduction of driverless trains. Partly, it’s a perception problem: trains are massive, heavy, powerful things, and most people simply feel more comfortable knowing that a human is in charge, rather than potentially buggy software. And partly, of course, it’s the economics; human train drivers are a known quantity, and any technology that wants to replace them is not.

This means that the added convenience of end-to-end road transport limits the uptake of rail freight, and human factors limit the adoption of driverless technology even where it is perfectly feasible - something that has not yet been proven in the case of road transport.

A more familiar example?

In Silicon Valley, people are usually too busy moving fast and breaking things that work to learn from other industries, let alone one that is over a hundred years old1, but there is a relevant example that is closer to home - literally.

When the Internet first opened to the public late last century, the way most people connected was through a dial-up modem over an analogue telephone line. We all became experts in arcane incantations in the Hayes AT command language, and we learned to recognise the weird squeals and hisses emitted by our modems and use them to debug the handshake with our ISP's modem at the far end. Modem speeds did accelerate pretty rapidly, going from the initial 9.6 kbits per second to 14.4, to 28.8, to the weird 33.6, to a screamingly fast 56k (if the sun was shining and the wind was in the right quarter) in a matter of years.

However, this was still nowhere near fast enough. These days, if our mobile phones drop to EDGE - roughly equivalent to a 56k modem on a good day - we consider the network as being basically unusable. Therefore, there was a lot of angst about how to achieve higher speeds. Getting faster network speeds in general was not a problem - 10 Mbps Ethernet was widely available at the time. The issue was the last mile from the trunk line to subscribers' homes. Various schemes were mooted to get fast internet to the kerb - or curb, for Americans. Motivated individuals could sign up for ISDN lines, or more exotic connectivity depending on their location, but very few did. When we finally got widespread consumer broadband, it was in the form of ADSL over the existing copper telephone lines.

So where does this leave us?

Driverless vehicles will follow the same development roadmap2: until they can deliver the whole journey end to end, uptake will be limited. Otherwise, they are not delivering what people need.

More generally, to achieve any specific goals, it is usually better to work with existing systems and processes. That status quo came to be over time, and generally for good reason. Looking at something now, without the historical context, and deciding that it is wrong and needs to be disrupted, is the sort of Silicon Valley hubris that ends in tears.

Right now, with my business analyst hat on, driverless vehicles look like a cool idea (albeit one that is still unproven) that is being shoe-horned into a situation that it is not a good match for. If I were looking at a situation like this one in my day job, I would advise everyone to take a step back, re-evaluate what the actual goals are, and see whether a better approach might be possible. Until then, no matter how good the technology gets, it won’t actually deliver on the requirements.

But that doesn’t get as many visionary thinkpieces and TED talks.


Images by Nabeel Syed and Darren Bockman via Unsplash, and by ronnieb via Morguefile


  1. The old saw is that "In Europe, a hundred miles is a long way; in the US, a hundred years is a long time". In Silicon Valley, which was all groves of fruit trees fifty years ago, that time frame is shorter still. 

  2. Sorry - not sorry. 

Reference This!

One of the downsides of working for a little startup that is going to change the world - but doesn’t quite have the name recognition yet - is that you get asked for customer references all. the. time.

Now on one level, I have absolutely no problem with this. It makes perfect sense from the point of view of a prospective customer: some rando just showed up, and sure, he’s got some good PowerPoint game, but I’ve never heard of him or his company - why should I waste any time on him?

The problem that a prospective customer might not appreciate is that by the nature of things, a growing startup has many more prospects than existing customers. Every single member of the sales team, if they are doing their job, has as many sales prospects at any one time as there are total customers in production. If we were to walk each of those prospects past a real customer, just so they could kick the tyres and see whether we have something real, pretty soon our existing customers would stop taking our calls - not a clever strategy, when we sell subscriptions and rely on customers renewing their contracts.

The trick, then, is to balance these requirements. On the side of the prospective customer, the goal is to validate whether this interesting startup has an actual product - or just an interesting idea and some vapourware slides. This is an absolutely valid goal - but prospective customers should recognise vendors’ incentives as well.

Reputable vendors who actually intend to build a lasting business have no more interest in wasting time and resources in projects that do not go anywhere than customers do. We know that our tech works (again, assuming for the sake of argument that the vendor you’re talking to is not just a straight-up scammer), so our goal is not to waste our limited time and resources chasing after something that is never going to be a successful, referenceable production implementation.

So, all of that being said - please don’t ask vendors for references on the first date. If the vendor you're talking to is any good, they will be qualifying you as aggressively as you are qualifying them. We vendors are very protective of our customers - once more, assuming you’re dealing with a reputable vendor in the first place! Please don’t see this as us being difficult or having something to hide; rather, it’s a preview of how your own relationship with us as a customer would be. If you trust us with your business, we will be equally protective of you. You want to be sure that if we come to you in the future with a request to talk to someone about your experience with our products, it’s for a good reason, and not something we will ask you to do every other week.

Once everyone is comfortable that there is a real opportunity - that is when we can get other parties involved. Until then, here’s a white paper, here’s a recorded webinar, here’s an article by an analyst who spoke to some of our customers - but my current customers are sacred to me, and I won’t introduce you to them until you convince me that you’re serious.

This has been your subtweet-as-blog-post of the day.


Image by María Victoria Heredia Reyes via Unsplash