I have not been posting here nearly as much as I mean to, and I need to figure out a way to fix that.
In my defence, the reason is that I have been writing a lot lately, just not here. I have monthly columns at DevOps.com and IT Chronicles, as well as what I publish over at the Moogsoft blog. I aim for weekly blog posts, but those commitments already take up three of the four weekly slots in each month - plus I do a ton of other writing (white papers, website copy, other collateral) that is not so directly associated with me.
As it happens, though, I am quite proud of my latest three pieces, so I’m going to link them here in case you’re interested. None of these are product pitches - not even the one on the company blog - but rather reflections on the IT industry and where it is going.
Do We Still Need the Datacenter? - a deliberately provocative title, I grant you, but it was itself provoked by a moment of cognitive dissonance when I was planning for the Gartner Data Center show while talking to IT practitioners who are busily getting rid of their data centers. Gartner themselves have recognised this shift, renaming the event to "IT Infrastructure, Operations Management & Data Center Summit" - a bit of a mouthful, but more descriptive.
Measure What’s Important: DevFinOps - a utopian piece, suggesting that we should embed financial data (cost and value) directly in IT infrastructure, to simplify impact calculation and rationalise decision making. I doubt this will ever come to pass, at least not like this, but it’s interesting to think about.
I spend a lot of time explaining enterprise buyers and vendors to each other. There are often perfectly good reasons for doing something in a way that is now considered old-fashioned or uncool. Especially for vendors, the argument of "people still buy X! for money!" is a powerful incentive to continue making X.
Where things go wrong is when stodgy enterprise vendors put on their dad-jeans and go down to the skate park.
As mentioned above, AIOps platforms should encompass the IT disciplines of Performance Management, Service Management, Automation, and Process Improvement, along with technologies such as monitoring, service desk, capacity management, cloud computing, SaaS, mobility, IoT and more.
If you’re not familiar with AIOps, it’s a model that Gartner came up with (paid link, unless you’re a Gartner subscriber) to describe some shifts in the IT operations market. The old category of ITOA had been broadened to the point that it was effectively meaningless, and AIOps recognises a new approach to the topic.
The first thing to know about AIOps is that the "AI" bit did not originally stand for Artificial Intelligence. When the term was originally coined, AIOps actually stood for Algorithmic IT Operations. However, in these fallen times when everyone and their dog claims AI, Machine Learning, or other poorly-understood snake-oil, everyone assumed that AIOps was something to do with AI. Even Gartner have now given up, and retconned it to "Artificial Intelligence for IT Operations".
Anyway, AIOps solutions sit at the intersection of monitoring, service desk, and automation. The idea is that they ingest monitoring data, apply algorithms to help operators find valuable needles in the haystack of alerts, sync with service desk systems to plug in to existing processes, and trigger automated diagnostic and resolution activities.
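To make the "needles in the haystack" step concrete, here is a toy sketch of the kind of algorithmic noise reduction involved: collapsing a flood of related alerts into a handful of incidents by grouping on source and time proximity. All the names, fields, and the five-minute window are invented for illustration; real AIOps platforms use far more sophisticated clustering.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # host or service that raised the alert
    message: str
    timestamp: float  # seconds since epoch

def cluster_alerts(alerts, window=300):
    """Group alerts from the same source that arrive within `window`
    seconds of the previous one, so operators see one incident
    instead of a stream of duplicates."""
    clusters = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        incidents = clusters[alert.source]
        if incidents and alert.timestamp - incidents[-1][-1].timestamp <= window:
            incidents[-1].append(alert)   # continuation of the same incident
        else:
            incidents.append([alert])     # new incident for this source
    # flatten: one entry per (source, incident)
    return [(source, incident) for source, incidents in clusters.items()
            for incident in incidents]

alerts = [
    Alert("db01", "connection refused", 1000.0),
    Alert("db01", "connection refused", 1030.0),
    Alert("web03", "latency high", 1100.0),
    Alert("db01", "disk full", 9999.0),  # well outside the window
]
incidents = cluster_alerts(alerts)
# db01's two close alerts collapse into one incident; the later one is separate
```

Even this crude grouping turns four raw alerts into three incidents; scaled to the thousands of events per hour a real estate throws off, that kind of reduction is the whole point of the "algorithmic" part.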
So far so good - but here’s why it’s so laughable for BMC to claim AIOps.
BMC’s whole model is BSM - Business Service Management. Where the centre of AIOps is the algorithms, the centre of BSM is the CMDB.
The model for applying BSM goes something like this:
Fully populate CMDB: define service models & document infrastructure
When an alert comes in, determine which infrastructure element it came from, then walk the service model to determine what the cause and effect are
Create a ticket in the ITSM suite to track resolution
Note the hidden assumptions, even in this grossly over-simplified version:
The CMDB can be fully populated given finite time and effort
All alerts relate to known elements, and all elements have known dependencies
Every failure has one cause and falls within one group’s area of responsibility
In today’s IT, precisely none of these assumptions hold true. No matter how much effort and how many auto-discovery tools are thrown at the task, the CMDB will always be a snapshot in time¹. Jorge Luis Borges famously documented the logical endpoint of this progression:
... In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.
purportedly from Suárez Miranda, Travels of Prudent Men, Book Four, Ch. XLV, Lérida, 1658
There is also a timing factor: what happens if an alert comes in between a change occurring and being documented? Another question is, what happens if operators simply don’t have visibility into part of the infrastructure - say, managed hosting, or outside telco networks? And finally, the big one: what if there is no one root cause? Modern architectures are sufficiently robust and resilient that it’s quite rare for any one macro-event to take them out. What gets you is usually a combination of a number of smaller issues, all occurring together in some unforeseen way.
The whole architecture of BSM is built around assumptions that are less and less true. This is not to say that individual products within that suite don’t have value, but the old BSM model is no longer fit for purpose. The result is an example of “shipping the org chart”: the CMDB is at the core and Remedy is the interface, because that is what the organisation demands. However, you can’t just drape AIOps over the old suite and call it good! Radical changes are required, not weak attempts to shoe-horn existing “IT disciplines” into the new mould.
AIOps represents the algorithmic convergence of ITOM & ITSM. BSM, in contrast, treats these as discrete steps in a sequential process. This is Waterfall thinking applied to IT Ops, where today’s IT infrastructures demand Agile thinking.
The most relevant question for users is, of course, "do I trust a legacy vendor to deliver a new model that is so radically different from what it has built its entire strategy around?"
The answer is simple, because it’s determined by the entire structure and market position of all the Big Four vendors. Like its peers, BMC makes its revenue in the old model of IT. As long as there is money to be made by doing the same things it has always done, there is enormous inertia to work against (the Innovator’s Dilemma in action). It takes an existential threat to disturb that sort of equilibrium. It was not until ServiceNow was seriously threatening the Remedy user base that BMC started to offer SaaS options and subscription pricing. It will take an equivalent upheaval in its business for any legacy vendor to adopt a radically new strategy like AIOps. These days, customers can’t wait for one vendor to see the writing on the wall; they need to move at the speed their customers require.
Much as I would like to believe that we have got BMC running scared, I don’t think that’s the case - so they will continue along their very profitable way. This is of course exactly how it should be! If they were to jump on every new bandwagon, their shareholders would be rightly furious. They absolutely should focus on doing what they do well.
But that does not include doing AIOps. If you’re a practitioner looking at this, I hope it’s obvious who you want to go with: the people creating the new model and who are steeped in what is required to deliver and adopt it - or the ones who see a keyword trending on Google, and write a quick ambulance-chasing blog post - or claim that Remedy is a key part of AIOps - or even that mainframes are.
¹ Which is why BMC’s own automation products have their separate real-time operational data stores, which sync with the CMDB on a schedule. ↩
With the advent of always-on devices that are equipped with sensitive microphones and a permanent connection to the Internet, new security concerns are emerging.
Virtual assistants like Apple’s Siri, Microsoft’s Cortana and Google Now have the potential to make enterprise workers more productive. But do “always listening” assistants pose a serious threat to security and privacy, too?
Betteridge’s Law is in effect here. Sure enough, the second paragraph of the article discloses its sources:
Nineteen percent of organizations are already using intelligent digital assistants, such as Siri and Cortana, for work-related tasks, according to Spiceworks’ October 2016 survey of 566 IT professionals in North America, Europe, the Middle East and Africa.
A whole 566 respondents, you say? From a survey run by a help desk software company? One suspects that the article is over-reaching a bit - and indeed, if we click through to the actual survey, we find this:
Intelligent assistants (e.g., Cortana, Siri, Alexa) used for work-related tasks on company-owned devices had the highest usage rate (19%) of AI technologies
That is a little bit different from what the CSO Online article is claiming. Basically, anyone with a company-issued iPhone who has ever used Siri to create an appointment, set a reminder, or send a message about anything work-related would fall into this category.
Instead, the article makes the leap from that limited claim to extrapolating that people will be bringing their Alexa device to work and connecting it to the corporate network. Leaving aside for a moment the particular vision of hell that is an open-plan office where everyone is talking into the air all the time, what does that mean for the specific recommendations in the article?
Focus on user privacy
Develop a policy
Treat virtual assistant devices like any IoT device
Decide on BYO or company-owned
Plan to protect
These are actually not bad recommendations - but they are so generic as to be useless. Worse, when they do get into specifics, they are almost laughably paranoid:
Assume all devices with a microphone are always listening. Even if the device has a button to turn off the microphone, if it has a power source it’s still possible it could be recording audio.
This is drug-dealer level of paranoia. Worrying that Alexa might be broadcasting your super secret and valuable office conversations does not even make the top ten list of concerns companies should have about introducing such devices into their networks.
The most serious threat you can get from Siri at work is co-workers pranking you if you enable access from the lock screen. In that case, anyone can grab your unattended iPhone and instruct Siri to call you by some ridiculous name. Of course I would never sabotage a colleague’s phone by renaming him “Sweet Cakes”. Ahem. Interestingly, it turns out that the hypothetical renaming also extends to the entry in the Contacts…
The real concern is that these misguided recommendations take the focus off advice that would actually be useful in the real world. For instance, if you must have IoT devices in the office for some reason, this is good advice:
One way to segment IoT devices from the corporate network is to connect them to a guest Wi-Fi network, which doesn’t provide access to internal network resources.
This recommendation applies to any device that needs Internet access but does not require access to resources on the internal network. This will avoid issues where, by compromising a device (or its enabling cloud service), intruders are able to access your internal network in what is known as a “traversal attack”. If administrators restrict the device’s access to the network, that will also restrict the amount of damage an intruder can do.
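One way to keep that segmentation honest is to check firewall policy programmatically. The sketch below flags any permitted flow that would let a device on the guest/IoT network reach internal addresses; the subnets and rules are made-up examples, not a recommendation for any particular product.

```python
import ipaddress

GUEST_NET = ipaddress.ip_network("192.168.50.0/24")  # hypothetical IoT / guest Wi-Fi
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")    # hypothetical corporate LAN

# (source subnet, destination subnet) pairs the firewall permits
allowed_flows = [
    ("192.168.50.0/24", "0.0.0.0/0"),  # guest -> anywhere: too broad!
    ("10.0.0.0/8", "10.0.0.0/8"),      # internal -> internal: fine
]

def violates_segmentation(flows):
    """Return the flows that would let guest devices reach the internal net."""
    bad = []
    for src, dst in flows:
        src_net = ipaddress.ip_network(src)
        dst_net = ipaddress.ip_network(dst)
        if src_net.overlaps(GUEST_NET) and dst_net.overlaps(INTERNAL_NET):
            bad.append((src, dst))
    return bad

print(violates_segmentation(allowed_flows))
```

Note that the blanket "guest to anywhere" rule gets flagged: 0.0.0.0/0 includes the internal range, so guest networks need an explicit deny for internal subnets, not just an allow for the Internet.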
Thinking about access to data is a good idea in general, not just for voice assistants or IoT devices:
Since personal virtual assistants “rely on the cloud to comprehend complex commands, fetch data or assign complex computing tasks to more resources,” their use in the enterprise raises issues about data ownership, data retention, data and IP theft, and data privacy enforcement that CISOs and CIOs will need to address.
Any time companies choose to adopt a service that relies on the cloud, their attack surface is not limited to the device itself, but also extends to that back-end service - which is almost certainly outside their visibility and control. Worse, in a BYOD scenario, users may introduce new devices and services to the corporate network that are not designed or configured for compliance with organisations’ security and privacy rules.
Security is important - but let’s focus on getting the basics right, without getting distracted by overly-specific cybersecurity fantasy role-playing game scenarios involving Jason Bourne hacking your Alexa to steal your secrets.
One of the topics that currently exercise the more forward-looking among us is the potential negative impact of automation on the jobs market and the future of work in general. Comparisons are frequently made with the Industrial Age and its consequent widespread social disruption - including violent reactions, most famously the Luddite and saboteur movements.
Some cynics have pointed out that there was less concern when it was only blue-collar jobs that were being displaced, and that what made the chattering classes sit up and pay attention was the prospect of the disruption coming for their jobs too. I could not possibly comment on this view - but I can comment on what I have seen in years of selling automation software into large companies.
For more than a decade, I have been involved in pitching software that promised to automate manual tasks. My customers have always been large enterprises, usually the Global 2000 or their immediate followers. Companies like this do not buy software on a whim; rather, they build out extensive business cases and validate their assumptions in detail before committing themselves¹. There are generally three different ways of building a business case for this kind of software:
Support a growth in demand without increasing staff levels (as much);
Support static demand with decreasing staff;
Quality improvement (along various axes) and its mirror image, risk avoidance.
The first one is pretty self-evident - if you need to do more than you can manage with the existing team, you need to hire more people, and that costs money. There are some interesting second-order consequences, though. Depending on the specifics of the job to be done, it will take a certain amount of time to identify a new hire and train them up to be productive. Six months is a sensible rule of thumb, but I know of places where it takes years. If the rate of growth gets fast enough, that lag time starts to be a major issue. You can't just hire yourself out of the hole, even with endless money. The hole may also be getting deeper if other companies in the same industry and/or region are all going through the same transformation at the same time, and all competing for the same talent.
If instead you can adopt tooling that will make your existing people more efficient and let you keep up with demand, then it is worth investing some resources in doing so.
That second business case is the nasty one. In this scenario, the software will pay for itself by automating people's jobs, thus enabling the company to fire people - or in corporate talk, "reduce FTE count"². The fear of this sort of initiative is what makes rank and file employees often reflexively suspicious of new automation tools - over and above their natural suspicion that a vendor might be pitching snake-oil.
Personally I try not to build business cases around taking away people's jobs, mainly because I like being able to look myself in the mirror in the mornings (it's hard to shave any other way, for one thing). There is also a more pragmatic reason not to build a business case this way, though, and I think it is worth exploring for its wider implications.
Where Are The Results?
The thing is, in my experience business cases for automation built around FTE reduction have never been delivered successfully - if focused on automation of existing tasks. That is an important caveat, but I will come back to that.
Sure, the business case might look very persuasive - "we execute this task roughly a dozen times a day, it takes half an hour each time, and if you add that up, it's the equivalent of a full-time employee (an FTE), so we can fire one person". When you look at the details, though, it's not quite so simple.
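Even on its own terms, the arithmetic in that pitch is generous. Running the quoted numbers through the FTE definition given in the footnote shows the task adds up to rather less than one full-time employee:

```python
# Back-of-envelope check of the "it's a whole FTE" claim, using the
# FTE definition from the footnote: ~220 working days of 8 hours each.

executions_per_day = 12      # "roughly a dozen times a day"
hours_per_execution = 0.5    # "half an hour each time"
working_days_per_year = 220
fte_hours_per_year = working_days_per_year * 8  # 1760 hours

task_hours_per_year = executions_per_day * hours_per_execution * working_days_per_year
fte_saved = task_hours_per_year / fte_hours_per_year
print(round(fte_saved, 2))  # 0.75 - noticeably short of the one FTE claimed
```

So the savings are three-quarters of an FTE at best, before you even get to the question of whether that time was really spent on one discrete task.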
The fact is that people rarely work at discrete tasks. Instead, they spend their time on a variety of different tasks, more or less integrated into a whole process. There is a tension between the two extremes: at one end you have workers on a repetitive assembly line, while at the other you have people jumping around so much they can never get anything done. Most organisational functions are somewhere in between those two poles.
If automation is focused on addressing those discrete tasks, it absolutely will bring benefits, but those benefits will add up to freeing up existing employees to catch up with other tasks that were being neglected. Every IT department I have ever seen has a long tail of to-dos that keep getting pushed down the stack by higher-priority items. Automation is the force multiplier that promises to let IT catch up with its to-do list.
This sort of benefit is highly tactical, and is generally the domain of point solutions that do one thing and do it well. This enables the first kind of business case, delivering on new requirements faster. It will not deliver the second kind of business case. The FTEs freed up through automation get redeployed, not fired; the organisation benefits from that, but it is not what was built into the assumptions of the project, and that will cause problems for its sponsors. Simply put, if someone ever checks the return on the investment (an all too rare occurrence, in my experience), the expected savings will not be there.
Strategic benefits of automation, on the other hand, are delivered by bundling many of these discrete tactical tasks together into a new whole.
Realising those strategic benefits is not as straightforward as dropping a new tool into an existing process. Actually achieving the projected returns will require wholesale transformation of the process itself. This is not the sort of project that can be completed in a quarter or two (although earlier milestones should already show improvement). It should also not be confused with a technology implementation project. Rather, it is a business transformation project, and must be approached as such.
Where does this leave us?
Go Away Or I Will Replace You With A Very Small Shell Script
In my experience in the field, while tactical benefits of automation are achievable, true strategic improvement through automation can only be delivered by bundling together disparate technical tasks into a new whole. The result is that it is not skilled workers that are replaced, but rather the sorts of undifferentiated discrete tasks that many if not most large enterprises have already outsourced.
This shows who the losers of automation will be: it is the arbitrageurs and rent-seekers, the body-rental shops who provide no added value beyond cheap labour costs. The jobs that are replaced are those of operators, what used to be known as tape jockeys; people who perform repetitive tasks over and over.
The jobs that will survive and even benefit from the wave of automation are those that require interaction with other humans in order to determine how to direct the automation, plus of course the specialists required to operate the automation tools themselves. The greatest value, however, will accrue to those who can successfully navigate the interface between the two worlds. This is why it is so important to own those interfaces.
What might change is the nature of the employment contracts for those new roles. While larger organisations will continue to retain in-house skills, smaller organisations for which such capabilities are not core requirements may prefer to bring them in on a consultative basis. This will mean that many specialists will need to string together sequences of temporary contracts to replace long-duration full-time employment.
This is its own scary scenario, of course. The so-called gig economy has not been a win so far, despite its much-trumpeted potential. Perhaps the missing part to making this model work is some sort of universal basic income to provide a base and a safety net between consulting jobs? As more and more of the economy moves in this direction, at least in part due to the potential of automation, UBI or something similar will be required to bridge the gap between the assumptions of the old economy and the harsh realities of the new one.
So, the robots are not going to take our jobs - but they are going to change them, in some cases into something unrecognisable. The best thing humans can do is to plan to take care of one another.
¹ Well, in theory. Sometimes you lose a deal because the other vendor's CEO took your prospect's entire management team for a golfing weekend in the corporate jet. But we don't talk about that. ↩
² An FTE is a Full-Time Equivalent: the amount of work expected of one employee, typically over a year, allowing for holidays and so on. Typically that means somewhere between 200 and 220 working days of 8 hours each, or 1600 to 1760 hours in a year. The "FTE cost" of an activity is calculated by multiplying the time required to perform the activity once by the number of times it needs to be performed, and dividing by the FTE rate. ↩
The greatest benefit of the Internet is the democratisation of technology. Development of customised high-tech solutions is no longer required for success, as ubiquitous commodity technology makes it easy to bring new product offerings to market.
Together with the ongoing move from one-time to recurring purchases, this process of commoditisation moves the basis of the competition to the customer experience. For most companies, the potential lifetime value of a new customer is now many times the profit from their initial purchase. This hoped-for future revenue makes it imperative to control the customer's experience at every point.
As an illustration, let us consider two scenarios involving outsourcing of products that are literally right in front of their users for substantial parts of the day.
Google Takes Its Eye Off the Watch
The first is Google and Android's answer to the Apple Watch, Android Wear. As is (usually) their way, Google have not released their own smartwatch product. Instead, they have released the Android Wear software platform, and left it to their manufacturing partners to build the actual physical products.
If Android Wear is to be taken as seriously as the Apple Watch, we actually need an Android version of the Apple Watch. And these LG watches simply aren't up to the task.
Lacking the sort of singular focus and vertical integration between hardware and software that Apple brings to bear, these watches fail to persuade, and not by a little:
I think Google and LG missed the mark on every level with the Style, and on the basis of features alone that it is simply a bad product.
So is the answer simply to follow Apple's every move?
It is certainly true Google have shown with their Nexus and Pixel phones just how much better a first-party Android phone can be, and it is tempting to extrapolate that success to a first-party Google Watch. However, smartwatches are still very much a developing category, and it is not at all clear whether they can go beyond the current fitness-focused market. In fact, I would not be surprised to see a contraction in the size of the overall smartwatch market. Many people who bought a first-generation device out of curiosity and general technophilia may well opt not to replace that device.
Apple Displays Rare Clumsiness
In that case, let us look at an example outside the smartwatch market - and one where the fumble was Apple's.
Ever since Retina displays became standard first on MacBooks¹ and then on iMacs, Mac users have clamoured for a large external display from Apple, to replace the non-Retina Apple Thunderbolt Display that still graces many desks. Bandwidth constraints meant that this was not easy to do until a new generation of hardware came to market, but Apple fans were disappointed when, instead of their long-awaited Apple Retina 5K Display, they were recommended to buy a pretty generic-looking offering from LG.
the hardware can become unusable when located within 2 meters of a router.
Two metres is not actually that close; it's over six feet, if you're not comfortable with metric units. Many home office setups would struggle with that constraint - I know mine would.
Many have pointed out that one of the reasons for preferring expensive Apple solutions is that they are known to be not only beautifully designed, but obsessively over-engineered. It beggars belief that perfectionist, nit-picking Apple would have let a product go to market with such a basic flaw - and yet, today, if an Apple fan spends a few thousand dollars on a new MacBook Pro and a monitor in an Apple Store, they will end up looking at a generic LG monitor all day - if, that is, they can use the display at all.
Google and Apple both ceded control of a vitally important part of the customer experience to a third party, and both are now paying the price in terms of dissatisfied users. There are lessons here that also apply outside of manufacturing and product development.
Many companies, for instance, outsource functions that are seen as ancillary to third parties. A frequent candidate for these arrangements is support - but to view support this way is a mistake. It is a critical component of the user experience, and all the more so because it is typically encountered at times of difficulty. A positive support experience can turn a customer into a long-term fan, while a negative one can put them off for good.
A long time ago and far far away, I did a stint in technical support. During my time there, my employer initiated a contract with a big overseas outsourcing firm. The objective was to add a "tier zero" level of support, which could deal with routine queries - the ones where the answer was a polite invitation to Read The Fine Manual, basically - and escalate "real" issues to the in-house support team.
The performance of the outsourcer was so bad that my employer paid a termination fee to end the contract early, after less than one year. Without going into the specifics, the problem was that the support experience was so awful that it was putting off our customers. Given that we sold mainly into the large enterprise space, where there is a relatively limited number of customers in the first place, and that we aimed to cross-sell our integrated products to existing customers, a sudden increase in the number of unhappy customers was a potential disaster.
We went back to answering the RTFM queries ourselves, customer satisfaction went back up into the green, and everyone was happy - well, except for the outsourcer, presumably. The company had taken back control of an important interface with its customers.
Interface to Differentiate
There are only a few of these interfaces and touch-points where a company has an opportunity to interact with its customers. Each interaction is an opportunity to differentiate against the competition, which is why it is so vitally important to make these interactions as streamlined and pleasant as possible.
This requirement is doubly important for companies who sell subscription offerings, as they are even more vulnerable to customer flight. In traditional software sales, the worst that can happen is that you lose the 20% (or whatever) maintenance, as well as a cross-sell or up-sell opportunity that may or may not materialise. A cancelled subscription leaves you with nothing.
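The asymmetry is easy to see with some illustrative numbers (all figures below are invented for the sake of the comparison): take a perpetual licence with 20% annual maintenance versus a roughly equivalent subscription, and a customer who walks away after two years.

```python
# Hypothetical revenue comparison: perpetual licence + maintenance
# vs. subscription, for a customer who churns after two years.
licence_price = 100_000
maintenance_per_year = 20_000   # the "20% (or whatever)" annual maintenance
subscription_per_year = 40_000  # a rough "equivalent" annual price
churn_after = 2                 # years before the customer walks away

# Perpetual: the licence is booked up front; only maintenance stops at churn.
perpetual_revenue = licence_price + maintenance_per_year * churn_after

# Subscription: every year after churn is simply lost.
subscription_revenue = subscription_per_year * churn_after

print(perpetual_revenue)     # 140000
print(subscription_revenue)  # 80000
```

Under these assumptions the subscription vendor ends up with barely more than half the revenue from the same unhappy customer, which is exactly why subscription businesses obsess over the ongoing experience.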
A customer who buys an Android Wear smartwatch and has a bad experience will not remember that the watch was manufactured by LG; they will remember that their Android Wear device was not satisfactory. In the same way, someone who spends their day looking at an LG monitor running full-screen third-party applications - say, Microsoft Word - will be more open to considering a non-Apple laptop, or not fighting so hard to get a MacBook from work next time around. Both companies ceded control of their interface with their customers.
Usually companies are very eager to copy Apple and Google's every move. This is one situation where instead there is an opportunity to learn from their mistakes. Interfaces with customers are not costs to be trimmed; instead, they can be a point of differentiation. Treat them as such.
In my day job, I spend a lot of my time building business cases to help understand whether our technology is a good fit for a customer. When you are building a startup business, this is the expected trajectory: in the very early days, you have to make the technology work, translating the original interesting idea into an actual product that people can use in the real world. Once you have a working product, though, it’s all about who can use it, and what they can do with it.
In this phase, you stop pitching the technology. Instead, you ask questions and try to understand what ultimate goals your prospective customer has. Only once you have those down do you start talking about what your technology can do to satisfy those goals. If you do not do this, you find yourself running lots of "kick the tyres" evaluations that never go anywhere. You might have lots of activity, but you won’t have many significant results to show for it.
This discipline of analysing goals and identifying a technology fit is very useful in analysing other fields too, and it helps to identify when others may be missing some important aspect of a story.
Let’s think about driverless cars
Limited forms of self-driving technology already exist, from radar cruise-control to more complete approaches such as Tesla’s Autopilot. None of these are quite ready for prime time, and there are fairly regular stories about their failures, with consequences from the comic to the tragic.
Because of these issues, Tesla and others require that drivers keep their hands on the wheel even when the car is in Autopilot mode. This brings its own problems, falling into an “uncanny valley” of attention: the driver is neither fully engaged, nor can they fully disengage. Basically it’s the worst of both worlds, as drivers are no longer involved in the driving, but still cannot relax and read a book or watch a film.
These limitations have not stopped much of the commentary from assuming self-driving car technology to be, if not a problem that is already solved, at least one that is solvable. Extrapolations from that point lead to car ownership becoming a thing of the past as people simply summon self-driving pods to their location, which in turn causes massive transformations in both labour force (human drivers, whether truckers or Uber drivers, are no longer required) and the physical make-up of cities (enormous increases in the utilisation rate for cars mean that large permanent parking structures are no longer required) - let alone the consequences for automotive manufacturers, faced with a secular transformation in their market.
Okay, maybe not cars
Self-driving technology is not nearly capable (yet) of navigating busy city streets, full of unpredictable pedestrians, cyclists, and so on, so near-term projections focus on what is perceived as a more easily solvable problem: long-distance trucking.
The idea is that currently existing self-driving tech is already just about capable of navigating the constrained, more predictable environment of the highways between cities. Given some linear improvement, it does not seem that far-fetched to assume that a few more years of development would give us software capable of staying in lane and avoiding obstacles reliably enough to navigate a motorway in formation with other trucks.
Extrapolating this capability to the wholesale replacement of truckers with autonomous robot trucks, however, is a big reach - and not so much for technical reasons, as for less easily tractable external reasons.
Assume for the sake of argument that Otto (or whoever) successfully develops its technology and builds an autonomous truck that can navigate between cities, but not enter the actual city itself. This means that Otto or its customers would need to build warehouses right at the motorway junctions in areas where they wish to operate, to function as local hubs. From these locations, smaller, human-operated vehicles would make the last-mile deliveries to homes and businesses inside the city streets, which are still not accessible to the robot trucks.
This is all starting to sound very familiar. We already have a network optimised for long-distance freight between local distribution hubs. It is very predictable by design, allowing only limited variables in its environment, and it is already highly instrumented and very closely monitored. Even better, it has been in operation at massive scale for more than a century, and has a whole set of industry best practices and commercial relationships already in place.
I am of course talking about railways.
Get on the train
Let’s do something unusual for high-tech, and try to learn something from history for once. What can the example of railways teach us about the potential for self-driving technology on the road?
The reason for the shift from rail freight to road freight was to avoid trans-shipment costs. It’s somewhat inefficient to load your goods onto one vehicle, drive it to a warehouse, unload them, wait for many other shipments to be assembled together, load all of them onto another vehicle, drive that vehicle to another warehouse, unload everything, load your goods onto yet another vehicle, and finally drive that third vehicle to your final destination. It’s only really worthwhile to do this for bulk freight that is not time-sensitive. For anything else, it’s much easier to just back a truck up to your own warehouse, load up the goods, and drive them straight to their final destination.
Containerisation helped somewhat, but railways are still limited to existing routes; a new rail spur is an expensive proposition, and even maintenance of existing rail spurs to factories is now seen as unnecessary overhead, given the convenience of road transport’s flexibility and ability to deliver directly to the final destination.
In light of this, a network of self-driving trucks that are limited to predictable, pre-mapped routes on major highways can be expected to run into many of the same issues.
Don’t forget those pesky humans
Another interesting lesson that we can take from railways is the actual uptake of driverless technology. As noted above, railways are a far more predictable environment than roads: trains don’t have to manoeuvre, they just move forwards along the rails, stopping at locations that are predetermined. Changes of direction are handled by switching points in the rails, not by the operator needing to steer the train around obstacles. Intersections with other forms of transport are rare, as other traffic generally uses bridges and underpasses. Where this separation is not possible, level crossings are still far more controlled than road intersections. Finally, there are sensors everywhere on railways; controllers know exactly where a given train is, what its destination and speed are, and what the state of the network around it is.
So why don’t we have self-driving trains?
The technology exists, and has for years - it’s a much simpler problem than self-driving cars - and it is in use in a few locations around the world (e.g. London and Milan); but still, human-operated trains are the norm. Partly, it’s a labour problem: those human drivers don’t want to be out of a job, and have been known to go on strike against even the possibility of driverless trains being introduced. Partly, it’s a perception problem: trains are massive, heavy, powerful things, and most people simply feel more comfortable knowing that a human is in charge, rather than potentially buggy software. And partly, of course, it’s the economics: human train drivers are a known quantity, and any technology that wants to replace them is not.
This means that the added convenience of end-to-end transportation limits the uptake of rail transport, and human factors limit the adoption of driverless technology even when it is perfectly feasible - something that has not yet been proven in the case of road transport.
A more familiar example?
In Silicon Valley, people are usually too busy moving fast and breaking things that work to learn from other industries, let alone one that is over a hundred years old1, but there is a relevant example that is closer to home - literally.
When the Internet first opened to the public late last century, the way most people connected was through a dial-up modem over an analogue telephone line. We all became experts in arcane incantations in the Hayes AT command language, and we learned to recognise the weird squeals and hisses emitted by our modems and use them to debug the handshake with our ISP's modem at the far end. Modem speeds did accelerate pretty rapidly, going from the initial 9.6 kbits per second to 14.4, to 28.8, to the weird 33.6, and on to a screamingly fast 56k (if the sun was shining and the wind was in the right quarter), all in a matter of years.
However, this was still nowhere near fast enough. These days, if our mobile phones drop to EDGE - roughly equivalent to a 56k modem on a good day - we consider the network basically unusable. Therefore, there was a lot of angst about how to achieve higher speeds. Getting faster network speeds in general was not a problem - 10 Mbps Ethernet was widely available at the time. The issue was the last mile from the trunk line to subscribers' homes. Various schemes were mooted to get fast internet to the kerb - or curb, for Americans. Motivated individuals could sign up for ISDN lines, or more exotic connectivity depending on their location, but very few did. When we finally got widespread consumer broadband, it was in the form of ADSL over the existing copper telephone lines.
So where does this leave us?
Driverless vehicles will follow the same development roadmap2: until they can deliver the whole journey end to end, uptake will be limited. Otherwise, they are not delivering what people need.
More generally, to achieve any specific goals, it is usually better to work with existing systems and processes. That status quo came to be over time, and generally for good reason. Looking at something now, without the historical context, and deciding that it is wrong and needs to be disrupted, is the sort of Silicon Valley hubris that ends in tears.
Right now, with my business analyst hat on, driverless vehicles look like a cool idea (albeit one that is still unproven) that is being shoe-horned into a situation that it is not a good match for. If I were looking at a situation like this one in my day job, I would advise everyone to take a step back, re-evaluate what the actual goals are, and see whether a better approach might be possible. Until then, no matter how good the technology gets, it won’t actually deliver on the requirements.
But that doesn’t get as many visionary thinkpieces and TED talks.
The old saw is that "In Europe, a hundred miles is a long way; in the US, a hundred years is a long time". In Silicon Valley, which was all groves of fruit trees fifty years ago, that time frame is shorter still. ↩
One of the downsides of working for a little startup that is going to change the world, but doesn’t quite have the name recognition yet, is that you get asked for customer references all. the. time.
Now on one level, I have absolutely no problem with this. It makes perfect sense from the point of view of a prospective customer: some rando just showed up, and sure, he’s got some good PowerPoint game, but I’ve never heard of him or his company - why should I waste any time on him?
The problem that a prospective customer might not appreciate is that by the nature of things, a growing startup has many more prospects than existing customers. Every single member of the sales team, if they are doing their job, has as many sales prospects at any one time as there are total customers in production. If we were to walk each of those prospects past a real customer, just so they could kick the tyres and see whether we have something real, pretty soon our existing customers would stop taking our calls - not a clever strategy, when we sell subscriptions and rely on customers renewing their contracts.
The trick, then, is to balance these requirements. On the side of the prospective customer, the goal is to validate whether this interesting startup has an actual product - or just an interesting idea and some vapourware slides. This is an absolutely valid goal - but prospective customers should recognise vendors’ incentives as well.
Reputable vendors who actually intend to build a lasting business have no more interest in wasting time and resources in projects that do not go anywhere than customers do. We know that our tech works (again, assuming for the sake of argument that the vendor you’re talking to is not just a straight-up scammer), so our goal is not to waste our limited time and resources chasing after something that is never going to be a successful, referenceable production implementation.
So, all of that being said - please don’t ask vendors for references on the first date. If the vendor you're talking to is any good, they will be qualifying you as aggressively as you are qualifying them. We vendors are very protective of our customers - once more, assuming you’re dealing with a reputable vendor in the first place! Please don’t see this as us being difficult or having something to hide; rather, it’s a preview of how your own relationship with us as a customer would be. If you trust us with your business, we will be equally protective of you. You want to be sure that if we come to you in the future with a request to talk to someone about your experience with our products, it’s for a good reason, and not something we will ask you to do every other week.
Once everyone is comfortable that there is a real opportunity - that is when we can get other parties involved. Until then, here’s a white paper, here’s a recorded webinar, here’s an article by an analyst who spoke to some of our customers - but my current customers are sacred to me, and I won’t introduce you to them until you convince me that you’re serious.
This has been your subtweet-as-blog-post of the day.
This is an interesting time in the enterprise software market. The shift to the cloud is causing massive disruption, with storied old names struggling to reinvent themselves, and scrappy startups taking over the world.
One interesting story is that HP Enterprise, or HPE - one of the units that old HP split itself into - is looking into selling off some of its software assets. I am especially interested in one of these, namely Mercury, because I worked there for several years.
To recap, Mercury (née Mercury Interactive) was a leader in automated software testing. Its products covered functional testing (XRunner, WinRunner, QuickTest Professional), load testing (LoadRunner), and test management (TestDirector, later renamed Quality Center).
Basically what these tools let you do is to record a user interacting with an application, and then parameterise the recording - i.e. turn it into a little programme that you can replay, so that you can select different menu options and make sure that they all work, or simulate ten thousand users all hitting the app simultaneously and make sure it doesn’t fall down, or whatever.
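For readers who never used these tools, here is a minimal sketch of that record-and-parameterise idea in Python. Everything in it - the step format, the function names, the login flow - is hypothetical and purely for illustration; the actual Mercury products used their own scripting languages and replay engines, not anything like this.

```python
# Hypothetical sketch: a "recording" is a list of UI steps with literal
# values captured from one user session. Parameterisation replaces those
# literals so the same script can be replayed with different data.

RECORDED = [
    ("navigate", "https://example.test/login"),  # made-up URL
    ("type", "username", "alice"),
    ("type", "password", "hunter2"),
    ("click", "submit"),
]

def parameterise(steps, substitutions):
    """Swap recorded literal values for caller-supplied ones."""
    out = []
    for step in steps:
        if step[0] == "type" and step[1] in substitutions:
            out.append((step[0], step[1], substitutions[step[1]]))
        else:
            out.append(step)
    return out

def replay(steps, log):
    """Stand-in for the replay engine: here it just logs each step."""
    for step in steps:
        log.append(step)

# Functional testing: replay once per data row to cover different users.
log = []
for user in ["alice", "bob", "carol"]:
    replay(parameterise(RECORDED, {"username": user}), log)

assert len(log) == 12  # 3 users x 4 steps each
```

The same parameterised script then does double duty: iterate it over a data table for functional coverage, or replay it concurrently from thousands of simulated users for load testing.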
LoadRunner in particular was the default standard at the time, dominating its market segment. I worked on the functional test products, but because of language coverage, I had at least basic familiarity with the whole product set.
What happened after that is fairly typical of such acquisitions. Despite some big talk and high expectations, I think it is fair to say that the Mercury products languished within HP - or at the very least failed to evolve with any urgency.
This is unfortunately a pattern with technology acquisitions. There is often a honeymoon period, where increased funding enables delivery of long-awaited functionality, but the releases after that get hollowed out into maintenance releases, and even those start coming further and further apart, frustrating customers and insiders alike.
In the case of HP and Mercury, the slow-down was particularly unfortunate because the acquisition came just as enterprise application development was moving from proprietary protocols and GUIs to web applications talking HTTP. Mercury’s powerful and extremely customisable products were arguably overkill for simpler web applications, and a new generation of tools was beginning to emerge that was dedicated to that purpose. Given its singular focus on testing, and based on what I know of the company culture pre-acquisition, I am quite certain that an independent Mercury would have addressed the challenge head on and remade itself for that new world. After all, Mercury was fully aware of web applications, offering services that would simulate user access from locations around the world, giving a continuous view of sites’ performance as real users experienced it.
Unfortunately, that’s not what happened under HP stewardship. The Mercury products languished in the Software group, which itself represented only around 2% of HP revenues. As often happens in such cases, much of the original talent left, creating a flourishing “alumni” network. I was part of that diaspora, so I can’t talk about the quality of their replacements, but there was certainly a discontinuity, and the Mercury tools never recovered their previous dominance.
None of this is to say that the acquisition was not a success by its own lights. HP still uses the Mercury technology in all sorts of places. Many enterprise HP customers did not move to the new technologies with any urgency, and therefore continued to have a business need for Mercury’s powerful tools. This means that the products still throw off enough stable and predictable revenue to make a private equity purchase potentially attractive.
HP also adopted the Mercury notion of Business Technology Optimization, or BTO. This acted as a framework for many of HP’s other software initiatives, although it seems to have been abandoned more recently.
The failure of this acquisition is a failure of potential. What might an independent Mercury have become if the M2B project had been successful in taking it to $2B in revenue and beyond? What might Mercury have built in the world of the web and the cloud? As is often the case with these acquisitions, there is no way to know.
We do know roughly what the conditions are under which acquisitions succeed or fail. Arguably, the Mercury acquisition was more successful than most in no small part because HP kept the Mercury R&D centre in Israel, somewhat isolated from the rest of the company. This enabled the ex-Mercury staff to keep some sense of their own distinct identity, and keep developing their technology even after the acquisition.
There is an alternative view: that while isolation and even benign neglect may allow for survival of the startup within the acquiring company, they will not build true success. That requires a deeper integration of the startup's mentality into the acquiring company’s culture. Very few company cultures have the strength to be able to integrate a challenging outside vision without triggering an immune reaction of sorts.
The only way to integrate acquired companies - their technology and their culture - successfully, is to have strong executive guidance over a period of years. This has been a long-time failing at HP, to the despair of its longer-serving employees. In the absence of that guidance, benign neglect is maybe all that can be hoped for.
Dear users: It’s not easy, being on the vendor side. Let’s assume, for the sake of argument, that you work for a reputable, non-scammy vendor. Let’s also take it as a given that you have done your homework, so you are not spamming people indiscriminately, but trying to reach people whom you genuinely believe to have a need for your product.
How do you go about reaching those people?
Totally unsolicited vendor spam of the day - Telesoft.
Most people are understandably very reluctant to publish their contact details everywhere, because less principled sales people have already saturated their tolerance for randos showing up in their inbox out of the blue. This means that there is a (justifiably) high barrier to getting their attention.
There is also something of a tragedy of the commons effect, as all the vendors converge on those people who have been less diligent about scrubbing their personal details off the Internet.
"I found your info on LinkedIn and wanted to reach out."
Here’s the deal: when I contact someone, it’s because I genuinely believe that they might have a need for what I’m selling. It’s a pretty niche market - which is why it makes sense to hire human sales people to build and maintain a small number of customer relationships in the first place. This means I take the time to do my homework, and my approach is as specific as I can make it based on public information.
If I’m working on selling into BigCorp and I get a number for Alice or Bob who work there, my first step is not to pick up the phone. Rather, I go off to research what they do at BigCorp, what they personally care about, and so on. I use all of this to build a pitch that might go something like this:
Hi, sorry to contact you uninvited, but I know you are working on A, B, and C as part of an initiative at BigCorp. I have worked with other companies in your position such as WidgetTicklers, who were able to complete their own similar project under budget and ahead of schedule. They did this thanks to key capabilities enabled by our technology: …
You get the idea: it’s not a form letter I’m blasting out, it’s carefully targeted and as specific as I can make it with information at hand.
Hello, I am total stranger. With your deep background, you would do very well to make me money. When is good time to review my plan?
So what’s the problem? The problem is that the hit rate on doing this is still terrible. It's not mis-targeting, because often when I do finally manage to make contact by some other means, it turns out that I was right, there really was a need - but that was the wrong channel to connect with the person.
Here’s my question: what is a good way to contact you? Assume I have something you want, but not something that would show up in your normal reading. Maybe it’s launched since the last time you went looking for this sort of thing. I’ve done the prep work of identifying a potential interest you might have for this product; how should I bring it to your attention?
Because seriously, this stuff is great, and everyone needs to know about it - not just because I get paid (that too, of course!), but because I think it can really help a whole bunch of people. That’s the definition of win-win.
The Three Horsemen of the Productivity Apocalypse - and how to slay them
There are a few constants of corporate life that are pretty much universal, and those are email1, meetings, and slides. All three are widely hated, and someone is always trying to kill one or other of them. I have even heard people say that these three factors have ruined their lives.
I would say that the truth is a bit more nuanced. "With great power comes great responsibility", as they say, and certainly all three are powerful tools.
Ah, email. If I had a Euro for every time something had promised to kill email… well, I could start by filling out my dream garage, but I’d still have plenty left over after that. We should probably amend the old saw about only cockroaches surviving nuclear war to include Sendmail running somewhere. Despite everything, it’s still the best tool at its job.
The problem is that most of the time we use email wrong. Email chains devolve into endless back and forth, with unhelpful subject lines like "Re: Re: Fwd: RE: Re: …". This is why, amid all the froth about the latest would-be email killer, I was interested to spot an article discussing how to do email right. Its counter-intuitive recommendation is to write longer emails, turning the usual advice on its head:
A key rule for e-mail is to keep it brief. The recipients are pressed for time – and perhaps, on their mobiles, cramped for visual space – so keep it to a sentence or two.
When sending or replying to an email, identify the goal this emerging email thread is trying to achieve. For example, perhaps its goal is to synchronize a plan for an upcoming meeting with a collaborator or to agree on a time to grab coffee.
Next, come up with a process that gets you and your correspondent to this goal while minimizing the number of back and forth messages required.
Explain this process in the email so that you and your recipient are on the same page.
The desired result is to spend a little more time on each individual message or thread, but reduce the number of visits you need to make to your inbox over time.
This is not that dissimilar to the time-tested format of the VITO letter: grab your correspondent’s attention up front, then articulate your message and the reasons why they should care, and close with concrete actions. Of course you will take more time when writing to an actual VITO than to a run-of-the-mill correspondent, but it’s still an effective tool.
The only problem with both these techniques is that they work well one-to-one, but fall down with those huge sprawling email threads that we all know and love to hate. As more and more people get added to the conversation, approaching Dunbar’s Number, any chance of useful communication breaks down.
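One way to see why big threads collapse: the number of possible one-to-one channels grows quadratically with the participant count. This is just basic combinatorics, not anything from the articles above - a back-of-the-envelope sketch:

```python
# Each pair of participants is a potential side-conversation:
# n people have n * (n - 1) / 2 possible one-to-one channels.
def channels(n):
    return n * (n - 1) // 2

print(channels(2))   # a simple reply: 1 channel
print(channels(10))  # a typical "big" thread: 45 channels
print(channels(50))  # a department-wide chain: 1225 channels
```

Past a dozen or so recipients, no amount of subject-line or Cc: discipline can keep that many potential conversations coherent.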
Modern tools like Slack can supposedly make this situation better, or at least more tolerable, but the problem there is the requirement that everyone adopt the new tool. As long as not everyone is on the new channel, it’s more trouble than it’s worth to either verify someone is on board or to bring them on board. In time-honoured fashion, people default to just adding more and more people to the same old email chain.
The problem is compounded by the perfect confusion which reigns over email etiquette, with no agreement over who goes in To: and who goes in Cc:, let alone anything about hierarchical ordering of participants.
Still, email is better than all the alternatives, for one simple reason: it works, almost always and almost everywhere. That is a high bar for any new offering to clear, as I have written before.
Meetings share many of the same problems as the big multi-user email chains that I was just complaining about. Sure, the attendee list for an in-person meeting is limited by the size of the available meeting rooms - still the most in-demand commodity in any office. Online meetings and conference calls, of course, do not share this limitation.
In either case, though, some attendees may be disinterested, others may be there mainly to be seen, and some may actually be negative. Unless the agenda is enforced ruthlessly, the discussion will move off-topic very rapidly - which anyway is probably a good idea, otherwise why have the meeting in the first place?
One way to optimise the use of meetings comes from Amazon.
In senior executive Amazon meetings, before any conversation or discussion begins, everyone sits for 30 minutes in total silence, carefully reading six-page printed memos.
What makes this management trick work is how the medium of the written word forces the author of the memo to really think through what he or she wants to present.
I have criticised some Amazon quick-fix management practices before, but I think this one makes a lot of sense. In the typical meeting agenda, quite a lot of time is spent level-setting, making sure everyone agrees on the situation to be analysed before proposals can be put forward. Inevitably, people who are already up to speed - or who think they are - will hijack this process by asking questions, and there are only so many times you can promise to "get to that in just a couple more slides", especially with senior people, before you start losing your audience.
Amazon’s two-stage approach, with the author clarifying their thinking by setting down their analysis and proposal in writing, and the other participants absorbing that message in full before starting to discuss and question it, seems like a really productive way of avoiding the problem.
Sure, it takes a long time to write six-page documents, so maybe save those for the big strategic meetings - but if there is time for a meeting, there should be time for at least a one-page recap of the situation to date, some high-level proposals, and desired outcomes of the meeting. If there is no time to either write or read such a succinct summation, is the meeting really a valuable use of anyone’s time?
What can I say? I’m a fan of PowerPoint. There, I said it. Much like email, PowerPoint can be (and often is) used wrong, putting audiences at risk of death by PowerPoint, but it’s very effective when used well. Not to blow my own horn, but I get a lot of compliments on my presentations. Partly of course this is because people are used to such a low standard that it doesn’t take much to stand out - and partly it’s because I put thought, preparation, and the results of formal training into my slides. Sometimes this takes a bit more effort than it should, but the results are well worth it.
Invest in a couple of books - I like Slide:ology and Resonate by Nancy Duarte, and of course Presentation Zen by Garr Reynolds. If you can get to a training session, so much the better; talking through this sort of material with an instructor is really effective.
See you at the meeting
I’ll drop you an email about it, and send you my slides afterwards.
I’ve given up on hyphenating e-mail since I realised that apart from bills and greeting cards, I receive no physical mail whatsoever, and have not done for some time now. In fact, we could pretty much just go ahead and call it "mail", if it were not for the fact that then you would need a term to describe old-style mail, and "snail mail" is just a bit too precious and insider-y to catch on. ↩