New Mac Fever

Apple bloggers are all very excited about the announcement of a new Mac Pro. The best roundup I have seen is on Daring Fireball: The Mac Pro Lives.

I'm not a Mac Pro user, nor frankly am I ever likely to be. My tastes lie more at the other end of the spectrum, with the ultra-portable MacBook (aka MacBook Adorable). However, there was one interesting tidbit for me in the Daring Fireball report:

Near the end, John Paczkowski had the presence of mind to ask about the Mac Mini, which hadn’t been mentioned at all until that point. Schiller: "On that I’ll say the Mac Mini is an important product in our lineup and we weren’t bringing it up because it’s more of a mix of consumer with some pro use. … The Mac Mini remains a product in our lineup, but nothing more to say about it today."

While there are certainly Mac Mini users who choose it as the cheapest Mac, and perhaps as a way to keep using a monitor and other peripherals that used to be plugged into a PC, there is a substantial contingent of Mac Mini "pro" users. Without getting into Macminicolo levels of pro-ness, I run mine headless in a cupboard, where it serves iTunes and runs a few other services. It's cheap, quiet, and reliable, which makes it ideal for that role. I don't necessarily need ultimate power - average utilisation is extremely low, although there is the odd peak - but I do want to be reassured that this is a product line that will stick around, just in case my current Mac Mini breaks.

The most important Macs are obviously the MacBook and MacBook Pros, but it's good to know that Apple recognises a role for the Mac Pro - and for the Mac Mini.

Let Me Tell You A Story

Any good presentation is a story, and a good presenter is adept at telling their audience a story in a way that is compelling. Some are naturally good at this sort of thing - but all of us have been forced to sit through presentations with no unifying thread of story.

Luckily for the rest of us, there are techniques that can help us become better storytellers, and avoid boring our audiences to tears.

One of the most effective approaches I have learned is called SCIPAB, a technique developed by Steve Mandel and now spread by the company he founded, Mandel Communications. I was lucky enough to be trained in SCIPAB by Mandel Communications as part of a more general "presentation skills" training. I don’t want to steal their thunder (or their business!), but I do want to share some of the insights that I carry with me and use regularly.

SCIPAB is an acronym, which stands for the phases of a story:

  • Situation
  • Complication
  • Implication
  • Proposal1
  • Action
  • Benefit

These phases have a specific technical meaning within the Mandel technique, but they also align with the phases of another framing device, Joseph Campbell’s Hero’s Journey. There are seventeen phases to the Journey, which Steve Mandel wisely condensed to six for his audience of sales people and marketers. To quote Wikipedia:

In the Departure part of the narrative, the hero or protagonist lives in the ordinary world and receives a call to go on an adventure. The hero is reluctant to follow the call, but is helped by a mentor figure.

The Initiation section begins with the hero then traversing the threshold to the unknown or "special world", where he faces tasks or trials, either alone or with the assistance of helpers.

The hero eventually reaches "the innermost cave" or the central crisis of his adventure, where he must undergo "the ordeal" where he overcomes the main obstacle or enemy, undergoing "apotheosis" and gaining his reward (a treasure or "elixir").

The hero must then return to the ordinary world with his reward. He may be pursued by the guardians of the special world, or he may be reluctant to return, and may be rescued or forced to return by intervention from the outside.

In the Return section, the hero again traverses the threshold between the worlds, returning to the ordinary world with the treasure or elixir he gained, which he may now use for the benefit of his fellow man. The hero himself is transformed by the adventure and gains wisdom or spiritual power over both worlds.

Let us map SCIPAB onto the Hero’s Journey, so that we can take our audiences on a journey with us and lead them to a shared conclusion.

Situation

The S, Situation, is the status quo at the beginning of the story, where our audience is living today. In most heroic stories this is some kind of idyll, but in most presentations this phase is an opportunity to confirm our understanding of our audience's… well, Situation. With a general audience, the aim is to level-set: to establish that we all understand the main forces and trends affecting our industry or sector. With a more specific audience, this is our opportunity to confirm our understanding of their particular context, and to trot out all the homework that we have been doing on them. (You have been doing your homework on your audience, right?) If this phase goes well, we have successfully positioned ourselves as the right mentor to lead our audience on the journey.

Complication

The C, Complication, is where we depart from the comfortable status quo. In this section, we are pointing out the trials and tribulations that are the consequence of the Situation. This is where we start to turn up the heat a little and say things that may be uncomfortable for the audience, pointing out ways in which the status quo is problematic or unsatisfactory. This often boils down to "that was a great plan, until these changes occurred, which made it no longer such a good fit".

Implication

The I, Implication, is the nadir, the lowest point of the emotional journey. Here we describe the ordeal that is inevitable if the Complication is not addressed, the "innermost cave" of the Hero's Journey. This phase is specifically about the bad things that can happen: toil and trouble, with the ultimate possibility of failure in the background. At this point the audience should be deeply uncomfortable, facing unpleasant truths about the long-term consequences of staying on their current trajectory.

Proposal

Having brought the audience to this low point, we give them a vision of what is possible. The P, Proposal, is where we describe a different outcome, the "treasure or elixir" that our audience might win by confronting the monster that we described in the previous steps. Here we are selling a shining vision of a possible future - one that is accessible if only the Situation can be confronted in the right way, avoiding the Complications and their Implications.

This emotional alternation between high and low is very important. In a longer presentation (or blog post or white paper or any other kind of story, for that matter) you can even repeat this alternation multiple times, taking the audience with you on an emotional roller coaster ride. Too much doom & gloom in one dose, and you’ll start to lose them - not just because it makes for a depressing presentation, but also because you end up talking down their current situation. No matter how bad they might accept intellectually that things are, having someone else poke at the sore points over and over (and over) will trigger a negative emotional reaction sooner or later. Don’t call other people’s babies ugly - at least, no more than is strictly necessary!

Action

Because this is ultimately a storytelling methodology in service of a sales effort, the key is to include concrete requests and actions that the audience should take. This is the A of SCIPAB: specific Actions that you want to happen as a consequence of the story you have told. This could be a next-step workshop where you can go deeper into specifics of your Proposal, an opportunity to present to someone higher up the org chart, or a request for the audience to do something, such as download an evaluation version of your tool - but the key to ensuring progress and maintaining momentum is to ask for something at every step.

Benefit

Finally, close on the B, Benefits. This is the high point of that emotional roller-coaster, and it aligns with the end of the Hero's Journey, the Return. This is where we get concrete with the prospective customer about the "treasure or elixir" they stand to gain from our Proposal - not to mention the "wisdom or spiritual power" they will gain along the way. This is to the Proposal what the Implication is to the Situation: the consequences that we can reasonably expect, given that starting point.

Above all, don’t be boring

By structuring your communications in this way, you will be able to have much more explicit and productive conversations with prospective customers - and at the very least, you won't be boring them or inducing Death By PowerPoint.

Plus, this way is much more fun for the presenter. Try it, and let me know how it goes!


  1. This is also known as "Position", but "Proposal" is what I learned, plus I think it fits better within the flow.

Smart Swatch

Remember Swatch? The must-have colourful plastic watches of the 80s and 90s? They are back in the news, with their new plan to produce their own smartwatch operating system.

Swatch plans to develop its own operating system as the Swiss watchmaker seeks to combine smart technology with the country’s expertise in making timepieces and miniaturisation, chief executive Nick Hayek has said.

Mr Hayek added that he wanted to avoid relying on Apple’s iOS and Google’s Android and provide a "Swiss" alternative offering stronger data protection and ultra-low energy consumption.

This new plan has caused all sorts of consternation around the Internet, but I was disposed to ignore it - until now. I just received this week's Monday Note, by the usually reliable Jean-Louis Gassée.

M. Gassée makes some initially good points about the complexity of operating systems, the immaturity of the smartwatch market, and the short timescales involved. Swatch intends to ship actual products by the end of 2018, which is barely any time at all when it comes to developing and shipping an entirely new physical product at mass-market scale. However, I do wonder whether he is falling into the same trap that he accuses Hayek and Swatch of falling into.

… in 2013, Hayek fils publicly pooh-poohed smart watches:
"Personally, I don’t believe it’s the next revolution… Replacing an iPhone with an interactive terminal on your wrist is difficult. You can’t have an immense display."

I tend to agree with Hayek, as it happens; the "terminal on the wrist" is pretty much a side show. The one stand-out use case for smart watches1 right now appears to be sensors and fitness. If that's not compelling, then there is very little else to attract you to smartwatches, even if you are a committed technophile like me. For myself, after wearing a Jawbone Up! for a year or two, I determined that I was not making use of the data that were gathered. The activity co-processor in my iPhone is ample for my limited needs.

What Is A Smartwatch?

The key point, however, is that Swatch have not announced an actual smart watch, but rather "an ecosystem for connected objects". M. Gassée even calls out some previous IoT form within CSEM, Swatch's partner in this venture, which recently produced the world's smallest Bluetooth chip.

The case against the wisdom of the Swatch project - the complexity of OS development and maintenance, the need for a developer ecosystem, and so on - assumes that Swatch are contemplating a direct rival for Apple's watchOS and Google's Android Wear. What if that's not what's going on at all?

What if Swatch is going back to its roots, and making something simple and undemanding, but with the potential to be ubiquitous? The ecosystem for a smartwatch is now widespread: everyone has a smartphone, NFC is everywhere, from payment terminals to subway turnstiles. What if Swatch just intends to piggyback on that by embedding a few small and cheap sensors in its watches, without even having a screen at all?

Now that would be a Swatch move. In fact, it's such a Swatch move that they've done it before, with their Snow Pass line:

Its ski watch stores ski pass information and has an antenna that communicates with a scanner at the fast-track ski lift entrance. One swipe of the wrist and you're through.

That description sounds a lot like Apple Pay to me - or really any NFC system. Add some pretty basic sensors, and you've got 80% of the smartwatch functionality that people actually use for 20% of the price.

Seen through this lens, the focus on privacy and security makes sense. It has been said that "the S in IoT stands for 'security'", and we could certainly all use an IoT player that focuses on that missing S. If the sensors themselves are small and simple enough, they would not need frequent updates and patches, as there would be nothing to exploit. The companion smartphone app would be the brains of the operation and gateway to all the data gathered, and could be updated as frequently as necessary, without needing to touch the sensors on the watch.

So What Is Swatch Really Up To?

As to why Swatch would even be interested in entering into such a project, remember that these days Swatch is part of a group that sprawls across 70 different brands, most far more up-scale (albeit less profitable) than lowly Swatch with its plastic watches. Think Omega, Breguet, Glashütte, Longines, or Blancpain. The major threat to those kinds of watches is not any single other watch; most watch lovers own several different mechanical watches, and choose one or another to wear for each day, activity, or occasion. In my own small way, I own three mechanical watches (and two quartz), for instance.

For a while now, and accelerating since the release of the iPhone, the competition for watches has been - no watch at all. Why bother to wear a watch, the thinking went, when your smartphone can tell the time much more accurately? But now, insidiously, the competition is a watch again - and it is the last watch its owners will ever wear. Once you start really using an Apple Watch, you don't want to take it off, lest you miss out on all those activities being measured. Circles will go unfilled if you wear your Rolex to dinner.

But what if every watch you buy, at least from The Swatch Group, gives you the same measurements and can maintain continuity through the app on your phone? What if all of your watches can also let you on the subway, pay for your groceries, and so on? Other players such as Breitling and Montblanc have also been looking into this, but I think Swatch has a better chance, if only because they start from scale.

Now we are back to the comfortable (and profitable) status quo ante for the Swiss watch industry, in which watch aficionados own several different watches which they mix and match, but with each one part of the same connected experience.

Analogies are dangerous things. The last few years have conditioned us to watch out for the "PC guys are not going to just figure this out"-type statements from incumbents about to be disrupted. What if this time, the arrow points the other way? What if Swatch has finally figured out a way for the traditional watch industry to fight back against the ugly, unclassy interloper?


  1. In a further sign of the fact that this is still a developing market, even auto-correct appears to get confused between "smartwatch" and "smart watch".

New Paths to Helicon

I was chatting to a friend last week, and we got onto the topic of where sysadmins come from. "When two sysadmins love each other very much…" - no, that doesn't bear thinking about. BRB, washing out my mind with bleach.

But seriously. There is no certification or degree that makes you a sysadmin. Most people come into the discipline by routes that are circuitous, sideways, if not entirely backwards. The one common factor is that most people scale up to it: they start running a handful of servers, move or grow to a 50-server shop, build out some tools and automation to help them get the job done, then upgrade to 500 servers, and so on.

The question my friend and I had was, what happens when there are no 10- and 50-server shops around? What happens when all the jobs that used to be done with on-premises servers are now done in SaaS or PaaS platforms? My own employer is already like that - we're over a hundred people, and we are exactly the stereotypical startup that features in big infrastructure vendors' nightmares: a company that owns no physical compute infrastructure, beyond a clutch of stickered-up MacBooks, and runs everything in the cloud.

The 90s and Noughties, when I was cutting my teeth in IT, were a time of relative continuity between desktop and enterprise computing, but that is no longer the case. These days you've got to be pretty technical as a home user before anything you're doing will be relevant at enterprise scale, because those in-between cases have mostly gone away. I got my start in IT working at the local Mac shop, but neighbourhood computer stores have gone the way of the dodo. There simply are not many chances to manage physical IT infrastructure any more.

Where Are Today’s On-Ramps?

There is one part of that early experience of mine which remains valid and replicable today. My first task was pure scut-work, transferring physical mail-in warranty cards into the in-house FileMaker Pro "database". After two weeks of this, I demanded (and received) permission to redo the UI, as it was a) making my eyes bleed, and b) frustrating me in my data entry. Once I’d fixed tab order and alignments, I got ambitious and started building out data-queries for auto-suggestions and cross-form validation and all sorts of other weird & wonderful functions to help me with the data entry. Pretty soon, I had just about automated myself out of that job; but in doing so, I had proven my value to the company, and received the traditional reward for a job well done - namely, another job.

That is today’s path into computing. People no longer have to edit autoexec.bat on their home computers just to play games, but on the other hand, they will start to mess around behind the scenes of their gaming forum or chat app, or later on, in Salesforce or ServiceNow or whatever. This is how they will develop an understanding of algorithms, and some of them will go on from there, gradually growing their skills and experience.

A Cloudy Future?

To be clear, this cloud-first world is not yet a reality - even at Moogsoft, only a fairly small percentage of our customer base opts for the SaaS deployment option. More use it for the pilot, though, and interest is picking up, even in unexpected quarters. On the other hand, these are big companies, often with tens or hundreds of thousands of servers. They have sunk costs that mean they lag behind the bleeding edge of the change.

Even if someone does have 50 servers in an in-house server room today, as the hardware reaches its end-of-life date, more and more organisations are opting not to replace them. I was talking to someone who re-does offices, and a big part of the job is ripping out the in-house "data closet" to make more work space. The migration to the cloud is not complete, and won't be for some time, but it has definitely begun, even for existing companies.

What will save human jobs in this brave new world will be "intersection theory" - people finding their niches where different sub-fields and specialisations meet. Intuitive leaps and non-obvious connections between widely separated fields are what humans are good at. Those intersections will be one of the last bastions of the human jobs, augmented by automation of the more narrowly-focused and predictable parts of the job.

There will be other hold-outs too, notably tasks that are too niche to be worth the compute time to train up a neural network. My own story is somewhere in between the two, and would probably remain a viable on-ramp to IT - assuming, of course, that there are still local firms big enough to need that kind of service.

Constant Change Is The Only Constant

To be clear, this is not me opining from atop an ivory tower. Making those unexpected, non-obvious connections, and doing so in a way that makes sense to humans, is the most precise definition I’d be willing to sign up to of the job I expect to have twenty years from now.

As we all continue to reinvent ourselves and our worlds, let's not forget to bring the next generations in. Thinking that being irreplaceable is an unalloyed win is a fallacy; if you can't be replaced, you also can't be promoted. We had to make it up as we went along, but now it's time to systematise what we learned along the way and get other people in to help us cover more ground.

See you out there.

Replace or Augment?

One of the topics that currently exercise the more forward-looking among us is the potential negative impact of automation on the jobs market and the future of work in general. Comparisons are frequently made with the Industrial Age and its consequent widespread social disruption - including violent reactions, most famously the Luddite and saboteur movements.

Some cynics have pointed out that there was less concern when it was only blue-collar jobs that were being displaced, and that what made the chattering classes sit up and pay attention was the prospect of the disruption coming for their jobs too. I could not possibly comment on this view - but I can comment on what I have seen in years of selling automation software into large companies.

For more than a decade, I have been involved in pitching software that promised to automate manual tasks. My customers have always been large enterprises, usually the Global 2000 or their immediate followers. Companies like this do not buy software on a whim; rather, they build out extensive business cases and validate their assumptions in detail before committing themselves1. There are generally three different ways of building a business case for this kind of software:

  • Support a growth in demand without increasing staff levels (as much);
  • Support static demand with decreasing staff;
  • Quality improvement (along various different axes) and its mirror image, risk avoidance.

The first one is pretty self-evident - if you need to do more than you can manage with the existing team, you need to hire more people, and that costs money. There are some interesting second-order consequences, though. Depending on the specifics of the job to be done, it will take a certain amount of time to identify a new hire and train them up to be productive. Six months is a sensible rule of thumb, but I know of places where it takes years. If the rate of growth gets fast enough, that lag time starts to be a major issue. You can't just hire yourself out of the hole, even with endless money. The hole may also be getting deeper if other companies in the same industry and/or region are all going through the same transformation at the same time, and all competing for the same talent.
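
To make that lag concrete, here is a minimal sketch in Python (the model and numbers are entirely mine, purely for illustration) of what a six-month onboarding lag does when demand keeps compounding:

    # A minimal sketch (assumptions and numbers mine) of why onboarding lag
    # bites under fast growth: demand compounds monthly, but a new hire only
    # becomes productive six months after the gap they cover first appears.

    def capacity_gap(months=24, monthly_growth=0.05, onboarding_lag=6):
        demand, capacity = 100.0, 100.0    # start perfectly balanced
        pipeline = [0.0] * onboarding_lag  # hires still being trained up
        gaps = []
        for _ in range(months):
            demand *= 1 + monthly_growth
            capacity += pipeline.pop(0)                 # trained hires come online
            pipeline.append(max(demand - capacity, 0))  # hire to cover today's gap
            gaps.append(demand - capacity)
        return gaps

    print([round(g) for g in capacity_gap()])
    # The gap never closes: each cohort of hires lands six months late,
    # by which time demand has compounded further ahead.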

If instead you can adopt tooling that will make your existing people more efficient and let you keep up with demand, then it is worth investing some resources in doing so.

That second business case is the nasty one. In this scenario, the software will pay for itself by automating people's jobs, thus enabling the company to fire people - or in corporate talk, "reduce FTE2 count". The fear of this sort of initiative often makes rank-and-file employees reflexively suspicious of new automation tools - over and above their natural suspicion that a vendor might be pitching snake-oil.

Personally I try not to build business cases around taking away people's jobs, mainly because I like being able to look myself in the mirror in the mornings (it's hard to shave any other way, for one thing). There is also a more pragmatic reason not to build a business case this way, though, and I think it is worth exploring for its wider implications.

Where Are The Results?

The thing is, in my experience, business cases for automation built around FTE reduction have never been delivered successfully - if focused on automation of existing tasks. That is an important caveat, and I will come back to it.

Sure, the business case might look very persuasive - "we execute this task roughly a dozen times a day, it takes half an hour each time, and if you add that up, it's the equivalent of a full-time employee (an FTE), so we can fire one person". When you look at the details, though, it's not quite so simple.
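
Just to make the arithmetic concrete, here is a minimal sketch (my own illustrative numbers, following the FTE definition in footnote 2) of how such a business case is typically computed - and even taken at face value, the numbers are softer than the pitch suggests:

    # A minimal sketch (numbers mine) of the FTE arithmetic behind such a
    # business case, using the Full-Time Equivalent definition from footnote 2.

    executions_per_day = 12       # "roughly a dozen times a day"
    hours_per_execution = 0.5     # "half an hour each time"
    working_days = 210            # midway between 200 and 220 days per year
    fte_hours = 8 * working_days  # hours in one FTE-year

    task_hours = executions_per_day * hours_per_execution * working_days
    print(f"{task_hours:.0f} h/year = {task_hours / fte_hours:.2f} FTE")
    # -> 1260 h/year = 0.75 FTE: closer to three-quarters of a person than
    #    the whole FTE claimed in the pitch, before we even get to the
    #    question of how that person actually spends their day.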

The fact is that people rarely work at discrete tasks. Instead, they spend their time on a variety of different tasks, more or less integrated into a whole process. There is a tension between the two extremes: at the one end you have workers on a repetitive assembly line, while at the other you have people jumping around so much they can never get anything done. Most organisational functions are somewhere in between those two poles.

If automation is focused on addressing those discrete tasks, it absolutely will bring benefits, but those benefits will add up to freeing up existing employees to catch up with other tasks that were being neglected. Every IT department I have ever seen has a long tail of to-dos that keep getting pushed down the stack by higher-priority items. Automation is the force multiplier that promises to let IT catch up with its to-do list.

This sort of benefit is highly tactical, and is generally the domain of point solutions that do one thing and do it well. This will enable the first kind of business case, delivering on new requirements faster. It will not deliver the second kind of business case. The FTEs freed up through automation get redeployed, not fired, and while the organisation is receiving benefit from that, it is not what was built into the assumptions of the project, which will cause problems for its sponsors. Simply put, if someone ever checks the return on the investment (an all too rare occurrence in my experience), the expected savings will not be there.

Strategic benefits of automation, on the other hand, are delivered by bundling many of these discrete tactical tasks together into a new whole.

Realising those strategic benefits is not as straightforward as dropping a new tool into an existing process. Actually achieving the projected returns will require wholesale transformation of the process itself. This is not the sort of project that can be completed in a quarter or two (although earlier milestones should already show improvement). It should also not be confused with a technology implementation project. Rather, it is a business transformation project, and must be approached as such.

Where does this leave us?

Go Away Or I Will Replace You With A Very Small Shell Script

In my experience in the field, while tactical benefits of automation are achievable, true strategic improvement through automation can only be delivered by bundling together disparate technical tasks into a new whole. The result is that it is not skilled workers that are replaced, but rather the sorts of undifferentiated discrete tasks that many if not most large enterprises have already outsourced.

This shows who the losers of automation will be: it is the arbitrageurs and rent-seekers, the body-rental shops who provide no added value beyond cheap labour costs. The jobs that are replaced are those of operators, what used to be known as tape jockeys; people who perform repetitive tasks over and over.

The jobs that will survive and even benefit from the wave of automation are those that require interaction with other humans in order to determine how to direct the automation, plus of course the specialists required to operate the automation tools themselves. The greatest value, however, will accrue to those who can successfully navigate the interface between the two worlds. This is why it is so important to own those interfaces.

What might change is the nature of the employment contracts for those new roles. While larger organisations will continue to retain in-house skills, smaller organisations for which such capabilities are not core requirements may prefer to bring them in on a consultative basis. This will mean that many specialists will need to string together sequences of temporary contracts to replace long-duration full-time employment.

This is its own scary scenario, of course. The so-called gig economy has not been a win so far, despite its much-trumpeted potential. Perhaps the missing part to making this model work is some sort of universal basic income to provide a base and a safety net between consulting jobs? As more and more of the economy moves in this direction, at least in part due to the potential of automation, UBI or something similar will be required to bridge the gap between the assumptions of the old economy and the harsh realities of the new one.

So, the robots are not going to take our jobs - but they are going to change them, in some cases into something unrecognisable. The best thing humans can do is to plan to take care of one another.


Images by Annie Spratt, Janko Ferlic, and Jayphen Simpson via Unsplash


  1. Well, in theory. Sometimes you lose a deal because the other vendor's CEO took your prospect's entire management team for a golfing weekend in the corporate jet. But we don't talk about that.

  2. An FTE is a Full-Time Equivalent: the amount of work expected of one employee, typically over a year, allowing for holidays and so on. Typically that means somewhere between 200 and 220 working days of 8 hours each, so 1600 to 1760 hours in a year. The "FTE cost" of an activity is calculated by taking the time required to perform an activity once, multiplying that by the number of times that activity needs to be performed, and dividing by the FTE rate.

Own Your Interfaces

The greatest benefit of the Internet is the democratisation of technology. Development of customised high-tech solutions is no longer required for success, as ubiquitous commodity technology makes it easy to bring new product offerings to market.

Together with the ongoing move from one-time to recurring purchases, this process of commoditisation moves the basis of the competition to the customer experience. For most companies, the potential lifetime value of a new customer is now many times the profit from their initial purchase. This hoped-for future revenue makes it imperative to control the customer's experience at every point.

As an illustration, let us consider two scenarios involving outsourcing of products that are literally right in front of their users for substantial parts of the day.

Google Takes Its Eye Off the Watch

The first is Google and Android's answer to the Apple Watch, Android Wear. As is (usually) their way, Google have not released their own smartwatch product. Instead, they have released the Android Wear software platform, and left it to their manufacturing partners to build the actual physical products.

Results have been less than entirely positive:

If Android Wear is to be taken as seriously as the Apple Watch, we actually need an Android version of the Apple Watch. And these LG watches simply aren't up to the task.

Lacking the sort of singular focus and vertical integration between hardware and software that Apple brings to bear, these watches fail to persuade, and not by a little:

I think Google and LG missed the mark on every level with the Style, and on the basis of features alone it is simply a bad product.

So is the answer simply to follow Apple's every move?

It is certainly true Google have shown with their Nexus and Pixel phones just how much better a first-party Android phone can be, and it is tempting to extrapolate that success to a first-party Google Watch. However, smartwatches are still very much a developing category, and it is not at all clear whether they can go beyond the current fitness-focused market. In fact, I would not be surprised to see a contraction in the size of the overall smartwatch market. Many people who bought a first-generation device out of curiosity and general technophilia may well opt not to replace that device.

Apple Displays Rare Clumsiness

In that case, let us look at an example outside the smartwatch market - and one where the fumble was Apple's.

Ever since Retina displays became standard, first on MacBooks1 and then on iMacs, Mac users have clamoured for a large external display from Apple, to replace the non-Retina Apple Thunderbolt Display that still graces many desks. Bandwidth constraints meant that this was not easy to do until a new generation of hardware came to market, but Apple fans were disappointed when, instead of their long-awaited Apple Retina 5K Display, they were recommended to buy a pretty generic-looking offering from LG.

Insult was added to injury when it became known that the monitor was extremely sensitive to interference, and in fact became unusable if placed anywhere near a wifi router:

the hardware can become unusable when located within 2 meters of a router.

Two metres is not actually that close; it's over six feet, if you're not comfortable with metric units. Many home office setups would struggle with that constraint - I know mine would.

Many have pointed out that one of the reasons for preferring expensive Apple solutions is that they are known to be not only beautifully designed, but obsessively over-engineered. It beggars belief that perfectionist, nit-picking Apple would have let a product go to market with such a basic flaw - and yet, today, if an Apple fan spends a few thousand dollars on a new MacBook Pro and a monitor in an Apple Store, they will end up looking at a generic-looking LG monitor all day - if, that is, they can use the display at all.

Google and Apple both ceded control of a vitally important part of the customer experience to a third party, and both are now paying the price in terms of dissatisfied users. There are lessons here that also apply outside of manufacturing and product development.

Many companies, for instance, outsource functions that are seen as ancillary to third parties. A frequent candidate for these arrangements is support - but to view support this way is a mistake. It is a critical component of the user experience, and all the more so because it is typically encountered at times of difficulty. A positive support experience can turn a customer into a long-term fan, while a negative one can put them off for good.

Anecdata Time

A long time ago and far far away, I did a stint in technical support. During my time there, my employer initiated a contract with a big overseas outsourcing firm. The objective was to add a "tier zero" level of support, which could deal with routine queries - the ones where the answer was a polite invitation to Read The Fine Manual, basically - and escalate "real" issues to the in-house support team.

The performance of the outsourcer was so bad that my employer paid a termination fee to end the contract early, after less than one year. Without going into the specifics, the problem was that the support experience was so awful that it was putting off our customers. Given that we sold mainly into the large enterprise space, where there is a relatively limited number of customers in the first place, and that we aimed to cross-sell our integrated products to existing customers, a sudden increase in the number of unhappy customers was a potential disaster.

We went back to answering the RTFM queries ourselves, customer sat went back up into the green, and everyone was happy - well, except for the outsourcer, presumably. The company had taken back control of an important interface with its customers.

Interface to Differentiate

There are only a few of these interfaces and touch-points where a company has an opportunity to interact with its customers. Each interaction is an opportunity to differentiate against the competition, which is why it is so vitally important to make these interactions as streamlined and pleasant as possible.

This requirement is doubly important for companies who sell subscription offerings, as they are even more vulnerable to customer flight. In traditional software sales, the worst that can happen is that you lose the 20% (or whatever) maintenance, as well as a cross-sell or up-sell opportunity that may or may not materialise. A cancelled subscription leaves you with nothing.

A customer who buys an Android Wear smartwatch and has a bad experience will not remember that the watch was manufactured by LG; they will remember that their Android Wear device was not satisfactory. In the same way, someone who spends their day looking at an LG monitor running full-screen third-party applications - say, Microsoft Word - will be more open to considering a non-Apple laptop, or not fighting so hard to get a MacBook from work next time around. Both companies ceded control of their interface with their customers.

Usually companies are very eager to copy Apple and Google's every move. This is one situation where instead there is an opportunity to learn from their mistakes. Interfaces with customers are not costs to be trimmed; instead, they can be a point of differentiation. Treat them as such.


Image by Austin Neill via Unsplash


  1. Yes yes, except for the Air.

Category Error

In discussing my last post about self-driving car technology, it was pointed out to me that I was being unduly pessimistic about the prospect of really smart people solving the problem of dealing with the dynamic nature of city traffic. Clarke's First Law comes to mind:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

I am neither distinguished, nor (yet) elderly, and nobody's idea of a scientist, and I certainly do not mean to imply that self-driving car technology is impossible. What I am saying is that adoption of self-driving tech, and indeed of any new technological offering, is not just a technology problem, and perhaps not even primarily a technology problem. For this reason, even more than because of the technical difficulties involved in navigating busy and unpredictable city streets, I expect the initial uptake of self-driving cars to be in more controlled environments, whether dedicated roadways, or inside industrial plants, airports, and the like, where random unexpected foot or bicycle traffic is not a factor. Actual go-anywhere self-driving cars will only show up on public streets some time after the technology has been proven out in those environments.

That said, there are signs of movement. The US has chosen ten proving grounds where this technology can be tested in real-world conditions. Also, a possible contradiction to my expectation of initial uptake in controlled environments is unpredictable Indian road traffic - and yet, Tata have announced their intention to test in Bangalore.

Just in case I was overly pessimistic, therefore, I wanted to run a quick thought experiment taking the opposite position: assuming that true, go-anywhere self-driving functionality becomes available - what then?

Everything changes

The main factor to take into account is that everything will change. To think of this future state in terms of "driverless cars" is to miss the point by as much as those who described early automobiles as "horseless carriages". It is natural to think of radical new products in the context of existing categories, but truly significant innovations define their own categories, with wide-ranging consequences.

If personal flying cars are ever to be possible, it is a foregone conclusion that the results will not be what was depicted on the cover of 50s pulps, with the nuclear family setting off in Dad's finned and chromed sportster for a day of wholesome fun. The sheer traffic control and public safety requirements of a world where anybody can afford a flying car would act to limit uptake and adoption. In fact, pretty much the only way that personal air vehicles could ever become widespread would be if they were autonomous pods, with no fallible humans in the control loop. Either central traffic control, or peer-to-peer connections between the vehicles (or a combination of the two) would be required to make that situation even remotely practical. At that point, would anyone bother to own one of these air transit pods, or would we just summon one as needed?

Many of the same factors apply to autonomous road cars. Right now, privately owned cars are idle for most of their lives. They are parked overnight, then driven to work, parked there all day, and driven back home to be parked. If the car becomes autonomous, it does not need to park near its owner; it could drive itself to a distant parking structure, or even right back home, to wait until it is needed again. At that point, why not enable other people to share the use of the car while its owner is otherwise engaged? And the logical consequence of that situation is, why own the car in the first place? Just summon it to where you are, and dismiss it when you’re done.

Right away, there is no longer any need for parking lots near offices and retail centres, just much smaller pick-up and drop-off areas. What does this do to the fabric of cities? Imagine every parking lot replaced by a park - or a reduction in sprawl, as new, much denser residential and commercial development can take the place of redundant parking structures. Even where new greenfield builds are required, they can become more efficient, no longer requiring as much space for all of those parking spaces.

Share - but how?

Some of the analysis also assumes an increase in ride-sharing. In this scenario, each vehicle has multiple occupants, which reduces the overall number of vehicles on the road, in turn reducing the need for road infrastructure. I’m not sure that this is plausible. Rather than this sort of simultaneous ride-sharing, with many people in the car at the same time, I think that the plausible future involves sequential ride-sharing, where the vehicle is in near-constant use (at least at peak times), but only by one person at a time.

The move from a large number of inefficient, single-user assets to a much smaller number of highly efficient assets with high utilisation rates will also have a dramatic impact on the automotive industry. The future of autonomous cars is not a Tesla in every driveway. It looks much more like Paris’ new driverless minibuses - in other words, the use case that we should extrapolate from is public transport, not private cars. The advent of autonomous self-driving technology will not add new capabilities to private cars. Rather, it will lead to an increase in the flexibility and capability of public transport networks.

This change will spell the end of the automotive industry as we know it. Already today, nobody cares especially about the marque of their Uber or Lyft vehicle1. In a scenario where those are the only types of vehicles on the road, the basis of competition between manufacturers would change dramatically, becoming much more similar to the commercial vehicle market. The relevant drivers of competition would be cost of operation and maintenance, without any particular brand cachet or driver experience factoring into selection. The overall size of the market would also shrink dramatically, as increased utilisation rates for individual vehicles lead to a requirement for a much smaller number of vehicles overall. Whatever happens, the sales volume for the industry will crash, and most of the current manufacturers will exit the market one way or another.

Another expected advantage of autonomous vehicles is the new traffic control capabilities that they enable. One of the most frustrating types of traffic jam is the one that there is no apparent reason for. Traffic slows and stops, restarts, inches along - and then suddenly it’s flowing again. What happened in that situation is a signalling cascade: one human driver hits their brakes, the one behind them, unable to gauge intentions or speed accurately, hits their brakes a little harder, and pretty soon the whole line of cars has ground to a halt.
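
To see how quickly such a cascade escalates, here is a minimal sketch (the model and figures are mine, purely illustrative) in which each driver brakes just 20% harder than the car in front:

    # A minimal sketch (model and numbers mine) of the signalling cascade:
    # each driver reacts to the car ahead, but brakes a little harder than
    # strictly needed, because they cannot gauge its speed exactly.

    def braking_cascade(n_cars=10, cruise=100.0, overreaction=1.2):
        speeds = [cruise] * n_cars
        speeds[0] = cruise - 20.0  # the lead driver taps the brakes
        for i in range(1, n_cars):
            drop = (cruise - speeds[i - 1]) * overreaction
            speeds[i] = max(cruise - drop, 0.0)
        return speeds

    print([round(s) for s in braking_cascade()])
    # [80, 76, 71, 65, 59, 50, 40, 28, 14, 0] - a 20 km/h tap by the lead
    # car leaves the tenth car stopped dead.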

Instead, autonomous vehicles could enable swarming behaviours, where a whole line of vehicles can tailgate each other, safe in the knowledge that none of them is going to do anything unexpected. The immediate benefits would include optimal utilisation of road surfaces and reduced fuel consumption (from aerodynamic and other effects).

…but not today

All of this describes the ultimate end state, but for the reasons I discussed, I do not expect all of these consequences to manifest in the short term. What I do expect is that limited application of autonomous driving technology will deliver some initial benefits, and over time the technical, legal, and social hurdles will begin to fall, enabling some of these second-order benefits. However, none of that will happen in three to five years or even ten years, as some of the boosters argue, regardless of the technical progress that is made.

It is still important to think about the impact of technology adoption. Too often, IT people especially focus only on technical feasibility, and assume that just because something is possible, it will then be adopted. In actual fact, history is littered with the bones of products that failed to gain traction because of non-technical factors. Meanwhile, tech industry commentators lament the success of "inferior" products (whether or not they actually are inferior) that focus on user needs.

The canonical example of "not getting it" is of course Slashdot founder Rob Malda’s reaction to the iPod launch:

No wireless. Less space than a Nomad. Lame.

Of course we know how that played out: regardless of the technical specs, the iPod met users’ needs so completely that it defined the entire category - before, of course, it was in its own turn subsumed by the Next Big Thing.

The same will happen with driverless cars. Solving the tech issues is only a part of the problem. To achieve the sort of transformative effects that I described above will require a concerted push into all sorts of areas that I see as being currently ignored: city architecture, transportation policy, legal issues, insurance, and so on and so forth. The technology industry has a tendency to dismiss these types of issues as "soft factors".

When launching a new technology, ignore the soft factors at your peril.


Images by Austin Scherbarth and Peter Hershey via Unsplash


  1. Except that if I pay for Uber Black, I would be very disappointed if an UberX-grade Prius or whatever were to show up, instead of a big German luxury car.

Thinking Two Steps Behind

In my day job, I spend a lot of my time building business cases to help understand whether our technology is a good fit for a customer. When you are building a startup business, this is the expected trajectory: in the very early days, you have to make the technology work, translating the original interesting idea into an actual product that people can use in the real world. Once you have a working product, though, it's all about who can use it, and what they can do with it.

In this phase, you stop pitching the technology. Instead, you ask questions and try to understand what ultimate goals your prospective customer has. Only once you have those down do you start talking about what your technology can do to satisfy those goals. If you do not do this, you find yourself running lots of "kick the tyres" evaluations that never go anywhere. You might have lots of activity, but you won’t have many significant results to show for it.

This discipline of analysing goals and identifying a technology fit is very useful in analysing other fields too, and it helps to identify when others may be missing some important aspect of a story.

Let’s think about driverless cars

Limited forms of self-driving technology already exist, from radar cruise-control to more complete approaches such as Tesla’s Autopilot. None of these are quite ready for prime time, and there are fairly regular stories about their failures, with consequences from the comic to the tragic.

Because of these issues, Tesla and others require that drivers keep their hands on the wheel even when the car is in Autopilot mode. This brings its own problems, falling into an "uncanny valley" of attention: the driver is neither fully engaged nor fully disengaged. Basically it's the worst of both worlds: drivers are no longer involved in the driving, but still cannot relax and read a book or watch a film.

These limitations have not stopped much of the commentary from assuming self-driving car technology to be, if not a problem that is already solved, at least one that is solvable. Extrapolations from that point lead to car ownership becoming a thing of the past, as people simply summon self-driving pods to their location. That in turn implies massive transformations in both the labour force (human drivers, whether truckers or Uber drivers, are no longer required) and the physical make-up of cities (enormous increases in the utilisation rate for cars mean that large permanent parking structures are no longer required) - let alone the consequences for automotive manufacturers, faced with a secular transformation in their market.

Okay, maybe not cars

Self-driving technology is not nearly capable (yet) of navigating busy city streets, full of unpredictable pedestrians, cyclists, and so on, so near-term projections focus on what is perceived as a more easily solvable problem: long-distance trucking.

The idea is that currently existing self-driving tech is already just about capable of navigating the constrained, more predictable environment of the highways between cities. Given some linear improvement, it does not seem that far-fetched to assume that a few more years of development would give us software capable of staying in lane and avoiding obstacles reliably enough to navigate a motorway in formation with other trucks.

Extrapolating this capability to the wholesale replacement of truckers with autonomous robot trucks, however, is a big reach - and not so much for technical reasons, as for less easily tractable external reasons.

Assume for the sake of argument that Otto (or whoever) successfully develops the technology and builds an autonomous truck that can navigate between cities, but not enter the actual city itself. This means that Otto or its customers would need to build warehouses right at the motorway junctions in areas where they wish to operate, to function as local hubs. From these locations, smaller, human-operated vehicles would make the last-mile deliveries to homes and businesses inside the city streets, which are still not accessible to the robot trucks.

This is all starting to sound very familiar. We already have a network optimised for long-distance freight between local distribution hubs. It is very predictable by design, allowing only limited variables in its environment, and it is already highly instrumented and very closely monitored. Even better, it has been in operation at massive scale for more than a century, and has a whole set of industry best practices and commercial relationships already in place.

I am of course talking about railways.

Get on the train

Let’s do something unusual for high-tech, and try to learn something from history for once. What can the example of railways teach us about the potential for self-driving technology on the road?

The reason for the shift from rail freight to road freight was to avoid trans-shipment costs. It’s somewhat inefficient to load your goods onto one vehicle, drive it to a warehouse, unload them, wait for many other shipments to be assembled together, load all of them onto another vehicle, drive that vehicle to another warehouse, unload everything, load your goods onto yet another vehicle, and finally drive that third vehicle to your final destination. It’s only really worthwhile to do this for bulk freight that is not time-sensitive. For anything else, it’s much easier to just back a truck up to your own warehouse, load up the goods, and drive them straight to their final destination.

Containerisation helped somewhat, but railways are still limited to existing routes; a new rail spur is an expensive proposition, and even maintenance of existing rail spurs to factories is now seen as unnecessary overhead, given the convenience of road transport’s flexibility and ability to deliver directly to the final destination.

In light of this, a network of self-driving trucks that are limited to predictable, pre-mapped routes on major highways can be expected to run into many of the same issues.

Don’t forget those pesky humans

Another interesting lesson that we can take from railways concerns the actual uptake of driverless technology. As noted above, railways are a far more predictable environment than roads: trains don't have to manoeuvre, they just move forwards along the rails, stopping at locations that are predetermined. Changes of direction are handled by switching points in the rails, not by the operator steering the train around obstacles. Intersections with other forms of transport are rare, as other traffic generally uses bridges and underpasses. Where this separation is not possible, level crossings are still far more controlled than road intersections. Finally, there are sensors everywhere on railways; controllers know exactly where a certain train is, what its destination and speed are, and the state of the network around it.

So why don’t we have self-driving trains?

The technology exists, and has done so for years - it’s a much simpler problem than self-driving cars - and it is in use in a few locations around the world (e.g. London and Milan); but still, human-operated trains are the norm. Partly, it’s a labour problem; those human drivers don’t want to be out of a job, and have been known to go on strike against even the possibility of the introduction of driverless trains. Partly, it’s a perception problem: trains are massive, heavy, powerful things, and most people simply feel more comfortable knowing that a human is in charge, rather than potentially buggy software. And partly, of course, it’s the economics; human train drivers are a known quantity, and any technology that wants to replace them is not.

Between them, these lessons show that the added convenience of end-to-end road transport limits the uptake of rail freight, while human factors limit the adoption of driverless technology even where it is perfectly feasible - something that has not yet been proven in the case of road transport.

A more familiar example?

In Silicon Valley, people are often moving too fast, and are too busy breaking things that work, to learn from other industries - let alone one that is over a hundred years old1. But there is a relevant example that is closer to home - literally.

When the Internet first opened to the public late last century, the way most people connected was through a dial-up modem over an analogue telephone line. We all became expert in arcane incantations in the Hayes AT command language, and we learned to recognise the weird squeals and hisses emitted by our modems and use them to debug the handshake with our ISP's modem at the far end. Modem speeds did accelerate pretty rapidly, going from the initial 9.6 kbits per second to 14.4, to 28.8, to the weird 33.6, to a screamingly fast 56k (if the sun was shining and the wind was in the right quarter), all in a matter of years.

However, this was still nowhere near fast enough. These days, if our mobile phones drop to EDGE - roughly equivalent to a 56k modem on a good day - we consider the network basically unusable. Therefore, there was a lot of angst about how to achieve higher speeds. Getting faster network speeds in general was not a problem - 10 Mbps Ethernet was widely available at the time. The issue was the last mile from the trunk line to subscribers' homes. Various schemes were mooted to get fast internet to the kerb - or curb, for Americans. Motivated individuals could sign up for ISDN lines, or more exotic connectivity depending on their location, but very few did. When we finally got widespread consumer broadband, it was in the form of ADSL over the existing copper telephone lines.

So where does this leave us?

Driverless vehicles will follow the same development roadmap2: until they can deliver the whole journey end to end, uptake will be limited. Otherwise, they are not delivering what people need.

More generally, to achieve any specific goals, it is usually better to work with existing systems and processes. That status quo came to be over time, and generally for good reason. Looking at something now, without the historical context, and deciding that it is wrong and needs to be disrupted, is the sort of Silicon Valley hubris that ends in tears.

Right now, with my business analyst hat on, driverless vehicles look like a cool idea (albeit a still unproven one) being shoe-horned into a situation it is not a good match for. If I were looking at a situation like this in my day job, I would advise everyone to take a step back, re-evaluate what the actual goals are, and see whether a better approach might be possible. Until then, no matter how good the technology gets, it won't actually deliver on the requirements.

But that doesn’t get as many visionary thinkpieces and TED talks.


Images by Nabeel Syed and Darren Bockman via Unsplash, and by ronnieb via Morguefile


  1. The old saw is that "In Europe, a hundred miles is a long way; in the US, a hundred years is a long time". In Silicon Valley, which was all groves of fruit trees fifty years ago, that time frame is shorter still.

  2. Sorry - not sorry.

Talkin' Bout a Revolution

Once again, the seemingly unkillable idea of modular phones rears its misshapen head.

The first offender is VentureBeat, with a breathless piece entitled The dream of Ara: Inside the rise and fall of the world’s most revolutionary phone.

record scratch

Let me stop you right there, VentureBeat. Ara is not a "revolutionary phone" at all, let alone "the world's most revolutionary phone", for the very good and sufficient reason that Project Ara never got around to shipping an actual phone before it was ignominiously shut down.

"Most ambitious phone design", maybe. I’d also settle for "most misguided", but that would be a different article. Whatever Ara was, it was not "revolutionary", because otherwise we would all be using modular phones. Even the most watered-down version of that idea, LG’s expandable G5 phone design, is now dead - although in their defence, at least LG did actually ship a product somewhat successfully.

Now Andy Rubin, creator of Android, is back in the news, with plans for a new phone… which sounds like it may well be modular:

It's expected to include […] the ability to gain new hardware features over time

This is a bold bet, and Andy Rubin certainly knows more about the mobile phone market than I do - but here’s why I don’t think a modular phone is the way to go.

Take a Step Back - No, Further Back

The reason I was sceptical about Project Ara's chances from the beginning goes back to Clayton Christensen's disruption theory. I have written about disruption theory before, so I won't go into it at length here, but in essence it states that in a fast-developing market, integrated products win, because they can take advantage of rapid advances in the field. Conversely, in a mature market, products win by modularising, providing specific features with specific benefits, or a lower cost, than the integrated solutions can deliver.

Disruption happens when innovation slows down, because further improvements require more resources than consumers are willing to pay for. In this scenario, incumbent vendors continue to chase diminishing returns at the top of the market, only to find themselves undercut by modular competitors delivering "good enough" products. Over time, the modular products eat up the bulk of the market, leaving the ex-incumbents high and dry.

If you assume that the mobile phone market is mature and all development is just mopping up at the edges, then maybe a modular strategy makes sense, allowing consumers to start with a "good enough" basic phone and pick and choose the features most important to them, upgrading individual functionality over time. However, if the mobile phone market is still advancing rapidly and consumers still see the benefit from each round of improvements, then fundamental upgrades will happen frequently enough that integrated solutions will still have the advantage.

Some of the tech press seem to be convinced that we have reached the End of History in mobile technology. Last year's iPhone 7 launch was the epitome of this view, with the consensus being that because the outside of the phone had not changed significantly compared to the previous generation, there was no significant change to talk about.

The actual benchmarks tell a different story. The iPhone 7 is not only nearly a third faster than the previous generation of iPhone across the board, it also compares favourably to a 2013 MacBook Pro.

That type of year-over-year improvement is not the mark of a market that is ripe for modular disruption.

What Do Users Say?

The other question, beyond technical suitability, is whether users would consider a product like Project Ara, or LG’s expandable architecture. The answer, at least according to LG’s experience, is a resounding NO:

An LG spokesperson commented that consumers aren’t interested in modular phones. The company instead is planning to focus on functionality and design aspects

Consumers do not see significant benefits from the increase in complication that modularisation brings, preferring instead to upgrade the entire handset every couple of years, at which point every single component will be substantially better.

And that is why the mobile phone market is not ready for a modular product, instead preferring integrated ones. If every component in the phone needs to be upgraded anyway, modularisation brings no benefit; it’s an overhead at best, and a liability at worst, if modules can become unseated and get lost or cause software instability.

At some point the mobile phone market will probably be disrupted - but I doubt it will be done through a modularised hardware solution in the vein of Project Ara. Instead, I would expect modularisation to take place with more and more functionality being handed off to a cloud-based back-end. In this model, the handset will lose many of its independent capabilities, and revert to being what the telephone has been for most of its history: a dumb terminal connected to a smart network.

But we’re not there yet.


Images by Pavan Trikutam and Ian Robinson via Unsplash

What is Twitter for?

In today's "wait, what year is this again?" moment, Twitter is once again trying to figure out what it wants to be when it grows up - and because it's Twitter, of course it is doing so in public:

The company's CMO, Leslie Berland […] in a speech at CES 2017 […] aimed to redefine Twitter and explain why 317 million people use it every month.

And what ultimate definition did Twitter’s CMO come up with for her big speech?

"So, we were a platform, a product, a service, a water cooler, a time square, a microphone, and we are every single one of those things"

Ugh - why not just say it's a dessert topping and a floor wax?

It does get better, as Ms Berland at least recognises the category Twitter needs to be playing in:

"The first thing we did is we actually took ourselves out of the social networking category in the app stores and we put ourselves where we belong, which is news"

After the year we have just had, I don’t think anyone can deny that Twitter is where news happens. US president-elect Donald Trump does not take to Facebook every morning to post his rants, and the Black Lives Matter movement did not start on Instagram or Snapchat. Twitter is a news platform, as is underlined by its asymmetrical nature.

Now there's dessert topping all over the floor

On a true social network such as Facebook, relationships are symmetrical: if I am your friend, you are also my friend.

On Twitter, that is not the case; I follow accounts that do not follow me, and I have followers that I do not follow. Twitter is where news is made, announced, and discussed; that is its role and its value.
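The structural difference is easy to see if you model the two kinds of relationship directly. Here is a minimal sketch (illustrative only, not any real API): friendship as an undirected edge, following as a directed one.

```python
# Symmetric "friend" edges vs asymmetric "follow" edges - a toy model.
friends = set()   # undirected edges: Facebook-style friendship
follows = set()   # directed edges: Twitter-style following

def befriend(a, b):
    friends.add(frozenset((a, b)))  # one edge covers both directions

def follow(a, b):
    follows.add((a, b))             # a follows b; implies nothing about b

befriend("alice", "bob")
follow("carol", "bbc_news")

assert frozenset(("bob", "alice")) in friends  # symmetry comes for free
assert ("bbc_news", "carol") not in follows    # no reciprocity implied
```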

Didn’t we go through all of this last time?

Twitter is not a social network. Not primarily, anyway. It’s better described as a social media platform, with the emphasis on "media platform." And media platforms should not be judged by the same metrics as social networks.
Social networks connect people with one another. Those connections tend to be reciprocal. […]
Media platforms, by contrast, connect publishers with their public. Those connections tend not to be reciprocal.

Now what?

The issue for Twitter is, as ever, how to monetise its role at the heart of the news cycle. Arguably it is shackled by the misplaced expectations of early investors who were looking for another Facebook. I for one hope that they manage to extricate themselves from their current difficulties without getting borged in a totally inappropriate acquisition by Google or whoever.

In particular, these investor expectations of continuing exponential growth are suspected of interfering with some much-needed changes to curb ongoing abuse on the platform - whether simple problems like follower spam, or the truly nasty harassment that many users experience every day. Both of these activities can look like user engagement, at least from a distance, which potentially discourages their prevention.

This is the strategy tax that Twitter is paying: the choices it finds difficult to make today, because of the choices it made in the past. Some suggest that an acquisition would both inject some much-needed cash and help break this trap.

I disagree. Twitter needs to be its own thing, not Google's latest attempt to buy more social visibility for itself. There is value in Twitter just being Twitter, if Twitter's management can figure out how to unlock that value.


Image by Daria Shevtsova via Unsplash