
The VP of Nope

I have a character in my head, the VP of Nope. This is pure wish-fulfilment on my part: when everyone was in the room taking an utterly wrong and bone-headed decision, I wish there had been someone present who was sufficiently senior to just say “nnnope” and move on.

It seems I’m not the only one to feel that way:

(Scoff all you want, but those are pretty big engagement numbers for me.)

The VP of Nope has to be a VP in order not to get bogged down in particulars. Software engineers are especially susceptible to getting ideas into their heads that work beautifully in a small context, but fall apart as soon as you step back and look at them in a wider one.

Here’s an example from my own history. I used to work for a company whose products used fat clients – as in, natively compiled applications for each supported platform. This was fine at the time, but web applications were obviously the future, and so it came to pass that a project was initiated to write thin web clients instead. I was part of the early review group, and we were all horrified to find that the engineers had opted to write everything in Flex.

If you are not familiar with Adobe Flex[1], it had a very brief heyday as a way to write rich web interfaces, but it carried the significant drawback of running on top of Adobe’s late, unlamented Flash technology. That dependency caused several serious problems:

  • Corporate IT security policies almost never allowed the Flash plugin to be installed on people’s browsers. This meant that a Flex GUI was either a complete non-starter, or required exceptions to be requested and granted for every single machine that was going to connect to the application back-end, thereby losing most of the benefits of moving away from the fat client in the first place.
  • Thin clients are supposed to be less resource-hungry on the client machine than fat clients (although of course they are much more dependent on network performance). While web browsers were indeed lighter-weight than many fat clients, especially Java-based ones, the Flash browser plugin was a notorious resource hog, nullifying or even reversing any resource savings.
  • While Apple’s iPad was not yet nearly as dominant as it is today, when it is effectively the only serious tablet, it was already very obvious that this was The Future. Every company was falling over itself to provide some sort of tablet app, but famously, Steve Jobs hated Flash, and articulated why in his open letter, Thoughts on Flash. All of Steve’s reasons were of course in themselves valid reasons not to develop in Flex, or anything else requiring Flash, but the fact that he was committing to never supporting Flash on his company’s devices killed Flash dead. Sure, it took a couple of years for the corpse to stop twitching, but the writing was on the wall.

Building any sort of strategic application in Flex after the release of that letter in April 2010 was a brain-meltingly idiotic and blinkered decision – and all of us on the early review programme said so, loudly, repeatedly, and (eventually) profanely. However, none of us had sufficient seniority to make our opinions count, and so the rough beast, its hour come round at last, slouched towards GA to be born into an uncaring world.

This Is Not A Rare Event

I have any number of examples like this one, where one group took a narrow view of a problem, unaware of or ignoring the wider context. In this particular case, Engineering had determined that they could develop a thin web client more quickly and easily by using a piece of Adobe technology than by dealing with the (admittedly still immature) HTML5 tools available at the time. Given their metrics and constraints, this may even have been the right decision, but it resulted in an outcome that was so wrong as to actively damage the prospects of what had been until then perfectly viable products.

In such situations, the knock-on effects of the initial fumble are often even worse, and so it was to prove in this case as well. First, enormous amounts of time, energy, and goodwill were wasted arguing back and forth, and then the whole GUI had to be re-written from scratch a second time without Flex once it became apparent to enough people what a disaster the first rewrite was. Meanwhile, customers were continuing to use the old fat client, which was falling further and further behind the state of the art, since all of Engineering’s effort was being expended on either rewriting the GUI yet again, or strenuously defending the most recent rewrite against its critics. All of this wasted and misdirected effort was a major contributing factor to later strategic stumbles whose far-reaching consequences are still playing out now, nearly a decade later.

This is what is referred to as an omnishambles, a situation that is comprehensively messed up in every possible way – and the whole thing could have been headed off before it even began by the VP of Nope, quietly clearing their throat at the back of the room and shaking their head, once.

Their salary would be very well earned.


Photo by Vladimir Kudinov on Unsplash


  1. Originally developed by Macromedia and then Adobe, it now seems to be staggering through an unloved half-life as an open-source project under the umbrella of the Apache Software Foundation. Just kill it already! Kill it with fire! 

How To Run A Good Presentation

There are all sorts of resources about creating a good slide deck, and about being a good public speaker – but there seems to be a gap when it comes to the actual mechanics of delivering a presentation. Since I regularly see even experienced presenters get some of this stuff wrong, I thought I’d write up some tips from my own experience.

I Can’t See My Audience

The first question is, are you presenting to a local audience, or is your audience somewhere else? This seriously changes things, and in ways that you might not have considered. For a start, any sort of rich animation in your slides is probably bad for a remote presentation, as it is liable to be jerky or even to fail entirely.

You should definitely connect to a remote meeting a few minutes ahead of time, even if you have already installed the particular client software required, as there can still be weird issues due to some combination of the version of the plugin itself, your web browser, or their server-side software. If the meeting requires some software you have not used before, give yourself at least fifteen minutes to take care of downloading, installing, and setting that up to your satisfaction.

Even when people turn on their webcam (and assuming you can see something useful through it, as opposed to some ceiling tiles), once you start presenting you probably won’t be able to see them any more, so remember to stop every few minutes to check that everyone is still with you, that they can see whatever you are currently presenting, and whether they have any questions. This is good advice in general, but it’s easier to remember when the audience is in the room with you. When you’re just talking away to yourself, it can be hard to remember that there are other people listening in – or trying to.

Fancy "virtual meeting room" setups like Cisco’s TelePresence are all very well – as long as all participants have access to the same setup. Most times that I have used such systems, a few participants were connecting in from desktop devices, from their computers, or even from phones, which of course gave them far less rich functionality. Don’t assume that everyone is getting the full “sitting right across the table from each other" experience!

My Audience Can’t See Me

In one way, presenting remotely without a webcam trained on you can be very freeing. I pace a lot; I do laps of the room while talking into a wireless headset. I think this helps me keep up the energy and momentum of a live presentation, which otherwise can be hard to maintain – both when I’m presenting and when I’m in the audience.

One complication is the lack of presenter mode. I rely on this heavily during live presentations, both for speaker notes on the current slide and to remind myself about the next slide. Depending on the situation, I may also use the presenter view to jump around in my deck, presenting slides in a different order than the one they were saved in. Remote presentation software won’t let you do this, or at least, not easily. You can hack it if you have two monitors available, by setting the “display screen” to be the one shared with the remote audience, and setting the other one to be the “presenter screen”, but this is a bit fiddly to set up, and is very dependent on the precise meeting software being used.

This is particularly difficult when you’re trying to run a demo as well, because that generally means mirroring your screen so the remote audience sees the same thing as you do. This is basically impossible to manage smoothly in combination with presenter view, so don’t even try.

Be In The Room

If you are in the room with your audience, there’s a different set of advice. First of all, do use presenter mode, so that you can control the slides properly. Once you switch over to a demo, though, mirror your screen so that you are not craning your neck to look over your own shoulder like a demented owl while trying to drive a mouse that is backwards from your perspective. Make it so you can operate your computer normally, and just mirror the display. Practice switching between these modes beforehand. A tool that can really help here is the free DisplayMenu utility. This lives in your menu bar and lets you toggle mirroring and set the resolution of all connected displays independently.

Before you even get to selecting resolutions, you need to have the right adapters – and yes, you still need to carry dongles for both VGA and HDMI, although in the last year or so the proportions have finally flipped, and I do sometimes see Mini DisplayPort too. I have yet to see even the best-equipped conference rooms offer USB-C cables, but I am seeing more and more uptake of wireless display systems, usually either an AppleTV, or Barco ClickShare. The latter is a bit fiddly to set up the first time, so if you’re on your own without someone to run interference for five minutes, try to get a video cable instead. Once it’s installed, though, it’s seamless – and makes it very easy to switch devices, so that you can do things like use an iPad as a virtual whiteboard.

Especially during the Q&A, it is easy to get deeply enough into conversation that you don’t touch your trackpad or keyboard for a few minutes, and your machine goes to sleep. Now your humorous screensaver is on the big screen, and everyone is distracted – and even more so while you flail at the keyboard to enter your password in a hurry. To avoid this happening, there’s another wonderful free utility, called Caffeine. This puts a little coffee cup icon in your menu bar: when the cup is full, your Mac’s sleep settings are overridden and it will stay awake until the lid is closed or you toggle the cup to empty.

Whether the audience is local or remote, Do Not Disturb mode is your friend, especially when mirroring your screen. Modern presentation software is generally clever enough to set your system to not display on-screen alerts while you are showing slides (unless you are one of those monsters who share their decks in “slide sorter” view, in which case you deserve everything you get), but that won’t save you once you start running a demo in your web browser. Some remote meeting software lets you share a specific application rather than your whole screen, but all that means is that instead of the remote audience seeing the specific text of your on-screen alerts, they see ugly great redacted rectangles interfering with the display. Either way, it does not look great.

I hope these tips have been useful. Good luck with your presentations!


Photos by Headway and Olu Eletu on Unsplash

Work From Home

I was reading an interesting blog post about working from home, by Julia Evans. I also work from home, so it was interesting to compare my experiences of remote work with hers.

The two main benefits are the obvious ones – I get to live where I want (Montreal) and have the job that I want. And San Francisco tech companies in general pay a lot more than Montreal tech companies, so working for a SF tech company while living outside SF is great.

I can confirm this 100%. I live in Italy rather than in Canada, but the same factors apply: I’d rather be here than there, and the salary is very competitive with what I could make locally.

  • I have a lot of control over my working environment. It’s relatively easy to close Slack and focus.

True! I hated working in an open-plan office, and wore headphones a lot so that I could get some peace and quiet. It did not help that none of my team were in that office, so I was only going there to satisfy some HR mandate.

  • I basically haven’t had to set an alarm for 4 years.

Ha. Nnnope – I still have my alarm set far too early every weekday to take the kids to school. On the other hand, I can have breakfast with them and take them to school, and still get a full day of work in. Part of that is down to time zone shift, which is both good and bad; more on that later.

  • There’s a nice community of remotes across the company. I’ve gotten to know a lot of wonderful people.

Yes! My team is spread out across four sites and three time zones, and so are many other teams, so there isn’t the sort of downside to being remote that there can be if it’s an exception.

  • I can work from another city/country if I want (like I went to Berlin for 6 weeks in 2016 and it wasn’t disruptive, especially since my 2 teammates at the time lived in Europe).

I haven’t tried this one (those kids and their schools again), but I know other people who’ve done it very successfully. This also works if your area of coverage gets large enough. I knew someone who was in charge of one particular technology alliance partner across the whole of EMEA, which meant that he spent a lot of his time flying. Soon, he realised that this meant he didn’t have to be anywhere in particular, as long as he was near an international airport – so he decamped to Barcelona for a year. Why not?

  • I live in a slightly shifted timezone (3 hours ahead of many people I work with), so I can get stuff done before anybody gets to work.

I am shifted a lot more than that: the difference from Italy to San Francisco is nine hours. The upside is I get a nice quiet start to my day to read, write, and think, and then the US wakes up and I start getting into conference calls. The downside is that there are only a few usable hours of overlap in our schedules, so compatible time slots go quickly. Sometimes you have to do careful triage of what actually needs an interactive voice call, and what can be discussed asynchronously over email or Slack. I make it a hard rule to keep family dinner time free, but I do take calls after dinner several times a month, when we can’t work out other slots.

Shout Louder

That last point is important: I joined a team that had previously been able to shout across a table at each other, and suddenly half the team was remote. We had to figure out how to communicate and manage projects across the time zone gap, and there were some stumbles and false starts along the way.

What we ended up figuring out was that different channels work for different tasks. Perhaps not revolutionary, I know, but we took the time while we were all together in person and thrashed it out with a whiteboard: what type of requests should go to which channel, what response times could be expected, and so on.

This is what is known as a "communications charter", and is recommended by HBR for virtual teams:

Communication on virtual teams is often less frequent, and always is less rich than face-to-face interaction, which provides more contextual cues and information about emotional states — such as engagement or lack thereof. The only way to avoid the pitfalls is to be extremely clear and disciplined about how the team will communicate. Create a charter that establishes norms of behavior when participating in virtual meetings, such as limiting background noise and side conversations, talking clearly and at a reasonable pace, listening attentively and not dominating the conversation, and so on. The charter also should include guidelines on which communication modes to use in which circumstances, for example when to reply via email versus picking up the phone versus taking the time to create and share a document.

Get In Their Face

Note that when we were working out our communications charter, we did it with a whiteboard. This is because we made it a goal to get together in person once a quarter or thereabouts. Don’t skimp on this! It’s not cheap: airfare, hotels, and meals all add up. However, the time you spend together face to face will pay off over and over. There is so much that gets done when the team is together, and the benefits continue after the remote team members fly out, because that face time has strengthened relationships and clarified questions.

In fact, face time is so important that it’s the very first point in that HBR list:

It may seem paradoxical to say in a post on virtual teams, but face-to-face communication is still better than virtual when it comes to building relationships and fostering trust, an essential foundation for effective team work. If you can’t do it, it’s not the end of the world (focus on doing some virtual team building). But if you can get the team together, use the time to help team members get to know each other better, personally and professionally, as well to create a shared vision and a set of guiding principles for how the team will work. Schedule the in-person meeting early on, and reconnect regularly (semi-annually or annually) if possible.

Feed The Mind

However, there is one final point that I have not seen listed anywhere else, and that is food. When I work from home, I can make my own meals, and share them with whoever else is around: my kids if they don’t have school, my wife if she is also working from home, or friends who might be taking their lunch breaks at the same time as me.

What do you think? Beats a soggy sandwich at your desk, right?


Top image by Seemann via Morguefile; bottom image courtesy of author.

Unbundling and Rebundling

What is a plumber, anyway?

Tim Harford, better known as the Undercover Economist, always has reliably entertaining thoughts. His latest piece explains Why Microsoft Office is a bigger productivity drain than Candy Crush Saga. Drawn in by that title like moths to a flame, we find the following critique:

Microsoft Office may be as much a drag on productivity as Candy Crush Saga. To see why, consider Adam Smith’s argument that economic progress was built on a foundation of the division of labour. His most celebrated example was a simple pin factory: "One man draws out the wire, another straights it, a third cuts it, a fourth points" and 10 men together made nearly 50,000 pins a day.
[…]
In a modern office there are no specialist typists; we all need to be able to pick our way around a keyboard. PowerPoint has made amateur slide designers of everyone. Once a slide would be produced by a professional, because no one else had the necessary equipment or training. Now anyone can have a go — and they do.
Well-paid middle managers with no design skills take far too long to produce ugly slides that nobody wants to look at. They also file their own expenses, book their own travel and, for that matter, do their own shopping in the supermarket. On a bill-by-the-minute basis none of this makes sense.

Superficially, this take is as amusing as ever, but on reflection, I do find it a little disingenuous. Leaving the slides out of it (those actually are part of my job), it is true that nowadays we all book our own travel and so on – but on the other hand, very few of us office drones keep our own vegetable gardens or even know how to, and those who do mostly treat it as a hobby.

All that has happened is that the frontier of specialisation has moved, and what was once common knowledge for everyone is now a specialised job, while what once required specialists is now expected of everyone. Where once everybody knew how to grow their own food, now we delegate that to small groups of professionals using specific tools. Meanwhile, data processing, which used to be literally the preserve of a priestly caste, has been democratised to the point that any office job will require people to know at least the basics.

I would love to have an assistant to do my expenses and so on, and instead here I am toiling in the salt mines – but let’s face it, if your expense platform is at all decent, and you have reasonable IT skills, this should take roughly no time at all. Booking your own travel ensures that you get what you want, making your own compromises within the corporate travel policy.

This definitional error (measuring productivity by the task rather than by the outcome) has some interesting consequences. It is certainly true that most people are probably slower typists than the professionals who worked in typing pools, when such things still existed. If your measurement of productivity is words banged out on a keyboard per minute, it is almost certainly less efficient for professionals to do their own typing. And yet, hitting the keys yourself is always a far quicker way of getting your ideas out than dictating to even the fastest typing pool. How do you measure the productivity difference between an exec tapping out three lines on their iPhone while waiting to board a flight, versus having to wait until they get back to the office on Monday? Sure, those three lines are terse, jargon-filled, and probably stuffed with typos or interesting autocorrect-isms, but they get the point across.

All of this transformation informs Tim Harford’s predictions for 2118:

In an insightful essay from 1996, Paul Krugman predicted that there would be "no robot plumbers" in 2096. I agreed with him then. I am no longer so confident. It seems quite plausible that in 100 years’ time — and perhaps much sooner — plumbers, taxi drivers and many journalists, too, will simply have nothing serious to contribute to the labour market.

I would be seriously impressed by a robot with the combination of agility, strength, and inferential reasoning required to work as a plumber. I may well be proved wrong by events (and if so, I will take refuge in probably not being around to be embarrassed by my wrongness), but I expect it won’t quite work out that way. Instead, I suspect that the job of "plumber" is one of the safest out there, and for many of the same reasons that made it impossible to outsource: in addition to knowledge, it requires great situational awareness, problem-solving capabilities, and flexibility – all of which are weak points for automated systems.

More vulnerable are the jobs that delaminate neatly into separate tasks, some of which will be re-bundled into new and different jobs, while others are automated away. The job of "typist" has gone the way of the dodo because it encapsulated a single task which could either be made part of other jobs (we all do our own typing) or automated (mail merge and variables make it easy to generate and circulate even quite complex documents, without human typing).
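
For illustration only, this is the kind of template-driven generation that "mail merge and variables" refers to. The sketch below uses Python's standard string.Template, and every name and field in it is made up:

```python
from string import Template

# A hypothetical letter template: variables stand in for hand-typed specifics.
letter = Template(
    "Dear $name,\n\n"
    "Your renewal for $product is due on $due_date.\n\n"
    "Kind regards,\nThe Accounts Team\n"
)

# Made-up records standing in for a customer database.
customers = [
    {"name": "Ada Lovelace", "product": "Widget Pro", "due_date": "1 March"},
    {"name": "Alan Turing", "product": "Widget Lite", "due_date": "15 March"},
]

# Generate one personalised document per record, with no human typing involved.
for record in customers:
    print(letter.substitute(record))
    print("---")
```

One template plus a table of records replaces what would once have been an afternoon's work in the typing pool.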

The job market will certainly be radically different in 2118 – that prediction is fairly safe – but I expect that there will still be jobs, and people doing them - people augmented by automated capabilities.


Photo by Jouni Rajala on Unsplash

Biting My Tongue

So I'm working with a prospect in the fashion and luxury goods area. We've been doing a Proof of Value for the last few weeks, and we're now at the point of presenting the results.

So I built this slide deck as if it were a fashion collaboration, "Moogsoft X $PROSPECT_NAME", "Spring-Summer 2017", and so on. I'm super proud of it - not just the conceit, but also the results we have been able to provide for very little effort - but I'm also kind of bummed that I can never show it to anyone outside the company.

This prospect does not want its name used anywhere, so even if - I mean, when we close the deal, they will only ever appear anywhere as "fashion & luxury goods house".

This is not the first time this has happened to me. At a previous startup, we sold to, umm, let’s call them a certain automotive manufacturer and motorsports team based near Modena. While negotiating the price, the customer asked for "a last effort" in terms of discounting. In exchange, we asked them to provide us with an official reference. Once they had consulted their brand marketing people, it turned out that the fee for use of their trademark would have been nearly twice the total value of the software deal… We respectfully declined their kind offer.

After all, the main thing is to do the deal and provide value; even if we can't get the logo on our site, it's still a win.

My only remaining problem (beyond actually getting the deal over the line) is that my wife wants me to be paid for this current opportunity in handbags, while the Moogsoft colleague who helped me out wants her share in eau de toilette…

New Paths to Helicon

I was chatting to a friend last week, and we got onto the topic of where sysadmins come from. "When two sysadmins love each other very much…" - no, that doesn't bear thinking about. BRB, washing out my mind with bleach.

But seriously. There is no certification or degree that makes you a sysadmin. Most people come into the discipline by routes that are circuitous, sideways, if not entirely backwards. The one common factor is that most people scale up to it: they start running a handful of servers, move or grow to a 50-server shop, build out some tools and automation to help them get the job done, then upgrade to 500 servers, and so on.

The question my friend and I had was, what happens when there are no 10- and 50-server shops around? What happens when all the jobs that used to be done with on-premises servers are now done in SaaS or PaaS platforms? My own employer is already like that - we’re over a hundred people, and we are exactly the stereotypical startup that features in big infrastructure vendors' nightmares: a company that owns no physical compute infrastructure, beyond a clutch of stickered-up MacBooks, and runs everything in the cloud.

The 90s and Noughties, when I was cutting my teeth in IT, were a time when there was relative continuity between desktop and enterprise computing, but that is no longer the case. These days you’ve got to be pretty technical as a home user before anything you’re doing will be relevant at enterprise scale, because those in-between cases have mostly gone away. I got my start in IT working at the local Mac shop, but neighbourhood computer stores have gone the way of the dodo. There simply are not many chances to manage physical IT infrastructure any more.

Where Are Today’s On-Ramps?

There is one part of that early experience of mine which remains valid and replicable today. My first task was pure scut-work, transferring physical mail-in warranty cards into the in-house FileMaker Pro "database". After two weeks of this, I demanded (and received) permission to redo the UI, as it was a) making my eyes bleed, and b) frustrating me in my data entry. Once I’d fixed tab order and alignments, I got ambitious and started building out data-queries for auto-suggestions and cross-form validation and all sorts of other weird & wonderful functions to help me with the data entry. Pretty soon, I had just about automated myself out of that job; but in doing so, I had proven my value to the company, and received the traditional reward for a job well done - namely, another job.

That is today’s path into computing. People no longer have to edit autoexec.bat on their home computers just to play games, but on the other hand, they will start to mess around behind the scenes of their gaming forum or chat app, or later on, in Salesforce or ServiceNow or whatever. This is how they will develop an understanding of algorithms, and some of them will go on from there, gradually growing their skills and experience.

A Cloudy Future?

To be clear, this cloud-first world is not yet a reality - even at Moogsoft, only a fairly small percentage of our customer base opts for the SaaS deployment option. More use it for the pilot, though, and interest is picking up, even in unexpected quarters. On the other hand, these are big companies, often with tens or hundreds of thousands of servers. They have sunk costs that mean they lag behind the bleeding edge of the change.

Even if someone does have 50 servers in an in-house server room today, as the hardware reaches its end-of-life date, more and more organisations are opting not to replace them. I was talking to someone who re-does offices, and a big part of the job is ripping out the in-house "data closet" to make more work space. The migration to the cloud is not complete, and won't be for some time, but it has definitely begun, even for existing companies.

What will save human jobs in this brave new world will be "intersection theory" - people finding their niches where different sub-fields and specialisations meet. Intuitive leaps and non-obvious connections between widely separated fields are what humans are good at. Those intersections will be one of the last bastions of human jobs, augmented by automation of the more narrowly-focused and predictable parts of the job.

There will be other hold-outs too, notably tasks that are too niche for it to be worth the compute time to train up a neural network. My own story is somewhere in between the two, and would probably remain a viable on-ramp to IT - assuming, of course, that there are still local firms big enough to need that kind of service.

Constant Change Is The Only Constant

To be clear, this is not me opining from atop an ivory tower. Making those unexpected, non-obvious connections, and doing so in a way that makes sense to humans, is the most precise definition I’d be willing to sign up to of the job I expect to have twenty years from now.

As we all continue to reinvent ourselves and our worlds, let's not forget to bring the next generations in. Thinking that being irreplaceable is an unalloyed win is a fallacy; if you can't be replaced, you also can't be promoted. We had to make it up as we went along, but now it's time to systematise what we learned along the way and get other people in to help us cover more ground.

See you out there.

Replace or Augment?

One of the topics that currently exercise the more forward-looking among us is the potential negative impact of automation on the jobs market and the future of work in general. Comparisons are frequently made with the Industrial Age and its consequent widespread social disruption - including violent reactions, most famously the Luddite and saboteur movements.

Some cynics have pointed out that there was less concern when it was only blue-collar jobs that were being displaced, and that what made the chattering classes sit up and pay attention was the prospect of the disruption coming for their jobs too. I could not possibly comment on this view - but I can comment on what I have seen in years of selling automation software into large companies.

For more than a decade, I have been involved in pitching software that promised to automate manual tasks. My customers have always been large enterprises, usually the Global 2000 or their immediate followers. Companies like this do not buy software on a whim; rather, they build out extensive business cases and validate their assumptions in detail before committing themselves[1]. There are generally three different ways of building a business case for this kind of software:

  • Support a growth in demand without increasing staff levels (as much);
  • Support static demand with decreasing staff;
  • Quality improvement (along various axes) and its mirror image, risk avoidance.

The first one is pretty self-evident - if you need to do more than you can manage with the existing team, you need to hire more people, and that costs money. There are some interesting second-order consequences, though. Depending on the specifics of the job to be done, it will take a certain amount of time to identify a new hire and train them up to be productive. Six months is a sensible rule of thumb, but I know of places where it takes years. If the rate of growth gets fast enough, that lag time starts to be a major issue. You can't just hire yourself out of the hole, even with endless money. The hole may also be getting deeper if other companies in the same industry and/or region are all going through the same transformation at the same time, and all competing for the same talent.

If instead you can adopt tooling that will make your existing people more efficient and let you keep up with demand, then it is worth investing some resources in doing so.

That second business case is the nasty one. In this scenario, the software will pay for itself by automating people's jobs, thus enabling the company to fire people - or in corporate talk, "reduce FTE[2] count". The fear of this sort of initiative is what makes rank-and-file employees often reflexively suspicious of new automation tools - over and above their natural suspicion that a vendor might be pitching snake-oil.

Personally I try not to build business cases around taking away people's jobs, mainly because I like being able to look myself in the mirror in the mornings (it's hard to shave any other way, for one thing). There is also a more pragmatic reason not to build a business case this way, though, and I think it is worth exploring for its wider implications.

Where Are The Results?

The thing is, in my experience business cases for automation built around FTE reduction have never been delivered successfully - at least, not when they are focused purely on automating existing tasks. That is an important caveat, and I will come back to it.

Sure, the business case might look very persuasive - "we execute this task roughly a dozen times a day, it takes half an hour each time, and if you add that up, it's the equivalent of a full-time employee (an FTE), so we can fire one person". When you look at the details, though, it's not quite so simple.
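
As a back-of-the-envelope check on that pitch, here is a minimal sketch of the arithmetic - a hypothetical Python snippet with purely illustrative figures, using the FTE definition from the footnote at the end of this post and assuming a 220-day year of 8-hour days:

```python
# Illustrative FTE arithmetic only; figures are assumptions, not real data.
WORKING_DAYS_PER_YEAR = 220
FTE_HOURS_PER_YEAR = WORKING_DAYS_PER_YEAR * 8  # 1760 hours, per the footnote's range

def fte_fraction(minutes_per_run: float, runs_per_day: float) -> float:
    """Annual effort of a repeated task, expressed as a fraction of one FTE."""
    hours_per_year = (minutes_per_run / 60) * runs_per_day * WORKING_DAYS_PER_YEAR
    return hours_per_year / FTE_HOURS_PER_YEAR

# The pitch quoted above: a dozen half-hour runs per day.
print(f"{fte_fraction(minutes_per_run=30, runs_per_day=12):.2f} FTE")  # prints 0.75 FTE
```

Even on these generous assumptions the task adds up to roughly three-quarters of an FTE rather than a whole one - and that is before the more fundamental problem described below.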

The fact is that people rarely work at discrete tasks. Instead, they spend their time on a variety of different tasks, more or less integrated into a whole process. There is a tension between the two extremes: at the one end you have workers on a repetitive assembly line, while at the other you have people jumping around so much they can never get anything done. Most organisational functions are somewhere in between those two poles.

If automation is focused on addressing those discrete tasks, it absolutely will bring benefits, but those benefits mostly amount to freeing existing employees to catch up with other tasks that were being neglected. Every IT department I have ever seen has a long tail of to-dos that keep getting pushed down the stack by higher-priority items. Automation is the force multiplier that promises to let IT catch up with its to-do list.

This sort of benefit is highly tactical, and is generally the domain of point solutions that do one thing and do it well. This will enable the first kind of business case, delivering on new requirements faster. It will not deliver the second kind of business case. The FTEs freed up through automation get redeployed, not fired, and while the organisation is receiving benefit from that, it is not what was built into the assumptions of the project, which will cause problems for its sponsors. Simply put, if someone ever checks the return on the investment (an all too rare occurrence in my experience), the expected savings will not be there.

Strategic benefits of automation, on the other hand, are delivered by bundling many of these discrete tactical tasks together into a new whole.

Realising those strategic benefits is not as straightforward as dropping a new tool into an existing process. Actually achieving the projected returns will require wholesale transformation of the process itself. This is not the sort of project that can be completed in a quarter or two (although earlier milestones should already show improvement). It should also not be confused with a technology implementation project. Rather, it is a business transformation project, and must be approached as such.

Where does this leave us?

Go Away Or I Will Replace You With A Very Small Shell Script

In my experience in the field, while the tactical benefits of automation are achievable, true strategic improvement through automation can only be delivered by bundling together disparate technical tasks into a new whole. The result is that what gets replaced is not skilled workers, but rather the sorts of undifferentiated, discrete tasks that many if not most large enterprises have already outsourced.

This shows who the losers of automation will be: it is the arbitrageurs and rent-seekers, the body-rental shops who provide no added value beyond cheap labour costs. The jobs that are replaced are those of operators - what used to be known as tape jockeys: people who perform repetitive tasks over and over.

The jobs that will survive and even benefit from the wave of automation are those that require interaction with other humans in order to determine how to direct the automation, plus of course the specialists required to operate the automation tools themselves. The greatest value, however, will accrue to those who can successfully navigate the interface between the two worlds. This is why it is so important to own those interfaces.

What might change is the nature of the employment contracts for those new roles. While larger organisations will continue to retain in-house skills, smaller organisations for which such capabilities are not core requirements may prefer to bring them in on a consultative basis. This will mean that many specialists will need to string together sequences of temporary contracts to replace long-duration full-time employment.

This is its own scary scenario, of course. The so-called gig economy has not been a win so far, despite its much-trumpeted potential. Perhaps the missing part to making this model work is some sort of universal basic income to provide a base and a safety net between consulting jobs? As more and more of the economy moves in this direction, at least in part due to the potential of automation, UBI or something similar will be required to bridge the gap between the assumptions of the old economy and the harsh realities of the new one.

So, the robots are not going to take our jobs - but they are going to change them, in some cases into something unrecognisable. The best thing humans can do is to plan to take care of one another.


Images by Annie Spratt, Janko Ferlic, and Jayphen Simpson via Unsplash


  1. Well, in theory. Sometimes you lose a deal because the other vendor's CEO took your prospect's entire management team for a golfing weekend in the corporate jet. But we don't talk about that. 

  2. An FTE is a Full-Time Equivalent: the amount of work expected of one employee, typically over a year, allowing for holidays and so on. Typically that means somewhere between 200 and 220 working days of 8 hours each, so 1600 to 1760 hours in a year. The "FTE cost" of an activity is calculated by taking the time required to perform an activity once, multiplying that by the number of times that activity needs to be performed, and dividing by the FTE rate. 

Misunderstanding Tools

The sour taste in my espresso this morning is courtesy of yet another dudebro tech VC, opining about how ties are uncool, maaaaann! and basically nobody should write on LinkedIn.

If you have a tie on in 2015, it probably means you are a salesman in a non-transparent industry and are generally not to be trusted at any cost. When I see a tie on somebody, I get that funny feeling you get right before the dentist. Let’s face it, the people left wearing ties every day are the confidence-men stealing your money. Think insurance, financial services, bad shoes and, of course, car salesmen.

Well now.

I am on record as not only a tie wearer, but also a tie apologist. To quote myself once again:

In fact, suits & ties are actually the ultimate nerd apparel. You have to put some effort into shopping, sure, and they tend to cost a bit more than a random vendor T-shirt and ancient combats, but the advantage is that you can thereafter completely forget about wondering what to wear. You can get dressed in the dark and be sure that the results will be perfectly presentable. If you want you can go to a little bit more effort and inject some personality into the process, but the great thing is that you don’t have to. By wearing a suit & tie, you lead people to pay attention to what you say and do, not to what you are wearing. And isn’t that the whole point?

This mindset of “distrust anyone dressed like a grown-up” is just one more symptom of the Revenge of the Nerds chauvinism that is rife in the tech industry. The nerds complain about being victimised by the jocks, but it’s not the victimisation itself that they object to, it’s just being on the receiving end of it. “They mocked me for dressing differently from them, but now I mock them for dressing differently from me! Haha, I win!”

No, no you don’t win. You just look like an overgrown, entitled man-child. Grown-ups wear ties as a sign of respect to one another. If some sleazeballs wear suits & ties, that is because they are trying to fake that respect - but just because something is faked does not mean that it’s not aping something real.

If I visit a customer or a prospect, I am a guest, and I dress and act appropriately. I’m not more “genuine” or “passionate” if I show up in jeans, sneakers and a Zuckerberg-approved hoodie. If I’m doing it right, my passion and competence will show regardless of what I wear. Today, wearing a hoodie to work is not transgressive or cool - it’s just imitating a more successful person. And let’s not even pretend that your hoodie doesn’t get judged for materials, cut, brand, etc., as much or more than suits ever were.

Basically, he is wilfully misunderstanding what people use LinkedIn for and why they would want to write there. Yes, it’s an advertising tool - that’s what we are all there for! LinkedIn is buttoned-down, professional me - although I like to think that I still put some personality in there. Twitter is where I let it all hang out, and talk about what I am up to at work right beside books, music, and whatever has got the Internet in a bunch lately.

Amusingly, Dudebro VC's piece ends up being an example of exactly the sort of writing he decries, since it’s a listicle:

1) LinkedIn has become a giant branded entertainment platform for selling us crappy fake expertise.

2) Crappy writing

3) No real authentic sentiment

4) LinkedIn notifications are predatory

The real kicker is at the end, though, where he says that it’s perfectly okay for him to write a listicle, because it’s not on LinkedIn, plus he got paid for it and doesn’t care about how many times it gets viewed.

Firstly, this is insultingly disingenuous. Writing this sort of flamebait, custom-designed to go viral and provoke reactions[1], and then making a big show of turning away and not watching the ensuing furore is a cheap trick - but one that is perfectly in line with the rest of the piece.

Secondly, this is pretty transparently elitist. He's attempting to pull up the ladder behind him, mocking anyone who has not achieved his supposed level of clout in the industry. What he is saying with this piece is, if you’re a big shot, you can wear a hoodie to work and be paid for your opinions. If you have to dress professionally and are still having to work hard to get your opinions out there, you’re a loser.

Just in case you thought Martin Shkreli - he of the 5,000% drug price increase and the one-off Wu-Tang Clan album - was an outlier: now you know that he is not. There are plenty of utter tools in VC.


I also took special pleasure in cross-posting this piece to LinkedIn Pulse, just to make my point one more time.


Image by Olu Eletu via Unsplash


  1. Such as this one - hi! Congratulations, it worked! 

Happiness in Typesetting

In my usual spirit of always wanting to try an alternative way of doing something - partly in hope that the alternative might be better, partly due to my latent hipster gene trying to express itself - I have always been curious about LaTeX. It's a big commitment, though, and until now I had lacked data to drive my decision.

Somebody (shockingly, they are in Germany) has done a scientific comparison of LaTeX vs Word.

So it turns out that a specialised tool is really good at a specialised task, while a jack-of-all-trades tool does better at general tasks. Big whoop.

The interesting question to me would be a breakdown of how many Word users know how to use even fairly basic features. The style sheet functionality seems to be a mystery even to people who really should know better.

People complain a lot about Word being obtrusive, and there is definitely truth to that complaint: try nesting tables, padding them, or doing two-column layouts that don't flow, and then come back and tell me about Word - but only once you stop swearing and twitching, please. However, many of the complaints that I hear tend to be more about people not knowing about a feature in Word, or not using it properly.

Part of that problem is of course due to the design and usability of Word itself, but it's noticeable that all of the alternatives to Word run into the exact same problem of complexity as soon as they get past the basics. It's often said that 80% of users use only 20% of the features of software - but Word is the perfect example of the fact that everybody has a different 20% subset that is critical to them.

Anyway, while it looks like LaTeX only really shines for mathematical equations, LaTeX users do appear to be happier, so I may yet have to give it a go.

Watch this space…

Cube Dwelling

I complain a lot on Twitter about open-plan offices, but they are not the worst working environment. Every time I spend any length of time in a US-style cube farm, I long for an open-plan office. Cubes are the worst of both worlds: enclosed enough that you feel hemmed in and cannot see daylight, but without any meaningful sound isolation.


Working from the living room table is good, but I really need to sort out the connectivity to my home office so that I can use it properly. A project for my copious free time!