Draining The Moat

Zoom is in a bit of a post-pandemic slump, describing its own Q2FY23 results as "disappointing and below our expectations". This is quite a drop for a company that at one point was more valuable than ExxonMobil. Zoom does not disclose the total number of users, only "enterprise customers", of which there are 204,100. "Enterprise customers" are defined in a footnote to the slides from those Q2FY23 results as "customers who have been engaged by Zoom’s direct sales team, channel partners, or independent software vendor (ISV) partners." Given that Zoom claims only 3,116 customers contributing >$100k in revenue over the previous year, that is hardly a favourable comparison with Cisco's claim of six million users of WebEx Calling in March 2022.

As I wrote in The Thing With Zoom, Zoom's original USP was similar to WebEx's, namely the lowest time-to-meeting with people outside the company. As a salesperson, how quickly can I get my prospect into the meeting and looking at my presentation? Zoom excelled at this metric, although they did cut a number of corners to get there. In particular, their software would stick around even after users thought they had uninstalled it, just in case they ever needed it again in the future.

Over the past year or two, though, Teams usage has absolutely taken off. At the beginning the user experience was very rough, even by Microsoft standards, confusing users with the transition from its previous bandwagon-jumping branding as Skype for Business. Joining a Teams meeting as an outsider to the Teams-using organisation was (and largely still is) a mess, with the client failing to connect as often as not, or leaving meeting invitees in a loop of failed authentication, stuck between a web client and a native client, neither of which works.

And yet, Teams is still winning in the market. Why?

There is more to this situation than just Microsoft's strength in enterprise sales. Certainly, Microsoft did not get distracted trying to cater to Zoom cocktails or whatever, not least because nobody in their right mind would ever try to party over Teams, but also for the very pragmatic and Microsoftian reason that those users don't pay.

Teams is not trying to play Zoom and WebEx at their own game. Microsoft doesn't care about people outside their client organisations. Instead, Microsoft Teams focuses on offering the richest possible meeting experience to people inside those organisations.

I didn't fully appreciate this distinction, since throughout this transition I was working for companies that used the standard hipster tech stack of Slack, Google Docs, and Zoom. What changed my understanding was doing some work with a couple of organisations that had standardised on Teams. Having the text chat, video call, and documents all in one place was wonderfully seamless, and felt native in a way that Google's inevitable attempt to shoehorn Hangouts into a Google Docs sidebar or comment thread never could.

This all-in-one approach was calculated to appeal to enterprises that like simplicity in their tech stack — and in the associated procurement processes. Pay for an Office 365 license for everybody, done. Teams would probably have won out anyway just on that basis, but the trend was enormously accelerated by the very factor everyone assumed would favour Zoom: remote work.

While everyone was focusing on Zoom dating, Zoom board games, Zoom play dates, and whatever else, something different was happening. Sales people were continuing to meet with their customers over Zoom/WebEx/whatever, but in addition, all of the intra-company meetings were flipping online too. This transition led to an explosion in the ratio of internal video meetings to outside-facing ones, changing the priority from "how quickly can I get the other people in here, especially if they haven't got the meeting client installed" to "everyone has the client installed, how productive can we be in the meeting".

As the ratio of outside video meetings to inside meetings flips, Zoom's moat gets filled in

Zoom could not compete on that metric. All Zoom could do was facilitate someone sharing their screen, just like twenty years ago. Maybe what was being shared was a Google Doc, and the other people in the meeting were collaborating in the doc — but then what was Zoom's contribution? Attempts to get people to use built-in chat features or whiteboarding never took off; people used Slack for chatting, and I never saw anyone use the whiteboard feature in anger.

Once an organisation had more internal remote video meetings than outside-facing ones, these differences became glaring deficiencies in Zoom compared to Teams.1

Zoom squandered the boost that the pandemic gave them. Ultimately, video chat is a feature, not a product, and Zoom will either wither away, or get bought and folded into an actual product.


🖼️ Photos by Chris Montgomery and Christina @wocintech.chat on Unsplash


  1. The same factors are also driving a slight resurgence in Hangouts, based on my anecdotal experience, although Google does not disclose clear numbers. If you're already living in Google Docs, why not just use Hangouts? (Because it's awful UX, but since when did that stop Google or even slow them down?) 

Fun In The Sun

A reliable way for companies to be seen as villains these days is to try to roll back concessions to remote work that were made during the pandemic1. Apple is of course a perennial scapegoat here, and while it seems reasonable that people working on next year's iPhone hardware might have to be in locked-down secure labs with all the specialised equipment they need, there is a lurking suspicion that much of the pressure on other Apple employees to return to the office is driven by the need to justify the massive expense of Apple Park. Jony Ive's last project for Apple supposedly cost over $4B, after all. Even for a company with Apple's revenues, that sort of spending needs to be justified. It's not a great look if your massive new vanity building is empty most of the time.

The same mechanisms are playing out in downtown business districts around the world, with commercial landlords worried about the long-term value of their holdings, and massive impacts on the services sector businesses (cafes, restaurants, bars, dry-cleaners, etc etc) that cluster around those office towers.

With all of this going on, it was probably inevitable that companies would try to jump on the bandwagon of being remote-work friendly — some with greater plausibility than others. I already mentioned Airbnb in a past post; they have an obvious incentive to facilitate remote work.

Other claims are, let's say, more far-fetched.

In a recent example of the latter genre, it seems that Citi is opening a hub in Málaga for junior bankers:

  • Over 3,000 Málaga hopefuls applied for just 27 slots in the two-year program, which promises eight-hour days and work-free weekends -- practically unheard of in the traditional banking hubs in Manhattan and London. In exchange, Málaga analysts will earn roughly half the starting salaries of their peers.
  • The new Spain office will represent just a minuscule number of the 160 analysts Citi hired in Europe, the Middle East, and Africa, on top of another 300+ in New York.

This is… a lot less than meets the eye. 27 people, out of a worldwide intake of ~500 — call it 5% — will be hired on a two-year contract in one admittedly attractive location, and in exchange for reasonable working hours, will take a 50% hit on their starting salary. In fairness, the difference in cost of living between Málaga and London will make up a chunk of that difference, and having the weekends free to enjoy the place is not nothing, but apart from that, what is the upside here?

After the two years are up, the people who have been busy brown-nosing and visibly burning the midnight oil at head office will be on the promotion track. That is how banking works; if you can make it through the first few years, you have a) no social life any more, and b) a very remunerative career track in front of you. Meanwhile, it is a foregone conclusion that the people from the Málaga office will either not have their contract renewed after the two years are up, or will have to start their career track all over again in a more central location.

In other words, what this story boils down to is some short-term PR for Citi, a bunch of cheap(er) labour with a built-in termination date, and not much more.

Then again, it could be worse (it can always be worse). Goldman Sachs opted for the stick instead of the carrot with its own return to the office2 mandate, ending the free coffee that had been a perk of its offices.

Even after all these years in the corporate world, I am amazed by these utterly obvious PR own goals. The value of the coffee cart would have been infinitesimal, completely lost in Goldman's facilities budget. But what is the negative PR impact of this move? At one stroke they have hollowed out all the rhetoric of teamwork and empowerment that is the nominal justification for the return to the office.

Truly committing to a remote work model would look rather different. I love the idea of Citi opening a Málaga hub. The difference is that in a truly remote-friendly organisation, that office would not have teams permanently based in it (apart from some local support staff). Instead, it would be a destination hub for teams that are truly remote to assemble on a regular basis for planning sessions. The rest of the time, everyone would work remotely wherever they currently live.

Some teams do need physical proximity to work well, and some customer-facing roles benefit from having access to meeting space at a moment's notice — but a lot of the work of modern companies does not fall into these categories. Knowledge workers can do their work anywhere — trust me, I've been working this way for more than fifteen years. Some of my most productive work has been done in airport lounges, not even in my fully equipped home office! With instant messaging, video calls, and collaboration tools, there is no real downside to working this way. Meanwhile, the upside is access to a global and distributed talent pool. When I did have to go into an office, it was so painful to be in an open-plan space with colleagues who were not on my actual team that I wore noise-cancelling headphones. If that's the situation, what's the point of commuting to an office?

This sort of reorganisation would admittedly not be great for the businesses that currently cluster around Citi offices and cater to the Citi employees working in those offices — but the flip side would be the massive benefits to businesses in those Citi employees' own home neighbourhoods. If you're not spending all your waking hours in Canary Wharf or Wall Street, you can do your dry cleaning at your local place, you can buy lunch around the corner instead of eating some over-priced plastic sandwich hunched over your desk, and you can get a better quality of life that way — maybe even in Málaga!

The only downside of working from home is that you have to pay for your own coffee and can't just get Goldman to foot the bill.


🖼️ Photos by Carles Rabada, Jonas Denil, and Tim Mossholder on Unsplash


  1. Not that the pandemic is quite over yet, but let's not get into that right now. 

  2. Never "return to work". This is a malicious rhetorical framing that implies we've all been slacking off at home. People are being asked to continue to work, and to return to the office to do so. They may want to pick up noise-cancelling headphones on their way in. 

Growing Pains

The iPad continues to (slowly, slowly) evolve into a Real Computer. My iPad Pro is my only personal computer — I don't have a Mac of my own, except for an ancient Mac Mini that is plugged into a TV and isn't really practical to use interactively. It's there to host various network services or display to that TV.

For reasons I don't feel like going into right now, I don't currently have a work Mac to plug into my desk setup, so I thought I'd try out the new Stage Manager feature in iPadOS 16.

So, the bottom line is that it does work, and it suddenly makes the iPad feel like a rather different machine.

Some setup is required. Of course it needs iPadOS 16; I've been running the beta on my iPad all summer, and it seems pretty stable. Using Stage Manager with an external display also requires an external keyboard and mouse, and these have to be connected by Bluetooth; the USB keyboard connected to my dock was not recognised. Without those, it only works for screen mirroring, which is a bit pointless in my opinion. Mirroring the iPad's screen to an external display makes sense if you are showing something to someone, but then, why would you need Stage Manager?

Anyway, once I had those connected, the external display started working as a second display. I was able to arrange the two displays correctly from Settings; some new controls appeared under Display & Brightness to enable management of the second display.

It's interesting to see what does and does not work. The USB microphone plugged into the dock — and the analogue headphones daisy-chained from it — worked without any additional configuration, but the speakers connected to the dock's SPDIF port were not visible to iPadOS. Luckily these speakers also support Bluetooth, so I'm still able to use them. The second desktop display does not show up at all, but that's fair enough; even the first generation of M1 Macs didn't support two external displays. External cameras also do not show up, and there's not even a control for them, so it's the iPad's built-in camera or nothing.

There's some other weird stuff that I assume and hope is due to the still-beta status of iPadOS 16.

  • The Settings app does not like being on the external display in the least, and appears all squashed. My display is an Ultrawide, but weirdly, the Settings window is squashed horizontally. Maybe the Settings app in iPadOS has not received much attention given the troubled gestation of the new Settings app in macOS Ventura?
  • Typing in Mail and a couple of other apps (Evernote, Messages, possibly others I haven’t encountered yet) sometimes lagged — or rather, the keystrokes were all being received, but they would not be displayed, until I did something different such as hitting backspace or clicking the mouse. At other times, keystrokes showed up normally.
  • The Music app goes straight into its full-screen display mode when it's playing, even when the window is not full-screen. The problem is that the touch control at the top of that window, which would normally return it to the usual display mode, does not work. Also, Music is one of the apps whose preview in the Stage Manager side area does not work, so it's always blank. This seems like an obvious place to display static cover art, even if we can't have live-updating song progression or whatever.
  • Sometimes apps jump from the external display to the iPad’s built-in, for instance if you open something in Safari from a different app.

What does work is that apps can be resized and rearranged, giving a lot more flexibility than the previous single-screen hover or side-by-side multitasking options. App windows can also be grouped to keep apps together in logical groups, such as the editor I'm typing this into and a Safari window to look up references. Again, this is something that I already did quite a lot with the pre-existing multi-tasking support in iPadOS, but it only really worked for two apps, plus one in a slide-over if you're really pushing it. Now, you can do a whole lot more.

I am glad that I came back to give Stage Manager another chance. I had played with the feature on my iPad without connecting it to anything, and found it unnecessarily complex. I do wonder how much of that is because I'm rocking an 11" rather than a 13". Certainly, I can see this feature being much more useful on a Mac, even standalone. However, Stage Manager on iPadOS truly comes into its own with an external display. This is a big step on the way to the iPad becoming a real computer rather than a side device for a Mac or a bigger iPhone.

It's worth noting that Stage Manager only works with the very latest iPads that use Apple silicon: iPad Air (5th generation), 11-inch iPad Pro (2021), and 12.9-inch iPad Pro (2021). It's probably not the time to be buying a new iPad Pro, with rumours that it's due for a refresh soon, maybe to an M2, unless you really really want to try Stage Manager right now. However, if you have an iPad that can support it, and an external display, keyboard, and mouse, it's worth trying it out to get a better idea of the state of the iPadOS art.


🖼️ Photos by author, except Stage Manager screenshot from Apple

Sights From A Bike Ride

One of the positive aspects I often cite when talking up the place where I live is that I can be out in the fields within ten minutes' ride of my front door in the old town — as in, my windows look out onto the old city walls.1

Once out in the fields, though, you never know what you might find. Here are some scenes from my latest ride.

Roadside shrine to the Madonna della Notte, complete with offerings and ex-voto (thanks for successful prayers)

Not sure what's up with this old car planted in a farm yard, but it looks cool!

Here I just liked the contrast between the red tomatoes waiting for the harvest and the teal frame of my Bianchi.

Bike rides are so great for getting out of my head, whether it’s a technical piece of single-track on my mountain bike where I have to concentrate so hard I can’t think of anything else, or a ride like this where I’m bowling along the flat with a podcast in my (bone-conduction) headphones. The trick is staying off main roads as much as possible — hence the gravel bike.


  1. Which are actually the newest city walls, dating from the sixteenth century CE, post-dating various earlier medieval and Roman walls of which only traces remain. These Renaissance walls were later turned into a linear park known as the "Facsal", a distortion of London's famous Vauxhall gardens, among the first and best-known pleasure gardens in nineteenth-century Europe. In more modern times, the Facsal was part of the street circuit for the 1947 Grand Prix of Piacenza, famously the first race entered by a Ferrari car — although not the site of the Scuderia's first win. Pictures 

Nice Tech, Pity About The Product

Like that of many IT types, my workspace has a tendency to acquire obsolete technology. When I shared a flat in London with somebody else who lives with the same condition, computers significantly outnumbered people; heck, operating systems sometimes outnumbered people, even after our then-girlfriends/now-wives moved in! At one point, we even had an AS/400 desk-side unit that we salvaged, until we realised we really didn't have anything fun to do with it and moved it on again.

In the big clear-out last year, I got rid of a bunch of the old stuff — yes, even some of the cables! One item made the opposite journey, though, from the depths of a box inside a cupboard of toner cartridges underneath a monitor so old it still has a 4:3 aspect ratio, to pride of place in my line of sight from my desk chair.

That item is the installation media for a thoroughly obsolete computer operating system from the 90s.

What Even Is BeOS?

BeOS was the brain-child of a bunch of ex-Apple people, including Jean-Louis Gassée, who worked for Apple through the 80s and was instrumental in the creation of the Newton, among other things. While Apple spent the 90s trying and failing to create a new operating system to replace the aging MacOS, Gassée and his merry band created a brand-new operating system called BeOS. The 90s were probably the last time in history that it was possible to do something like that; the platforms that have emerged since then (iOS and Android) are variations on existing platforms (NeXTSTEP/OS X, which slightly predates BeOS, and Linux respectively).

Initially targeted at AT&T's Hobbit CPUs, BeOS was soon ported to the PowerPC architecture. These were the CPUs that powered Apple computers at the time, the product of an alliance between Apple, IBM, and Motorola. Between them, the three companies hoped to foster the emergence of an ecosystem to rival (or at least provide an alternative to) Intel's dominant x86. In those days, Apple licensed a handful of manufacturers to build MacOS-compatible PowerPC computers, so Be quickly stopped manufacturing their own BeBox hardware and switched to offering the BeOS to people who owned these computers — or actual Apple Macs, I suppose, but even at the time you didn't hear of many people doing that.

This is where BeOS first entered my life. If you can believe it, the way you found out about cool software in those pre-broadband days was to buy a printed magazine that would come with a CD full of demos, shareware, utilities, wallpapers, icon sets, and more. There were a few magazines that catered to the Apple enthusiast market, and in 1997, I happened to pick one up that included Preview Release 2 of the BeOS.1

Luckily for me, I owned a whopping 500MB external SCSI drive, so I didn't have to mess around with reformatting the main HDD of the family computer (which would probably have run to all of 2GB at the time, kids!). I was quickly up and running with the BeOS, which absolutely blew away the contemporary Macintosh operating system.

Why Bother With BeOS?

The performance was the first and most obvious difference between BeOS and MacOS. Just watching GLTeapot spinning around in real time was amazing, especially compared to what I was used to in MacOS on the same hardware. Check out this contemporary review, focusing specifically on BeOS’ multimedia capabilities.

This was also my first exposure to a bash terminal, or indeed any command-line interface beyond MS-DOS, and I can safely say that it was love at first sight, especially once I started understanding how the output of one command could be passed to another, and then the whole thing wired up into a script.
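For anyone who never had that particular epiphany, here is a minimal sketch of the kind of pipeline I mean, with each small tool's output feeding the next (the script and file names are purely illustrative):

    #!/bin/bash
    # Count the ten most common words in a text file by chaining small tools:
    # tr splits the text into one word per line, a second tr lowercases it,
    # sort groups identical words, uniq -c counts each group, and the final
    # sort/head surface the most frequent ones.
    tr -cs '[:alpha:]' '\n' < "${1:?usage: wordfreq.sh FILE}" \
      | tr '[:upper:]' '[:lower:]' \
      | sort \
      | uniq -c \
      | sort -rn \
      | head -10

None of the individual commands is remarkable; the magic is that they compose.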

BeOS offered proper pre-emptive multitasking and memory protection, in a way that Classic MacOS very definitely didn't. This made me consider it as a full-time replacement for MacOS on the family computer, but the lack of hardware support killed that idea. Specifically, the Global Village Teleport fax/modem which was our connection to the early Internet, running at a blazing-fast 14.4kbps, did not work in BeOS.

This lack was doubly annoying, since BeOS shipped with an actual web browser: NetPositive, one of whose claims to fame was its haiku error messages. At the time, Mac users were stuck choosing between Netscape Navigator, Microsoft Internet Explorer, Apple's almost wilfully obscure Cyberdog, and early versions of Opera.

What Happened To BeOS?

This is where we get to the point of the story. What killed BeOS was not any sort of issue with the technology. It was leaps and bounds ahead of both dominant operating systems of the day, with massive developer interest.

Unfortunately, Be did not own its own destiny. After failing to sell itself to Apple, Be staggered on for a few more years. Once it became obvious that Apple was going to kill the MacOS clone business which powered the ecosystem of non-Apple PowerPC hardware that BeOS ran on, an x86 port was quickly added. By this point dual-booting operating systems on x86 had become, if not exactly mainstream, at least somewhat common in technical circles. Unfortunately for Be, the second OS (after Windows, of course) was almost always Linux. A second commercial operating system was always going to be a hard sell in a world where everyone had already paid for a Windows license as part of the purchase price of their PC, to the point that Be literally couldn't even give it away. In fact, Be sued Microsoft over its alleged monopolistic practices, possibly the last gasp of the First Browser War of the late 90s.2

Be was eventually sold to Palm, and after Palm's own travails, the last vestiges of BeOS disappeared from public view only a few years later.

The lesson here is that the best technology does not always win — or at least, does not win unaided. Execution is key, and Be, despite some very agile pivots, failed to execute to the point of making any meaningful dent in the personal-computer-OS market.

What could Be have done differently? It's hard to say, even with the benefit of hindsight. None of the alternative desktop operating systems that sprang up in the late 80s and early 90s have survived. BeOS? Gone. OS/2 Warp? Gone. All the commercial UNIX systems? Gone — but maybe next year will be the year of Linux on the desktop. NeXT? It got acquired by Apple, and the tech is still with us in every current Apple platform — but if Be had been the one to get bought to replace the failed Copland project, NeXT would certainly have been the one to disappear.

That is the one inflection point really worth considering: what if Gassée had managed to negotiate a deal with Apple back then? What would OS X be like today if it were based on BeOS rather than on NeXTSTEP?3 And… what would Apple be like without Steve Jobs, in hindsight the most valuable part of the NeXT acquisition? There would probably still be a mobile product; one of the key Be employees was Steve Sakoman, godfather of the Newton, so it seems fairly certain that a descendant of some sort would have emerged from a Be-infused Apple. But would it have become the globe-spanning success of the iPhone (and iPad) without Steve Jobs to market it?

One day I would like to own both a BeBox and a NeXTcube, but for now I just keep that BeOS PR2 CD as a tech industry memento mori, a reminder to myself not to get caught up in the elegance of the tech, but always to remember the product and the use cases which that tech enables.


  1. I could have sworn it was MacAddict, which was definitely my favourite magazine at the time, but the only references I can find online say it was MacTech, and it's been long enough that I can't be sure. 

  2. Be's travails did inspire at least one high-profile fan, with Neal Stephenson discussing BeOS in his book-length essay In the Beginning... Was the Command Line, as well as giving it a cameo in Cryptonomicon (alongside "Finux", his gossamer-thin Linux-analogue). 

  3. Yes, weird capitalisation has always been part of the computer industry. 

Good Outcomes Grow From Failure

Failure Is Good, Actually

No, this is not going to be some hustleporn screed about failing fast and learning from it. I am talking about actual failure, crashing and burning and flaming out and really really bad outcomes. Here's my point: when these bad things happen to the right people, they can be really good for the rest of us — and not just because we can enjoy the schadenfreude of terrible people messing up in public.

Here's how it works: a terrible person, let's call him Travis (for that is his name), spots an actual gap in the market: hailing taxis sucks, and when you can get one, they all mysteriously have broken credit card terminals. Travis therefore founds a company called, just for the sake of realism, Uber, and goes after that opportunity in the worst way imaginable.

Here's the thing: Travis and Uber weren't wrong about the opportunity, which is why Uber took off the way it did. Uber even had a very explicit strategy of weaponising the love users had for the service to put pressure on local governments to allow the service to launch in different locales. This strategy succeeded in both the short and the long term, but in very different ways.

In the early years, Uber was the latest poster child for the "move fast and break things" Silicon Valley tech bro attitude. Sure, Parisian taxi drivers rioted and set Uber cars on fire, and Italian taxi drivers managed to get UberX (known locally as Uber Pop — don't ask) banned, but in most places, Uber triumphed, mainly because the service was genuinely so much better than the status quo: you could summon a car right to your location, and when you arrived at your destination, you just got out and strolled off, no haggling or searching for the right currency.

So much for the short term. In the longer term, all that moving fast and breaking things caught up with Travis and his company, as VCs got tired of subsidising the true cost of Uber rides, making them far less competitive with actual licensed taxis. However, in the mean time, something interesting happened: the previously somnolent local taxi industries in every city suddenly woke up to this new existential threat. They had been used to being monopolies, able to set their own rules and control the number of entrants. Uber (and Lyft, Grab, et al) upended that cozy status quo — and after some flailing, and some bonfiring of Uber cars, the incumbents addressed the threat in the best way possible: by going straight to the root of what customers had demonstrated they wanted.

Now, I can rock up in almost any decent-sized city in Europe, and with an app called Free Now, I can summon a car to my location, pay with a stored credit card, and hop out at my destination without worrying about currency conversion or losing a printed receipt. It sounds a lot like Uber, with a crucial distinction: the cars are locally-licensed taxis, subject to all the standard licensing checks.

Uber is still a going concern, to be clear, but it's struggling as its costs rise and the negative externalities come home to roost. The investment case for Uber was always based on them securing either a monopoly on the ride-hailing market, or alternatively a breakthrough in self-driving technology that would let them do away with their highest cost: the pesky human element, the actual drivers.

I think it's inarguable that this original investment case has not worked out, and a lot of the shine has come off Uber as the investor subsidy goes away and prices rise to reflect actual costs.

From Four Wheels To Two

Now, the same mechanisms are playing out in the dockless scooter — aka "micromobility" — market:

Today, a scooter rental ride hardly seems like a bargain. At typical rates, which include an upfront and per-minute fee, a 20-minute ride would cost about $6. That’s more than a quick bus or subway ride in places that offer those options.

Still, last-mile transportation remains a tricky niche to fill in urban networks, and scooters do have a place in the mix. We’re not done with them yet. Just don’t expect the days—or valuations—of the peak scooter era to return any time soon.

I have used these services, and broadly speaking, I'm a fan. They are not worth bazillions of CURRENCY_UNITS, though, because the underlying market is obviously a terrible one: low barriers to entry, and operating costs that scale linearly with network size.

As it happens, both of these issues can be addressed with some good old-fashioned regulation — the sort of thing that happens in maturing markets. Now that the public has expressed interest in these new options, each city can choose how the services should operate. In my small hometown, a single vendor has been approved, with a cap on the number of vehicles and on speed in the centre of town (GPS-enforced, natch). Crucially, the scooters are not just abandoned wherever, getting in people's way; they live in specific "parking lots" (repurposed car parking spots). Paris has taken a similar approach, requiring riders to photograph where they left their ride to ensure it's not placed somewhere it shouldn't be, and fining or barring riders who do not park correctly.

I just hope that we can reach the same result as with Uber — all of the good aspects of the service, without the horrible VC-inflated bits. I like that I can rock up in a strange city, pull out my phone, and within a minute or two be on an e-bike. It's not often practical to travel with my own bike, so these rental services have real potential.

Moses did not get to see the Promised Land. Uber and Lime are still with us, but with rather diminished ambitions. But as long as we get to that promised land of a fully-integrated and ubiquitous transport network, the creative destruction was worth it, and we travellers will be happy.


🖼️ Photos by Austin Distel and Hello I'm Nik on Unsplash

Systems of Operation

I have, to misquote J. R. R. Tolkien, a cordial dislike of overly rigid classification systems. The fewer the dimensions, the worse they tend to be. The classic two-by-two grid, so beloved of management consultants, is a frequent offender. I suspect I am not alone, as most such systems quickly get complicated by the addition of precise placement along each axis, devolving into far more granular coordinate systems on at least one plane, rather than the original four simple boxes. But surely the worst of the lot are simple binary choices, this or that, no gradations on the spectrum allowed.

We have perhaps more than our fair share of these divisions in tech — or perhaps it makes sense that we have more than other fields? (That's a joke, because binary.) Anyway, one of the recurring binary splits is the one between development and operations. That it is obviously a false binary is clear from the fact that these days, the grey area at the intersection — DevOps — gets far more consideration than either extreme. And yet, as it is with metaphors and allegories (back to JRRT!), so it is with classifications: all of them are wrong, but some of them are useful.

The Dev/Ops dichotomy is a real one, no matter how blurred the intersection has got, because it is based in a larger division. People tend to prefer either the work of creation, architecting and building, or the work of maintaining, running and repairing. The first group get visibility and recognition, so certain personality traits cluster at this end of the spectrum — flashy and extrovert, dismissive of existing constraints. At the opposite end, we find people who value understanding a situation deeply, including how it came to be a certain way, and who act within it to achieve their goals.

I am trying to avoid value judgments, but I think it is already clear where my own sympathies lie. Someone I have worked with for a long time subscribes to Isaiah Berlin's analogy: the fox knows many things, but the hedgehog knows one big thing. I am an unashamed fox: I know a little about a lot, I love accumulating knowledge even if I do not have an immediate obvious use for it, and I never saw a classification system I did not immediately question and find the corner-cases of. These traits set me up to be a maintainer and an extender rather than a creator.

I value the work of maintenance; designing a new thing starting with a clean sheet is an indulgence, while working within the constraints of an existing situation and past choices to reach my objectives is a discipline that requires understanding both of my own goals and those of others who have worked on the same thing in the past. In particular, good maintainers extend their predecessors the grace of assuming good intent. Even if a particular choice seems counter-intuitive or sub-optimal, this attitude does the courtesy of assuming there was a good and valid reason for making it, or a constraint which prevented the more obvious choice.

Embrace Failure — But Not Too Tightly

There are many consequences to this attitude. One is embracing failure as an opportunity for learning. The best way to learn how something works is often to break it and then fix it — but please don't blame me if you break prod! Putting something back together is the best way to truly understand how the different components fit together and interact, in ways that may or may not have been planned in the original design. It is also often a way of finding unexpected capabilities and new ways of assembling the same bits into something new. I did both back when I was a sysadmin — broke prod (only the once) and learned from fixing things that were broken.

Embracing failure also does not mean that we should allow it to happen; in fact the maintainer mindset assumes failure and values redundancy over efficiency or elegance of design. Healthy systems are redundant, both to tolerate failure and to enable maintenance. I had a car with a known failure mode, but unfortunately the fix was an engine-out job, making preventative maintenance uneconomical. The efficiency of the design choice of using plastic tubing and routing it through a hot spot under the engine ultimately came back to bite me, in the shape of a late-night call to roadside assistance and an eye-watering bill.

Hyperobjects In Time

There is one more negative aspect to the maintainer mindset, beyond the lack of personal recognition (people get awards for the initial design, not for keeping it operating afterwards). Lack of maintenance (or of the right sort of maintenance) is not immediately obvious, especially to hedgehog types. It is not the sort of one big thing that they tend to focus on. Instead, it is more of a hyperobject, visible only if you take a step back and add a time dimension. Don't clean the kitchen floor for a day, and it's probably fine. Leave it for a week, and it's nasty, and probably attracting pests. I know this from my own student days, when my flatmates explored the boundaries of entropy with enthusiasm.

Hyperobjects extend through additional dimensions beyond the usual three. In the same way that a cube is a three-dimensional object whose faces are two-dimensional squares, a hypercube or tesseract is a four-dimensional object whose faces are all three-dimensional cubes. This sort of thing can give you a headache to think about, but does make for cool screensaver visualisations. In this particular formulation, the fourth dimension is time; deferred maintenance is visible only by looking at its extent in time, while its projection into our everyday dimensions seems small and inconsequential when viewed in isolation.

These sorts of hyperobjects are difficult for hedgehogs to reason about precisely because they do not fit neatly into their two-by-two grids and one big thing. They can even sneak up on foxes because there is always something else going on, so the issues can remain undetected, hidden by other things, until some sort of failure mode is encountered. If that failure can be averted or at least minimised, maintainer foxes can learn something from it and modify the system so that it can be maintained more easily and avoid the failure recurring.

All of these reflections are grounded in my day job. I own a large and expanding library of content, which is continuously aging and becoming obsolete, and must be constantly maintained to remain useful. Leave one document untouched for a month or so, and it's probably fine; the drift is minimal, a note here or there. Leave it for a year, and it's basically as much work to bring it back up to date as it would be to rewrite it entirely. It's easy to forget this factor in the constant rush of everyday work, so it's important to have systems to remind us of the true extent of problems left unaddressed.
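Those reminder systems don't have to be sophisticated. A hypothetical sketch, assuming the content lives as ordinary files (the path and extension are made up for illustration, and -printf needs GNU find):

    # List documents untouched for over a year, oldest first,
    # as a periodic nudge to schedule a maintenance pass.
    find ~/research -name '*.md' -mtime +365 -printf '%TY-%Tm-%Td %p\n' | sort

The point is not the specific tool, but having something that regularly projects the time dimension back into view.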

In my case, all of this rapidly-obsolescing content is research about competitors. This is also where the intellectual honesty comes in: it's important to recognise that creators of competing technology may have had good reasons for making the choices they made, even when they result in trade-offs that seem obviously worse. In the same way, someone who adopted a different technology probably did so for reasons that were good and valid for their time and place, and dismissing those reasons as irrelevant will not help to persuade them to consider a change. This is known as "calling someone's baby ugly", and tends to provoke similar negative emotional reactions as insulting someone’s actual offspring.

Good competitive positioning is not about pitching the One True Way and explaining all the ways in which other approaches are Wrong. Instead, it's about trying to understand what the ultimate goal is or was for all of the other participants in the conversation, and engaging with those goals honestly. Of course I have an agenda; I'm not just going to surrender because someone made a choice years ago — but I can put my agenda into effect more easily by understanding how it fits with someone else's agenda, by working with the existing complicated system as it is, rather than trying to raze it to the ground and start again to build a more perfect design, whatever the people who rely on the existing system might think.

I value the work of maintainers, the people who keep the lights on, at least as much as that of the initial designers. And I know that every maintainer is also a little bit of a designer, in the same way that every good designer is also thinking at least a little bit about maintenance. Maybe that is my One Big Thing?

From Provincial Italy To London — And Back Again

More reflections on remote work

Well, I'm back to travelling, and in a pretty big way — as in, I'm already at the point of having to back out of one trip because I was getting overloaded! I've been on the road for the past couple of weeks, in London and New York, and in fact I will be back in New York in a month.

It has honestly been great to see people, and so productive too. Even though I was mostly meeting the same people I speak to week in, week out via Zoom, it was different to all be in the same room together. This was also the first time I was able to get my whole team together since its inception: I hired everyone remotely, and while I have managed to meet up with each of them individually, none of the people on the team had actually met each other in person… We had an amazingly productive whiteboarding session, where we knocked out some planning in a couple of hours that might otherwise have taken weeks, and probably justified a chunk of the cost of the trip on its own.

This mechanism also showed up in an interesting study in Nature, entitled Virtual communication curbs creative idea generation. The study shows that remote meetings are better for some things and worse for others. Basically, if the meeting has a fixed agenda and clear outcomes, a remote meeting is a more efficient way of banging through those items. However, when it comes to ideation and creativity, in-person meetings are better than remote ones.

As with all the best studies, this result tallies with my experience and reinforces my prejudices. I have been remote for a long time, way before the recent unpleasantness, but I always combined remote work with regular in-person catch-up meetings. You do the ideation and planning when you can all gather together around the whiteboard — not to mention reinforcing personal ties by gathering around a table in a restaurant or a bar! Then that planning and those personal ties take you through the rest of the quarter, with regular check-ins for tactical day-to-day actions to implement the strategic goals decided at the in-person meeting.

Leaving London

Something else that was interesting about my recent trips was meeting a whole lot of people who were curious about my living situation in Italy — how I came to be there, and what it was like to work a global role from provincial Italy, rather than from one of the usual global nerve centres. Telling the story in New York, coming fresh from my trip to London, led me to reflect on why I left London and whether it was the right call (spoiler: it totally was).

The London connection also showed up in a pair of articles by Marie Le Conte, who recently spent a couple of months in Venice before returning to London. It has been long enough since I left London that I no longer worry about whether prices in my favourite haunts will be different, but whether any of them are still there or still recognisable — and sadly, most of them are not. But then again, this is London we are talking about, so I have new favourites, and find a new one almost every trip.

Leaving London was a wrench: it was the first place I lived after university, and I enjoyed it to the hilt. Of course I had to share a flat, and I drove ancient unreliable cars1. But we were out and about all the time, in bars and theatres, eating out and meeting up and just enjoying the place.

However, over the following years most of my London friends moved away in turn, either leaving the UK outright or moving out to the commuter belt. The latter choice never quite made sense to me: why live somewhere nearly as expensive as London (especially when you factor in the cost of that commute), which offers none of the benefits of being in actual London, and still has awful traffic and so on? But as my friends started to settle down and want to raise families, they could no longer afford London prices. Those prices get especially hard to justify once you can no longer balance them out by enjoying everything London has to offer — because you're at home with the kids, who also need to be near a decent school, and to get back and forth from sports and activities, and so on and so forth.

My friends and I experienced the same London in our twenties that Marie Le Conte did: it didn't matter if you "rent half a shoebox in a block of flats where nothing really worked", because "there was always something to do". But if you're not out doing all the things, and you need more than half a shoebox to put kids in, London requires a serious financial commitment for not much return.

But why commute to the office at all?

Even before the pandemic, remote work allowed many of us to square that circle. We could live in places that were congenial to us, way outside commuting range of any office we might nominally be attached to, but travel regularly for those all-important ideation sessions that guided and drove the regular day-to-day work.

The pandemic has opened the eyes of many more people and companies to the possibilities of remote work. Airbnb notably committed to a full remote-work approach, which of course makes particular sense for Airbnb, especially the bit about "flexibility to live and work in 170 countries for up to 90 days a year in each location". I admit they are an extreme case, but other companies have an opportunity to implement the parts of that model that make sense for them.

Certain functions benefit from being in the office all the time, so they require permanent space. This means both individual desks and meeting rooms. Meanwhile, remote workers will need to come in regularly, but when they do, they will have different needs. They will absolutely require meeting rooms, and large, well-equipped ones at that, and those are on top of whatever the baseline needs are for the in-office teams. On the other hand, the out-of-towners will spend most of their time in meetings (or, frankly, out socialising), and so they do not need huge numbers of hot desks — just a few for catching up with emails in gaps between meetings.

If you rotate the in-office meetings so you don't have the place bursting at the seams one week and empty the rest of the time, this starts to look like a rather different office setup than what most companies have now. You can even start thinking in cloud-computing analogies: no longer provisioning office space for peak utilisation, but instead spreading work to take advantage of unused capacity, and maybe bursting by renting external capacity as needed (WeWork2 et al).

If you go further down the Airbnb route and go fully remote, you might even start thinking more about where you put that office. Does it need to be in a downtown office core, or can it be in a more fun part of town — or in a different city entirely? Maybe it can even be in a resort-type location, as long as it has good transport links. Hey, a guy can dream…

But in the mean time, remote work unlocks the ability for many more people to make better choices about where to live. Raising a family is hard enough; doing it when both parents work is basically impossible without a strong local support network. Maybe the model should be something like the Amish Rumspringa, where young Amish go spend time out in the world before going back home and committing to the Amish way of life. Enjoy your twenties in the big city, get started on your career with the sort of hands-on guidance that is hard to get remotely, and then move back home near parents and friends when it's time to settle down, switching to remote working models — with careful scheduling to avoid both parents being away at once.

Once you start looking at it like that, provincial Italy is hard to beat. Quality of life is top-notch, with the sort of lifestyle that would require an extra zero on the salary in London or NYC. If you combine that with regular visits to the big cities, it's honestly pretty great.


🖼️ Photos by Kaleidico and Jason Goodman on Unsplash; London photograph author’s own (the view from my hotel room on my most recent London trip).


  1. I only had a car in the first place because I commuted out of London, to a place not well-served by trains; I never drove into central London if I could avoid it, even before the congestion charge was introduced. 

  2. Just because WeWork is a terrible company doesn't mean that the fundamental idea is wrong. See also Uber: while Uber-the-company is obviously unsustainable and has a number of terrible side-effects, it has forced into existence a ride-hailing market that almost certainly would not exist absent Uber. Free Now gives me an Uber-like experience (summon a car from my phone in most cities, pay with a stored card), but using regular licensed taxis and without the horrible exploitative Uber model. 

Old Views For Today's News

Here's a blog post I wrote back in 2015 for my then-employer that I was reminded of while recording the latest episode of the Roll For Enterprise podcast. Since the original post no longer seems to be available via the BMC web site, I assume they won't mind me reposting it here, with some updated commentary.

xkcd, CIA

There has been a certain amount of excitement in the news media, as someone purportedly associated with ISIL has taken over and defaced US Central Command's Twitter account. The juxtaposition with recent US government pronouncements on "cyber security" (ack) is obvious: Central Command’s Twitter Account Hacked…As Obama Speaks on Cybersecurity.

The problem here is the usual confusion around IT in general, and IT security in particular. See for instance CNN:

The Twitter account for U.S. Central Command was suspended Monday after it was hacked by ISIS sympathizers -- but no classified information was obtained and no military networks were compromised, defense officials said.

To an IT professional, even without specific security background, this is kind of obvious.

Penny Arcade, Brains With Urgent Appointments

However, there is a real problem underneath the media confusion: IT professionals have a blind spot of their own. They don't think of things like Twitter accounts when they are securing IT infrastructure, and this oversight can expose organisations to serious problems.

One way this can happen is through credential re-use, and credential leaking in general. Well-run organisations will use secure password-sharing services such as LastPass, but many times, without IT guidance, teams might instead opt for storing credentials in a spreadsheet, as we now know happened at Sony. If someone got their hands on even one set of credentials, what other services might they be able to unlock?

The wider issue is the notion of perimeter defence. IT security to date has been all about securing the perimeter: firewalls, DMZs, NAT, and so on. Today, though, what is the perimeter? End-user services like Dropbox, iCloud, or Google Docs, as well as multi-tier enterprise applications, span back and forth across the firewall, with data stored and code executed both locally and remotely.

I don't mean to pick on Sony in particular (they are just the most recent victims), but their experience has shown once and for all that focusing only on the perimeter is no longer sufficient. The walls are porous enough that it is no longer possible to assume that the bad guys are only outside. Systems and procedures are needed to detect anomalous activity inside the network, and once that occurs, to handle it rapidly and effectively.
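At its simplest, detecting anomalous activity means establishing a baseline of normal behaviour and surfacing deviations from it. Here is a deliberately toy sketch of the idea as a shell pipeline, assuming a Linux-style SSH log and a baseline file that you maintain yourself (both file paths are illustrative):

    # Flag successful SSH logins from source IPs never seen before.
    # comm -23 prints lines in the first input (current source IPs)
    # that are missing from the second (the known-good baseline);
    # <( ) is bash process substitution.
    grep 'Accepted' /var/log/auth.log \
      | awk '{print $(NF-3)}' \
      | sort -u \
      | comm -23 - <(sort -u known_ips.txt)

A real deployment would use proper log aggregation and alerting rather than grep, but the principle is the same: know what normal looks like, and notice when something deviates.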

This cannot happen if IT is still operating as "the department of NO", reflexively refusing user requests out of fear of potential consequences. If the IT department tries to ban everything, users will figure out a way to go around the restrictions to achieve their goals. The danger then is that they make choices which put the entire organisation and even its customers at risk. Instead, IT needs to engage with those users and find creative, novel ways to deliver on their requirements without compromising on their mandate to protect the organisation.

While corporate IT cannot be held responsible for the security of services such as Twitter, they can and should advise social-media teams and end-users in general on how to protect all of their services, inside and outside the perimeter.

There are still a lot of areas where IT is focused on perimeter defence. Adopting Okta or another SSO service is not a panacea; you still need to consider what would happen when (not if) someone gets inside the first layer of defence. How would you detect them? How would you stop them?

The Okta breach has also helpfully provided an example of another important factor in security breaches: comms. Okta's comms discipline has not been great, reacting late, making broad denials that they later had to walk back, and generally adding to the confusion rather than reducing it. Legislation is being written around the world (with the EU as usual taking the lead) to mandate disclosure in situations like these, which may focus minds — but really, if you're not sufficiently embarrassed as a security provider that a bunch of teenagers were apparently running around your network for at least two weeks without you detecting them, you deserve all the fines you're going to get.

These are no longer purely tech problems. Once you get messy humans in the mix, the conversation changes from "how many bits of entropy does the encryption algorithm need" to "what is the correct trade-off between letting people get their jobs done and ensuring a reasonable level of security, given our particular threat model". Working with humans means communicating with them, so you’d better have a plan ready to go for what to say in a given situation. Hint: blanket denials early on are generally a bad idea, leaving hostages to fortune unnecessarily.

Work that plan out before you need it, including what you may be legally mandated to disclose and on what timeframe, and avoid losing your customers’ trust. Believe me, that’s one sort of zero trust that you don’t want!

Kids

Make no mistake: having kids is messy, stressful, and expensive. You should absolutely not have kids if you like having free time, disposable income, or any say in what to watch on TV. But there are also those moments when you walk into a room and you are greeted by an excitable small human who was unable to roll over an eyeblink ago, but now is gabbling on about the amazing castle they built with their wooden blocks, and who lives behind this door or in that tower, and what they will do next, and it all seems worth it. Well, at least until it's time to clear up…