Retracing My Steps

Another ride report post! This time, I decided on the spur of the moment to try a route I hadn't ridden before. It turned out to be a wee bit longer than I had really allowed for, which made me slightly late for family Sunday lunch — oops. I had also forgotten to charge my Apple Watch, so this ride went unrecorded, but I'm pretty sure the distance was around 80km, so not bad. The highest point was around 550m, but there was a fair bit of up and down, so the total vert would be quite a bit more.

Two of the things that make me happiest are bicycles and mountains, though, so riding up into the mountains like this does me an enormous amount of good. Here are some of the highlights of Sunday's ride.

I had only just left the tarmac when I saw three deer bouncing through the wispy fog that was still drifting across the ploughed fields. They moved fast enough that by the time I had stopped and got my phone out, I needed the 3x zoom — and one of the deer got away entirely. For such an extreme shot from a phone camera, I'm not unhappy with the results.

I also love that the scenery looks pretty wild in this framing, when in fact it's just a stone's throw from a bunch of warehouses and factories, a true liminal space. The early part of this route is stitched together from tracks between fields to avoid busy roads, but it never strays far from the industrial areas.

A little further along, with the sun burning off the last vestiges of the mist, I stopped again because I liked the view of the river rippling across the stones. After this stop, though, I hit some pretty technical riding and had to concentrate on where I was putting my wheels. Some rain had finally arrived after the long drought, and then motorbikes (ugh) had come through, churning all the mud into a mire.

On my mountain bike I'd probably have been fine, but the Bianchi has intermediate gravel tyres that are pretty smooth in the centre, with only a little bit of tread on the sides, and narrower than MTB tyres to boot. This is the sort of terrain where I'm glad to have proper pedals that I can unclip from and ride with my feet free, just in case I lose my balance and need to put a foot down in a hurry. Anyway, I got through without too much trouble, despite a lot of slipping and sliding. I did have to stop to clear out the plug of mud between the rear wheel and the frame once I was out of the woods, and then I walked the bike along the edge of one field that had been ploughed right to the river's edge, leaving no smooth terrain to ride on.

Nothing much to say about this tower, I just always like the look of it. This is also where the trail finally starts to climb out of the plain.

This is an old railway bridge, and because the road bridge is just upstream, it's reserved for walking and riding. It's not at all signposted, either, so you have to know it's there; I rarely see anyone else on it.

One of the reasons I ride a gravel bike is so that I can spend as little time as possible sharing the road with cars. It's tough to avoid that when it comes to river crossings, though! One newer bridge around here has a cycle path slung underneath it, and one of the busier bridges carved out a cycle path in a redesign, but this one is the best of all.

After that I rode properly up into the hills, climbing up out of the Nure valley and over the watershed down into the Trebbia valley before heading home. Unfortunately the day clouded over a bit too, so although I did stop to take a few more shots, they aren't nearly so scenic. I did want to share this one, though, because that rocky outcrop in the middle distance already featured in a past ride report.

Business Case In The Clouds

A perennial problem in tech is people building something that is undeniably cool, but is not a viable product. The most common definition of "viable" revolves around the size and accessibility of the target market, but there are other factors as well: sustainability, profitability, growth versus funding, and so on.

I am as vulnerable as the next tech guy to this disease, which is just one of many reasons why I stay firmly away from consumer tech. I know myself well enough to be aware that I would fall in love with something that is perfectly suited to my needs and desires — and therefore has a minuscule target market made up of me and a handful of other weirdos.

One of the factors that makes this an ongoing problem, as opposed to one that we as an industry could resolve and move on from, is that advancing tech continuously expands the frontier of what is possible, while market positioning does not evolve in the same direction or at the same speed. If something simply can't be done, you won't even get to the "promising demo video on Kickstarter" stage. If on the other hand you can bodge together some components from the smartphone supply chain into something that at least looks like it sort of works, you might fool yourself and others into thinking you have a product on your hands.

The thing is, a product is a lot more than just the technology. There are a ton of very important questions that need to be answered — and answered very convincingly, with data to back up the answers — before you have an actual product. Here are some of the key questions:

  • How many people will buy one?
  • How much are they willing to pay?
  • Given those two numbers, can we even manufacture our potential product at a cost that lets us turn a profit? (See the back-of-envelope sketch after this list.) If we have investors, what are their expectations for the size of that profit?
  • Are there any regulations that would bar us from entering a market (geographical or otherwise)? How much would it cost to comply with those regulations? Are we still profitable after paying those costs?
  • How are we planning to do customer acquisition? If we have a broad market and a low-cost product, we're going to want to blanket that segment with advertising and have as self-service a sales channel as possible. On the other hand, if we are going high-end and bespoke, we need an equally bespoke sales channel. Both options cost money, and they are largely mutually exclusive. And again, that cost comes out of our profit margin.
  • What's the next step? Is this just a one-shot campaign, or do we have plans for a follow-on product, or an expansion to the product family?
  • Who are our competitors? Do they set expectations for our potential customers?
  • How might those competitors react? Can they lower their own prices enough that we have to reduce ours and erode our profit margin? Can they cross-promote with other products while we are stuck being a one-trick pony?
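
To make the profit question concrete, here is the minimal back-of-envelope sketch referenced in the list above. Every number in it is invented for illustration; the point is only that the answers to the first two questions, minus manufacturing, compliance, and customer-acquisition costs, decide whether there is a business here at all.

```python
# Toy product-viability model. All figures are invented for illustration;
# substitute real market research before believing any of the output.
units_sold = 50_000           # how many people will buy one?
unit_price = 199.00           # how much are they willing to pay?
unit_cost = 120.00            # manufacturing cost per unit (BOM + assembly)
compliance_cost = 250_000     # certifications and regulatory approvals
acquisition_cost = 1_500_000  # advertising and/or a bespoke sales channel

revenue = units_sold * unit_price
gross_margin = units_sold * (unit_price - unit_cost)
profit = gross_margin - compliance_cost - acquisition_cost

print(f"Revenue:      {revenue:12,.0f}")
print(f"Gross margin: {gross_margin:12,.0f}")
print(f"Profit:       {profit:12,.0f}")  # negative? cool tech, not a product
```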

These are just some of the obvious questions, the ones that you should not move a single step forward without being able to answer. There are all sorts of second- and third-order follow-ups to these. Nevertheless, things-that-are-not-viable-products keep showing up, simply because they are possible and technically cool.

Possible, Just Not Viable

One example of how this process plays out is Google Stadia (RIP). At the time of its launch, everyone was focused on technical feasibility:

[...] streaming games from datacenters like they’re Netflix titles has been unproven tech, and previous attempts have failed. And in places like the US with fixed ISP data caps, how would those hold up to 4-20 GB per hour data usage?

[...] there was one central question. Would it even work?
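
Data caps alone make for sobering arithmetic. Here is a quick sketch, assuming a 1,000 GB monthly cap (a round number for illustration; actual US caps vary by ISP) and the per-hour rates from the quote above:

```python
# Hours of game streaming per month before hitting a data cap.
# The 1,000 GB cap is an assumption for illustration; the 4-20 GB/hour
# range comes from the quote above.
cap_gb = 1_000
for rate in (4, 20):  # GB per hour of streamed gameplay
    print(f"At {rate:2d} GB/h: {cap_gb / rate:5.0f} hours/month before the cap")
```

At the 4K end of that range, fifty hours a month, before counting any other household traffic, is not much headroom for a serious gamer.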

Some early reviewers did indeed find that the streaming performance was not up to scratch, but all the long-term reports I heard from people like James Whatley were that the streaming was not the problem:

The gamble was always: can Google get good at games faster than games can get good at streaming. And I guess we know (we always knew) the answer now. To be clear: the technology is genuinely fantastic but it was an innovation that is looking - now even more overtly - for a problem to solve.

As far as we can tell from the outside (and it will be fascinating to read the tell-all book when it comes out), Google fixated on the technical aspect of the problem. In fairness, they were and are almost uniquely well-placed to make game streaming work technically: data centres everywhere, fast network connections, and in-house expertise in low-latency data streaming. The part which apparently did not get sufficient attention was how to turn those technical capabilities into a product that would sell.

Manufacturing hardware is already not Google's strong suit. Sure, they make various phones and smart home devices, but they are bit-players in terms of volume, preferring to supply software to an ecosystem of OEMs. What really appears to have sunk Stadia, though, is the pricing strategy. The combination of a monthly subscription and having to buy individual games on top appears to have been a deal-killer, especially in the face of streaming services from long-established players such as Microsoft and Sony, which charge only a subscription fee.

To recap: Google built some legitimately very cool technology, but priced it in a way that made it unattractive to its target customers. Those customers were already well-served by established suppliers, who enjoyed positive reputations — as opposed to Google's reputation for killing services, one that has been further reinforced by the whole Stadia fiasco. Finally, there was no uniquely compelling reason to adopt Stadia — no exclusives, no special integration with other Google services, just "isn't it cool to play games streamed from the cloud instead of running on your local console?" Gamers already own consoles or game on their phones, especially the ones with the sort of fat broadband connection required to enable Stadia to work; there is not a massive untapped market to expand into here.

So much for Google. Can Facebook — sorry, Meta — do any better?

Open Questions In An Open World

Facebook rebranded as Meta to underline its commitment to a bright AR/VR future in the Metaverse (okay, and to jettison the increasingly stale and negative branding of the Blue App). The question is, will it work?

Early indications are not good: Meta’s flagship metaverse app is too buggy and employees are barely using it, says exec in charge. Always a sign of success when even the people building the thing can't find a reason to spend time with it. Then again, in fairness, the NYT reports that spending time in Meta's Horizon VR service was "surprisingly fun", so who knows.

The key point is that the issue with Meta is not one of technical feasibility. AR/VR are possible-ish today, and will undoubtedly get better soon. Better display tech, better battery life, and better bandwidth are all coming anyway, driven by the demands of the smartphone ecosystem, and all of that will also benefit the VR services. AR is probably a bit further out, except for industrial applications, due to the need for further miniaturisation if it's going to be accepted by users.

The relevant questions for Meta are not tech questions. Benedict Evans made the same point discussing Netflix:

As I look at discussions of Netflix today, all of the questions that matter are TV industry questions. How many shows, in what genres, at what quality level? What budgets? What do the stars earn? Do you go for awards or breadth? What happens when this incumbent pulls its shows? When and why would they give them back? How do you interact with Disney? These are not Silicon Valley questions - they’re LA and New York questions.

The same factors apply to Horizon. It's a given that Meta can build this thing; the tech exists or is already on the roadmap, and they have (or can easily buy) the infrastructure and expertise. The questions that remain are all "but why, tho" questions:

  • Who will use Horizon? How many of these people exist?
  • How will Horizon pay for itself? Subscriptions — in exchange for what value? Advertising — in what new formats?
  • What's the plan for customer acquisition? Meta keeps trying to integrate its existing services, with unified messaging across Facebook, Instagram, and WhatsApp, but it doesn't really seem to be getting anywhere with consumers.
  • Following on from that point, is any of this going to be profitable at Meta's scale? That qualification is important: to move the needle for Zuckerberg & co., this thing has to rope in hundreds of millions of users. It can't just hit a Kickstarter milestone and declare victory.
  • What competitors are out there, and what expectations have they already set? If Valve failed to get traction with VR when everybody was locked down at home and there was a new VR-exclusive Half-Life game1, what does that say about the addressable market?

None of these are questions that can be answered based on technical capabilities. It doesn't matter how good the display tech in the headsets is, or whether engineers figure out how to give Horizon avatars innovative features such as, oh I don't know, legs. What matters is what people can do in Horizon that they can't do today, IRL or in Flatland. Nobody will don a VR headset to look at Instagram photos; that works better on a phone. And while some people will certainly try to become VR influencers, that is a specialised skill requiring a ton of support; not every aspiring singer, model, or fitness instructor is going to make that transition. Meta will need a clear and convincing answer that is not "what if work meetings but worse in every way".

So there you have it, one failed product and one that is still unproven, both cautionary tales of putting the tech before the actual product.


  1. I love this devastating quote from PCGamesN: "Half-Life: Alyx, [...] artfully crafted though it was, [...] had all the cultural impact of a Michael Bublé album." Talk about vicious! 

Network TV

It is hardly news that the ad load on YouTube has become ridiculous, with both pre-roll and several mid-roll slots, even on shorter videos. In parallel with the rise of annoying ads, YouTube is also deluging me with come-ons for YouTube Premium, their paid ad-free experience. I haven't coughed up because a) I'm cheap, and b) this feels like blackmail: "pay up or we'll make even more annoying unskippable ads".

YouTube charges through the nose for add-on offerings like YouTube Premium or YouTube TV, the US-only streaming replacement for cable TV. The price of these services highlights just how profitable advertising is for them — and still they need to add more and more slots. The suspicion is of course that individual ad slots sell for less and less, so YouTube needs to show ever more of them to maintain its growth trajectory in the face of competition for eyeballs from the likes of TikTok.
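
The arithmetic behind that suspicion is simple enough. A toy illustration, with every number invented: if the price of an individual impression falls, the ad load has to rise just to keep revenue per view flat, never mind growing.

```python
# If the price per ad impression falls, ad load must rise to keep
# revenue per view flat. All numbers are invented for illustration.
target_per_1k_views = 10.0   # revenue YouTube wants per 1,000 views
for cpm in (5.0, 4.0, 2.5):  # falling price per 1,000 impressions
    ads_per_view = target_per_1k_views / cpm
    print(f"CPM ${cpm:.2f} -> {ads_per_view:.1f} ads per view")
```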

Now news emerges that YouTube is negotiating to add content to its subscription services:

YouTube has been in closed-door talks with streaming broadcasters about a new product the video giant wants to launch in Australia, which industry insiders say is an ambitious play to own the home screen of televisions.
The company is seeking deals with Australian broadcasters to sell subscriptions to services such as Nine-owned Stan and Foxtel’s Binge directly through YouTube, which would then showcase the streamers’ TV and movie content to users.

Not being in Australia, I'm not familiar with either Stan or Binge, but the idea would appear to be to get more users habituated to paying for subscriptions through YouTube. There are already paid-subscription YouTube channels out there, but not many; it seems that most creators have opted for the widest possible distribution and monetisation via ads, instead of direct monetisation via paying subscriptions in exchange for a smaller audience. Perhaps the pull of these shows will be enough to jump-start that model? Presumably the reason for launching this offering in Australia is that it will be a pilot whose results will be watched closely before rolling out in other markets (or not).

This whole approach seems a bit backward to me, though. YouTube is pretty unassailably established as the platform for video on the web; TikTok is effectively mobile-only and playing a somewhat different game. What if Google exploited that position by working with ISPs? I'm resistant to paying for YouTube Premium specifically, but if you hid the same amount somewhere in my ISP bill, or bundled it with something else, I'd probably cough up. ISPs that sign up could also implement local caches (presumably part-funded by Google) to improve performance for their users, and maybe get better traffic data to optimise the service — without illegal preferencing, of course.

Instead of trying to jump-start a new revenue stream by offering a slightly nicer experience and asking users to pay for something they already get for free, YouTube would do better to get into a channel where users are already habituated to paying for add-on services, and where the incumbents (the ISPs) are desperate to position themselves as something more than undifferentiated dumb pipes. A better streaming video experience is already the most obvious reason for most households to upgrade their internet connection, so the link is already there in consumers' minds.

Susan Wojcicki, have your people call me.


🖼️ Photo by Erik Allen on Unsplash

Draining The Moat

Zoom is in a bit of a post-pandemic slump, describing its own Q2FY23 results as "disappointing and below our expectations". This is quite a drop for a company that at one point was more valuable than ExxonMobil. Zoom does not disclose its total number of users, only its "enterprise customers", of which there are 204,100. These are defined in a footnote to the slides from those Q2FY23 results as "customers who have been engaged by Zoom’s direct sales team, channel partners, or independent software vendor (ISV) partners." Given that Zoom claims only 3,116 customers contributing >$100k in revenue over the previous year, that is hardly a favourable comparison with Cisco's claim of six million users of WebEx Calling in March 2022.

As I wrote in The Thing With Zoom, Zoom's original USP was similar to WebEx's, namely the lowest time-to-meeting with people outside the company. As a salesperson, how quickly can I get my prospect into the meeting and looking at my presentation? Zoom excelled at this metric, although they did cut a number of corners to get there. In particular, their software would stick around even after users thought they had uninstalled it, just in case they ever needed it again in the future.

Over the past year or two, though, Teams usage has absolutely taken off. At the beginning the user experience was very rough, even by Microsoft standards, confusing users with the transition from its previous bandwagon-jumping branding as Skype for Business. Joining a Teams meeting as an outsider to the Teams-using organisation was (and largely still is) a mess, with the client failing to connect as often as not, or leaving meeting invitees in a loop of failed authentication, stuck between a web client and a native client, neither of which is working.

And yet, Teams is still winning in the market. Why?

There is more to this situation than just Microsoft's strength in enterprise sales. Certainly, Microsoft did not get distracted trying to cater to Zoom cocktails or whatever, not least because nobody in their right mind would ever try to party over Teams, but also for the very pragmatic and very Microsoftian reason that those users don't pay.

Teams is not trying to play Zoom and WebEx at their own game. Microsoft doesn't care about people outside their client organisations. Instead, Microsoft Teams focuses on offering the richest possible meeting experience to people inside those organisations.

I didn't fully appreciate this distinction, since throughout this transition I was working for companies that used the standard hipster tech stack of Slack, Google Docs, and Zoom. What changed my understanding was doing some work with a couple of organisations that had standardised on Teams. Having the text chat, video call, and documents all in one place was wonderfully seamless, and felt native in a way that Google's inevitable attempt to shoehorn Hangouts into a Google Docs sidebar or comment thread never could.

This all-in-one approach was already calculated to appeal to enterprises who like simplicity in their tech stack — and in the associated procurement processes. Pay for an Office 365 license for everybody, done. Teams would probably have won out anyway just on that basis, but the trend was enormously accelerated by the very factor everyone assumed would favour Zoom: remote work.

While everyone was focusing on Zoom dating, Zoom board games, Zoom play dates, and whatever else, something different was happening. Salespeople were continuing to meet with their customers over Zoom/WebEx/whatever, but in addition, all of the intra-company meetings were also flipping online. This transition led to an explosion in the ratio of internal video meetings to outside-facing ones, changing the priority from "how quickly can I get the other people in here, especially if they haven't got the meeting client installed" to "everyone has the client installed, how productive can we be in the meeting".

As the ratio of outside video meetings to inside meetings flips, Zoom's moat gets filled in

Zoom could not compete on that metric. All Zoom could do was facilitate someone sharing their screen, just like twenty years ago. Maybe what was being shared was a Google Doc, and the other people in the meeting were collaborating in the doc — but then what was Zoom's contribution? Attempts to get people to use built-in chat features or whiteboarding never took off; people used their Slack for chatting, and I never saw anyone use the whiteboard feature in anger.

Once an organisation had more internal remote video meetings than outside-facing ones, these differences became glaring deficiencies in Zoom compared to Teams.1

Zoom squandered the boost that the pandemic gave them. Ultimately, video chat is a feature, not a product, and Zoom will either wither away, or get bought and folded into an actual product.


🖼️ Photos by Chris Montgomery and Christina @wocintech.chat on Unsplash


  1. The same factors are also driving a slight resurgence in Hangouts, based on my anecdotal experience, although Google does not disclose clear numbers. If you're already living in Google Docs, why not just use Hangouts? (Because it's awful UX, but since when did that stop Google or even slow them down?) 

Fun In The Sun

A reliable way for companies to be seen as villains these days is to try to roll back concessions to remote work that were made during the pandemic1. Apple is of course a perennial scapegoat here, and while it seems reasonable that people working on next year's iPhone hardware might have to be in locked-down secure labs with all the specialised equipment they need, there is a lurking suspicion that much of the pressure on other Apple employees to return to the office is driven by the need to justify the massive expense of Apple Park. Jony Ive's last project for Apple supposedly cost over $4B, after all. Even for a company with Apple's revenues, that sort of spending needs to be justified. It's not a great look if your massive new vanity building is empty most of the time.

The same mechanisms are playing out in downtown business districts around the world, with commercial landlords worried about the long-term value of their holdings, and massive impacts on the services sector businesses (cafes, restaurants, bars, dry-cleaners, etc etc) that cluster around those office towers.

With all of this going on, it was probably inevitable that companies would try to jump on the bandwagon of being remote-work friendly — some with greater plausibility than others. I already mentioned Airbnb in a past post; they have an obvious incentive to facilitate remote work.

Other claims are, let's say, more far-fetched.

In a recent example of the latter genre, it seems that Citi is opening a hub in Málaga for junior bankers:

  • Over 3,000 Málaga hopefuls applied for just 27 slots in the two-year program, which promises eight-hour days and work-free weekends -- practically unheard of in the traditional banking hubs in Manhattan and London. In exchange, Málaga analysts will earn roughly half the starting salaries of their peers.
  • The new Spain office will represent just a minuscule number of the 160 analysts Citi hired in Europe, the Middle East, and Africa, on top of another 300+ in New York.

This is… a lot less than meets the eye. 27 people, out of a worldwide intake of ~500 — call it 5% — will be hired on a two-year contract in one admittedly attractive location, and in exchange for reasonable working hours will take a 50% hit on their starting salary. In fairness, the lower cost of living in Málaga relative to London will make up a chunk of that gap, and having weekends free to enjoy the place is not nothing, but apart from that, what is the upside here?

After the two years are up, the people who have been busy brown-nosing and visibly burning the midnight oil at head office will be on the promotion track. That is how banking works; if you can make it through the first few years, you have a) no social life any more, and b) a very remunerative career track in front of you. Meanwhile, it is a foregone conclusion that the people from the Málaga office will either not have their contracts renewed, or will have to start their career track all over again in a more central location.

In other words, what this story boils down to is some short-term PR for Citi, a bunch of cheap(er) labour with a built-in termination date, and not much more.

Then again, it could be worse (it can always be worse). Goldman Sachs opted for the stick instead of the carrot with its own return to the office2 mandate, ending the free coffee that had been a perk of its offices.

Even after all these years in the corporate world, I am amazed by these utterly obvious PR own goals. The value of the coffee cart would have been infinitesimal, completely lost in Goldman's facilities budget. But what is the negative PR impact of this move? At one stroke they have hollowed out all the rhetoric of teamwork and empowerment that is the nominal justification for the return to the office.

Truly committing to a remote work model would look rather different. I love the idea of Citi opening a Málaga hub. The difference is that in a truly remote-friendly organisation, that office would not have teams permanently based in it (apart from some local support staff). Instead, it would be a destination hub for teams that are truly remote to assemble on a regular basis for planning sessions. The rest of the time, everyone would work remotely wherever they currently live.

Some teams do need physical proximity to work well, and some customer-facing roles benefit from having access to meeting space at a moment's notice — but a lot of the work of modern companies does not fall into these categories. Knowledge workers can do their work anywhere — trust me, I've been working this way for more than fifteen years. Some of my most productive work has been done in airport lounges, not even in my fully equipped home office! With instant messaging, video calls, and collaboration tools, there is no real downside to working this way, while the upside is access to a global and distributed talent pool. When I did have to go into an office, it was so painful to be in an open-plan space with colleagues who were not on my actual team that I wore noise-cancelling headphones. If that's the situation, what's the point of commuting to an office?

This sort of reorganisation would admittedly not be great for the businesses that currently cluster around Citi offices and cater to the Citi employees working in those offices — but the flip side would be the massive benefits to businesses in those Citi employees' own home neighbourhoods. If you're not spending all your waking hours in Canary Wharf or Wall Street, you can do your dry cleaning at your local place, you can buy lunch around the corner instead of eating some over-priced plastic sandwich hunched over your desk, and you can get a better quality of life that way — maybe even in Málaga!

The only downside of working from home is that you have to pay for your own coffee and can't just get Goldman to foot the bill.


🖼️ Photos by Carles Rabada, Jonas Denil, and Tim Mossholder on Unsplash


  1. Not that the pandemic is quite over yet, but let's not get into that right now. 

  2. Never "return to work". This is a malicious rhetorical framing that implies we've all been slacking off at home. People are being asked to continue to work, and to return to the office to do so. They may want to pick up noise-cancelling headphones on their way in. 

Growing Pains

The iPad continues to (slowly, slowly) evolve into a Real Computer. My iPad Pro is my only personal computer — I don't have a Mac of my own, except for an ancient Mac Mini that is plugged into a TV and isn't really practical to use interactively. It's there to host various network services or display to that TV.

For reasons I don't feel like going into right now, I don't currently have a work Mac to plug into my desk setup, so I thought I'd try out the new Stage Manager feature in iPadOS 16.

So, the bottom line is that it does work, and it makes the iPad feel suddenly like a rather different machine.

Some setup is required. Of course Stage Manager needs iPadOS 16; I've been running the beta on my iPad all summer, and it seems pretty stable. The second display needs to connect via USB-C; I already have my CalDigit dock set up that way, so that part was no problem. Using Stage Manager with an external display also requires an external keyboard and mouse, and these have to be connected by Bluetooth; the USB keyboard connected to my dock was not recognised. Without those peripherals, the external display only works for screen mirroring, which is a bit pointless in my opinion. Mirroring the iPad's display to another screen makes sense if you are showing something to someone, but then, why would you need Stage Manager?

Anyway, once I had everything connected, the external display started working as a second display. I was able to arrange the two displays correctly from Settings; some new controls appeared under Display & Brightness to enable management of the second display.

It's interesting to see what does and does not work. The USB microphone plugged into the dock — and the analogue headphones daisy-chained from that — worked without any additional configuration, but the speakers connected to the dock's SPDIF port were not visible to iPadOS. Luckily these speakers also support Bluetooth, so I'm still able to use them; it’s just a bit of a faff to have to connect three Bluetooth devices (keyboard, mouse, and speakers) every time I want to sit at my desk. The Mac is way easier: one USB-C cable, and you’re done. The second desktop display does not show up at all, but that's fair enough; even the first generation of M1 Macs didn't support two external displays. External cameras also do not show up, and there's not even any control, so it's the iPad's built-in camera or nothing.

There's some other weird stuff that I assume and hope is due to the still-beta status of iPadOS 16.

  • The Settings app does not like being on the external display in the least: the window appears all squashed, and weirdly it's squashed horizontally, despite my display being an ultrawide. Maybe the Settings app in iPadOS has not received much attention, given the troubled gestation of the new Settings app in macOS Ventura?
  • Typing in Mail and a couple of other apps (Evernote, Messages, possibly others I haven’t encountered yet) sometimes lagged — or rather, the keystrokes were all being received, but they would not be displayed, until I did something different such as hitting backspace or clicking the mouse. At other times, keystrokes showed up normally.
  • The Music app goes straight into its full-screen display mode when it's playing, even when the window is not full-screen. The problem is that the touch control at the top of the window that would normally return it to the usual display mode does not work. Also, Music is one of the apps whose preview in the Stage Manager side area does not work, so it's always blank. This seems like an obvious place to display static cover art, even if we can't have live-updating song progression or whatever.
  • Sometimes apps jump from the external display to the iPad’s built-in, for instance if you open something in Safari from a different app.

What does work is that apps can be resized and rearranged, giving a lot more flexibility than the previous single-screen hover or side-by-side multitasking options. App windows can also be grouped to keep apps together in logical groups, such as the editor I'm typing this into and a Safari window to look up references. Again, this is something that I already did quite a lot with the pre-existing multi-tasking support in iPadOS, but it only really worked for two apps, plus one in a slide-over if you're really pushing it. Now, you can do a whole lot more.

I am glad that I came back to give Stage Manager another chance. I had played with the feature on my iPad without connecting it to anything, and found it unnecessarily complex. I do wonder how much of that is because I'm rocking an 11" rather than a 13"? Certainly, I can see this feature being much more useful on a Mac, even standalone. However, Stage Manager on iPadOS truly comes into its own with an external display. This is a big step on the way to the iPad becoming a real computer rather than merely a side device for a Mac or a bigger iPhone.

It's worth noting that Stage Manager only works with the very latest iPads that use Apple silicon: iPad Air (5th generation), 11-inch iPad Pro (2021), and 12.9-inch iPad Pro (2021). It's probably not the time to be buying a new iPad Pro, with rumours that it's due for a refresh soon, maybe to an M2, unless you really really want to try Stage Manager right now. However, if you have an iPad that can support it, and an external display, keyboard, and mouse, it's worth trying it out to get a better idea of the state of the iPadOS art.


🖼️ Photos by author, except Stage Manager screenshot from Apple

Sights From A Bike Ride

One of the positive aspects I often cite when talking up the place where I live is that I can be out in the fields within ten minutes' ride of my front door in the old town — as in, my windows look out onto the old city walls.1

Once out in the fields, though, you never know what you might find. Here are some scenes from my latest ride.

Roadside shrine to the Madonna della Notte, complete with offerings and ex-votos (tokens of thanks for answered prayers).

Not sure what's up with this old Lancia planted in a farm yard, but it looks cool!

Here I just liked the contrast between the red tomatoes waiting for the harvest and the teal frame of my Bianchi.

Bike rides are so great for getting out of my head, whether it’s a technical piece of single-track on my mountain bike where I have to concentrate so hard I can’t think of anything else, or a ride like this where I’m bowling along the flat with a podcast in my (bone-conduction) headphones. The trick is staying off main roads as much as possible — hence the gravel bike.


  1. Which are actually the newest city walls, dating from the sixteenth century CE, post-dating various earlier medieval and Roman walls of which only traces remain. These Renaissance walls were later turned into a linear park (pictures) known as the "Facsal", a distortion of London's famous Vauxhall gardens, among the first and best-known pleasure gardens in nineteenth-century Europe. In more modern times, the Facsal was part of the street circuit for the 1947 Grand Prix of Piacenza, famously the first race entered by a Ferrari car — although not the site of the Scuderia's first win. 

Nice Tech, Pity About The Product

Like many IT types, my workspace has a tendency to acquire obsolete technology. When I shared a flat in London with somebody else who lives with the same condition, computers significantly outnumbered people; heck, operating systems sometimes outnumbered people, even after our then-girlfriends/now-wives moved in! At one point, we even had an AS/400 desk-side unit that we salvaged, until we realised we really didn't have anything fun to do with it and moved it on again.

In the big clear-out last year, I got rid of a bunch of the old stuff — yes, even some of the cables! One item made the opposite journey, though, from the depths of a box inside a cupboard of toner cartridges underneath a monitor so old it still has a 4:3 aspect ratio, to pride of place in my line of sight from my desk chair.

That item is the installation media for a thoroughly obsolete computer operating system from the 90s.

What Even Is BeOS?

BeOS was the brain-child of a bunch of ex-Apple people, including Jean-Louis Gassée, who worked for Apple through the 80s and was instrumental in the creation of the Newton, among other things. While Apple spent the 90s trying and failing to create a new operating system to replace the aging MacOS, Gassée and his merry band created a brand-new operating system called BeOS. The 90s were probably the last time in history that it was possible to do something like that; the platforms that have emerged since then (iOS and Android) are variations on existing platforms (NeXTSTEP/OS X, which predates BeOS, and Linux, respectively).

Initially targeted at AT&T's Hobbit CPUs, BeOS was soon ported to the PowerPC architecture. These were the CPUs that powered Apple computers at the time, the product of an alliance between Apple, IBM, and Motorola. Between them, the three companies hoped to foster the emergence of an ecosystem to rival (or at least provide an alternative to) Intel's dominant x86. In those days, Apple licensed a handful of manufacturers to build MacOS-compatible PowerPC computers, so Be quickly stopped manufacturing their own BeBox hardware and switched to offering the BeOS to people who owned these computers — or actual Apple Macs, I suppose, but even at the time you didn't hear of many people doing that.

This is where BeOS first entered my life. If you can believe it, the way you found out about cool software in those pre-broadband days was to buy a printed magazine that would come with a CD full of demos, shareware, utilities, wallpapers, icon sets, and more. There were a few magazines that catered to the Apple enthusiast market, and in 1997, I happened to pick one up that included Preview Release 2 of the BeOS.1

Luckily for me, I owned a whopping 500MB external SCSI drive, so I didn't have to mess around with reformatting the main HDD of the family computer (which would probably have run to all of 2GB at the time, kids!). I was quickly up and running with the BeOS, which absolutely blew away the contemporary Macintosh operating system.

Why Bother With BeOS?

The performance was the first and most obvious difference between BeOS and MacOS. Just watching GLTeapot spinning around in real time was amazing, especially compared to what I was used to in MacOS on the same hardware. Check out this contemporary review, focusing specifically on BeOS’ multimedia capabilities.

This was also my first exposure to a bash terminal, or indeed any command-line interface beyond MS-DOS, and I can safely say that it was love at first sight, especially once I started understanding how the output of one command could be passed to another, and then the whole thing wired up into a script.

BeOS offered proper preemptive multitasking and protected memory, in a way that Classic MacOS very definitely didn't. This made me consider it as a full-time replacement for MacOS on the family computer, but the lack of hardware support killed that idea. Specifically, the Global Village Teleport fax/modem which was our connection to the early Internet, running at a blazing fast 14.4kbps, did not work in BeOS.

This lack was doubly annoying since BeOS shipped with an actual web browser: NetPositive, one of whose claims to fame was its haiku error messages. At the time, Mac users were stuck between Netscape Navigator, Microsoft Internet Explorer, Apple's almost wilfully obscure Cyberdog, and early versions of Opera.

What Happened To BeOS?

This is where we get to the point of the story. What killed BeOS was not any sort of issue with the technology. It was leaps and bounds ahead of both dominant operating systems of the day, with massive developer interest.

Unfortunately, Be did not own its own destiny. After failing to sell itself to Apple, Be staggered on for a few more years. Once it became obvious that Apple was going to kill the MacOS clone business which powered the ecosystem of non-Apple PowerPC hardware that BeOS ran on, an x86 port was quickly added. By this point dual-booting operating systems on x86 had become, if not exactly mainstream, at least somewhat common in technical circles. Unfortunately for Be, the second OS (of course after Windows) was almost always Linux. A second commercial operating system was always going to be a hard sell in a world where everyone had already paid for a Windows license as part of the purchase price for their PC, to the point that Be literally couldn't even give it away. In fact Be actually sued Microsoft over its alleged monopolistic practices, possibly the last gasp of the First Browser War of the late 90s.2

Be was eventually sold to Palm, and after Palm's own travails, the last vestiges of BeOS disappeared from public view only a few years later.

The lesson here is that the best technology does not always win — or at least, does not win unaided. Execution is key, and Be, despite some very agile pivots, failed to execute to the point of making any meaningful dent in the personal-computer-OS market.

What could Be have done differently? It's hard to say, even with the benefit of hindsight. None of the alternative desktop operating systems that sprang up in the late 80s and early 90s have survived. BeOS? Gone. OS/2 Warp? Gone. All the commercial UNIX systems? Gone — but maybe next year will be the year of Linux on the desktop. NeXT? It got acquired by Apple, and the tech is still with us in every current Apple platform — but if Be had been the one to get bought to replace the failed Copland project, NeXT would certainly have been the one to disappear.

That is the one inflection point really worth considering: what if Gassée had managed to negotiate a deal with Apple back then? What would OS X be like today if it were based on BeOS rather than on NeXTSTEP?3 And… what would Apple be like without Steve Jobs, in hindsight the most valuable part of the NeXT acquisition? There would probably still be a mobile product; one of the key Be employees was Steve Sakoman, godfather of the Newton, so it seems fairly certain that a descendant of some sort would have emerged from a Be-infused Apple. But would it have become the globe-spanning success of the iPhone (and iPad) without Steve Jobs to market it?

One day I would like to own both a BeBox and a NeXTcube,3 but for now I just keep that BeOS PR2 CD as a tech industry memento mori, a reminder to myself not to get caught up in the elegance of the tech, but always to remember the product and the use cases which that tech enables.


  1. I could have sworn it was MacAddict, which was definitely my favourite magazine at the time, but the only references I can find online say it was MacTech, and it's been long enough that I can't be sure. 

  2. Be's travails did inspire at least one high-profile fan, with Neal Stephenson discussing BeOS in his book-length essay In the Beginning... Was the Command Line, as well as giving it a cameo in Cryptonomicon (alongside "Finux", his gossamer-thin Linux-analogue). 

  3. Yes, weird capitalisation has always been part of the computer industry. 

Good Outcomes Grow From Failure

Failure Is Good, Actually

No, this is not going to be some hustleporn screed about failing fast and learning from it. I am talking about actual failure, crashing and burning and flaming out and really really bad outcomes. Here's my point: when these bad things happen to the right people, they can be really good for the rest of us — and not just because we can enjoy the schadenfreude of terrible people messing up in public.

Here's how it works: a terrible person, let's call him Travis (for that is his name) spots an actual gap in the market: hailing taxis sucks, and when you can get one, they all mysteriously have broken credit card terminals. Travis therefore founds a company called, just for the sake of realism, Uber, and goes after that opportunity in the worst way imaginable.

Here's the thing: Travis and Uber weren't wrong about the opportunity, which is why Uber took off the way it did. Uber even had a very explicit strategy of weaponising the love users had for the service to put pressure on local governments to allow the service to launch in different locales. This strategy succeeded in both the short and the long term, but in very different ways.

In the early years, Uber was the latest poster child for the "move fast and break things" Silicon Valley tech bro attitude. Sure, Parisian taxi drivers rioted and set Uber cars on fire, and Italian taxi drivers managed to get UberX (known locally as Uber Pop — don't ask) banned, but in most places, Uber triumphed, mainly because the service was genuinely so much better than the status quo: you could summon a car right to your location, and when you arrived at your destination, you just got out and strolled off, no haggling or searching for the right currency.

So much for the short term. In the longer term, all that moving fast and breaking things caught up with Travis and his company, as VCs got tired of subsidising the true cost of Uber rides, making them far less competitive with actual licensed taxis. In the meantime, however, something interesting happened: the previously somnolent local taxi industries in every city suddenly woke up to this new existential threat. They had been used to being monopolies, able to set their own rules and control the number of entrants, and Uber (and Lyft, Grab, et al) upended that cozy status quo — but after some flailing, and some bonfiring of Uber cars, the incumbents addressed the threat in the best way possible: by going straight to the root of what customers had demonstrated they wanted.

Now, I can rock up in almost any decent-sized city in Europe, and with an app called Free Now, I can summon a car to my location, pay with a stored credit card, and hop out at my destination without worrying about currency conversion or losing a printed receipt. It sounds a lot like Uber, with a crucial distinction: the cars are locally-licensed taxis, subject to all the standard licensing checks.

Uber is still a going concern, to be clear, but it's struggling as its costs rise and the negative externalities come home to roost. The investment case for Uber was always based on them securing either a monopoly on the ride-hailing market, or alternatively a breakthrough in self-driving technology that would let them do away with their highest cost: the pesky human element, the actual drivers.

I think it's inarguable that this original investment case has not worked out, and a lot of the shine has come off Uber as the investor subsidy goes away and prices rise to reflect actual costs.

From Four Wheels To Two

Now, the same mechanisms are playing out in the dockless scooter — aka "micromobility" — market:

Today, a scooter rental ride hardly seems like a bargain. At typical rates, which include an upfront and per-minute fee, a 20-minute ride would cost about $6. That’s more than a quick bus or subway ride in places that offer those options.

Still, last-mile transportation remains a tricky niche to fill in urban networks, and scooters do have a place in the mix. We’re not done with them yet. Just don’t expect the days—or valuations—of the peak scooter era to return any time soon.

I have used these services, and broadly speaking, I'm a fan. But they are not worth bazillions of CURRENCY_UNITS, because these are obviously terrible markets for venture-scale returns: low barriers to entry, and operating costs that scale linearly with the size of the network.

As it happens, both of these issues can be addressed with some good old-fashioned regulation — the sort of thing that happens in maturing markets. Now that the public has expressed interest in these new options, each city can choose how the services should operate. In my small hometown, a single vendor has been approved, with a cap on the number of vehicles and on speed in the centre of town (GPS-enforced, natch). Crucially, the scooters are not just abandoned wherever, getting in people's way; they live in specific "parking lots" (repurposed car parking spots). Paris has taken a similar approach, requiring riders to photograph where they left their ride to ensure it's not placed somewhere it shouldn't be, and fining or barring riders who do not park correctly.
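
For what it's worth, the GPS enforcement part is conceptually simple. Here is a minimal sketch of how a speed cap inside a town-centre geofence might work; the coordinates, radius, and limits are all invented for illustration, and this is my guess at the logic, not any vendor's actual implementation.

```python
import math

# Hypothetical circular geofence around a town centre. Coordinates,
# radius, and speed limits are invented for illustration.
CENTRE = (45.0526, 9.6930)  # (lat, lon) of the town centre
RADIUS_M = 800              # geofence radius in metres
LIMIT_INSIDE_KMH = 6        # walking-pace cap inside the centre
LIMIT_OUTSIDE_KMH = 20      # normal cap everywhere else

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * math.asin(math.sqrt(h))

def speed_cap_kmh(position):
    """Speed limit the firmware should enforce at this GPS fix."""
    inside = haversine_m(CENTRE, position) <= RADIUS_M
    return LIMIT_INSIDE_KMH if inside else LIMIT_OUTSIDE_KMH

print(speed_cap_kmh((45.0530, 9.6935)))  # near the centre -> 6
print(speed_cap_kmh((45.0700, 9.7200)))  # out of town -> 20
```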

I just hope that we can reach the same result as with Uber — all of the good aspects of the service, without the horrible VC-inflated bits. I like that I can rock up in a strange city, pull out my phone, and within a minute or two be on an e-bike. It's not often practical to travel with my own bike, so these rental services have real potential.

Moses did not get to see the Promised Land. Uber and Lime are still with us, but with rather diminished ambitions. But as long as we get to that promised land of a fully-integrated and ubiquitous transport network, the creative destruction was worth it, and we travellers will be happy.


🖼️ Photos by Austin Distel and Hello I'm Nik on Unsplash

Systems of Operation

I have, to misquote J. R. R. Tolkien, a cordial dislike of overly rigid classification systems. The fewer the dimensions, the worse they tend to be. The classic two-by-two grid, so beloved of management consultants, is a frequent offender. I suspect I am not alone, as most such systems quickly get complicated by the addition of precise placement along each axis, devolving into far more granular coordinate systems on at least one plane, rather than the original four simple boxes. But surely the worst of the lot are simple binary choices, this or that, no gradations on the spectrum allowed.

We have perhaps more than our fair share of these divisions in tech — or perhaps it makes sense that we have more than other fields? (That's a joke, because binary) Anyway, one of the recurring binary splits is the one between development and operations. That it is obviously a false binary is clear by the fact that these days, the grey area at the intersection — DevOps — gets far more consideration than either extreme. And yet, as it is with metaphors and allegories (back to JRRT!), so it is with classifications: all of them are wrong, but some of them are useful.

The Dev/Ops dichotomy is a real one, no matter how blurred the intersection has got, because it is based in a larger division. People tend to prefer either the work of creation, architecting and building, or the work of maintaining, running and repairing. The first group get visibility and recognition, so certain personality traits cluster at this end of the spectrum — flashy and extrovert, dismissive of existing constraints. At the opposite end, we find people who value understanding a situation deeply, including how it came to be a certain way, and who act within it to achieve their goals.

I am trying to avoid value judgments, but I think it is already clear where my own sympathies lie. Someone I have worked with for a long time subscribes to Isaiah Berlin's analogy: the fox knows many things, but the hedgehog knows one big thing. I am an unashamed fox: I know a little about a lot, I love accumulating knowledge even if I do not have an immediate obvious use for it, and I never saw a classification system I did not immediately question and find the corner-cases of. These traits set me up to be a maintainer and an extender rather than a creator.

I value the work of maintenance; designing a new thing starting with a clean sheet is an indulgence, while working within the constraints of an existing situation and past choices to reach my objectives is a discipline that requires understanding both of my own goals and those of others who have worked on the same thing in the past. In particular, good maintainers extend their predecessors the grace of assuming good intent. Even if a particular choice seems counter-intuitive or sub-optimal, this attitude does the courtesy of assuming there was a good and valid reason for making it, or a constraint which prevented the more obvious choice.

Embrace Failure — But Not Too Tightly

There are many consequences to this attitude. One is embracing failure as an opportunity for learning. The best way to learn how something works is often to break it and then fix it — but please don't blame me if you break prod! Putting something back together is the best way to truly understand how different components fit one another and interact with one another in ways that may or may not be planned in the original design. It is also often a way of finding unexpected capabilities and new ways of assembling the same bits into something new. I did both back when I was a sysadmin — broke prod (only the once) and learned from fixing things that were broken.

Embracing failure also does not mean that we should allow it to happen; in fact the maintainer mindset assumes failure and values redundancy over efficiency or elegance of design. Healthy systems are redundant, both to tolerate failure and to enable maintenance. I had a car with a known failure mode, but unfortunately the fix was an engine-out job, making preventative maintenance uneconomical. The efficiency of the design choice to use plastic tubing and route it in a hot spot under the engine ultimately came back to bite me in the shape of a late-night call to roadside assistance and an eye-watering bill.

Hyperobjects In Time

There is one negative aspect to the maintainer mindset, beyond the lack of personal recognition (people get awards for the initial design, not for keeping it operating afterwards): lack of maintenance, or of the right sort of maintenance, is not immediately obvious, especially to hedgehog types. It is not the sort of one big thing that they tend to focus on. Instead, it is more of a hyperobject, visible only if you take a step back and add a time dimension. Don't clean the kitchen floor for a day, and it's probably fine. Leave it for a week, and it's nasty, and probably attracting pests. I know this from my own student days, when my flatmates explored the boundaries of entropy with enthusiasm.

Hyperobjects extend through additional dimensions beyond the usual three. In the same way that a cube is a three-dimensional object whose faces are two-dimensional squares, a hypercube or tesseract is a four-dimensional object whose faces are all three-dimensional cubes. This sort of thing can give you a headache to think about, but does make for cool screensaver visualisations. In this particular formulation, the fourth dimension is time; deferred maintenance is visible only by looking at its extent in time, while its projection into our everyday dimensions seems small and inconsequential when viewed in isolation.
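
Incidentally, the counting generalises neatly: an n-dimensional hypercube has C(n, k) * 2^(n-k) faces of dimension k. A quick sketch to check the tesseract's numbers:

```python
from math import comb

def k_faces(n: int, k: int) -> int:
    """Number of k-dimensional faces of an n-dimensional hypercube."""
    return comb(n, k) * 2 ** (n - k)

# The tesseract (n=4): 16 vertices, 32 edges, 24 squares, 8 cubes.
for k, name in enumerate(("vertices", "edges", "squares", "cubes")):
    print(f"{k_faces(4, k):2d} {name}")
```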

These sorts of hyperobjects are difficult for hedgehogs to reason about precisely because they do not fit neatly into their two-by-two grids and one big thing. They can even sneak up on foxes, because there is always something else going on, so the issues can remain undetected, hidden by other things, until some sort of failure mode is encountered. If that failure can be averted or at least minimised, maintainer foxes can learn something from it and modify the system so that it can be maintained more easily and the failure prevented from recurring.

All of these reflections are grounded in my day job. I own a large and expanding library of content, which is continuously aging and becoming obsolete, and must be constantly maintained to remain useful. Leave one document untouched for a month or so, and it's probably fine; the drift is minimal, a note here or there. Leave it for a year, and it's basically as much work to bring it back up to date as it would be to rewrite it entirely. It's easy to forget this factor in the constant rush of everyday work, so it's important to have systems to remind us of the true extent of problems left unaddressed.
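
Those reminder systems don't need to be sophisticated. Here is a minimal sketch of the kind of staleness report I mean; the directory name and thresholds are assumptions standing in for whatever your own content library looks like.

```python
from datetime import datetime, timedelta
from pathlib import Path

# Flag documents that haven't been touched in a while. The directory
# and thresholds are invented for illustration.
CONTENT_DIR = Path("content-library")
REVIEW_AFTER = timedelta(days=90)    # worth a review pass
REWRITE_AFTER = timedelta(days=365)  # cheaper to rewrite than to patch

now = datetime.now()
for doc in sorted(CONTENT_DIR.glob("**/*.md")):
    age = now - datetime.fromtimestamp(doc.stat().st_mtime)
    if age > REWRITE_AFTER:
        print(f"REWRITE {doc} (untouched for {age.days} days)")
    elif age > REVIEW_AFTER:
        print(f"REVIEW  {doc} (untouched for {age.days} days)")
```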

In my case, all of this rapidly-obsolescing content is research about competitors. This is also where the intellectual honesty comes in: it's important to recognise that the creators of competing technology may have had good reasons for making the choices they made, even when those choices result in trade-offs that seem obviously worse. In the same way, someone who adopted a different technology probably did so for reasons that were good and valid for their time and place, and dismissing those reasons as irrelevant will not help to persuade them to consider a change. This is known as "calling someone's baby ugly", and tends to provoke much the same negative emotional reaction as insulting someone's actual offspring.

Good competitive positioning is not about pitching the One True Way and explaining all the ways in which other approaches are Wrong. Instead, it's about trying to understand what the ultimate goal is or was for all of the other participants in the conversation, and engaging with those goals honestly. Of course I have an agenda; I'm not just going to surrender because someone made a choice years ago. But I can put my agenda into effect more easily by understanding how it fits with someone else's agenda, by working with the existing complicated system as it is, rather than trying to raze it to the ground and start again on a more perfect design, whatever the people who rely on the existing system might think.

I value the work of maintainers, the people who keep the lights on, at least as much as that of the initial designers. And I know that every maintainer is also a little bit of a designer, in the same way that every good designer is also thinking at least a little bit about maintenance. Maybe that is my One Big Thing?