Good Outcomes Grow From Failure

Failure Is Good, Actually

No, this is not going to be some hustleporn screed about failing fast and learning from it. I am talking about actual failure, crashing and burning and flaming out and really really bad outcomes. Here's my point: when these bad things happen to the right people, they can be really good for the rest of us — and not just because we can enjoy the schadenfreude of terrible people messing up in public.

Here's how it works: a terrible person, let's call him Travis (for that is his name), spots an actual gap in the market: hailing taxis sucks, and when you can get one, they all mysteriously have broken credit card terminals. Travis therefore founds a company called, just for the sake of realism, Uber, and goes after that opportunity in the worst way imaginable.

Here's the thing: Travis and Uber weren't wrong about the opportunity, which is why Uber took off the way it did. Uber even had a very explicit strategy of weaponising the love users had for the service to put pressure on local governments to allow the service to launch in different locales. This strategy succeeded in both the short and the long term, but in very different ways.

In the early years, Uber was the latest poster child for the "move fast and break things" Silicon Valley tech bro attitude. Sure, Parisian taxi drivers rioted and set Uber cars on fire, and Italian taxi drivers managed to get UberX (known locally as Uber Pop — don't ask) banned, but in most places, Uber triumphed, mainly because the service was genuinely so much better than the status quo: you could summon a car right to your location, and when you arrived at your destination, you just got out and strolled off, no haggling or searching for the right currency.

So much for the short term. In the longer term, all that moving fast and breaking things caught up with Travis and his company, as VCs got tired of subsidising the true cost of Uber rides, making them far less competitive with actual licensed taxis. However, in the meantime, something interesting happened: the previously somnolent local taxi industries in every city suddenly woke up to this new existential threat. They had been used to being monopolies, so they could set their own rules and control the number of entrants. Uber (and Lyft, Grab, et al) upended that cozy status quo — but after some flailing, and some bonfiring of Uber cars, they woke up to the threat, and addressed it in the best way: by going straight to the root of what customers had demonstrated they wanted.

Now, I can rock up in almost any decent-sized city in Europe, and with an app called Free Now, I can summon a car to my location, pay with a stored credit card, and hop out at my destination without worrying about currency conversion or losing a printed receipt. It sounds a lot like Uber, with a crucial distinction: the cars are locally-licensed taxis, subject to all the standard licensing checks.

Uber is still a going concern, to be clear, but it's struggling as its costs rise and the negative externalities come home to roost. The investment case for Uber was always based on them securing either a monopoly on the ride-hailing market, or alternatively a breakthrough in self-driving technology that would let them do away with their highest cost: the pesky human element, the actual drivers.

I think it's inarguable that this original investment case has not worked out, and a lot of the shine has come off Uber as the investor subsidy goes away and prices rise to reflect actual costs.

From Four Wheels To Two

Now, the same mechanisms are playing out in the dockless scooter — aka "micromobility" — market:

Today, a scooter rental ride hardly seems like a bargain. At typical rates, which include an upfront and per-minute fee, a 20-minute ride would cost about $6. That’s more than a quick bus or subway ride in places that offer those options.

Still, last-mile transportation remains a tricky niche to fill in urban networks, and scooters do have a place in the mix. We’re not done with them yet. Just don’t expect the days—or valuations—of the peak scooter era to return any time soon.
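The quoted arithmetic checks out; here is a minimal sketch (the split between unlock fee and per-minute rate is my assumption, since the article only gives the ballpark total):

```python
def scooter_fare(minutes, unlock=1.00, per_minute=0.25):
    """Total cost of a dockless scooter ride: an upfront unlock
    fee plus a per-minute rate (illustrative figures only)."""
    return unlock + minutes * per_minute

print(scooter_fare(20))  # a 20-minute ride comes to 6.0 dollars
```

At those rates, even a short hop costs more than most bus fares, which is the article's point.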

I have used these services, and broadly speaking, I'm a fan. They are just not worth bazillions of CURRENCY_UNITS, because these are obviously terrible markets to be in: low barriers to entry, and operating costs that scale linearly with network size.

As it happens, both of these issues can be addressed with some good old-fashioned regulation — the sort of thing that happens in maturing markets. Now that the public has expressed interest in these new options, each city can choose how the services should operate. In my small hometown, a single vendor has been approved, with a cap on the number of vehicles and on speed in the centre of town (GPS-enforced, natch). Crucially, the scooters are not just abandoned wherever, getting in people's way; they live in specific "parking lots" (repurposed car parking spots). Paris has taken a similar approach, requiring riders to photograph where they left their ride to ensure it's not placed somewhere it shouldn't be, and fining or barring riders who do not park correctly.

I just hope that we can reach the same result as with Uber — all of the good aspects of the service, without the horrible VC-inflated bits. I like that I can rock up in a strange city, pull out my phone, and within a minute or two be on an e-bike. It's not often practical to travel with my own bike, so these rental services have real potential.

Travis, like Moses, did not get to see the Promised Land. Uber and Lime are still with us, but with rather diminished ambitions. But as long as we get to that promised land of a fully-integrated and ubiquitous transport network, the creative destruction was worth it, and we travellers will be happy.


🖼️ Photos by Austin Distel and Hello I'm Nik on Unsplash

Systems of Operation

I have, to misquote J. R. R. Tolkien, a cordial dislike of overly rigid classification systems. The fewer the dimensions, the worse they tend to be. The classic two-by-two grid, so beloved of management consultants, is a frequent offender. I suspect I am not alone, as most such systems quickly get complicated by the addition of precise placement along each axis, devolving into far more granular coordinate systems on at least one plane, rather than the original four simple boxes. But surely the worst of the lot are simple binary choices, this or that, no gradations on the spectrum allowed.

We have perhaps more than our fair share of these divisions in tech — or perhaps it makes sense that we have more than other fields? (That's a joke, because binary.) Anyway, one of the recurring binary splits is the one between development and operations. That it is obviously a false binary is clear from the fact that these days, the grey area at the intersection — DevOps — gets far more consideration than either extreme. And yet, as it is with metaphors and allegories (back to JRRT!), so it is with classifications: all of them are wrong, but some of them are useful.

The Dev/Ops dichotomy is a real one, no matter how blurred the intersection has got, because it is rooted in a larger division. People tend to prefer either the work of creation, architecting and building, or the work of maintaining, running and repairing. The first group get visibility and recognition, so certain personality traits cluster at that end of the spectrum — flashy and extrovert, dismissive of existing constraints. At the opposite end, we find people who value understanding a situation deeply, including how it came to be a certain way, and who act within it to achieve their goals.

I am trying to avoid value judgments, but I think it is already clear where my own sympathies lie. Someone I have worked with for a long time subscribes to Isaiah Berlin's analogy: the fox knows many things, but the hedgehog knows one big thing. I am an unashamed fox: I know a little about a lot, I love accumulating knowledge even if I do not have an immediate obvious use for it, and I never saw a classification system I did not immediately question and find the corner-cases of. These traits set me up to be a maintainer and an extender rather than a creator.

I value the work of maintenance; designing a new thing starting with a clean sheet is an indulgence, while working within the constraints of an existing situation and past choices to reach my objectives is a discipline that requires understanding both of my own goals and those of others who have worked on the same thing in the past. In particular, good maintainers extend their predecessors the grace of assuming good intent. Even if a particular choice seems counter-intuitive or sub-optimal, this attitude does the courtesy of assuming there was a good and valid reason for making it, or a constraint which prevented the more obvious choice.

Embrace Failure — But Not Too Tightly

There are many consequences to this attitude. One is embracing failure as an opportunity for learning. The best way to learn how something works is often to break it and then fix it — but please don't blame me if you break prod! Putting something back together is the best way to truly understand how the different components fit together and interact, in ways that may or may not have been planned in the original design. It is also often a way of finding unexpected capabilities and new ways of assembling the same bits into something new. I did both back when I was a sysadmin — broke prod (only the once) and learned from fixing things that were broken.

Embracing failure does not mean inviting it, though; in fact, the maintainer mindset assumes failure, and values redundancy over efficiency or elegance of design. Healthy systems are redundant, both to tolerate failure and to enable maintenance. I had a car with a known failure mode, but unfortunately the fix was an engine-out job, making preventative maintenance uneconomical. The efficiency of the design choice to use plastic tubing and route it in a hot spot under the engine ultimately came back to bite me, in the shape of a late-night call to roadside assistance and an eye-watering bill.

Hyperobjects In Time

There is one negative aspect to the maintainer mindset, beyond the lack of personal recognition (people get awards for the initial design, not for keeping it operating afterwards). Lack of maintenance (or of the right sort of maintenance) is not immediately obvious, especially to hedgehog types. It is not the sort of one big thing that they tend to focus on. Instead, it is more of a hyperobject, visible only if you take a step back and add a time dimension. Don't clean the kitchen floor for a day, and it's probably fine. Leave it for a week, and it's nasty, and probably attracting pests. I know this from my own student days, when my flatmates explored the boundaries of entropy with enthusiasm.

Hyperobjects extend through additional dimensions beyond the usual three. In the same way that a cube is a three-dimensional object whose faces are two-dimensional squares, a hypercube or tesseract is a four-dimensional object whose faces are all three-dimensional cubes. This sort of thing can give you a headache to think about, but does make for cool screensaver visualisations. In this particular formulation, the fourth dimension is time; deferred maintenance is visible only by looking at its extent in time, while its projection into our everyday dimensions seems small and inconsequential when viewed in isolation.
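The cube-to-tesseract relationship follows one neat counting rule, which can be sketched in a few lines of Python (this is my own illustration, using standard combinatorics rather than anything from the text):

```python
from math import comb

def faces(n, k):
    """Number of k-dimensional faces of an n-dimensional hypercube.

    Each k-face is picked by choosing which k axes it extends along
    (comb(n, k) ways) and pinning each remaining axis to one end or
    the other (2 ** (n - k) ways).
    """
    return comb(n, k) * 2 ** (n - k)

print(faces(3, 2))  # a cube has 6 square faces
print(faces(4, 3))  # a tesseract has 8 cubic cells
print(faces(4, 0))  # ...and 16 vertices
```

The same rule extends to any number of dimensions, which is exactly what makes these objects hard to picture from inside our usual three.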

These sorts of hyperobjects are difficult for hedgehogs to reason about precisely because they do not fit neatly into their two-by-two grids and one big thing. They can even sneak up on foxes because there is always something else going on, so the issues can remain undetected, hidden by other things, until some sort of failure mode is encountered. If that failure can be averted or at least minimised, maintainer foxes can learn something from it and modify the system so that it can be maintained more easily and avoid the failure recurring.

All of these reflections are grounded in my day job. I own a large and expanding library of content, which is continuously aging and becoming obsolete, and must be constantly maintained to remain useful. Leave one document untouched for a month or so, and it's probably fine; the drift is minimal, a note here or there. Leave it for a year, and it's basically as much work to bring it back up to date as it would be to rewrite it entirely. It's easy to forget this factor in the constant rush of everyday work, so it's important to have systems to remind us of the true extent of problems left unaddressed.

In my case, all of this rapidly-obsolescing content is research about competitors. This is also where the intellectual honesty comes in: it's important to recognise that creators of competing technology may have had good reasons for making the choices they made, even when they result in trade-offs that seem obviously worse. In the same way, someone who adopted a different technology probably did so for reasons that were good and valid for their time and place, and dismissing those reasons as irrelevant will not help to persuade them to consider a change. This is known as "calling someone's baby ugly", and tends to provoke much the same negative emotional reactions as insulting someone's actual offspring.

Good competitive positioning is not about pitching the One True Way and explaining all the ways in which other approaches are Wrong. Instead, it's about trying to understand what the ultimate goal is or was for all of the other participants in the conversation, and engaging with those goals honestly. Of course I have an agenda; I'm not just going to surrender because someone made a choice years ago. But I can put my agenda into effect more easily by understanding how it fits with someone else's agenda — by working with the existing complicated system as it is, rather than trying to raze it to the ground and start again to build a more perfect design, whatever the people who rely on the existing system might think.

I value the work of maintainers, the people who keep the lights on, at least as much as that of the initial designers. And I know that every maintainer is also a little bit of a designer, in the same way that every good designer is also thinking at least a little bit about maintenance. Maybe that is my One Big Thing?

From Provincial Italy To London — And Back Again

More reflections on remote work

Well, I'm back to travelling, and in a pretty big way — as in, I've already reached the point of having to back out of one trip because I was getting overloaded! I've been on the road for the past couple of weeks, in London and New York, and in fact I will be back in New York in a month.

It has honestly been great to see people, and so productive too. Even though I was mostly meeting the same people I speak to week in, week out via Zoom, it was different to all be in the same room together. This was also the first time I was able to get my whole team together since its inception: I hired everyone remotely, and while I have managed to meet up with each of them individually, none of the people on the team had actually met each other in person… We had an amazingly productive whiteboarding session, where we knocked out some planning in a couple of hours that might otherwise have taken weeks, and probably justified a chunk of the cost of the trip on its own.

This dynamic also showed up in an interesting study in Nature, entitled Virtual communication curbs creative idea generation. The study shows that remote meetings are better for some things and worse for others. Basically, if the meeting has a fixed agenda and clear outcomes, a remote meeting is a more efficient way of banging through those items. However, when it comes to ideation and creativity, in-person meetings are better than remote ones.

As with all the best studies, this result tallies with my experience and reinforces my prejudices. I have been remote for a long time, way before the recent unpleasantness, but I always combined remote work with regular in-person catch-up meetings. You do the ideation and planning when you can all gather together around the whiteboard — not to mention reinforcing personal ties by gathering around a table in a restaurant or a bar! Then that planning and those personal ties take you through the rest of the quarter, with regular check-ins for tactical day-to-day actions to implement the strategic goals decided at the in-person meeting.

Leaving London

Something else that was interesting about my recent trips was meeting a whole lot of people who were curious about my living situation in Italy — how I came to be there, and what it was like to work a global role from provincial Italy, rather than from one of the usual global nerve centres. Telling the story in New York, coming fresh from my trip to London, led me to reflect on why I left London and whether it was the right call (spoiler: it totally was).

The London connection also showed up in a pair of articles by Marie Le Conte, who recently spent a couple of months in Venice before returning to London. It has been long enough since I left London that I no longer worry about whether prices in my favourite haunts will be different, but whether any of them are still there or still recognisable — and sadly, most of them are not. But then again, this is London we are talking about, so I have new favourites, and find a new one almost every trip.

Leaving London was a wrench: it was the first place I lived after university, and I enjoyed it to the hilt. Of course I had to share a flat, and I drove ancient unreliable cars1. But we were out and about all the time, in bars and theatres, eating out and meeting up and just enjoying the place.

However, over the following years most of my London friends moved away in turn, either leaving the UK outright or moving out to the commuter belt. The latter choice never quite made sense to me: why live somewhere nearly as expensive as London (especially when you factor in the cost of that commute), which offers none of the benefits of being in actual London, and still has awful traffic and so on? But as my friends started to settle down and raise families, they could no longer afford London prices. Those prices get especially hard to justify once you can no longer balance them out by enjoying everything London has to offer — because you're at home with the kids, who need to be near a decent school, and to get back and forth from sports and activities, and so on and so forth.

My friends and I experienced the same London in our twenties that Marie Le Conte did: it didn't matter if you "rent half a shoebox in a block of flats where nothing really worked", because "there was always something to do". But if you're not out doing all the things, and you need more than half a shoebox to put kids in, London requires a serious financial commitment for not much return.

But why commute to the office at all?

Even before the pandemic, remote work allowed many of us to square that circle. We could live in places that were congenial to us, way outside commuting range of any office we might nominally be attached to, but travel regularly for those all-important ideation sessions that guided and drove the regular day-to-day work.

The pandemic has opened the eyes of many more people and companies to the possibilities of remote work. Airbnb notably committed to a full remote-work approach, which of course makes particular sense for Airbnb, especially the bit about "flexibility to live and work in 170 countries for up to 90 days a year in each location". I admit they are an extreme case, but other companies have an opportunity to implement the parts of that model that make sense for them.

Certain functions benefit from being in the office all the time, so they require permanent space. This means both individual desks and meeting rooms. Meanwhile, remote workers will need to come in regularly, but when they do, they will have different needs. They will absolutely require meeting rooms, and large, well-equipped ones at that, and those are on top of whatever the baseline needs are for the in-office teams. On the other hand, the out-of-towners will spend most of their time in meetings (or, frankly, out socialising), and so they do not need huge numbers of hot desks — just a few for catching up with emails in gaps between meetings.

If you rotate the in-office meetings so you don't have the place bursting at the seams one week and empty the rest of the time, this starts to look like a rather different office setup from what most companies have now. You can even start thinking in cloud-computing analogies: no longer provisioning office space for peak utilisation, but instead spreading work to take advantage of unused capacity, and maybe bursting by renting external capacity as needed (WeWork2 et al).
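To make the capacity-planning analogy concrete, here's a toy sketch with entirely made-up team sizes, comparing provisioning for everyone at once against a rotating schedule:

```python
# Hypothetical headcounts for remote teams that each visit one week a month.
teams = {"platform": 20, "product": 18, "sales": 15, "marketing": 12}

# Everyone in the same week: you must provision for the sum of all teams.
peak_demand = sum(teams.values())

# One team per week on a rotating schedule: provision for the largest team.
rotated_demand = max(teams.values())

print(peak_demand)     # 65 desks, sitting mostly empty the rest of the month
print(rotated_demand)  # 20 desks in steady use all month
```

The gap between the two numbers is the space (and rent) you stop wasting, or the burst capacity you rent externally only when you genuinely need it.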

If you go further down the Airbnb route and go fully remote, you might even start thinking more about where you put that office. Does it need to be in a downtown office core, or can it be in a more fun part of town — or in a different city entirely? Maybe it can even be in a resort-type location, as long as it has good transport links. Hey, a guy can dream…

But in the mean time, remote work unlocks the ability for many more people to make better choices about where to live. Raising a family is hard enough; doing it when both parents work is basically impossible without a strong local support network. Maybe the model should be something like the Amish Rumspringa, where young Amish go spend time out in the world before going back home and committing to the Amish way of life. Enjoy your twenties in the big city, get started on your career with the sort of hands-on guidance that is hard to get remotely, and then move back home near parents and friends when it's time to settle down, switching to remote working models — with careful scheduling to avoid both parents being away at once.

Once you start looking at it like that, provincial Italy is hard to beat. Quality of life is top-notch, with the sort of lifestyle that would require an extra zero on the salary in London or NYC. If you combine that with regular visits to the big cities, it's honestly pretty great.


🖼️ Photos by Kaleidico and Jason Goodman on Unsplash; London photograph author’s own (the view from my hotel room on my most recent London trip).


  1. I only had a car in the first place because I commuted out of London, to a place not well-served by trains; I never drove into central London if I could avoid it, even before the congestion charge was introduced. 

  2. Just because WeWork is a terrible company doesn't mean that the fundamental idea is wrong. See also Uber: while Uber-the-company is obviously unsustainable and has a number of terrible side-effects, it has forced into existence a ride-hailing market that almost certainly would not exist absent Uber. Free Now gives me an Uber-like experience (summon a car from my phone in most cities, pay with a stored card), but using regular licensed taxis and without the horrible exploitative Uber model. 

Old Views For Today's News

Here's a blog post I wrote back in 2015 for my then-employer that I was reminded of while recording the latest episode of the Roll For Enterprise podcast. Since the original post no longer seems to be available via the BMC web site, I assume they won't mind me reposting it here, with some updated commentary.

xkcd, CIA

There has been a certain amount of excitement in the news media, as someone purportedly associated with ISIL has taken over and defaced US Central Command's Twitter account. The juxtaposition with recent US government pronouncements on "cyber security" (ack) is obvious: Central Command’s Twitter Account Hacked…As Obama Speaks on Cybersecurity.

The problem here is the usual confusion around IT in general, and IT security in particular. See for instance CNN:

The Twitter account for U.S. Central Command was suspended Monday after it was hacked by ISIS sympathizers -- but no classified information was obtained and no military networks were compromised, defense officials said.

To an IT professional, even without specific security background, this is kind of obvious.

Penny Arcade, Brains With Urgent Appointments

However, there is a real problem here: IT professionals have a blind spot of their own, in that they don't tend to think of things like Twitter accounts when they are securing IT infrastructure. This oversight can expose organisations to serious problems.

One way this can happen is credential re-use, and credential leaking in general. Well-run organisations will use secure password-sharing services such as LastPass, but without IT guidance, teams might instead opt for storing credentials in a spreadsheet, as we now know happened at Sony. If someone got their hands on even one set of credentials, what other services might they be able to unlock?

The wider issue is the notion of perimeter defence. IT security to date has been all about securing the perimeter - firewalls, DMZs, NAT, and so on. Today, though, what is the perimeter? End-user services like Dropbox, iCloud, or Google Docs, as well as multi-tier enterprise applications, span back and forth across the firewall, with data stored and code executed both locally and remotely.

I don't mean to pick on Sony in particular - they are just the most recent victims - but their experience has shown once and for all that focusing only on the perimeter is no longer sufficient. The walls are porous enough that it is no longer possible to assume that bad guys are only outside. Systems and procedures are needed to detect anomalous activity inside the network, and once that occurs, to handle it rapidly and effectively.

This cannot happen if IT is still operating as "the department of NO", reflexively refusing user requests out of fear of potential consequences. If the IT department tries to ban everything, users will figure out a way to go around the restrictions to achieve their goals. The danger then is that they make choices which put the entire organisation and even its customers at risk. Instead, IT needs to engage with those users and find creative, novel ways to deliver on their requirements without compromising on their mandate to protect the organisation.

While corporate IT cannot be held responsible for the security of services such as Twitter, they can and should advise social-media teams and end-users in general on how to protect all of their services, inside and outside the perimeter.

There are still a lot of areas where IT is focused on perimeter defence. Adopting Okta or another SSO service is not a panacea; you still need to consider what would happen when (not if) someone gets inside the first layer of defence. How would you detect them? How would you stop them?

The Okta breach has also helpfully provided an example of another important factor in security breaches: comms. Okta's comms discipline has not been great, reacting late, making broad denials that they later had to walk back, and generally adding to the confusion rather than reducing it. Legislation is being written around the world (with the EU as usual taking the lead) to mandate disclosure in situations like these, which may focus minds — but really, if you're not sufficiently embarrassed as a security provider that a bunch of teenagers were apparently running around your network for at least two weeks without you detecting them, you deserve all the fines you're going to get.

These are no longer purely tech problems. Once you get messy humans in the mix, the conversation changes from "how many bits of entropy does the encryption algorithm need" to "what is the correct trade-off between letting people get their jobs done and ensuring a reasonable level of security, given our particular threat model". Working with humans means communicating with them, so you’d better have a plan ready to go for what to say in a given situation. Hint: blanket denials early on are generally a bad idea, leaving hostages to fortune unnecessarily.

Have a plan ready to go for what you will say in a given situation (including what you may be legally mandated to disclose, and on what timeframe), and avoid losing your customers’ trust. Believe me, that’s one sort of zero trust that you don’t want!

Kids

Make no mistake: having kids is messy, stressful, and expensive. You should absolutely not have kids if you like having free time, disposable income, or any say in what to watch on TV. But there are also those moments when you walk into a room and you are greeted by an excitable small human who was unable to roll over an eyeblink ago, but now is gabbling on about the amazing castle they built with their wooden blocks, and who lives behind this door or in that tower, and what they will do next, and it all seems worth it. Well, at least until it's time to clear up…

Help, I'm Being Personalised!

As the token European among the Roll For Enterprise hosts, I'm the one who is always raising the topic of privacy. My interest in privacy is partly scar tissue from an early career as a sysadmin, when I saw just how much information is easily available to the people who run the networks and systems we rely on, without them even being particularly nosy.

Because of that history, I am always instantly suspicious of talk of "personalising the customer experience", even if we make the charitable assumption that the reality of this profiling is more than just raising prices until enough people balk. I know that the data is unquestionably out there; my doubts are about the motivations of the people analysing it, and about their competence to do so correctly.

Let's take a step back to explain what I mean. I used to be a big fan of Amazon's various recommendations, for products often bought with the product you are looking at, or by the people who looked at the same product. Back in the antediluvian days when Amazon was mainly about (physical) books, I discovered many a new book or author through these mechanisms.

One of my favourite aspects of Amazon's recommendation engine was that it didn't try to do it all. If I bought a book for my then-girlfriend, who had (and indeed still has, although she is now my wife) rather different tastes from me, this would throw the recommendations all out of whack. However, the system was transparent and user-serviceable. Amazon would show me why it had recommended Book X, usually because I had purchased Book Y. Beyond showing me, it would also let me go back into my purchase history and tell it not to use Book Y for recommendations (because it was not actually bought for me), thereby restoring balance to my feed. This made us both happy: I got higher-quality recommendations, and Amazon got a more accurate profile of me that it could use to sell me more books — something it did very successfully.
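As an illustration of why that exclusion mattered, here is a toy sketch of a co-purchase recommender (the data, titles, and function names are all mine, not Amazon's): excluding a purchase from your profile simply removes its contribution to the scores.

```python
from collections import Counter

# Toy co-purchase data: "people who bought X also bought..." with counts.
co_purchases = {
    "space_opera_1": Counter({"space_opera_2": 9, "space_opera_3": 7}),
    "cosy_mystery_1": Counter({"cosy_mystery_2": 8, "cosy_mystery_3": 6}),
}

def recommend(history, excluded=()):
    """Rank unseen books by how often they were co-bought with the books
    in `history`, skipping any purchase the user has excluded
    (e.g. a gift bought for someone else)."""
    scores = Counter()
    for book in history:
        if book in excluded:
            continue  # this purchase no longer influences the profile
        scores.update(co_purchases.get(book, Counter()))
    for book in history:
        scores.pop(book, None)  # never recommend what's already owned
    return [book for book, _ in scores.most_common()]

history = ["space_opera_1", "cosy_mystery_1"]  # the mystery was a gift
print(recommend(history))                               # mixed recommendations
print(recommend(history, excluded={"cosy_mystery_1"}))  # space opera only
```

One flag on one purchase, and the feed snaps back into shape; that is the kind of user-serviceable control the rest of this piece is mourning.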

Forget doing anything like that nowadays! If you watch Netflix on more than one device, especially if you ever watch anything offline, you'll have hit that situation where you've watched something but Netflix doesn't realise it or won't admit it. And can you mark it as watched, like we used to do with local files? (insert hollow laughter here) No, you'll have that "unwatched" episode cluttering up your "Up next" queue forever.

This is an example of the sort of behaviour that John Siracusa decried in his recent blog post, Streaming App Sentiments, which gathers people's reactions to his earlier unsolicited streaming app spec and to these sorts of "helpful" features.

People don’t feel like they are in control of their "data," such as it is. The apps make bad guesses or forget things they should remember, and the user has no way to correct them.

We see the same problem with Twitter's plans for ever greater personalisation. Twitter defaulted to an algorithmic timeline a long time ago, justifying the switch away from a simple chronological feed with the entirely true fact that there was too much volume for anyone to be a Twitter completist any more, so bringing popular tweets to the surface was actually a better experience for people. To repeat myself, this is all true; the problem is that Twitter did not give users any input into the process. Also, sometimes I actually do want to take the temperature of the Twitter hive mind right now, in this moment, without random twenty-hour-old tweets popping up out of sequence. The obvious solution of giving users actual choice was of course rejected out of hand, forcing Twitter into ever more ridiculous gyrations.

The latest turn is that for a brief shining moment they got it mostly right, but then, hilariously and ironically, completely misinterpreted user feedback and reversed course. So much for learning from the data… Twitter briefly gave users the option of adding a "Latest Tweets" tab with a chronological listing alongside the algorithmic default "Home" tab. Of course such an obviously sensible solution could not last, because unless you used lists, the tabbed interface was new and (apparently) confusing. Another update therefore followed hard on the heels of the good one, forcing users to choose between "Latest Tweets" and "Home" instead of simply being able to have both options one tap apart.

Here's what it boils down to: to build one of these "personalisation" systems, you have to believe one of two things (okay, or maybe some combination):

  • You can deliver a better experience than (most) users can achieve for themselves
  • Controlling your users' experience benefits you in some way that is sufficiently important to outweigh the aggravation they might experience

The first is simply not true. It is true that it is important to deliver a high-quality default that works well for most users, and I am not opposed in principle to that default being algorithmically generated. Back when, Twitter used to have a "While you were away" section which would show you the most relevant tweets since you last checked the app. I found it a very valuable feature — except for the fact that I could not access it at will. It would appear at random in my timeline, or then again, perhaps not. There was no way to trigger it manually, nor any place where it would appear reliably and predictably. You just had to hope — and then, instead of making it easier to access on demand, Twitter killed the entire feature in an update. The algorithmic default was promising, but it needed just a bit more control to make it actually good.
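A feature like that is not hard to expose on demand. Here's a minimal Python sketch of a "While you were away" digest: filter to tweets since the last visit, rank by a simple engagement score, return the top few. The field names and the scoring formula are my illustrative guesses, not Twitter's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    posted_at: float   # seconds since epoch
    likes: int
    retweets: int

def while_you_were_away(tweets, last_seen_at, top_n=3):
    """Digest of the most relevant tweets since the user's last visit."""
    missed = [t for t in tweets if t.posted_at > last_seen_at]
    # Naive relevance score; a real system would use far richer signals.
    missed.sort(key=lambda t: t.likes + 2 * t.retweets, reverse=True)
    return missed[:top_n]

timeline = [
    Tweet("old news", posted_at=100, likes=500, retweets=200),
    Tweet("big thread", posted_at=900, likes=40, retweets=30),
    Tweet("quiet post", posted_at=950, likes=2, retweets=0),
]
digest = while_you_were_away(timeline, last_seen_at=800)
print([t.text for t in digest])  # popular-but-old tweets stay excluded
```

The scoring is not the point — any ranking team could do far better. The point is that the entry point could be a button the user taps whenever they want, rather than a section that appears at random.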

This leads us directly to the second problem: why not show the "While you were away" section on demand? Why would Netflix not give me an easy way to resume watching what I was watching before? They don't say, but the assumption is that the operators of these services have metrics showing higher engagement with their apps when they deny users control. Presumably what they fear is that, if users can just go straight to the tweets they missed or the show they were watching, they will not spend as much time exploring the app, discovering other tweets or videos that they might enjoy.

What is forgotten is that "engagement" just happens to be one metric that is easy to measure — but the ease of measurement does not necessarily make it the most important dimension, especially in isolation. If that engagement is me scrolling irritably around Twitter or Netflix, getting increasingly frustrated because I can't find what I want, my opinion of those platforms is actually becoming more corroded with every additional second of engagement.

There is a common unstated assumption behind both of the factors above, which is that whatever system is driving the personalisation is perfect: unbreakable in its functioning, and without corner cases that deliver sub-optimal results even when the algorithm is working as designed. One of the problems with black-box systems is that when (not if!) they break, users have no way to understand why they broke, nor to prevent them breaking again in the future. If the Twitter algorithm keeps recommending something to me, I can (for now) still go into my settings, find the list of interests that Twitter has somehow assembled for me, and delete entries until I get back to more sensible recommendations. With Netflix, there is no way for me to tell it to stop recommending something — presumably because they have determined that a sufficient proportion of their users will be worn down over time into, I don't know, whatever the end goal is: watching Netflix original content instead of something they have to pay to license from outside.

All of this comes back to my oft-repeated point about privacy: what is it that I am giving up my personal data in exchange for, exactly? The promise is that all these systems will deliver content (and ads)(really it's the ads) that are relevant to my interests. Defenders of the model will point out that profiling as a concept is hardly new. The reason you find different ads in Top Gear Magazine, in Home & Garden, and in Monocle, is that the profile for the readership is different. But the results speak for themselves: when I read Monocle, I find the ads relevant, and (given only the budget) I would like to buy the products featured. The sort of ads that follow me around online, despite a wealth of profile information generated at every click, correlated across the entire internet, and going back *mumble* years or more, are utterly, risibly, incomprehensibly irrelevant. Why? Some combination of that "we know better" attitude, algorithmic profiling systems delivering less than perfect results, and of course, good old fraud in the adtech ecosystem.

So why are we doing this, exactly?

It comes back to the same issue as with engagement: because something is easy to measure and chart, it will have goals set against it. Our lives online generate stupendous volumes of data; it seems incredible that the profiles created from those megabytes if not gigabytes of tracking data have worse results than the single-bit signal of "is reading the Financial Times". There is also the ever-present spectre of "I know half of my ad spending is wasted, I just don't know which half". Online advertising with its built-in surveillance mechanisms holds out the promise of perfect attribution, of knowing precisely which ad it was that caused the customer to buy.

And yet, here we are. Now, legislators in the EU, in China, and elsewhere around the world are taking issue with these systems, and either banning them outright or demanding they be made transparent in their operation. Me, I'm hoping for the control that Amazon used to give me. My dream is to be able to tell YouTube that I have no interest in crypto, and then never see a crypto ad again. Here, advertisers, I'll give you a freebie: I'm in the market for some nice winter socks. Show me some ads for those sometime, and I might even buy yours. Or, if you keep pushing stuff in my face that I don't want, I'll go read a (paper) book instead. See what that does for engagement.


🖼️ Photos by Hyoshin Choi and Susan Q Yin on Unsplash

App Stores & Missing Perspectives

In Apple-watching circles, there has long been some significant frustration about Apple's App Store policies. Whether it's the opaque approvals process, the swingeing 30% cut that Apple takes out of any purchase, or the restrictions on what types of apps and pricing models are even allowed, developers are not happy.

It was not always this way: when the iPhone first launched, there was no App Store. Everything was supposed to be done with web apps. Developers being developers, people quickly worked out how to "jailbreak" their iPhones to install their own apps, and a thriving unofficial marketplace for apps sprang up. Apple, seeing this development taking place out of their control, relented and launched an official App Store. The benefit of the App Store was that it would do everything for developers: hosting, payment processing, a searchable catalogue, everything. Remember, the App Store launched in 2008, when all of that was quite a bit harder than it is today, and would have required developers to make up-front investments before even knowing whether their apps would take off — without even thinking about free apps.

With the addition of in-app purchase (IAP) the next year, and subscriptions a couple of years after that, most of the ingredients were in place for the App Store as we know it today. The App Store was a massive success, trumpeted by Apple at every opportunity. In January, Apple said that it paid developers $60 billion in 2021, and $260 billion since the App Store launched in 2008. Apple also reduced its cut from 30% to 15%, initially for the second year of subscriptions, but later for any developer making less than $1M per year in the App Store.

What's Not To Like?

This all sounds very fine, but developers are up in arms over Apple's perceived high-handed or even downright rapacious behaviour when it comes to the App Store. Particular sticking points are requirements that apps in the App Store use only Apple's payment system, and that IAP be used for any digital experience offered to groups of people. The first requirement touched off a lawsuit from Epic, who basically wanted to have their own private store for in-game purchases, and the second resulted in some bad press early in the pandemic when Apple started chasing fitness instructors who were providing remote classes while they were unable to offer face-to-face sessions.

The bottom line is that many of these transactions simply do not have a 30% margin in the first place, let alone the ability to still make any profit after giving Apple a 30% (or even a 15%) cut. This might seem to be a problem for developers, but not really for anyone else — but what gave this issue resonance beyond the narrow market of iOS developers is that the world has moved on since 2008.

Hosting an app and setting up payment for it is easy and cheap these days, thanks to the likes of AWS and Stripe. Meanwhile, App Store review is capricious, while also allowing through all sorts of scams, generally based on subscriptions — what is becoming known as fleeceware.

The long and the short of it is that public opinion has shifted against Apple, with proceedings not just in the US, but in Korea, Japan, and the Netherlands too. Apple are being, well, Apple, and refusing to budge except in the most minor and grudging ways.

Here is my concern, though: this situation is being looked at as a simple conflict between Apple and developers. In all the brouhaha, nobody ever mentions another very important perspective: what do users want?

Won't Somebody Think Of The Users?

Developers rightly point out that the $260B that Apple trumpeted having paid them was money generated by their apps, not Apple's generosity, and that a big part of the reason users buy Apple's devices is the apps in the App Store. However, that money was originally paid by users, and we also have opinions about how the App Store should work for our needs and purposes.

First of all, I want all of the things that developers hate. I want Apple's App Store to be the only way of getting apps on iPhones, I want all subscriptions to be in the App Store, and I want Apple's IAP to be the only payment method. These are the factors that make users confident in downloading apps in the first place! Back when I had a Windows machine, it was just accepted that every twelve months or so, you'd have to blow away your operating system and reinstall it from scratch. Even if you were careful and avoided outright malware, bloat and cruft would take over and slow everything to a crawl — and good luck ever removing anything. Imagine a garden that you weed with a flamethrower.

The moment Apple relaxed any of the restrictions on app installation and payment, shady developers would stampede through — led by Epic and Facebook, who both have form when it comes to dodgy sideloading. It doesn't matter what sort of warnings Apple put into iOS; if that were to become how people get their Fortnite or their WhatsApp, they would tap through any number of dialogues without reading them, just as fast as they can tap. And once that happens, all bets are off. Subscriptions to Epic's games or to whatever dodgy thing on Facebook's platform would not be visible in users' App Store profiles, making it all too easy for money to be drained out, through forgetfulness and invisibility if not outright scams.

Other Examples: The Mac

People sometimes bring up the topic of the Mac App Store, which operates along the same notional lines as the iOS (and iPadOS) App Store, but without the same problems. The Mac App Store is actually a great example, but not for the reasons its proponents think. On the Mac, side-loading — deploying apps without going through the Mac App Store — is very much a thing, and in fact it is a much bigger delivery channel than the Mac App Store itself. The problem is that it is also correspondingly harder to figure out what is running on a Mac, or to remove every trace of an app that the user no longer wants. It's nowhere near as bad as Windows, to be clear, but it's also not as clean-cut as iOS, where deleting an app's icon means that app is gone, no question about it.

On the Mac, technical users have all sorts of tools to manage this situation, and that extra flexibility also has many other benefits, making the Mac a much more capable platform than iOS (and iPadOS — sigh). But many more people own iPhones and iPads than own Macs, and they are comfortable using those devices precisely because of the sandboxed1 nature of the experience. My own mother, who used to invite me to lunch and then casually mention that she had a couple of things she needed me to do on the computer, is fully independent on her iPad, down to and including updates to the operating system. This is because the lack of accessible complexity gives her confidence that she can't mess something up by accident.

More Examples: Google

Over the pandemic, I have had the experience of comparing Google's and Apple's family controls, as my kids have required their own devices for the first time for remote schooling. We have a new Chromebook and some assorted handed-down iPads and iPhones (without SIM cards). The Google controls are ridiculously coarse-grained and easily bypassed — that is, when they are not actively conflicting with each other: disabling access to YouTube breaks the Google login flow… In contrast, Apple lets me be extremely granular in what is allowed, when it is allowed, and for how long. Once again, this is possible because of Apple's end-to-end control: I can see what apps are associated with each kid's account, and approve or decline them, enforce limits, and so on. I don't want to have to worry that they will subscribe to a TikTok creator or something, outside the App Store, and drain my credit card, possibly with no way to cancel or get a refund.

What Now?

Good developers like Marco Arment want to build a closer relationship with customers and manage that process themselves. I do trust Marco to use those tools ethically — but I don't trust Mark Zuckerberg with the same tools, and this is an all-or-nothing decision. If it's the price it takes to keep Mark Zuckerberg out of my business, then I'd rather have the status quo.

All of that said, I do think Apple are making things harder on themselves. Their unbending attitude in the face of developers' complaints is not serving them well, whether in the court of public opinion or in the court of law. I do hope that someone at Apple can figure out a way to give enough to developers to reduce the noise — cut the App Store take, make app review more transparent, enable more pricing models, perhaps even refunds with more developer input, whatever it takes. There are also areas where the interests of developers and users are perfectly aligned: search ads in the App Store are gross, especially when they are allowed against actual app names. It's one thing (albeit still icky) to allow developers to pay to increase their ranking against generic terms, like "podcast player"; it's quite another to allow competing podcast players to advertise against each other by name. Nobody is served by that.

If Apple does not clear up this mess themselves, the risk is that lawmakers will attempt to clear it up for them. This could go wrong in so many ways, whether it's specific bad policies (sideloading enforced by law), or a patchwork of different regulations around the world, further balkanising the experience of users based on where they happen to live.

Everyone — Apple, developers, and users — wants these platforms to (continue to) succeed. For that to happen, Apple and developers need to talk — and users' concerns must be heard too.


🖼️ Photos by Neil Soni on Unsplash


  1. Yes, I am fully aware that the sandboxing is at the OS level and technically not affected by any App Store changes, but it's part of a continuum of experience, and I would rather not rely on the last line of defence in the OS; I would prefer a continuum between the OS and the App Store to give me joined-up management. In fact, I would like the integration to go even further, such that if I delete an app that has an active subscription, iOS prompts me to cancel the subscription too. 

2022 Predictions

A Textual Podcast

Welcome back to Roll For Enterprise, the podcast described as the squishy heart at the centre of enterprise IT. Because all four hosts were off having fun over the holidays, we couldn’t quite figure out the logistics of getting us all online at the same time to record an audio episode – so instead we put together this textual podcast, since it worked well as an asynchronous way of bouncing ideas around last time we had trouble recording together.

In the last (audio) episode we went over the major themes of 2021, so now it’s time for our 2022 predictions. Sometimes we struggled a bit to keep the two separate while we were recording, so we simply decided to double down and list the major themes of 2021 that we discussed – because we think all of these will continue to be major features of 2022:

  • Semiconductor shortages and architecture turnover
  • Outages and incidents
  • Security in general (attacks, ransomware, etc)
  • No-code/Low-Code and the shifting definition of architects
  • Mental health and the change in employment landscapes
  • The Great Resignation
  • The year the employees took it all back

Semiconductor shortages are an easy call; all the projections forecast these disruptions to continue into mid-2022 at the very least, even if the rest of the world returns to normal. The same goes for the architecture turnover: the shift to ARM is still underway, and this year will see the software ecosystem begin to catch up with that hardware shift. More and more Mac apps now support the M1 architecture natively, and as AWS rolls out more and more Graviton-powered instance types, the same shift is happening in server software. In both cases, the performance benefits make it more than worthwhile to do the work of porting software to these ARM-based architectures.

As Dominic said in the last episode, outages and incidents are pretty much inevitable as long as fallible humans are in charge of systems whose complexity is at the very limit of what we can reason about. The good news is that cloud outages are short, and software can be architected to be resilient to outages of individual availability zones or even entire cloud providers. Therefore, while there may be a temptation born of frustration to blame the big cloud providers for outages that are not your fault, overall you’re still better off relying on them and the vast resources they can throw at their systems and processes. One development we do expect is a greater insistence from customers on transparency by the big providers: what went wrong, and what will be done to prevent such a failure from recurring in the future. AWS sets a good standard in public post-mortems, for instance, and others will be expected to live up to it.
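What does "architected to be resilient" look like in practice? At its simplest, it means never depending on a single zone for any call. Here's a minimal Python sketch of zone failover; the zone names and the fetch_from_zone stub are illustrative stand-ins, not any real cloud API:

```python
import random

ZONES = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]

class ZoneDown(Exception):
    """Raised when a per-zone call fails."""

def fetch_from_zone(zone, healthy_zones):
    """Stand-in for a real per-zone service call."""
    if zone not in healthy_zones:
        raise ZoneDown(zone)
    return f"response from {zone}"

def resilient_fetch(healthy_zones, zones=ZONES):
    """Try zones in random order (to spread load); fail over on error."""
    errors = []
    for zone in random.sample(zones, len(zones)):
        try:
            return fetch_from_zone(zone, healthy_zones)
        except ZoneDown as exc:
            errors.append(exc)   # record the failure, try the next zone
    raise RuntimeError(f"all zones failed: {errors}")

# One zone down: callers still get a response from a surviving zone.
print(resilient_fetch(healthy_zones={"eu-west-1b", "eu-west-1c"}))
```

The same pattern scales up to whole regions or even providers; in real systems the hard part is keeping the per-zone replicas consistent, not the failover loop itself.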

The same goes for security incidents; the complexity that leads to the possibility of a fat-fingered config causing an outage also leads to the possibility of a security breach. We are not looking at any particular step-change here, just an ongoing recognition that, especially as we all continue working from home, there is no longer any validity to the idea of a network having an "inside" and an "outside". Perimeter defence is dead; defence in depth is the only way. I do expect an increase in security issues around NFTs, which will highlight the issues of decentralised architectures – and the fact that what exists today in that space is well on its way to centralising around a small number of big players.

Is this the year of low-code and no-code? Perhaps, but probably not; it’s a slow-building wave as we get more and more components in place to make these approaches fully-integrated parts of an enterprise IT stack, as opposed to weird stuff off to the side that "isn’t really IT". Partly this shift is about platform capabilities to allow for must-have functionality such as backup, versioning, or auditing. Equally it’s a cultural shift, recognising the validity and importance of these approaches as more than "toy programming". The real Year of Low-Code will come when there is an explosion of new capabilities built on these tools, built by people other than our traditional conception of developers. Right now, what we have mainly fits into existing categories. Tableau is the poster child here, but it mainly replaces Excel rather than enabling something new. That’s not nothing, but it’s not yet an industry-shifting move either.

Finally, the factors enabling the Great Resignation are still very much with us, so their consequences will continue to play out in 2022. Right now, there is a massive imbalance in large parts of IT, with new job offers coming with salaries that are several multiples of what people are coming from. This disparity is driving massive job churn, especially because companies have not changed their retention practices significantly. If your choice is between a single-digit percentage cost-of-living increase where you are, versus perhaps a triple-digit percentage increase elsewhere, the outcome is pretty obvious. If this trend continues, companies will need to get serious about retention, in part by taking factors like mental health more seriously. As we have been saying on the podcast all along, this is not a normal time. People are stressed out, tired out, and burned out by new factors and expectations, and companies need to respond to that by changing their own expectations in return. Maybe that massive raise will be much less attractive if it comes from a company with a culture of presenteeism, requiring a gruelling commute and long hours in an office with people whose health status you are not entirely sure of. That calculus becomes even easier if the company you are currently working at shows that it cares for employees by being flexible about working hours and attentive to the factors that affect people's lives outside work (their own health, caring for others, home schooling, and so on).


Perhaps we close the year with a verse, with apologies to the Bard

If we podcasters have offended
Think but this and all is mended
That you have commuted here
While our voices filled your ears
And our odd, unhinged debate
Won't predict our world’s fate
Listeners, do not unsubscribe
Do enjoy our diatribes
And, as I am a fair Lilac
If we’ve earned your candid flack
Now, to edit themes and form
Improvement shall become our norm
Else the Mike, a liar call
Or Zack incites a verbal brawl
Lend us your ears, if we be friends
And Dominic shall restore amends


🖼️ Photo by Clay Banks on Unsplash

How I Work From Home

Even though travel is (gradually) opening up, I still opted to invest in my home office setup, and I think you should too. Here’s why.

I have been fully-remote for fifteen years now, with only brief interruptions. By that I mean that I have not had a team-mate, let alone a manager, in the same country, and frequently not even in the same time-zone, for that entire time. It’s true that for most of it I have had colleagues in-country, and even offices of varying dimensions and permanence, but they were always in adjacent functions: sales, services, field marketing, and all the back-office functions required to keep an international enterprise functioning.

This means that I am very used to going into an office only rarely, and a setup that lets me work from home has been a requirement for that entire time. The details of my setup have evolved and improved over the years, with increased resources available, and increased permanence to plan for.

The biggest recent change has been recognition that the home office is now a much more permanent part of life. In the Before Times, I would spend a good 50% of my time (if not more) on the road, so the home office was for occasional work. Now, it’s where everything happens, so it had better work well, be comfortable, and look good in the background of Zoom calls.

Here is the current state of the art.

Deep Underground

When we moved into my current place, I earmarked the "tavernetta" for my home office. A "tavernetta" is a uniquely Italian phenomenon: think a US-style basement family room, except that it’s under a block of flats. Several of the flats in my building come with these spaces, but most are only used for storage; a couple are fitted out to be habitable, and mine even includes the luxury of an en-suite bathroom, so I don’t even need to go upstairs to the main family home for that.

There was, however, one minor issue: all of the fittings date back to the Sixties, when this block was originally built. Worse, the flat actually belonged to my wife’s grandmother — so the "tavernetta" is also where my wife and all her cousins held their teenage parties, not to mention her mother and aunts… Out of sight and (more importantly) earshot, but within reach if needed. Anyway, without going into detail, and even though the statute of limitations has long since expired, let’s just say that the furniture and carpets had suffered somewhat over the years of parties.

Over the past summer, therefore, we tore up all the cigarette-burned fitted carpets, ripped out and replaced the ancient and horrible plumbing, and repainted the walls a nice clean white. An electrician was summoned, took one look, sucked his teeth and muttered "vintage", and promptly added a zero to the painful end of his estimate. On the other hand, I do have a lot of electronics plugged in down here, so it’s worth doing it right.

It’s So Bright, I Need Sunglasses

Packing up my desk to make space for all this work was an enormous pain, but I took the opportunity to streamline my setup quite a bit. I was using an ancient Iiyama panel that must be at least a dozen years old; it’s full-HD and was a pretty good screen at the time, but the state of the art has moved on, and the Iiyama is now woefully dim and low-resolution. Worse, it sat between my MacBook Pro and its Retina screen, and a Lenovo 27" panel that I got from work as part of a programme to help employees get set up for work-from-home. The Lenovo has a halfway-house resolution that sits between HD and 4k, but it’s sharp and bright; I run it in portrait (vertical) orientation to look at reference material beside the main screen that I’m working on.

Between those two bright and sharp displays, the Iiyama really suffered by comparison. What I really wanted was a Retina screen to match the MacBook, but Apple only make the monstrous XDR, which is lovely, but costs more than my first several cars — especially once you add a grand's worth of stand! I put off making a decision, hoping that Apple would finally do what everyone was begging them to and release the 5k panel that they already have in their iMacs as a standalone monitor without a whole computer attached. Apple, in their wisdom, opted not to do this, and offered as a substitute the LG UltraFine. This is supposedly that same panel – but the LG enclosure is ugly as sin, and reports soon surfaced of quality problems: drooping support stands, unreliable USB connections, and even flaky displays. Since the UltraFine is hardly inexpensive, and is also hardly ever in stock, everyone made the hopeful assumption that all these issues meant that surely, soon, Apple would do it right. And so we waited. And waited. And waited.

When last October’s Apple event rolled around with the announcement of the new MacBook Pros, which would have been the obvious time to release a screen to plug the new laptops into, and Apple still didn’t — that was when I snapped. I went out and bought an LG 5k2k Ultrawide panel. The diagonal is a huge 34", but it’s actually only the height of a 27", just stretched out wiiiiide. The picture is sharp, the screen is bright, and the increase in real estate is incredible. As with most "tavernette", mine is partly below street level, and my desk is in the back of the room (it’s fixed to the wall and can’t move), so more light is very welcome. I also added an LED strip above the monitor, and my webcam (a Razer Kiyo mounted on the shelf above the desk) has a ring light, so I think my SAD countermeasures are sufficient for now.

That desk is my working desk, so the only thing that gets plugged in there with any regularity is the MacBook Pro I get from work. I have it on a stand so that it’s at the level of my sight line, and aligned to the monitors too. Before, I had a combo USB hub, USB-C power pass-through, and HDMI adapter Velcro’d to one of the legs of the laptop riser, and that went into one USB-C port, while a second USB-C cable fed the Lenovo. I then had a bunch of USB-A peripherals depending either from that hub or from the USB hub in the back of the Lenovo: keyboard, webcam, microphone, audio device, Ethernet adapter and MuteMe hardware mute button.

I was never super happy with this setup, and with the advent of the monster LG panel, I had an opportunity to redo it properly. Now, I have a single Thunderbolt cable coming out of the MacBook Pro, that takes care of power and all data connections. That cable goes into a CalDigit TS3-Plus dock that feeds everything else: DisplayPort to the LG, Mini DisplayPort to the Lenovo, (gigabit) Ethernet, SPDIF for audio, and powered USB-A for keyboard, webcam, microphone, and MuteMe button — with several more ports still available.

I favour a Microsoft Natural ergonomic keyboard. This is a split keyboard; the benefit is that your wrists do not bend while using it, as they do for straight keyboards such as the ones built in to laptops. It took a little while to get used to, but it’s very comfortable, and I could never go back. It works fine with a Mac, especially once you use Karabiner-Elements to remap some important keys.

My setup is also ambi-moustrous: I have an Apple Magic Mouse on the right of the keyboard — and a Magic Trackpad on the left. This setup lets me alternate my pointing hand to avoid stressing my right hand and wrist, as well as opening up the possibility of trackpad gestures without having to reach up to the MacBook’s trackpad, which is elevated some way off the desk and not exactly natural to use.

Make Some Noise

The audio situation is also worth touching on for a moment. Previously I was running a CambridgeWorks 4+1 speaker setup that I got with a Soundblaster Live! card more than twenty years ago. They were fine for what they were, but Macs never properly understood them, even with a dedicated USB audio interface that has separate front and rear audio outputs. (The system's audio setup utility can play test audio through each of the four speakers, but in actual usage, the rear pair make only the faintest noise.) On the other hand, I did like having a physical volume knob on my desk, so I could crank it all the way to the left and be certain that nothing was going to make noise, no matter what.

I replaced these with an Edifier 2+1 set of bookshelf speakers with a monster subwoofer — seriously, the sub is bigger than both speakers together, and by a substantial margin (you can just about see it under the desk in the pic above). They are fed by an optical fibre cable from the CalDigit dock, and sound absolutely fantastic! They also have their own remote, which still lets me mute them without having to trust that some piece of software won’t decide that it’s important to unmute for some reason.

I also have my podcasting setup: a Røde NT-USB microphone that plugs into the CalDigit dock, and a pair of audio-technica headphones that plug into the Røde. The mic is on a spring arm so that I can fold it out of the way when I’m not using it, and the headphones have their own stand to keep them out of mischief.

This is the best setup for me: one cable plugged in, and the MacBook is docked to everything; one cable unplugged, and I’m ready to go. I keep go-bags of cables and power bricks in both of the bags I use when I leave the house, so I just need to make sure the actual laptop is in there and I’m good to go.

Away From Keyboard

Beyond what is on the desk, my home office includes a few more amenities. There is a mini-fridge under my desk with drinks — mainly sparkling water (tap water plus Sodastream bubbles), but also a few fruit juices and the like for when I fancy something different, and a couple of beers in case of particularly convivial Friday afternoon meetings (although it’s been a while since I’ve had occasion to drink one). I also have an electric Bialetti moka coffee pot for when it’s stimulation that I need rather than relaxation.

Yes, there is a printer down here! After some unpleasant experiences with inkjets, I lived the paperless lifestyle for a long time, but finally caved and bought a laser printer in 2019. At the time I assumed it would remain largely unused, and if I’m honest, I only bought it to placate my wife — who was of course very soon proved to be not only Right (again), but scarily prescient, as we spent much of 2020 in home-schooling mode, printing reams of paper every day. Utilisation has died back down a bit now, but the benefit of laser printers is that they don’t dry up and gunk up their print heads if you don’t print every five minutes.

Moving away from the desk area, I also have a TV down here with a rowing machine in front of it. The TV was passed down when the main living-room TV got upgraded to 4K, but it’s still perfectly serviceable. It’s not connected to an actual TV antenna down here; instead, I have an Apple TV plugged into it, which means I can AirPlay content to it from my MacBook. In practice, when I am attending a webinar or any sort of camera-off passive presentation, I stream it to the TV screen (without having to disconnect from the desk) and follow along from my rowing machine, getting an education and a workout at the same time.

Make Space

With remote work and work-from-home becoming normalised, at least part-time, I would recommend to everyone that they invest in their home-office setup. I am very conscious that not everyone has the luxury of a dedicated room — but remember, I have been building up to this dream setup for a long time. If you are able to set yourself up with even a desk in a corner, that will help to confine work to that space. The physical separation gives your day an "I am going to work" and "I am leaving work" rhythm. There’s also a practical benefit to having somewhere to leave work in progress, notes, or whatever, without that stuff cluttering up space you need for other purposes (say, a table you need to eat meals off).

You should also do the best you can in terms of height of desk, chair, keyboard, and screen. Yes, those last two are separate; laptops are an ergonomic nightmare if you are going to be using them all day, every day. Investments in your working environment will pay substantial dividends in terms of physical and mental well-being. It doesn’t have to be a huge expense, either; IKEA stuff is pretty good.

Don’t be put off by the thought that this is all nerd nonsense. Remember, programmers and gamers care deeply about the ergonomics of their computers because they spend a lot of time using them. These days, that describes most of us in white-collar jobs. Leaving aside some of the questionable choices gamers especially might make in terms of the aesthetics of their rigs, there is a lot to learn from those groups. Big screens, comfortable keyboards and mice, and some attention paid to how those devices are laid out in relation to one another, will all make your work life much less painful.

If you don’t have room for a rowing machine — or a Peloton, or a treadmill, or whatever — you may be able to simply exercise in front of your computer screen, depending on personality and the sort of exercise you favour, without needing special equipment and the room to set it up. I would definitely suggest making time for physical exercise, though; a walk around the block before sitting down to work, a run between meetings, or a sneaky bike ride over a lunch break — whatever works for you. I got into the habit of taking a mental-health day every couple of weeks when I was otherwise not leaving the house, and getting on my bike and just disappearing up into the hills. Your precise needs may vary, but try to make room for something in your routine.

And here’s hoping that we get to vary the work-from-home routine with some (safe) in-person interaction in 2022.

Spending Tim Cook's Money

Mark Gurman has had many scoops in his time covering Apple, and they have led him to a perch at Bloomberg that includes a weekly opinion column. This week's column is about how Apple is losing the home, and it struck a chord with me for a few reasons.

First of all, we have to get one thing out of the way. There is a long and inglorious history of pundits crying that Apple must make some particular device or risk ultimate doom. I mean, Apple must be just livid at missing out on that attractive netbook market, right? Oh right, no, that whole market went away, and Apple is doing just fine selling MacBook Airs and iPads.

That said, the reason this particular issue struck home is that I have been trying to get stuff done around the house, and really felt the absence of what feel like some obvious gap-filling devices from Apple. As long as we are spending Tim Cook's money, here are some suggestions of my own — and no, there are no U2 albums on this list!

Can You See Me Now?

FaceTime is amazing: it is by far the most pleasant video-chat software to use, and adding Center Stage on the iPad Pro makes it even better. It has the potential to be a game-changer for group calls — not the Zoom calls where each person is in their own box, but calls where several people are in one place, trying to talk to several people in another place: families with the kids lined up on the couch, say, or groups trying to play board or card games with distant friends. What I really want in those situations is a TV-size screen, but the Apple TV doesn't support any sort of camera. Yes, you can sort of fudge it by mirroring the screen of a smaller device onto the TV via AirPlay, but it's a mess and still doesn't work right. In particular, your eye is still drawn to the motion on the smaller screen, plus you have to find a perch for the smaller device somewhere close enough to the TV that you appear to be looking at the people on the other end.

What I want is a good camera, at least HD if not 4K, that can perch somewhere around the TV screen and talk to the Apple TV directly, so that we can do a FaceTime call from the biggest screen in the house. Ideally, this device would also support Center Stage so that it could focus on whoever is speaking. In reverse, the Apple TV should be able to use positional audio to make the voices of the people on the far end come from the right place in your sound stage.

Can You Hear Me?

This leads me to the next question: I have dropped increasingly unsubtle hints about getting a HomePod mini for Christmas, but if people decide against that (some people just don't like buying technology as a gift), I will probably buy at least one for myself. However, the existence of a HomePod mini implies the existence of a HomePod Regular and perhaps even a HomePod Pro — but since the killing of the original-no-qualifiers HomePod, the mini is the only product in its family. Big speakers are one of those things that are worth spending money on, in my opinion, but Apple simply does not want to take my money here. Maybe they have one in the pipeline for 2022 and I will regret buying the mini, but right now I can only talk about what's in the current line-up.

Me, I Disconnect From You

This lack of interest in speakers intersects with a similar indifference when it comes to wifi. I loved my old AirPort base station, and the only reason I retired it is that I wanted a mesh network with some more sophisticated management options. If we are going to put wifi-connected smart speakers all over our homes, why not have them also act as repeaters of that same wifi signal? And they should also work as AirPlay receivers for external, passive speakers, for people who already have good speakers and just want them to be smart.

People Have Families

These additions to Apple's line-up would do a lot more to help Apple "win the home" than Mark Gurman's suggestion of a big static iPad that lives in the kitchen. Apart from the cost of such a thing, it would also require Apple to think much more seriously about multi-user capabilities than they ever have with i(Pad)OS, so that the screen recognises me and shows me my reminders, not my wife's.

Something Apple could do today in the multi-user space is to improve CarPlay. My iPhone remembers where I parked my car and puts a pin in the map. This is actually useful, because (especially these days) I drive my car infrequently enough that I often genuinely have to think for a moment about where I left it. Sometimes, though, I drive my wife's car, and then it helpfully updates that "parked car" pin, overwriting the location where I parked my car with the last location of my wife's car — which is generally the garage under the building we live in… The iPhone knows that they are two different cars and lets me maintain car-specific preferences; it just doesn't track them separately in Maps. As long as we are wishing, it would be even better if, when my wife drives her car and leaves it somewhere, the pin could update on my phone too, since we are all members of the same iCloud Family.

This would be a first step to a better understanding of families and other units of multiple people who share (some) devices, and the sorts of features that they require.


🖼️ Photo by Howard Bouchevereau on Unsplash