From Provincial Italy To London — And Back Again

More reflections on remote work

Well, I'm back to travelling, and in a pretty big way — as in, I'm already to the point of having to back out of one trip because I was getting overloaded! I've been on the road for the past couple of weeks, in London and New York, and in fact I will be back in New York in a month.

It has honestly been great to see people, and so productive too. Even though I was mostly meeting the same people I speak to week in, week out via Zoom, it was different to all be in the same room together. This was also the first time I was able to get my whole team together since its inception: I hired everyone remotely, and while I have managed to meet up with each of them individually, none of the people on the team had actually met each other in person… We had an amazingly productive whiteboarding session, where we knocked out some planning in a couple of hours that might otherwise have taken weeks, and probably justified a chunk of the cost of the trip on its own.

This dynamic also showed up in an interesting study in Nature, entitled Virtual communication curbs creative idea generation. The study shows that remote meetings are better for some things and worse for others. Basically, if the meeting has a fixed agenda and clear outcomes, a remote meeting is a more efficient way of banging through those items. However, when it comes to ideation and creativity, in-person meetings are better than remote ones.

As with all the best studies, this result tallies with my experience and reinforces my prejudices. I have been remote for a long time, way before the recent unpleasantness, but I always combined remote work with regular in-person catch-up meetings. You do the ideation and planning when you can all gather together around the whiteboard — not to mention reinforcing personal ties by gathering around a table in a restaurant or a bar! Then that planning and those personal ties take you through the rest of the quarter, with regular check-ins for tactical day-to-day actions to implement the strategic goals decided at the in-person meeting.

Leaving London

Something else that was interesting about my recent trips was meeting a whole lot of people who were curious about my living situation in Italy — how I came to be there, and what it was like to work a global role from provincial Italy, rather than from one of the usual global nerve centres. Telling the story in New York, coming fresh from my trip to London, led me to reflect on why I left London and whether it was the right call (spoiler: it totally was).

The London connection also showed up in a pair of articles by Marie Le Conte, who recently spent a couple of months in Venice before returning to London. It has been long enough since I left London that I no longer worry about whether prices in my favourite haunts will be different, but whether any of them are still there or still recognisable — and sadly, most of them are not. But then again, this is London we are talking about, so I have new favourites, and find a new one almost every trip.

Leaving London was a wrench: it was the first place I lived after university, and I enjoyed it to the hilt. Of course I had to share a flat, and I drove ancient unreliable cars1. But we were out and about all the time, in bars and theatres, eating out and meeting up and just enjoying the place.

However, over the following years most of my London friends moved away in turn, either leaving the UK outright or moving out to the commuter belt. The latter choice never quite made sense to me: why live somewhere nearly as expensive as London (especially when you factor in the cost of that commute), which offers none of the benefits of being in actual London, and still has awful traffic and so on? But as my friends started to settle down and want to raise families and so on, they could no longer afford London prices. Those prices get especially hard to justify once you can no longer balance them out by enjoying everything London has to offer — because you're at home with the kids, who also need to be near a decent school, and get back and forth from sports and activities, and so on and so forth.

My friends and I experienced the same London in our twenties that Marie Le Conte did: it didn't matter if you "rent half a shoebox in a block of flats where nothing really worked", because "there was always something to do". But if you're not out doing all the things, and you need more than half a shoebox to put kids in, London requires a serious financial commitment for not much return.

But why commute to the office at all?

Even before the pandemic, remote work allowed many of us to square that circle. We could live in places that were congenial to us, way outside commuting range of any office we might nominally be attached to, but travel regularly for those all-important ideation sessions that guided and drove the regular day-to-day work.

The pandemic has opened the eyes of many more people and companies to the possibilities of remote work. Airbnb notably committed to a full remote-work approach, which of course makes particular sense to Airbnb, especially the bit about "flexibility to live and work in 170 countries for up to 90 days a year in each location". I admit they are an extreme case, but other companies have an opportunity to implement the parts of that model that make sense for them.

Certain functions benefit from being in the office all the time, so they require permanent space. This means both individual desks and meeting rooms. Meanwhile, remote workers will need to come in regularly, but when they do, they will have different needs. They will absolutely require meeting rooms, and large, well-equipped ones at that, and those are on top of whatever the baseline needs are for the in-office teams. On the other hand, the out-of-towners will spend most of their time in meetings (or, frankly, out socialising), and so they do not need huge numbers of hot desks — just a few for catching up with emails in gaps between meetings.

If you rotate the in-office meetings so you don't have the place bursting at the seams one week and empty the rest of the time, this starts to look like a rather different office setup than what most companies have now. You can even start thinking of cloud-computing analogies, no longer provisioning office space for peak utilisation, but instead spreading work to take advantage of unused capacity, and maybe bursting by renting external capacity as needed (WeWork2 et al).
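A quick back-of-the-envelope sketch shows why the analogy is appealing (all the numbers below are invented for illustration): rotating the on-site visits roughly halves the space you have to carry on the lease, and the rare genuine peaks can be rented as needed.

```python
# Back-of-the-envelope numbers (all invented) for peak vs. rotated office capacity.
resident_staff = 40          # functions that are in the office every day
remote_teams = 6             # teams that come in for periodic on-sites
people_per_team = 10

# Everyone on-site in the same week: provision for the worst case.
peak_desks = resident_staff + remote_teams * people_per_team   # 100 desks

# Rotate so only one remote team is on-site in any given week.
rotated_desks = resident_staff + people_per_team               # 50 desks

# The occasional all-hands "burst" can rent external capacity (WeWork et al)
# instead of carrying it on the lease year-round.
print(peak_desks, rotated_desks)
```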

If you go further down the Airbnb route and go fully remote, you might even start thinking more about where you put that office. Does it need to be in a downtown office core, or can it be in a more fun part of town — or in a different city entirely? Maybe it can even be in a resort-type location, as long as it has good transport links. Hey, a guy can dream…

But in the meantime, remote work unlocks the ability for many more people to make better choices about where to live. Raising a family is hard enough; doing it when both parents work is basically impossible without a strong local support network. Maybe the model should be something like the Amish Rumspringa, where young Amish go spend time out in the world before going back home and committing to the Amish way of life. Enjoy your twenties in the big city, get started on your career with the sort of hands-on guidance that is hard to get remotely, and then move back home near parents and friends when it's time to settle down, switching to remote working models — with careful scheduling to avoid both parents being away at once.

Once you start looking at it like that, provincial Italy is hard to beat. Quality of life is top-notch, with the sort of lifestyle that would require an extra zero on the salary in London or NYC. If you combine that with regular visits to the big cities, it's honestly pretty great.


🖼️ Photos by Kaleidico and Jason Goodman on Unsplash; London photograph author’s own (the view from my hotel room on my most recent London trip).


  1. I only had a car in the first place because I commuted out of London, to a place not well-served by trains; I never drove into central London if I could avoid it, even before the congestion charge was introduced. 

  2. Just because WeWork is a terrible company doesn't mean that the fundamental idea is wrong. See also Uber: while Uber-the-company is obviously unsustainable and has a number of terrible side-effects, it has forced into existence a ride-hailing market that almost certainly would not exist absent Uber. Free Now gives me an Uber-like experience (summon a car from my phone in most cities, pay with a stored card), but using regular licensed taxis and without the horrible exploitative Uber model. 

Old Views For Today's News

Here's a blog post I wrote back in 2015 for my then-employer that I was reminded of while recording the latest episode of the Roll For Enterprise podcast. Since the original post no longer seems to be available via the BMC web site, I assume they won't mind me reposting it here, with some updated commentary.

xkcd, CIA

There has been a certain amount of excitement in the news media, as someone purportedly associated with ISIL has taken over and defaced US Central Command's Twitter account. The juxtaposition with recent US government pronouncements on "cyber security" (ack) is obvious: Central Command’s Twitter Account Hacked…As Obama Speaks on Cybersecurity.

The problem here is the usual confusion around IT in general, and IT security in particular. See for instance CNN:

The Twitter account for U.S. Central Command was suspended Monday after it was hacked by ISIS sympathizers -- but no classified information was obtained and no military networks were compromised, defense officials said.

To an IT professional, even one without a specific security background, this is kind of obvious: a Twitter account lives on Twitter's infrastructure, not on military networks, so compromising the one says nothing about the other.

Penny Arcade, Brains With Urgent Appointments

However, there is a real problem here, because IT professionals have a blind spot of their own: they don't think of things like Twitter accounts when they are securing IT infrastructure. This oversight can expose organisations to serious problems.

One way this can happen is credential re-use and leaking in general. Well-run organisations will use secure password-sharing services such as LastPass, but without IT guidance, teams might instead opt for storing credentials in a spreadsheet, as we now know happened at Sony. If someone got their hands on even one set of credentials, what other services might they be able to unlock?

The wider issue is the notion of perimeter defence. IT security to date has been all about securing the perimeter - firewalls, DMZs, NAT, and so on. Today, though, what is the perimeter? End-user services like Dropbox, iCloud, or Google Docs, as well as multi-tier enterprise applications, span back and forth across the firewall, with data stored and code executed both locally and remotely.

I don't mean to pick on Sony in particular - they are just the most recent victims - but their experience has shown once and for all that focusing only on the perimeter is no longer sufficient. The walls are porous enough that it is no longer possible to assume that bad guys are only outside. Systems and procedures are needed to detect anomalous activity inside the network, and once that occurs, to handle it rapidly and effectively.

This cannot happen if IT is still operating as "the department of NO", reflexively refusing user requests out of fear of potential consequences. If the IT department tries to ban everything, users will figure out a way to go around the restrictions to achieve their goals. The danger then is that they make choices which put the entire organisation and even its customers at risk. Instead, IT needs to engage with those users and find creative, novel ways to deliver on their requirements without compromising on their mandate to protect the organisation.

While corporate IT cannot be held responsible for the security of services such as Twitter, they can and should advise social-media teams and end-users in general on how to protect all of their services, inside and outside the perimeter.

There are still a lot of areas where IT is focused on perimeter defence. Adopting Okta or another SSO service is not a panacea; you still need to consider what would happen when (not if) someone gets inside the first layer of defence. How would you detect them? How would you stop them?
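To make that concrete, here is a deliberately simplified sketch of the "assume breach" idea (the account names, events, and thresholds are all invented, and real deployments would lean on a SIEM or similar tooling): rather than trusting everything behind the SSO layer, you flag activity that does not fit a user's normal pattern, even when the credentials used are perfectly valid.

```python
# Toy "assume breach" monitoring sketch; not any vendor's API, illustration only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    country: str
    system: str  # which internal system was accessed

def flag_anomalies(events, baseline_countries, max_systems=5):
    """Flag logins from unexpected countries, and users fanning out across
    unusually many internal systems. Both are worth a human look."""
    systems_touched = defaultdict(set)
    alerts = []
    for event in events:
        if event.country not in baseline_countries.get(event.user, set()):
            alerts.append(f"{event.user}: login from unexpected country {event.country}")
        systems_touched[event.user].add(event.system)
        if len(systems_touched[event.user]) == max_systems + 1:
            alerts.append(f"{event.user}: touched more than {max_systems} systems")
    return alerts

events = [LoginEvent("svc-marketing", "US", "wiki"),
          LoginEvent("svc-marketing", "KP", "hr-database")]
print(flag_anomalies(events, baseline_countries={"svc-marketing": {"US"}}))
```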

The Okta breach has also helpfully provided an example of another important factor in security breaches: comms. Okta's comms discipline has not been great, reacting late, making broad denials that they later had to walk back, and generally adding to the confusion rather than reducing it. Legislation is being written around the world (with the EU as usual taking the lead) to mandate disclosure in situations like these, which may focus minds — but really, if you're not sufficiently embarrassed as a security provider that a bunch of teenagers were apparently running around your network for at least two weeks without you detecting them, you deserve all the fines you're going to get.

These are no longer purely tech problems. Once you get messy humans in the mix, the conversation changes from "how many bits of entropy does the encryption algorithm need" to "what is the correct trade-off between letting people get their jobs done and ensuring a reasonable level of security, given our particular threat model". Working with humans means communicating with them, so you’d better have a plan ready to go for what to say in a given situation. Hint: blanket denials early on are generally a bad idea, leaving hostages to fortune unnecessarily.

That plan should cover what you will say in a given situation (including what you may be legally mandated to disclose, and on what timeframe), so that you do not lose your customers’ trust. Believe me, that’s one sort of zero trust that you don’t want!

Kids

Make no mistake: having kids is messy, stressful, and expensive. You should absolutely not have kids if you like having free time, disposable income, or any say in what to watch on TV. But there are also those moments when you walk into a room and you are greeted by an excitable small human who was unable to roll over an eyeblink ago, but now is gabbling on about the amazing castle they built with their wooden blocks, and who lives behind this door or in that tower, and what they will do next, and it all seems worth it. Well, at least until it's time to clear up…

Help, I'm Being Personalised!

As the token European among the Roll For Enterprise hosts, I'm the one who is always raising the topic of privacy. My interest in privacy is partly scarring from an early career as a sysadmin, when I saw just how much information is easily available to the people who run the networks and systems we rely on, without them even being particularly nosy.

Because of that history, I am always instantly suspicious of talk of "personalising the customer experience", even if we make the charitable assumption that the reality of this profiling is more than just raising prices until enough people balk. I know that the data is unquestionably out there; my doubts are about the motivations of the people analysing it, and about their competence to do so correctly.

Let's take a step back to explain what I mean. I used to be a big fan of Amazon's various recommendations, for products often bought with the product you are looking at, or by the people who looked at the same product. Back in the antediluvian days when Amazon was still all about (physical) books, I discovered many a new book or author through these mechanisms.

One of my favourite aspects of Amazon's recommendation engine was that it didn't try to do it all on its own. If I bought a book for my then-girlfriend, who had (and indeed still has, although she is now my wife) rather different tastes from me, this would throw the recommendations all out of whack. However, the system was transparent and user-serviceable. Amazon would show me why it had recommended Book X, usually because I had purchased Book Y. Beyond showing me, it would also let me go back into my purchase history and tell it not to use Book Y for recommendations (because it was not actually bought for me), thereby restoring balance to my feed. This made us both happy: I got higher-quality recommendations, and Amazon got a more accurate profile of me, which it could use to sell me more books — something it did very successfully.
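To show why that exclusion control matters, here is a deliberately tiny sketch (the titles, the co-purchase data, and the function are all invented; this is obviously not Amazon's real system). The point is simply that a transparent, user-serviceable profile is just an exclusion list away:

```python
# Toy co-purchase recommender: flagging a gift removes its contribution.
from collections import Counter

CO_PURCHASED = {
    "Master and Commander": ["Post Captain", "Hornblower"],
    "Consider Phlebas": ["Revelation Space", "Pandora's Star"],
    "Gardening for Beginners": ["Composting 101", "The Allotment Year"],
}

def recommend(purchases, excluded=frozenset()):
    """Score candidates from every purchase the user has not excluded."""
    scores = Counter()
    for book in purchases:
        if book in excluded:
            continue  # the gift no longer pollutes the profile
        for suggestion in CO_PURCHASED.get(book, []):
            scores[suggestion] += 1
    return [title for title, _ in scores.most_common()]

history = ["Master and Commander", "Consider Phlebas", "Gardening for Beginners"]
print(recommend(history))                                         # gift skews the list
print(recommend(history, excluded={"Gardening for Beginners"}))   # balance restored
```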

Forget doing anything like that nowadays! If you watch Netflix on more than one device, especially if you ever watch anything offline, you'll have hit that situation where you've watched something but Netflix doesn't realise it or won't admit it. And can you mark it as watched, like we used to do with local files? (insert hollow laughter here) No, you'll have that "unwatched" episode cluttering up your "Up next" queue forever.

This is an example of the sort of behaviour that John Siracusa decried in his recent blog post, Streaming App Sentiments, which gathers responses to his earlier unsolicited streaming app spec and discusses people's reactions to these sorts of "helpful" features.

People don’t feel like they are in control of their "data," such as it is. The apps make bad guesses or forget things they should remember, and the user has no way to correct them.

We see the same problem with Twitter's plans for ever greater personalisation. Twitter defaulted to an algorithmic timeline a long time ago, justifying the switch away from a simple chronological feed with the entirely true fact that there was too much volume for anyone to be a Twitter completist any more, so bringing popular tweets to the surface was actually a better experience for people. To repeat myself, this is all true; the problem is that Twitter did not give users any input into the process. Also, sometimes I actually do want to take the temperature of the Twitter hive mind right now, in this moment, without random twenty-hour-old tweets popping up out of sequence. The obvious solution of giving users actual choice was of course rejected out of hand, forcing Twitter into ever more ridiculous gyrations.

The latest turn is that for a brief shining moment they got it mostly right, but hilariously and ironically, completely misinterpreted user feedback and reversed course. So much for learning from the data… What happened is that Twitter briefly gave users the option of adding a "Latest Tweets" tab with a chronological listing alongside the algorithmic default "Home" tab. Of course such an obviously sensible solution could not last, for the dispiriting reason that unless you used lists, the tabbed interface was new and (apparently) confusing. Another update therefore followed rapidly on the heels of the good one, which forced users to choose between "Latest Tweets" and "Home", instead of simply being able to have both options one tap apart.

Here's what it boils down to: to build one of these "personalisation" systems, you have to believe one of two things (okay, or maybe some combination):

  • You can deliver a better experience than (most) users can achieve for themselves
  • Controlling your users' experience benefits you in some way that is sufficiently important to outweigh the aggravation they might experience

The first is simply not true. It is certainly important to deliver a high-quality default that works well for most users, and I am not opposed in principle to that default being algorithmically generated. Back in the day, Twitter had a "While you were away" section which would show you the most relevant tweets since you last checked the app. I found it a very valuable feature — except for the fact that I could not access it at will. It would appear at random in my timeline, or then again, perhaps not. There was no way to trigger it manually, nor any place where it would appear reliably and predictably. You just had to hope — and then, instead of making it easier to access on demand, Twitter killed the entire feature in an update. The algorithmic default was promising, but it needed just a bit more user control to make it actually good.

This leads us directly to the second problem: why not show the "While you were away" section on demand? Why would Netflix not give me an easy way to resume watching what I was watching before? They don't say, but the assumption is that the operators of these services have metrics showing higher engagement with their apps when they deny users control. Presumably what they fear is that, if users can just go straight to the tweets they missed or the show they were watching, they will not spend as much time exploring the app, discovering other tweets or videos that they might enjoy.

What is forgotten is that "engagement" just happens to be one metric that is easy to measure — but the ease of measurement does not necessarily make it the most important dimension, especially in isolation. If that engagement is me scrolling irritably around Twitter or Netflix, getting increasingly frustrated because I can't find what I want, my opinion of those platforms is actually becoming more corroded with every additional second of "engagement".

There is a common unstated assumption behind both of the factors above, which is that whatever system is driving the personalisation is perfect, both unbreakable in its functioning and without corner cases that may deliver sub-optimal results even when the algorithm is working as designed. One of the problems with black-box systems is that when (not if!) they break, users have no way to understand why they broke, nor to prevent them breaking again in the future. If the Twitter algorithm keeps recommending something to me, I can (for now) still go into my settings, find the list of interests that Twitter has somehow assembled for me, and delete entries until I get back to more sensible recommendations. With Netflix, there is no way for me to tell it to stop recommending something — presumably because they have determined that a sufficient proportion of their users will be worn down over time into doing whatever the end goal is — I don't know, watching Netflix original content instead of something they have to pay to license from outside.

All of this comes back to my oft-repeated point about privacy: what is it that I am giving up my personal data in exchange for, in the end? The promise is that all these systems will deliver content (and ads) (really, it's the ads) that are relevant to my interests. Defenders of surveillance capitalism will point out that profiling as a concept is hardly new. The reason you find different ads in Top Gear Magazine, in Home & Garden, and in Monocle, is that the profile for the readership is different for each publication. But the results speak for themselves: when I read Monocle, I find the ads relevant, and (given only the budget) I would like to buy the products featured. The sort of ads that follow me around online, despite a wealth of profile information generated at every click, correlated across the entire internet, and going back *mumble* years or more, are utterly, risibly, incomprehensibly irrelevant. Why? Some combination of that "we know better" attitude, algorithmic profiling systems delivering less than perfect results, and of course, good old fraud in the adtech ecosystem.

So why are we doing this, exactly?

It comes back to the same issue as with engagement: because something is easy to measure and chart, it will have goals set against it. Our lives online generate stupendous volumes of data; it seems incredible that the profiles created from those megabytes if not gigabytes of tracking data have worse results than the single-bit signal of "is reading the Financial Times". There is also the ever-present spectre of "I know half of my ad spending is wasted, I just don't know which half". Online advertising with its built-in surveillance mechanisms holds out the promise of perfect attribution, of knowing precisely which ad it was that caused the customer to buy.

And yet, here we are. Now, legislators in the EU, in China, and elsewhere around the world are taking issue with these systems, and either banning them outright or demanding they be made transparent in their operation. Me, I'm hoping for the control that Amazon used to give me. My dream is to be able to tell YouTube that I have no interest in crypto, and then never see a crypto ad again. Here, advertisers, I'll give you a freebie: I'm in the market for some nice winter socks. Show me some ads for those sometime, and I might even buy yours. Or, if you keep pushing stuff in my face that I don't want, I'll go read a (paper) book instead. See what that does for engagement.


🖼️ Photos by Hyoshin Choi and Susan Q Yin on Unsplash

App Stores & Missing Perspectives

In Apple-watching circles, there has long been some significant frustration about Apple's App Store policies. Whether it's the opaque approvals process, the swingeing 30% cut that Apple takes out of any purchase, or the restrictions on what types of apps and pricing models are even allowed, developers are not happy.

It was not always this way: when the iPhone first launched, there was no App Store. Everything was supposed to be done with web apps. Developers being developers, people quickly worked out how to "jailbreak" their iPhones to install their own apps, and a thriving unofficial marketplace for apps sprang up. Apple, seeing this development taking place out of their control, relented and launched an official App Store. The benefit of the App Store was that it would do everything for developers: hosting, payment processing, a searchable catalogue, the lot. Remember, the App Store launched in 2008, when all of that was quite a bit harder than it is today, and would have required developers to make up-front investments before even knowing whether their apps would take off — without even thinking about free apps.

With the addition of in-app purchase (IAP) the next year, and subscriptions a couple of years after that, most of the ingredients were in place for the App Store as we know it today. The App Store was a massive success, trumpeted by Apple at every opportunity. In January, Apple said that it paid developers $60 billion in 2021, and $260 billion since the App Store launched in 2008. Apple also reduced its cut from 30% to 15%, initially for the second year of subscriptions, but later for any developer making less than $1M per year in the App Store.

What's Not To Like?

This all sounds very fine, but developers are up in arms over Apple's perceived high-handed or even downright rapacious behaviour when it comes to the App Store. Particular sticking points are requirements that apps in the App Store use only Apple's payment system, and that Apple’s own in-app purchasing mechanism be used for any digital experience offered to groups of people. The first requirement touched off a lawsuit from Epic, who basically wanted to have their own private store for in-game purchases, and the second resulted in some bad press early in the pandemic when Apple started doing things like chasing fitness instructors who were providing remote classes while they were unable to offer face-to-face sessions.

The bottom line is that many of these transactions simply do not have a 30% margin in the first place, let alone the ability to still make any profit after giving Apple a 30% (or even a 15%) cut. This might seem to be a problem for developers but not really for anyone else; what gave the issue resonance beyond the narrow market of iOS developers is that the world has moved on since 2008.

Hosting an app and setting up payment for it is easy and cheap these days, thanks to the likes of AWS and Stripe. Meanwhile, App Store review is capricious, while also allowing through all sorts of scams, generally based on subscriptions — what is becoming known as fleeceware.

The long and the short of it is that public opinion has shifted against Apple, with proceedings not just in the US, but in Korea, Japan, and the Netherlands too. Apple are being, well, Apple, and refusing to budge except in the most minor and grudging ways.

Here is my concern, though: this situation is being looked at as a simple conflict between Apple and developers. In all the brouhaha, nobody ever mentions another very important perspective: what do users want?

Won't Somebody Think Of The Users?

Developers rightly point out that the $260B that Apple trumpeted having paid them was money generated by their apps, not Apple's generosity, and that a big part of the reason users buy Apple's devices is the apps in the App Store. However, that money was originally paid by users, and we also have opinions about how the App Store should work for our needs and purposes.

First of all, I want all of the things that developers hate. I want Apple's App Store to be the only way of getting apps on iPhones, I want all subscriptions to be in the App Store, and I want Apple's IAP to be the only payment method. These are the factors that make users confident in downloading apps in the first place! Back when I had a Windows machine, it was just accepted that every twelve months or so, you'd have to blow away your operating system and reinstall it from scratch. Even if you were careful and avoided outright malware, bloat and cruft would take over and slow everything to a crawl — and good luck ever removing anything. Imagine a garden that you weed with a flamethrower.

The moment Apple relaxed any of the restrictions on app installation and payment, shady developers would stampede through — led by Epic and Facebook, who both have form when it comes to dodgy sideloading. It doesn't matter what sort of warnings Apple put into iOS; if that were to become how people get their Fortnite or their WhatsApp, they would tap through any number of dialogues without reading them, just as fast as they can tap. And once that happens, all bets are off. Subscriptions to Epic's games or to whatever dodgy thing lives in Facebook's platform would not be visible in users' App Store profiles, making it all too easy for money to be drained out, through forgetfulness and invisibility if not outright scams.

Other Examples: The Mac

People sometimes bring up the Mac App Store, which operates along the same notional lines as the iOS (and iPadOS) App Store, supposedly without the same problems. The Mac App Store is actually a great example, but not for the reasons its proponents think. On the Mac, side-loading — deploying apps without going through the Mac App Store — is very much a thing, and in fact it is a much bigger delivery channel than the Mac App Store itself. The problem is that it is also correspondingly harder to figure out what is running on a Mac, or to remove every trace of an app that the user no longer wants. It's nowhere near as bad as Windows, to be clear, but it's also not as clean-cut as iOS, where deleting an app's icon means that app is gone, no question about it.

On the Mac, technical users have all sorts of tools to manage this situation, and that extra flexibility also has many other benefits, making the Mac a much more capable platform than iOS (and iPadOS — sigh). But many more people own iPhones and iPads than own Macs, and they are comfortable using those devices precisely because of the sandboxed1 nature of the experience. My own mother, who used to invite me to lunch and then casually mention that she had a couple of things she needed me to do on the computer, is fully independent on her iPad, down to and including updates to the operating system. This is because the lack of accessible complexity gives her confidence that she can't mess something up by accident.

More Examples: Google

Over the pandemic, I have had the experience of comparing Google's and Apple's family controls, as my kids have required their own devices for the first time for remote schooling. We have a new Chromebook and some assorted handed-down iPads and iPhones (without SIM cards). The Google controls are ridiculously coarse-grained and easily bypassed — that is, when they are not actively conflicting with each other: disabling access to YouTube breaks the Google login flow… In contrast, Apple lets me be extremely granular in what is allowed, when it is allowed, and for how long. Once again, this is possible because of Apple's end-to-end control: I can see what apps are associated with each kid's account, and approve or decline them, enforce limits, and so on. I don't want to have to worry that they will subscribe to a TikTok creator or something, outside the App Store, and drain my credit card, possibly with no way to cancel or get a refund.

What Now?

Good developers like Marco Arment want to build a closer relationship with customers and manage that process themselves. I do trust Marco to use those tools ethically — but I don't trust Mark Zuckerberg with the same tools, and this is an all-or-nothing decision. If the status quo is the price of keeping Mark Zuckerberg out of my business, then I'd rather keep the status quo.

All of that said, I do think Apple are making things harder on themselves. Their unbending attitude in the face of developers' complaints is not serving them well, whether in the court of public opinion or in the court of law. I do hope that someone at Apple can figure out a way to give enough to developers to reduce the noise — cut the App Store take, make app review more transparent, enable more pricing models, perhaps even refunds with more developer input, whatever it takes. There are also areas where the interests of developers and users are perfectly aligned: search ads in the App Store are gross, especially when they are allowed against actual app names. It's one thing (albeit still icky) to allow developers to pay to increase their ranking against generic terms, like "podcast player"; it's quite another to allow competing podcast players to advertise against each other by name. Nobody is served by that.

If Apple does not clear up this mess themselves, the risk is that lawmakers will attempt to clear it up for them. This could go wrong in so many ways, whether it's specific bad policies (sideloading enforced by law), or a patchwork of different regulations around the world, further balkanising the experience of users based on where they happen to live.

Everyone — Apple, developers, and users — wants these platforms to (continue to) succeed. For that to happen, Apple and developers need to talk — and users' concerns must be heard too.


🖼️ Photos by Neil Soni on Unsplash


  1. Yes, I am fully aware that the sandboxing is at the OS level and technically not affected by any App Store changes, but it's part of a continuum of experience, and I would rather not rely on the last line of defence in the OS; I would prefer a continuum between the OS and the App Store to give me joined-up management. In fact, I would like the integration to go even further, such that if I delete an app that has an active subscription, iOS prompts me to cancel the subscription too. 

2022 Predictions

A Textual Podcast

Welcome back to Roll For Enterprise, the podcast described as the squishy heart at the centre of enterprise IT. Because all four hosts were off having fun over the holidays, we couldn’t quite figure out the logistics of getting us all online at the same time to record an audio episode – so instead we put together this textual podcast, since it worked well as an asynchronous way of bouncing ideas around last time we had trouble recording together.

In the last (audio) episode we went over the major themes of 2021, so now it’s time for our 2022 predictions. Sometimes we struggled a bit to keep the two separate while we were recording, so we simply decided to double down and list the major themes of 2021 that we discussed – because we think all of these will continue to be major features of 2022:

  • Semiconductor shortages and architecture turnover
  • Outages and incidents
  • Security in general (attacks, ransomware, etc)
  • No-code/Low-Code and the shifting definition of architects
  • Mental health and the change in employment landscapes
  • The Great Resignation
  • The year the employees took it all back

Semiconductor shortages are an easy call; all the projections forecast these disruptions to continue into mid-2022 at the very least, even if the rest of the world returns to normal. The same goes for the architecture turnover: the shift to ARM is still underway, and this year will see the software ecosystem begin to catch up with that hardware shift. More and more Mac apps now support the M1 architecture natively, and as AWS rolls out additional Graviton-powered instance types, the same shift is happening in server software. In both cases, the performance benefits make it more than worthwhile to do the work of porting software to these ARM-based architectures.

As Dominic said in the last episode, outages and incidents are pretty much inevitable as long as fallible humans are in charge of systems whose complexity is at the very limit of what we can reason about. The good news is that cloud outages are short, and software can be architected to be resilient to outages of individual availability zones or even entire cloud providers. Therefore, while there may be a temptation born of frustration to blame the big cloud providers for outages that are not your fault, overall you’re still better off relying on them and the vast resources they can throw at their systems and processes. One development we do expect is a greater insistence from customers on transparency by the big providers: what went wrong, and what will be done to prevent such a failure from recurring in the future. AWS sets a good standard in public post-mortems, for instance, and others will be expected to live up to it.
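The resilience point above is easy to illustrate with a toy example (all endpoint names below are invented, and this is not a production pattern — real systems lean on health checks, DNS failover, and load balancers — but the principle is the same): the same request is attempted against replicas in different availability zones, and then against a different provider, before giving up.

```python
# Minimal multi-AZ / multi-provider failover sketch; endpoints are hypothetical.
import urllib.request

ENDPOINTS = [
    "https://api.eu-south-1a.example.com",  # primary availability zone
    "https://api.eu-south-1b.example.com",  # second AZ, same region
    "https://api.example-other-cloud.net",  # different provider entirely
]

def fetch_with_failover(path="/health"):
    last_error = None
    for endpoint in ENDPOINTS:
        try:
            with urllib.request.urlopen(endpoint + path, timeout=2) as response:
                return response.read()  # first healthy replica wins
        except OSError as error:        # DNS failure, refused connection, timeout...
            last_error = error          # note it and try the next zone or provider
    raise RuntimeError(f"all replicas unavailable, last error: {last_error}")
```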

The same goes for security incidents; the complexity that leads to the possibility of a fat-fingered config causing an outage also leads to the possibility of a security breach. We are not looking at any particular step-change here, just an ongoing recognition that, especially as we all continue working from home, there is no longer any validity to the idea of a network having an "inside" and an "outside". Perimeter defence is dead; defence in depth is the only way. I do expect an increase in security issues around NFTs, which will highlight the issues of decentralised architectures – and the fact that what exists today in that space is well on its way to centralising around a small number of big players.

Is this the year of low-code and no-code? Perhaps, but probably not; it’s a slow-building wave as we get more and more components in place to make these approaches fully-integrated parts of an enterprise IT stack, as opposed to weird stuff off to the side that "isn’t really IT". Partly this shift is about platform capabilities to allow for must-have functionality such as backup, versioning, or auditing. Equally it’s a cultural shift, recognising the validity and importance of these approaches as more than "toy programming". The real Year of Low-Code will come when there is an explosion of new capabilities built on these tools, built by people other than our traditional conception of developers. Right now, what we have mainly fits into existing categories. Tableau is the poster child here, but it mainly replaces Excel rather than enabling something new. That’s not nothing, but it’s not yet an industry-shifting move either.

Finally, the factors enabling the Great Resignation are still very much with us, so their consequences will continue to play out in 2022. Right now, there is a massive imbalance in large parts of IT, with new job offers coming with salaries that are several multiples of what people are currently earning. This disparity is driving massive job churn, especially because companies have not changed their retention practices significantly. If your choice is between a single-digit percentage cost-of-living increase where you are, versus perhaps a triple-digit percentage increase elsewhere, the outcome is pretty obvious. If this trend continues, companies will need to get serious about retention, in part by taking factors like mental health more seriously. As we have been saying on the podcast all along, this is not a normal time. People are stressed out, tired out, and burned out by new factors and expectations, and companies need to respond to that by changing their own expectations in return. Maybe that massive raise will be much less attractive if it comes from a company with a culture of presenteeism, requiring a gruelling commute and long hours in an office with people whose health status you are not entirely sure of. That calculus becomes even easier if the company you are currently working at shows that it cares for employees by being flexible about working hours and attentive to the factors that affect people’s lives outside work (their own health, caring for others, home schooling, and so on).


Perhaps we close the year with a verse, with apologies to the Bard:

If we podcasters have offended
Think but this and all is mended
That you have commuted here
While our voices filled your ears
And our odd, unhinged debate
Won't predict our world’s fate
Listeners, do not unsubscribe
Do enjoy our diatribes
And, as I am a fair Lilac
If we’ve earned your candid flack
Now, to edit themes and form
Improvement shall become our norm
Else the Mike, a liar call
Or Zack incites a verbal brawl
Lend us your ears, if we be friends
And Dominic shall restore amends


🖼️ Photo by Clay Banks on Unsplash

How I Work From Home

Even though travel is (gradually) opening up, I still opted to invest in my home office setup, and I think you should too. Here’s why.

I have been fully-remote for fifteen years now, with only brief interruptions. By that I mean that I have not had a team-mate, let alone a manager, in the same country, and frequently not even in the same time-zone, for that entire time. It’s true that for most of it I have had colleagues in-country, and even offices of varying dimensions and permanence, but they were always in adjacent functions: sales, services, field marketing, and all the back-office functions required to keep an international enterprise functioning.

This means that I am very used to going into an office only rarely, and a setup that lets me work from home has been a requirement for that entire time. The details of my setup have evolved and improved over the years, with increased resources available, and increased permanence to plan for.

The biggest recent change has been recognition that the home office is now a much more permanent part of life. In the Before Times, I would spend a good 50% of my time (if not more) on the road, so the home office was for occasional work. Now, it’s where everything happens, so it had better work well, be comfortable, and look good in the background of Zoom calls.

Here is the current state of the art.

Deep Underground

When we moved into my current place, I earmarked the "tavernetta" for my home office. A "tavernetta" is a uniquely Italian phenomenon: think a US-style basement family room, except that it’s under a block of flats. Several of the flats in my building come with these spaces, but most are only used for storage; a couple are fitted out to be habitable, and mine even includes the luxury of an en-suite bathroom, so I don’t even need to go upstairs to the main family home for that.

There was, however, one minor issue: all of the fittings date back to the Sixties, when this block was originally built. Worse, the flat actually belonged to my wife’s grandmother — so the "tavernetta" is also where my wife and all her cousins held their teenage parties, not to mention her mother and aunts… Out of sight and (more importantly) earshot, but within reach if needed. Anyway, without going into detail, and even though the statute of limitations has long since expired, let’s just say that the furniture and carpets had suffered somewhat over the years of parties.

Over the past summer, therefore, we tore up all the cigarette-burned fitted carpets, ripped out and replaced the ancient and horrible plumbing, and repainted the walls a nice clean white. An electrician was summoned, took one look, sucked his teeth and muttered "vintage", and promptly added a zero to the painful end of his estimate. On the other hand, I do have a lot of electronics plugged in down here, so it’s worth doing it right.

It’s So Bright, I Need Sunglasses

Packing up my desk to make space for all this work was an enormous pain, but I took the opportunity to streamline my setup quite a bit. I was using an ancient Iiyama panel that must be at least a dozen years old; it’s full-HD and was a pretty good screen at the time, but the state of the art has moved on, and the Iiyama is now woefully dim and low-resolution. Worse, it sat between my MacBook Pro and its Retina screen, and a Lenovo 27" panel that I got from work as part of a programme to help employees get set up for work-from-home. The Lenovo has a halfway-house resolution that sits between HD and 4k, but it’s sharp and bright; I run it in portrait (vertical) orientation to look at reference material beside the main screen that I’m working on.

Between those two bright and sharp displays, the Iiyama really suffered by comparison. What I really wanted was a Retina screen to match the MacBook, but Apple only make the monstrous XDR, which is lovely, but costs more than my first several cars — especially once you add a grand’s worth of stand! I put off making a decision, hoping that Apple would finally do what everyone was begging them to and release the 5k panel that they already have in their iMacs as a standalone monitor without a whole computer attached. Apple, in their wisdom, opted not to do this, and offered as a substitute the LG UltraFine. This is supposedly that same panel – but the LG enclosure is ugly as sin, and reports soon surfaced of quality problems: drooping support stands, unreliable USB connections, and even flaky displays. Since the UltraFine is hardly inexpensive, and is also hardly ever in stock, everyone made the hopeful assumption that all these issues meant that surely, soon, Apple would do it right. And so we waited. And waited. And waited.

Last October’s Apple event rolled around with the announcement of the new MacBook Pros, which would have been the obvious time to release a screen to plug the new laptops into, and Apple still didn’t — that was when I snapped. I went out and bought an LG 5k2k Ultrawide panel. The diagonal is a huge 34", but it’s actually only the height of a 27", just stretched out wiiiiide. The picture is sharp, the screen is bright, and the increase in real estate is incredible. As with most "tavernette", mine is partly below street level, and my desk is in the back of the room (it’s fixed to the wall and can’t move), so more light is very welcome. I also added an LED strip above the monitor, and my webcam (a Razer Kiyo mounted on the shelf above the desk) has a ring light, so I think my SAD countermeasures are sufficient for now.

That desk is my working desk, so the only thing that gets plugged in there with any regularity is the MacBook Pro I get from work. I have it on a stand so that it’s at the level of my sight line, and aligned to the monitors too. Before, I had a combo USB hub, USB-C power pass-through, and HDMI adapter Velcro’d to one of the legs of the laptop riser, and that went into one USB-C port, while a second USB-C cable fed the Lenovo. I then had a bunch of USB-A peripherals depending either from that hub or from the USB hub in the back of the Lenovo: keyboard, webcam, microphone, audio device, Ethernet adapter and MuteMe hardware mute button.

I was never super happy with this setup, and with the advent of the monster LG panel, I had an opportunity to redo it properly. Now, I have a single Thunderbolt cable coming out of the MacBook Pro, that takes care of power and all data connections. That cable goes into a CalDigit TS3-Plus dock that feeds everything else: DisplayPort to the LG, Mini DisplayPort to the Lenovo, (gigabit) Ethernet, SPDIF for audio, and powered USB-A for keyboard, webcam, microphone, and MuteMe button — with several more ports still available.

I favour a Microsoft Natural ergonomic keyboard. This is a split keyboard; the benefit is that your wrists do not bend while using it, as they do for straight keyboards such as the ones built in to laptops. It took a little while to get used to, but it’s very comfortable, and I could never go back. It works fine with a Mac, especially once you use Karabiner-Elements to remap some important keys.

My setup is also ambi-moustrous: I have an Apple Magic Mouse on the right of the keyboard — and a Magic Trackpad on the left. This setup lets me alternate my pointing hand to avoid stressing my right hand and wrist, as well as opening up the possibility of trackpad gestures without having to reach up to the MacBook’s trackpad, which is elevated some way off the desk and not exactly natural to use.

Make Some Noise

The audio situation is also worth touching on for a moment. Previously I was running a Cambridge SoundWorks 4+1 speaker setup that I got with a Sound Blaster Live! card more than twenty years ago. They were fine for what they were, but Macs never properly understood them, even with a dedicated USB audio interface that has separate front and rear audio outputs. (The system’s audio setup utility can play test audio through each of the four speakers, but in actual usage, the rear pair make only the faintest noise.) On the other hand, I did like having a physical volume knob on my desk, so I could crank it all the way to the left and be certain that nothing was going to make noise, no matter what.

I replaced these with an Edifier 2+1 set of bookshelf speakers with a monster subwoofer — seriously, the sub is bigger than both speakers together, and by a substantial margin (you can just about see it under the desk in the pic above). They are fed by an optical fibre cable from the CalDigit dock, and sound absolutely fantastic! They also have their own remote, which still lets me mute them without having to trust that some piece of software won’t decide that it’s important to unmute for some reason.

I also have my podcasting setup: a Røde NT-USB microphone that plugs into the CalDigit dock, and a pair of audio-technica headphones that plug into the Røde. The mic is on a spring arm so that I can fold it out of the way when I’m not using it, and the headphones have their own stand to keep them out of mischief.

This is the best setup for me: a single cable to plug in, and the MacBook is docked to everything — and when it’s time to go, one cable unplugged and I’m ready. I keep go-bags of cables and power bricks in both of the bags I use when I leave the house, so I just need to make sure the actual laptop is in there and I’m good to go.

Away From Keyboard

Beyond what is on the desk, my home office includes a few more amenities. There is a mini-fridge under my desk with drinks — mainly sparkling water (tap water plus Sodastream bubbles), but also a few fruit juices and the like for when I fancy something different, and a couple of beers in case of particularly convivial Friday afternoon meetings (although it’s been a while since I’ve had occasion to drink one). I also have an electric Bialetti moka coffee pot for when it’s stimulation that I need rather than relaxation.

Yes, there is a printer down here! After some unpleasant experiences with inkjets, I lived the paperless lifestyle for a long time, but finally caved and bought a laser printer in 2019. At the time I assumed it would remain largely unused, and if I’m honest, I only bought it to placate my wife — who was of course very soon proved to be not only Right (again), but scarily prescient, as we spent much of 2020 in home-schooling mode, printing reams of paper every day. Utilisation has died back down a bit now, but the benefit of laser printers is that they don’t dry up and gunk up their print heads if you don’t print every five minutes.

Moving away from the desk area, I also have a TV down here with a rowing machine in front of it. The TV is passed down from when the main living room TV got upgraded to 4k, but it’s still perfectly serviceable. It’s not connected to an actual TV antenna down here; instead, I have an AppleTV device plugged into it, which means I can AirPlay content to it from my MacBook. How this plays out is that when I am attending a webinar or any sort of camera-off passive presentation, I stream that to the TV screen (without having to disconnect from the desk), and follow the webinar from my rowing machine, getting an education and a workout at the same time.

Make Space

With remote work and work-from-home becoming normalised, at least part-time, I would recommend to everyone that they invest in their home-office setup. I am very conscious that not everyone has the luxury of a dedicated room — but remember, I have been building up to this dream setup for a long time. If you are able to set yourself up with even a desk in a corner, that will help to confine work to that space. The physical separation gives an "I am going to work" and "I am leaving work" rhythm to your day. There’s also a practical benefit to having somewhere to leave work in progress, notes, or whatever without that stuff cluttering up space you need for other purposes (a table you need to eat meals off).

You should also do the best you can in terms of height of desk, chair, keyboard, and screen. Yes, those last two are separate; laptops are an ergonomic nightmare if you are going to be using them all day, every day. Investments in your working environment will pay substantial dividends in terms of physical and mental well-being. It doesn’t have to be a huge expense, either; IKEA stuff is pretty good.

Don’t be put off by the thought that this is all nerd nonsense. Remember, programmers and gamers care deeply about the ergonomics of their computers because they spend a lot of time using them. These days, that describes most of us in white-collar jobs. Leaving aside some of the questionable choices gamers especially might make in terms of the aesthetics of their rigs, there is a lot to learn from those groups. Big screens, comfortable keyboards and mice, and some attention paid to how those devices are laid out in relation to one another, will all make your work life much less painful.

If you don’t have room for a rowing machine — or a Peloton, or a treadmill, or whatever — you may be able to simply exercise in front of your computer screen, depending on personality and the sort of exercise you favour, without needing special equipment and the room to set it up. I would definitely suggest making time for physical exercise, though; a walk around the block before sitting down to work, a run between meetings, or a sneaky bike ride over a lunch break — whatever works for you. I got into the habit of taking a mental-health day every couple of weeks when I was otherwise not leaving the house, and getting on my bike and just disappearing up into the hills. Your precise needs may vary, but try to make room for something in your routine.

And here’s hoping that we get to vary the work-from-home routine with some (safe) in-person interaction in 2022.

Spending Tim Cook's Money

Mark Gurman has had many scoops in his time covering Apple, and they have led him to a perch at Bloomberg that includes a weekly opinion column. This week's column is about how Apple is losing the home, and it struck a chord with me for a few reasons.

First of all, we have to get one thing out of the way. There is a long and inglorious history of pundits crying that Apple must make some particular device or risk ultimate doom. I mean, Apple must be just livid at missing out on that attractive netbook market, right? Oh right, no, that whole market went away, and Apple is doing just fine selling MacBook Airs and iPads.

That said, the reason this particular issue struck home is that I have been trying to get stuff done around the house, and really felt the absence of what feel like some obvious gap-filling devices from Apple. As long as we are spending Tim Cook's money, here are some suggestions of my own — and no, there are no U2 albums on this list!

Can You See Me Now?

FaceTime is amazing; it is by far the most pleasant video-chat software to use. Adding Center Stage on the iPad Pro makes it even better. It has the potential to be a game-changer for group calls — not the Zoom calls where each person is in their own box, but calls where several people are in one place, trying to talk to several people in another place. Examples are families with the kids lined up on the couch, or trying to play board or card games with distant friends. What I really want in those situations is a TV-size screen, but the Apple TV doesn't support any sort of camera. Yes, you can sort of fudge it by mirroring the screen of a smaller device onto the TV via AirPlay, but it's a mess and still doesn't work right. In particular, your eye is still drawn to the motion on the smaller screen, plus you have to find a perch for the smaller device somewhere close enough to the TV that you are "looking at" the people on the other end.

What I want is a good camera, at least HD if not 4K, that can perch somewhere around the TV screen and talk to the Apple TV directly, so that we can do a FaceTime call from the biggest screen in the house. Ideally, this device would also support Center Stage so that it could focus in on whoever is speaking. In reverse, the Apple TV should be able to use positional audio to make the voices of speakers on the far end come from the right place in your sound stage.

Can You Hear Me?

This leads me to the next question. I have dropped increasingly less subtle hints about getting a HomePod mini for Christmas, but if people decide against that (some people just don't like buying technology as a gift), I will probably buy at least one for myself. However, the existence of a HomePod mini implies the existence of a HomePod Regular and perhaps even a HomePod Pro — yet since the killing of the original, no-qualifiers HomePod, the mini is the only product in its family. Big speakers are one of those things that are worth spending money on, in my opinion, but Apple simply does not want to take my money in this regard. Maybe they have one in the pipeline for 2022 and I will regret buying the mini, but right now I can only talk about what's in the current line-up.

Me, I Disconnect From You

This lack of interest in speakers intersects with a similar indifference when it comes to wifi. I loved my old AirPort base station, and the only reason I retired it was that I wanted a mesh network with more sophisticated management options. If we are going to put wifi-connected smart speakers all over our homes, why not make them also act as repeaters of that same wifi signal? They should also work as AirPlay receivers for external, passive speakers, for people who already have good speakers and just want them to be smart.

People Have Families

These additions to Apple's line-up would do a lot more to help Apple "win the home" than Mark Gurman's suggestion of a big static iPad that lives in the kitchen. Apart from the cost of such a thing, it would also require Apple to think much more seriously about multi-user capabilities than they ever have with i(Pad)OS, so that the screen recognises me and shows me my reminders, not my wife's.

Something Apple could do today in the multi-user space is to improve CarPlay. My iPhone remembers where I parked my car and puts a pin in the map. This is actually useful, because (especially these days) I drive my car infrequently enough that I often genuinely do have to think for a moment about where I left it. Sometimes, though, I drive my wife's car, and then it helpfully updates that "parked car" pin, overwriting the location where I parked my car with the last location of my wife's car — which is generally the garage under the building we live in… The iPhone knows that they are two different cars and lets me maintain car-specific preferences; it just doesn't track them separately in Maps. As long as we are wishing, it would be even better if, when my wife drives her car and leaves it somewhere, the pin could update on my phone too, since we are all members of the same iCloud Family.

This would be a first step to a better understanding of families and other units of multiple people who share (some) devices, and the sorts of features that they require.


🖼️ Photo by Howard Bouchevereau on Unsplash

Generalising Wildly

To make a wild generalisation, specialists are made, but generalists are born.

There is any amount of material out there to help people to specialise in a particular subject, ranging in formality from a quick YouTube video to entire academic fields of study. If your question is "how do I get better at X", someone is out there who can help you answer it. From that point on, it’s more of a question of the time, resources, and effort you dedicate to the pursuit — the now-debunked ten thousand hours of practice.

The result of this process of specialisation is (more or less) deep understanding of a particular field — but that understanding is restricted to that one field. Generalists, on the other hand, have an understanding of individual fields that is almost always shallower than that of specialists in that field, but they compensate by spreading their study across many different fields. The value that a generalist brings is the unexpected insight based on correlation or analogy with a different field.

One problem is that there are very few job descriptions out there that call for generalists. I’ve hired a few, but that’s always been on the basis of me being given the opportunity to create roles for myself as a generalist, and then the roles expanding to the point that I needed to build teams to keep up with demand. However, if you go on LinkedIn or whatever and look for openings, most of the job descriptions are looking for pretty narrowly specified skill sets: ten years of experience in this, certification in that, or documented contributions to the other.

Almost by definition, there is no single course of study that will produce generalists; you have to pick and choose between many options. It has not been possible since the actual Renaissance to be a "Renaissance (hu)man", with at least a passing familiarity with the entire corpus of human knowledge and thought. This is of course a Good Thing, driven as it is by a vast expansion in that corpus, but it can make it hard for specialists in different domains to communicate effectively with each other and share insights. It also means there is little formal recognition for roles that are not based on deep specialisation in a single field.

This lack of visibility can be disheartening to generalists or would-be generalists, on top of the impostor syndrome that can come from talking about a particular subject to people who have specialised deeply in it and therefore know it far better. However, generalists are enormously valuable to organisations in a couple of different ways.

One benefit is to prevent the situation where specialists get "so preoccupied with whether or not they could, they didn't stop to think if they should", to quote Dr Ian Malcolm. Generalists are well placed to keep specialists grounded, to be the person in the room saying nope. Maybe they have experienced similar situations in other domains, maybe they are more aware of the constraints that apply to other aspects of the problem space, or maybe they simply don’t get so wrapped up in the elegance of possible solutions.

Another benefit generalists can bring is to be the Swiss Army knife for the organisation. They might not be the best at any one thing, but they can do a lot of things at the drop of a hat without retraining. This is admittedly the sort of benefit that becomes easier to bring to bear after a few years of experience, with some gravitas to lend credibility in the absence of formal certifications. Generalists can be parachuted into developing situations and plug gaps until specialists can be deployed to tackle more permanent solutions.

I’m a generalist, partly as a deliberate career choice, and partly out of circumstance. My university degree is in Computing Science1, but I came to it via a high school that focused very strongly on the humanities: I had more hours of Latin and Greek than of maths or other scientific subjects, and about the same as history and philosophy. My original plan had been to specialise in sysadmin work, which done properly is a pretty generalist role in its own right. What actually happened is that I ended up bridging the technical and human aspects, translating business requirements into technical specs and explaining technical constraints and possibilities in business terms. This sort of thing works well when you have a lot of different experience to call upon, including from different fields, so you don’t get too narrowly blinkered and end up proposing the same one-size-fits-all solution to every problem you are presented with.

To make this concrete, here are some of the skills I have accumulated in my magpie fashion over the years:

  • Graphic design
  • Web design (including accessibility)
  • UI & UX
  • Presentation design
  • Public speaking
  • Writing (technical and otherwise)
  • Translation
  • Programming (I’ve learned over a dozen languages, and while I’m not great or even good at any of them, I can pick something up and hack at it until it works)
  • Software localisation and internationalisation (l10n and i18n)
  • Availability and performance monitoring and observability
  • System deployment, configuration, and maintenance
  • Network design and admin
  • Database design and admin
  • Cloud stuff ranging from IaaS to PaaS to SaaS
  • RoI and business case development
  • User surveys and interviews
  • Competitive analysis (both tech and GTM)
  • Training and enablement (development and delivery)

And I can do all of that and more in three to five (human) languages, depending on how formal I have to get.

Some of these I’m only barely competent in, but I can at least have a reasonable conversation with an actual specialist where we understand each other, and I have the basis to go deeper if I ever have a need to. All of these skills have come in handy as parts of paid jobs where they absolutely were not part of the job spec, and several times a skill that was way outside my job description has saved someone’s bacon — mine, a colleague’s, a customer’s, or my employer’s.

Don’t underestimate generalists — and if you’re a generalist, or thinking about branching outside of your specialisation, don’t underestimate yourself.


🖼️ Photos by Thought Catalog, Hans-Peter Gauster and Patrick Tomasso on Unsplash


  1. Yes, Computing, not Computer; at my university, those were two separate courses, but one was basically straight-up software engineering, while the other also included a grounding in networks, databases, neural networks (in the late 90s this was cutting-edge stuff!) and even human interface design. 

Textual Podcast

This week's episode of our Roll for Enterprise podcast was thwarted by myriad technical difficulties, so in lieu of a normal episode we are trying to take our witty banter to text. Let’s see how this goes; thank you for taking the ride with us.

Mike starts us off with a bang, picking up on the release of the new Gartner Hype Cycle for Enterprise Networking (don’t worry, the link is to The Register, you don’t need a Gartner account to read it):

Mike: The reason I don’t trust most market research is that I am pretty sure vendors are writing it …

Dominic: I wish!

Lilac: Surely not. There are many capable people inside Forrester and possibly even Gartner. But half the job is sorting through the embellishments of the vendors they do meet.

Dominic: I think this is an important point though. Many people have the impression that analysts are "pay to play", and while there are some out there that might fit that description, most of the big reputable analyst firms that you have heard of don’t work that way. There is one sense in which it is true: as a vendor, if you are a paying client of Gartner, Forrester, or whoever, you get more time with analysts, which translates to more opportunities to make your case to them. However, even in that sort of context, the good individual analysts are the ones who will pull you up if you make some sort of wild claim and demand proof, or tell you they are not hearing that type of request from individual practitioners, or whatever. These people tend to develop a personal reputation over and above that of the firm that employs them, because they provide an extremely valuable service.

To vendors, they act as a reality check, and give us an opportunity to refine our messaging and product plans in a semi-private setting rather than having to make corrections in the harsh glare of the public market.

Meanwhile, to practitioners these analysts provide a validated starting point for their own investigations. For instance, if you are building a shortlist for a vendor selection, you might use the Gartner Magic Quadrant or the Forrester Wave for that market segment to double-check that you have made sensible selections. However, once again, the individual analysts leading the compilation of a particular Wave or MQ will make a big difference to the result, bringing their own experience and biases to bear, so it’s rarely as simple as just picking the vendor that’s furthest up and to the right.

Lilac: Totally agree. That has been my experience, Dominic - though we should hear from Zack here. I have been at large vendors with deep pockets that were panned by analysts - and small vendors with minuscule budgets that were lauded. Honesty, clarity, sanity … are both more valuable and harder to come by than contract dollars.

Zack: I have to be careful how I answer this question, but there aren't any surprises here. Dominic’s point about vendor briefings is valid, although you typically must be a paying client to schedule inquiries. Speaking from experience, a briefing is "supposed" to be one-way communication: you can update the analysts, but you can’t ask questions about market landscapes or have conversations outside of the briefing. I would be more concerned about "real-world" experience as opposed to research exclusively. There are indeed some analysts who have never set foot in a data center. They base their conclusions on multiple data points: customer interactions (typically hundreds, across multiple analysts), vendor briefings and inquiries, hands-on labs in some cases, and research. But is that sufficient to form a conclusion? It might be, but as someone who spent many nights in a data center, I can say there is nothing that takes the place of "real-world" experience. As with anything, people should use any analyst's feedback as just another data point in their quest to make a decision.

Dominic: Exactly – and the Hype Cycle is a perfect example, because people have a tendency to take it as predictive, assuming that every technology will eventually emerge onto the Plateau of Productivity. In actual fact, though, there is a Pit of Oblivion somewhere at the bottom of the Trough of Disillusionment, which is where all the once-promising tech that never emerges goes to die. What is amazing about this particular Hype Cycle graphic is that IPv6 is still on it, and still in the 5-to-10-years-out category!

Lilac: Or VDI! It’s always the year for VDI. It’s going to be amazing.

The analysts aren’t giving you answers. They are giving you input. Things to consider. Independent checkpoints outside your organization. This isn’t a surgical specialist giving you the best answer. It’s a real estate agent, guiding you through options.

But then… why do vendors seem to take the word of these analysts as validation and gospel? I never understood it.

Dominic: At the risk of getting excessively philosophical, the answer is the same as it is for many things in grown-up life: because even with all the flaws, this is the least bad way we have found yet that is remotely practical. Right now I am sitting on both sides of different tables – wait, that metaphor sounds wrong. Swivelling my chair back and forth between two tables? Anyway. I am both running a procurement exercise which involves comparing different vendors, and participating as a vendor in a market survey.

At the customer table, I don’t have time to evaluate every vendor in even a relatively niche market, so I use analyst opinion as one of my tools to whittle down the list. Once I had got it down to two, I took the time to talk to each one, and also talked to current customers, started figuring out pricing models, and so on – but all of that takes a lot of time.

This is kind of what I imagine readers are doing with the reports my employer participates in: not taking them as gospel, but using them as one input into their selection process. Getting philosophical again, it tends to be the people furthest from the process who get most excited about the results – but that’s understandable: it’s one of the few results that is uncritically good. I get weirded out by vendors who trumpet their profitability, because that’s a short step to customers thinking "I’m being overcharged!". Meanwhile, being certified as top of your particular field by a (supposedly) objective observer is pretty great.

But I still see no sign of IPv6 catching on anywhere. Even the hysteria about IPv4 address space running out seems to have died down.

Recommendations

Dominic

I want to recommend the latest book by Becky Chambers, A Psalm for the Wild-Built. If you’ve read her Wayfarers books (which I also recommend), you know what you are in for, even if this is a completely different setting. If you haven’t, the dedication should give you an idea: "For anybody who could use a break". It’s a delightful little SF novella that packs a lot into its short length.

Lilac

Use the 'Organizational Bullshit Perception Scale' to Decide If You Should Quit Your Job


Thanks for sticking with us! Normal service should be resumed next week. We hadn't missed an episode until now, thanks to our innovative podcast architecture, which is based on a redundant array of independent co-hosts; while it's a shame we couldn't keep the streak going, this is at least something. Apparently 26% of podcasts only ever produce a single episode, while we got to 62 before this hiccough.

Follow the show on Twitter @Roll4Enterprise or on our LinkedIn page. Please subscribe to the show, and do send us suggestions for topics and/or guests for future episodes!