AWS re:Invent 2022

At this time of year, with the nights drawing in, thoughts turn inevitably to… AWS' annual Las Vegas extravaganza, re:Invent. This year I'm attending remotely again, like it's 2020 or something, which is probably better for my liver, although I am definitely feeling the FOMO.

Day One: Adam Selipsky Keynote

I skipped Monday Night Live due to time zones, but as usual, this first big rock on the re:Invent calendar is a barrage of technical updates, with few hints of broader strategy. That sort of thing comes in the big Tuesday morning keynote with Adam Selipsky.

Last year was his first re:Invent keynote after taking over from Andy Jassy, following Jassy's ascension to running the whole of Amazon, not just AWS. This year's delivery was more polished, and it looks like we have seen the last of the re:Invent House Band. Adam Selipsky himself, though, was still playing the classics, talking up the cost-saving benefits of cloud computing and using examples such as Carrier and Airbnb to allude to companies' desire to be agile with fewer resources.

Still, it's a bit of a double-take to hear AWS still talking about cloud migration in 2022 — even if, elsewhere in Vegas, there was a memorable endorsement of migration to the cloud from Ukraine's Minister for Digital Transformation. Few AWS customers have to contend with the sorts of stress and time pressure that Mykhailo Fedorov did!

In the keynote, the focus was mostly on exhortations to continue investing in the cloud. I didn't see Andy Jassy's signature move of presenting a slide that shows cloud penetration as still being a tiny proportion of the market, but that was definitely the spirit: no reason to slow down, despite economic headwinds; there's lots more to do.

Murdering the Metaphors

We then got to the first of various metaphors that would be laboriously and at length tortured to breaking point and beyond. The first was space exploration, and admittedly there were some very pretty visuals to go with the point being belaboured: namely, that just like images captured in different wavelengths show different data to astronomers, different techniques used to explore data can deliver additional results.

There were some good customer examples in this segment: Expedia Group making 600B predictions on 70 petabytes of data, and Pinterest storing 1 exabyte of data on S3¹. That sort of scale is admittedly impressive, but this was the first hint that the tempo of this presentation would be slower, with a worse ratio of content to time than we had been used to in the Jassy years.

Tools, Integration, Governance, Insights

This led to a segment on the right tools, integration, and governance for working with data, and the insights that would be possible. The variety of tools is something I had focused on in my report from re:Invent 2021, in which I called out AWS' "one database engine for each use case" approach and questioned whether this was what developers actually wanted.

Initially, it seemed that we were getting more of the same, with Amazon Aurora getting top billing. The metrics in particular were very much down in the weeds, mentioning that Aurora offered one-tenth the cost of a commercial DBMS, while also delivering up to 3x the performance of PostgreSQL and 5x the performance of MySQL².

We then heard about how customers also need analytics tools, not just transactional ones, such as EMR, MSK, and Redshift for high performance on structured data - 5x better price performance than "other cloud data warehouses" (a not-particularly-veiled dig at Snowflake, here — more of a Jassy move, I felt).

The big announcement in this section was OpenSearch Serverless. This launch means that AWS offers serverless options for all of its analytics services. According to Selipsky, "no-one else can say that". However, it is worth checking the fine print. In common with many "serverless" offerings, OpenSearch Serverless has a minimum spend of 4 OCUs — or $700 in real money. Scaling to zero is a key requirement and expectation of serverless, so it is disappointing to see so many offerings like this one that devolve to elastic scalability on top of a fixed base. Valuable, to be sure, but not quite so revolutionary.
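Back of the envelope, that floor works out as follows. The OCU-hour price here is an assumption based on the widely reported launch list price of roughly $0.24, so treat this as a sketch and check the current pricing page rather than relying on it:

```python
# Rough check on the "serverless" spending floor for OpenSearch Serverless.
# ASSUMPTION: ~$0.24 per OCU-hour (approximate launch list price).
OCU_HOUR_PRICE = 0.24   # dollars, assumed
MIN_OCUS = 4            # minimum billed capacity at launch
HOURS_PER_MONTH = 730   # average month

monthly_floor = MIN_OCUS * OCU_HOUR_PRICE * HOURS_PER_MONTH
print(f"Minimum monthly spend: ${monthly_floor:,.2f}")
```

Run the numbers and you land right around that $700 figure, every month, whether or not a single query arrives. That fixed base is exactly why "scales to zero" is the claim worth scrutinising.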

ETL Phone Home

Then things got interesting.

Adam Selipsky gave the example of a retail company running its operations on DynamoDB and Aurora and needing to move data to Redshift for analysis. This is exactly the sort of situation I decried in last year's report for The New Stack: too many single-purpose databases, leaving users trying to copy data back and forth, with the attendant risk of loss of control over their data.

It seems that AWS product managers had been hearing the same feedback that I had. Rather than committing to one general-purpose database, though, they are doubling down on their best-of-breed approach: they have enabled federated query in Redshift and Athena to query other services, including third-party ones.

The big announcement was zero-ETL integration between Aurora and Redshift. This was advertised as being "near real time", with latency measured in seconds — good enough for most use cases, although something to be aware of for more demanding situations. The integration also works with multiple Aurora instances all feeding into one Redshift instance, which is what you want. Finally, the integration was advertised as being "all serverless", scaling up and down in response to data volume.

Take Back Control

So that's the integration — but that only addresses questions of technical complexity and maybe cost of storage. What about governance? Removing the need for ETL from one system into another does remove one big issue, which is the creation of a second copy of the data without the access controls and policy enforcement applied to the original. However, there is still a need to track metadata — data about the data itself.

Enter Amazon DataZone, which enables users to discover, catalog, share, and govern data across organisations. What this means in practice is that there is a catalog of available data, with metadata, labels, and descriptions. Authorised consumers of the data can search, browse, and request access, using existing tools: Redshift, Athena, and Quicksight. There is also a partner API for third-party tools; Snowflake and Tableau were mentioned specifically.

The Obligatory AI & ML Segment

I was not the only attendee to note that AWS spent an inordinate amount of time on AI & ML, given AWS' relatively weak position in that market.

Adam Selipsky talked up the "most complete set of machine learning and AI services", as well as claiming that Sagemaker is the most popular IDE for ML. A somewhat-interesting example is ML-powered forecasting: take a metric on a dashboard and extend it into the future, using ML to include seasonal fluctuations and so on. Of course this is only slightly more realistic than just using a ruler to extend the line, but at least it saves the time needed to make the line look credibly irregular.

More Metaphors

Then we got another beautiful video segment, which Adam Selipsky used to bridge somehow from underwater exploration to secure global infrastructure and GuardDuty. The main interesting announcement in this segment was Amazon SecurityLake, a "dedicated data lake to combine security data at petabyte scale". Data in the lake can be queried with Athena, OpenSearch, and Sagemaker, as well as third-party tools.

It didn't sound like there was massive commitment to this offering, so the whole segment ended up sounding opportunistic. It reminded me of Tim Bray's recent tale of how AWS never did get into blockchain stuff: as long as people are going to do something, you might as well make it easy.

In this case, what people are doing is dumping all their logs into one place in the hope that they can find the right algorithm to sift them with and find interesting patterns that map to security issues. The most interesting aspect of SecurityLake is that it is the first tool to support the new Open Cybersecurity Schema Framework format. This is a nominally open format (Cisco and Splunk were mentioned as contributors), but it is notable that the examples in the OCSF white paper are all drawn from AWS services. OCSF is a new format, only launched in August 2022, so ultimate adoption by the industry is still unclear.

Trekking Towards The End

By this point in the presentation I was definitely flagging, but there was another metaphor to torture, this time about polar exploration. Adam Selipsky contrasted the Scott and Amundsen expeditions, which seemed in remarkably poor taste, what with all the ponies and people dying — although the anecdote about Amundsen bringing a tin-smith to make sure his cans of fuel stayed sealed was admittedly a good one, and the only non-morbid part of the whole segment. Anyway, all of this starvation and death — of the explorers, I mean, not the keynote audience, although if I had gone before breakfast I would have been regretting it by this point — was in service of making the point that specific tools are better than general ones.

We got a tour of what felt like a large proportion of AWS' 600+ instance types, with shade thrown at would-be Graviton competitors that have not yet appeared, more ML references with Inferentia chips, and various stories about HPC. Here it was noticeable that the featured customer use case ran on Intel Xeon chips, despite all of those earlier Graviton references.

One More Metaphor

There was one more very pretty video on imagination, but it was completely wasted on supply chains and call centres.

There was one last interesting offering, though, building on that earlier point about governance and access. This was AWS Clean Rooms, a solution to enable secure collaboration on datasets without sharing access to the underlying data itself. This is useful when working across organisational boundaries, because instead of copying data (which means losing control over the copy), it reads data in place, and thereby maintains restrictions on that data. Quicksight, Sagemaker, and Redshift all integrate with this service at launch.

There was one issue hanging over this whole segment, though. The Clean Rooms example was from advertising, which leads to a potential (perception of) conflict of interest with Amazon's own burgeoning advertising business. Like another new service, AWS Supply Chain, it's easy to imagine this offering being a non-starter simply because of the competitive aspect, much as retailers prefer to work with cloud providers other than AWS.

Turn It To Eleven

All in all, nothing earth-shattering — certainly nothing like Andy Jassy's cavalcade of product announcements, upending client and vendor roadmaps every minute or so. Maybe that is as it should be, though, for an event which is in its eleventh year. And this may well be why Adam Selipsky steered clear of Jassy's old "the cloud is still in its infancy" framing, when it is so clearly a market that is maturing fast. In particular, we are seeing a maturation in the treatment of data, from a purely technical focus on specific tasks to a more holistic lifecycle view. This shift is very much in line with the expectations of the market; however, at least based on this keynote, AWS is playing catch-up rather than defining the field of competition. Notably, all of the governance tools only work with analytical (OLAP) services, not with real-time transactional (OLTP) ones. Extending that governance to transactional systems would be a truly transformative move, especially if it could be accomplished without too much of a performance penalty.

The other thing that is maturing is AWS' own approach, moving inexorably up the stack from simple technical building blocks to full-on turnkey business applications. This shift does imply a change in target buyers, though; AWS' old IT audience may have been happy to swipe a credit card, read the docs, and start building, but the new audience they are courting with Supply Chain and Clean Rooms certainly will not. It will be interesting to watch this transformation take place.


  1. It was not clarified how much of that data is used to poison image search engines. 

  2. Relevant because Aurora (and RDS, which it is built on) is based on PostgreSQL and MySQL, with custom storage enhancements to give that speed improvement. 

Marketing Without Surveillance

This is a post that I drafted when Facebook released their last results, and never got around to publishing. Why publish it now? For a start, none of this is breaking news, so it remains as relevant as it ever was. More importantly, with the ongoing bonfire of Twitter, the question of whether ad-funded social networks are a good thing is more relevant than ever.

My position remains that none of this tracking nonsense is worthwhile. I have never been served a relevant ad through surveillance-driven adtech. Meanwhile, brand advertising works just fine, simply by virtue of the brand being present in the right context: bike gear on a cycling blog, that sort of very limited targeting that only requires a single bit of information about the audience.

Meta Loses Top-10 Ranking by Market Value Amid Worst Month Ever
Social media company falls behind Tencent in value ranking
Facebook parent has lost $513 billion in market cap from peak
Stock has fallen 46% from last year’s record.

What do the terrible results announced by Facebook — I refuse to give in to their desire that we call them Meta — actually mean?

Zuck blamed Apple's ad tracking prevention features for wiping $10B off their bottom line, and there has been a concerted push since to present this as somehow a bad thing, especially for small businesses. I agree with Nick Heer that this framing is pretty gross on Facebook's part, but what I wanted to do today is to discuss alternatives that are open to marketers today.

I'm not in marketing these days, and I never worked directly in the demand-generation side that would get actively involved with this sort of thing — but I have worked closely with those teams and been in the planning meetings, so I have at least an idea of how that business works.

Everything starts with a campaign: you have a particular message you want to get out, you want it to reach a particular audience, and you want some idea of how effective it is. Given those goals, there are different ways to go about running your campaign — different largely in their ethics, rather than in their actual results. Let's take a look.

Alice and Bob work for ACME Widgets Corp. Both of them are launching marketing campaigns for the coming quarter — but they take different approaches, even though they have the same metrics set by their boss, Eve the VP of Marketing.

Alice goes all-in on the surveillance model: her emails have tracking pixels, the links they point to are all gated behind a form that also signs you up for a newsletter, and she places ads that follow users around the web once they have come within her net. She even messes with the favicon and the hosted fonts on the website in order to be able to track users that way. At the end, thanks to all of this effort, Alice can show Eve attribution metrics with a certain click-through rate for her outreach and a certain acquisition cost per customer, set against their likely lifetime value to ACME.

Bob takes a different tack: his emails are plain text, without even any images — since plenty of people now reflexively block all images in email, or load them through proxies. The links in the email are customised so that Bob can tell which email was the one that triggered the action, but then they go directly to the linked resource. He also buys ads, but instead of direct calls to action, Bob focuses on brand advertising in the sorts of publications that the prospective customers are likely to read. At the end, Bob can also show Eve attribution metrics, click-through rates and customer acquisition costs — but he has got there without irritating prospective customers, or falling foul of either technical countermeasures or policies such as GDPR or CCPA.
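As a sketch of the kind of link customisation Bob might use, first-party attribution needs nothing more than a few query parameters carried openly on the destination URL. The domain, campaign name, and parameter choices below are all invented for illustration:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def tag_link(url: str, campaign: str, email_id: str) -> str:
    """Append first-party campaign parameters to a destination URL.

    The recipient lands directly on the resource; the only "tracking"
    is which mailing the click came from, visible in the URL itself.
    """
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": "email",
        "utm_campaign": campaign,
        "utm_content": email_id,  # identifies the specific mailing
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# Hypothetical example: a link for Bob's Q3 widget launch mailing.
link = tag_link("https://example.com/widgets", "q3-launch", "2022-11-wk2")
print(link)
# https://example.com/widgets?utm_source=email&utm_campaign=q3-launch&utm_content=2022-11-wk2
```

No redirect service, no tracking pixel: the web analytics on ACME's own site can attribute the visit, and anyone who clicks can see exactly what information they are handing over.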

Comparing Alice and Bob’s Results

Effectively, Alice and Bob have access to the same metrics; it's just that one of them is going about the process of gathering them honestly. The only data point Bob is missing is the open rate on those emails — but first of all, how useful is that metric in reality? If the indicator that an email was opened is that a tracking pixel was loaded, Alice doesn't know whether the recipient actually read the whole thing, or paged past her email quickly on their way to something they actually wanted. And even assuming that it's an accurate representation of how many people read the text but don't click on any of the links — what can Alice do with that information that Bob would not also do with the information that he sent out X number of emails and Y% of recipients clicked on the call-to-action link? And no, for goodness sake, the answer is not even more layers of attribution woo that claims to be able to identify whether someone came to the ACME website because they remembered the email, or the billboard ad, or because someone mentioned it to them at work — let alone trying to embed the "read progression" code that far too many websites now include.
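The arithmetic behind the report that both of them hand Eve really is this trivial; the numbers below are made up purely for illustration:

```python
# Toy campaign numbers, invented for illustration.
emails_sent = 10_000
clicks = 250              # unique clicks on the call-to-action link
customers_won = 25
campaign_cost = 5_000.00  # total spend, in dollars

# Both Alice and Bob can compute these from first-party data alone.
click_through_rate = clicks / emails_sent             # fraction of sends
cost_per_acquisition = campaign_cost / customers_won  # dollars per customer

print(f"CTR: {click_through_rate:.1%}, CAC: ${cost_per_acquisition:.2f}")
```

Nothing in those two ratios requires knowing who opened the email, where they went afterwards, or what they had for breakfast.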

Secondly, all of these intrusive metrics now have a firm expiry date stamped on them. On top of the ad tracking prevention, Apple now offers a Private Relay capability in iCloud that hides originating IP addresses. Browsers already no longer report a whole lot of information that they used to, precisely because it was used for creepy tracking stuff. By building her campaigns this way, Alice might achieve her goals today, but soon she will not be able to run campaigns like this, and will have to learn to do things Bob's way anyway.

At the core of Bob's method is turning tracking inside out. Instead of trying to stalk users around the Web, engaging in a constant arms race and violating their clearly expressed preference, Bob simply figures out where his most valuable prospects gather and advertises there. First-party data is enough for his purposes, and while individual ads might be more expensive in CPM, he avoids engaging with an ecosystem that is ridden with fraud. He also does not need to worry that the ACME ad might show up beside some tin-foil-hatter YouTube channel and get bad press that way — and the time he doesn't spend micro-managing ad placement can be spent more productively on creating better copy, or an entire other campaign.

Context matters in other ways, too: when a prospective customer is reading about the latest political crisis, famine, or natural disaster, they are not in a widget-buying mood, so showing them a widget ad is counter-productive anyway. Instead, Bob puts his widget ads in widget blogs, places them with streamers who test widgets, and gets hosts of widget-focused podcasts to read out his ads. All of these channels have very limited tracking; podcasts offer none at all, unless Bob creates a special landing page or discount code for listeners of each podcast. And yet, those are some of the most expensive ad slots around, because the context makes them very strong indicators of desire to buy.

Eve looks at the campaign performance numbers presented by a haggard Alice and a relaxed Bob, remembers the news stories about Apple and Google clamping down further on ad tracking, and suggests gently to Alice that maybe she should sit with Bob and figure out how to get the job done without the crutch of surveillance ad tech.


🖼️ Photos by Charles Deluvio and Headway on Unsplash

Retracing My Steps

Another ride report post! This time, I decided on the spur of the moment to try a route I hadn't ridden before. It turned out to be a wee bit longer than I had really allowed for, which made me slightly late for family Sunday lunch — oops. I had also forgotten to charge my Apple Watch, so this ride went unrecorded, but I'm pretty sure the distance was around 80km, so not bad. The highest point was around 550m, but there was a fair bit of up and down, so the total vert would be quite a bit more.

Two of the things that make me happiest are bicycles and mountains, though, so riding up into the mountains like this does me an enormous amount of good. Here are some of the highlights of Sunday's ride.

I had only just left the tarmac when I saw three deer bouncing through the wispy fog that was still drifting across the ploughed fields. They moved fast enough that by the time I had stopped and got my phone out, I needed the 3x zoom — and one of the deer got away entirely. For such an extreme shot from a phone camera, I'm not unhappy with the results.

I also love that the scenery looks pretty wild in this framing, when in fact it's still close to a bunch of warehouses and factories, a true liminal space. The early part of this route is stitched together from tracks between fields, precisely to avoid the busy roads around those industrial areas.

A little further along, and with the sun burning off the last vestiges of the mist, I stopped again because I liked the view of the river rippling across the stones. After this stop, though, I hit some pretty technical riding and had to concentrate on where I was putting my wheels. Some rain had finally arrived after the long drought, and then motorbikes (ugh) had come through, so all the mud was churned up into mire.

On my mountain bike I'd probably have been fine, but the Bianchi has some intermediate gravel tyres that are pretty smooth in the centre and with only a little bit of tread on the sides, as well as being narrower than MTB tyres. This is the sort of terrain where I'm glad to have proper pedals that I can unclip from and ride along with my feet free just in case I lose my balance and need to put a foot down in a hurry. Anyway, I got through without too much trouble, despite a lot of slipping and sliding. I did have to stop to clear out the plug of mud between rear wheel and frame once I got out of the woods, and then I walked the bike along the edge of one field that had been ploughed right to the river's edge, not leaving any smooth terrain to ride on.

Nothing much to say about this tower, I just always like the look of it. This is also where the trail finally starts to climb out of the plain.

This is an old railway bridge, and because the road bridge is just upstream, it's reserved for walking and riding. It's not at all signposted, either, so you have to know it's there; I rarely see anyone else on it.

One of the reasons I ride a gravel bike is so that I can spend as little time as possible sharing the road with cars. It's tough to avoid that when it comes to river crossings, though! One newer bridge around here has a cycle path slung underneath it, and one of the busier bridges carved out a cycle path in a redesign, but this one is the best of all.

After that I rode properly up into the hills, climbing up out of the Nure valley and over the watershed down into the Trebbia valley before heading home. Unfortunately the day clouded over a bit too, so although I did stop to take a few more shots, they aren't nearly so scenic. I did want to share this one, though, because that rocky outcrop in the middle distance already featured in a past ride report.

Business Case In The Clouds

A perennial problem in tech is people building something that is undeniably cool, but is not a viable product. The most common definition of "viable" revolves around the size and accessibility of the target market, but there are other factors as well: sustainability, profitability, growth versus funding, and so on.

I am as vulnerable as the next tech guy to this disease, which is just one of many reasons why I stay firmly away from consumer tech. I know myself well enough to be aware that I would fall in love with something that is perfectly suited to my needs and desires — and therefore has a minuscule target market made up of me and a handful of other weirdos.

One of the factors that makes this a constant problem, as opposed to one that we as an industry can resolve and move on from, is that advancing tech continuously expands the frontiers of what is possible, but market positioning does not evolve in the same direction or at the same speed. If something simply can't be done, you won't even get to the "promising demo video on Kickstarter" stage. If on the other hand you can bodge together some components from the smartphone supply chain into something that at least looks like it sort of works, you might fool yourself and others into thinking you have a product on your hands.

The thing is, a product is a lot more than just the technology. There are a ton of very important questions that need to be answered — and answered very convincingly, with data to back up the answers — before you have an actual product. Here are some of the key questions:

  • How many people will buy one?
  • How much are they willing to pay?
  • Given those two numbers, can we even manufacture our potential product at a cost that lets us turn a profit? If we have investors, what are their expectations for the size of that profit?
  • Are there any regulations that would bar us from entering a market (geographical or otherwise)? How much would it cost to comply with those regulations? Are we still profitable after paying those costs?
  • How are we planning to do customer acquisition? If we have a broad market and a low-cost product, we're going to want to blanket that segment with advertising and have as self-service a sales channel as possible. On the other hand, if we are going high-end and bespoke, we need an equally bespoke sales channel. Both options cost money, and they are largely mutually exclusive. And again, that cost comes out of our profit margin.
  • What's the next step? Is this just a one-shot campaign, or do we have plans for a follow-on product, or an expansion to the product family?
  • Who are our competitors? Do they set expectations for our potential customers?
  • How might those competitors react? Can they lower their own prices enough that we have to reduce ours and erode our profit margin? Can they cross-promote with other products while we are stuck being a one-trick pony?

These are just some of the obvious questions, the ones that you should not move a single step forward without being able to answer. There are all sorts of second- and third-order follow-ups to these. Nevertheless, things-that-are-not-viable-products keep showing up, simply because they are possible and technically cool.

Possible, Just Not Viable

One example of how this process can play out would be Google Stadia (RIP). At the time of its launch, everyone was focused on technical feasibility:

[...] streaming games from datacenters like they’re Netflix titles has been unproven tech, and previous attempts have failed. And in places like the US with fixed ISP data caps, how would those hold up to 4-20 GB per hour data usage?

[...] there was one central question. Would it even work?

Some early reviewers did indeed find that the streaming performance was not up to scratch, but all the long-term reports I heard from people like James Whatley were that the streaming was not the problem:

The gamble was always: can Google get good at games faster than games can get good at streaming. And I guess we know (we always knew) the answer now. To be clear: the technology is genuinely fantastic but it was an innovation that is looking - now even more overtly - for a problem to solve.

As far as we can tell from the outside (and it will be fascinating to read the tell-all book when it comes out), Google fixated on the technical aspect of the problem. In fairness, they were and are almost uniquely well-placed to make the technology work that enables game streaming: data centers everywhere, fast network connections, and in-house expertise on low-latency data streaming. The part which apparently did not get sufficient attention was how to turn those technical capabilities into a product that would sell.

Manufacturing hardware has never been Google's strong suit. Sure, they make various phones and smart home devices, but they are bit-players in terms of volume, preferring to supply software to an ecosystem of OEMs. However, what really appears to have sunk Stadia is the pricing strategy. The combination of a monthly subscription and having to buy individual games appears to have been a deal-killer, especially in the face of other streaming services from long-established players such as Microsoft or Sony which only charge a subscription fee.

To recap: Google built some legitimately very cool technology, but priced it in a way that made it unattractive to its target customers. Those customers were already well-served by established suppliers, who enjoyed positive reputations — as opposed to Google's reputation for killing services, one that has been further reinforced by the whole Stadia fiasco. Finally, there was no uniquely compelling reason to adopt Stadia — no exclusives, no special integration with other Google services, just "isn't it cool to play games streamed from the cloud instead of running on your local console?" Gamers already own consoles or game on their phones, especially the ones with the sort of fat broadband connection required to enable Stadia to work; there is not a massive untapped market to expand into here.

So much for Google. Can Facebook — sorry, Meta — do any better?

Open Questions In An Open World

Facebook rebranded as Meta to underline its commitment to a bright AR/VR future in the Metaverse (okay, and to jettison the increasingly stale and negative branding of the Blue App). The question is, will it work?

Early indications are not good: Meta’s flagship metaverse app is too buggy and employees are barely using it, says exec in charge. Always a sign of success when even the people building the thing can't find a reason to spend time with it. Then again, in fairness, the NYT reports that spending time in Meta's Horizon VR service was "surprisingly fun", so who knows.

The key point is that the issue with Meta is not one of technical feasibility. AR/VR are possible-ish today, and will undoubtedly get better soon. Better display tech, better battery life, and better bandwidth are all coming anyway, driven by the demands of the smartphone ecosystem, and all of that will also benefit the VR services. AR is probably a bit further out, except for industrial applications, due to the need for further miniaturisation if it's going to be accepted by users.

The relevant questions for Meta are not tech questions. Benedict Evans made the same point discussing Netflix:

As I look at discussions of Netflix today, all of the questions that matter are TV industry questions. How many shows, in what genres, at what quality level? What budgets? What do the stars earn? Do you go for awards or breadth? What happens when this incumbent pulls its shows? When and why would they give them back? How do you interact with Disney? These are not Silicon Valley questions - they’re LA and New York questions.

The same factors apply to Horizon. It's a given that Meta can build this thing; the tech exists or is already on the roadmap, and they have (or can easily buy) the infrastructure and expertise. The questions that remain are all "but why, tho" questions:

  • Who will use Horizon? How many of these people exist?
  • How will Horizon pay for itself? Subscriptions — in exchange for what value? Advertising — in what new formats?
  • What's the plan for customer acquisition? Meta keeps trying to integrate its existing services, with unified messaging across Facebook, Instagram, and WhatsApp, but it doesn't really seem to be getting anywhere with consumers.
  • Following on from that point, is any of this going to be profitable at Meta's scale? That qualification is important: to move the needle for Zuckerberg & co., this thing has to rope in hundreds of millions of users. It can't just hit a Kickstarter milestone and declare victory.
  • What competitors are out there, and what expectations have they already set? If Valve failed to get traction with VR when everybody was locked down at home and there was a new VR-exclusive Half-Life game1, what does that say about the addressable market?

None of these are questions that can be answered based on technical capabilities. It doesn't matter how good the display tech in the headsets is, or whether engineers figure out how to give Horizon avatars innovative features such as, oh I don't know, legs. What matters is what people can do in Horizon that they can't do today, IRL or in Flatland. Nobody will don a VR headset to look at Instagram photos; that works better on a phone. And while some people will certainly try to become VR influencers, that is a specialised skill requiring a ton of support; it's not going to be every aspiring singer, model, or fitness instructor who is going to make that transition. Meta will need a clear and convincing answer that is not "what if work meetings but worse in every way".

So there you have it: one failed product and one that is still unproven, both cautionary tales of putting the tech before the actual product.


  1. I love this devastating quote from PCGamesN: "Half-Life: Alyx, [...] artfully crafted though it was, [...] had all the cultural impact of a Michael Bublé album." Talk about vicious! 

Network TV

It is hardly news that the ad load on YouTube has become ridiculous, with both pre-roll and several mid-roll slots, even on shorter videos. In parallel with the rise of annoying ads, YouTube is also deluging me with come-ons for YouTube Premium, their paid ad-free experience. I haven't coughed up because a) I'm cheap, and b) this feels like blackmail: "pay up or we'll serve even more annoying unskippable ads".

YouTube charges through the nose for add-on offerings like YouTube Premium or YouTube TV, the US-only streaming replacement for cable TV. The expense of these services highlights just how profitable advertising is for them — and still they need to add more and more slots. The suspicion is of course that individual ads cost less, so YouTube needs to show ever more in order to continue their growth trajectory in the face of competition for eyeballs from the likes of TikTok.

Now news emerges that YouTube is negotiating to add content to its subscription services:

YouTube has been in closed-door talks with streaming broadcasters about a new product the video giant wants to launch in Australia, which industry insiders say is an ambitious play to own the home screen of televisions.
The company is seeking deals with Australian broadcasters to sell subscriptions to services such as Nine-owned Stan and Foxtel’s Binge directly through YouTube, which would then showcase the streamers’ TV and movie content to users.

Not being in Australia, I'm not familiar with either Stan or Binge, but the idea would appear to be to get more users habituated to paying for subscriptions through YouTube. There are already paid-subscription YouTube channels out there, but not many; it seems that most creators have opted for the widest possible distribution and monetisation via ads, instead of direct monetisation via paying subscriptions in exchange for a smaller audience. Perhaps the pull of these shows will be enough to jump-start that model? Presumably the reason for launching this offering in Australia is that it will be a pilot whose results will be watched closely before rolling out in other markets (or not).

This whole approach seems a bit backward to me though. YouTube is pretty unassailably established as the platform for video on the web; TikTok is effectively mobile-only and playing a somewhat different game. What if Google exploited that position by working with ISPs? I'm resistant to paying for YouTube Premium specifically, but if you hid the same amount somewhere in my ISP bill, or made a bundle around it with something else, I'd probably cough up. ISPs that sign up could also implement local caches (presumably part-funded by Google) to improve performance for their users, maybe get better traffic data to optimise the service — without illegal preferencing, of course.

Instead of trying to jump-start a new revenue stream by offering users a slightly nicer version of something they already get for free, it would be better for YouTube to get into a channel where users are already habituated to paying for add-on services, and where the incumbents (the ISPs) are desperate to position themselves as more than undifferentiated dumb pipes. A better streaming video experience is already the most obvious reason for most households to upgrade their internet connection, so the link is already there in consumers' minds.

Susan Wojcicki, have your people call me.


🖼️ Photo by Erik Allen on Unsplash

Draining The Moat

Zoom is in a bit of a post-pandemic slump, describing its own Q2FY23 results as "disappointing and below our expectations". This is quite a drop for a company that at one point was more valuable than ExxonMobil. Zoom does not disclose the total number of users, only "enterprise users", of which there are 204,100. "Enterprise users" are defined in a footnote to the slides from those Q2FY23 results as "customers who have been engaged by Zoom’s direct sales team, channel partners, or independent software vendor (ISV) partners." Given that Zoom only claims 3,116 customers contributing >$100k in revenue over the previous year, that is hardly a favourable comparison with Cisco's claim of six million users of WebEx Calling in March 2022.

As I wrote in The Thing With Zoom, Zoom's original USP was similar to WebEx's, namely the lowest time-to-meeting with people outside the company. As a salesperson, how quickly can I get my prospect into the meeting and looking at my presentation? Zoom excelled at this metric, although they did cut a number of corners to get there. In particular, their software would stick around even after users thought they had uninstalled it, just in case they ever needed it again in the future.

Over the past year or two, though, Teams usage has absolutely taken off. At the beginning the user experience was very rough, even by Microsoft standards, confusing users with the transition from its previous bandwagon-jumping branding as Skype for Business. Joining a Teams meeting as an outsider to the Teams-using organisation was (and largely still is) a mess, with the client failing to connect as often as not, or leaving meeting invitees in a loop of failed authentication, stuck between a web client and a native client, neither of which is working.

And yet, Teams is still winning in the market. Why?

There is more to this situation than just Microsoft's strength in enterprise sales. Certainly, Microsoft did not get distracted trying to cater to Zoom cocktails or whatever, not least because nobody in their right mind would ever try to party over Teams, but also for the very pragmatic and Microsoftian reason that those users don't pay.

Teams is not trying to play Zoom and WebEx at their own game. Microsoft doesn't care about people outside their client organisations. Instead, Microsoft Teams focuses on offering the richest possible meeting experience to people inside those organisations.

I didn't fully appreciate this distinction, since throughout this transition I was working for companies that used the standard hipster tech stack of Slack, Google Docs, and Zoom. What changed my understanding was doing some work with a couple of organisations that had standardised on Teams. Having the text chat, video call, and documents all in one place was wonderfully seamless, and felt native in a way that Google's inevitable attempt to shoehorn Hangouts into a Google Docs sidebar or comment thread never could.

This all-in-one approach was already calculated to appeal to enterprises who like simplicity in their tech stack — and in the associated procurement processes. Pay for an Office 365 license for everybody, done. Teams would probably have won out anyway just on that basis, but the trend was enormously accelerated by the very factor everyone assumed would favour Zoom: remote work.

While everyone was focusing on Zoom dating, Zoom board games, Zoom play dates, and whatever else, something different was happening. Sales people were continuing to meet with their customers over Zoom/WebEx/whatever, but in addition to that, all of the intra-company meetings were also flipping online. This transition led to an explosion in the ratio of internal video meetings to outside-facing ones, changing the priority from "how quickly can I get the other people in here, especially if they haven't got the meeting client installed" to "everyone has the client installed, how productive can we be in the meeting".

As the ratio of outside video meetings to inside meetings flips, Zoom's moat gets filled in

Zoom could not compete on that metric. All Zoom could do was facilitate someone sharing their screen, just like twenty years ago. Maybe what was being shared was a Google Doc, and the other people in the meeting were collaborating in the doc — but then what was Zoom's contribution? Attempts to get people to use built-in chat features or whiteboarding never took off; people used their Slack for chatting, and I never saw anyone use the whiteboard feature in anger.

Once an organisation had more internal remote video meetings than outside-facing ones, these differences became glaring deficiencies in Zoom compared to Teams.1

Zoom squandered the boost that the pandemic gave them. Ultimately, video chat is a feature, not a product, and Zoom will either wither away, or get bought and folded into an actual product.


🖼️ Photos by Chris Montgomery and Christina @wocintech.chat on Unsplash


  1. The same factors are also driving a slight resurgence in Hangouts, based on my anecdotal experience, although Google does not disclose clear numbers. If you're already living in Google Docs, why not just use Hangouts? (Because it's awful UX, but since when did that stop Google or even slow them down?) 

Fun In The Sun

A reliable way for companies to be seen as villains these days is to try to roll back concessions to remote work that were made during the pandemic1. Apple is of course a perennial scapegoat here, and while it seems reasonable that people working on next year's iPhone hardware might have to be in locked-down secure labs with all the specialised equipment they need, there is a lurking suspicion that much of the pressure on other Apple employees to return to the office is driven by the need to justify the massive expense of Apple Park. Jony Ive's last project for Apple supposedly cost over $4B, after all. Even for a company with Apple's revenues, that sort of spending needs to be justified. It's not a great look if your massive new vanity building is empty most of the time.

The same mechanisms are playing out in downtown business districts around the world, with commercial landlords worried about the long-term value of their holdings, and massive impacts on the services sector businesses (cafes, restaurants, bars, dry-cleaners, etc etc) that cluster around those office towers.

With all of this going on, it was probably inevitable that companies would try to jump on the bandwagon of being remote-work friendly — some with greater plausibility than others. I already mentioned Airbnb in a past post; they have an obvious incentive to facilitate remote work.

Other claims are, let's say, more far-fetched.

In a recent example of the latter genre, it seems that Citi is opening a hub in Málaga for junior bankers:

  • Over 3,000 Málaga hopefuls applied for just 27 slots in the two-year program, which promises eight-hour days and work-free weekends — practically unheard of in the traditional banking hubs in Manhattan and London. In exchange, Málaga analysts will earn roughly half the starting salaries of their peers.
  • The new Spain office will represent just a minuscule fraction of the 160 analysts Citi hired in Europe, the Middle East, and Africa, on top of another 300+ in New York.

This is… a lot less than meets the eye. 27 people, out of a worldwide intake of ~500 — call it 5% — will be hired on a two-year contract in one admittedly attractive location, and in exchange for reasonable working hours, will take a 50% hit on their starting salary. In fairness the difference in cost of living between Málaga and London will make up a chunk of that difference, and having the weekends free to enjoy the place is not nothing, but apart from that, what is the upside here?

After the two years are up, the people who have been busy brown-nosing and visibly burning the midnight oil at head office will be on the promotion track. That is how banking works; if you can make it through the first few years, you have a) no social life any more, and b) a very remunerative career track in front of you. Meanwhile, it is a foregone conclusion that the people from the Málaga office will either not have their contract renewed after the two years are up, or will have to start their career track all over again in a more central location.

In other words, what this story boils down to is some short-term PR for Citi, a bunch of cheap(er) labour with a built-in termination date, and not much more.

Then again, it could be worse (it can always be worse). Goldman Sachs opted for the stick instead of the carrot with its own return to the office2 mandate, ending the free coffee that had been a perk of its offices.

Even after all these years in the corporate world, I am amazed by these utterly obvious PR own goals. The value of the coffee cart would have been infinitesimal, completely lost in Goldman's facilities budget. But what is the negative PR impact to them of this move? At one stroke they have hollowed out all the rhetoric of teamwork and empowerment that is the nominal justification for the return to office.

Truly committing to a remote work model would look rather different. I love the idea of Citi opening a Málaga hub. The difference is that in a truly remote-friendly organisation, that office would not have teams permanently based in it (apart from some local support staff). Instead, it would be a destination hub for teams that are truly remote to assemble on a regular basis for planning sessions. The rest of the time, everyone would work remotely wherever they currently live.

Some teams do need physical proximity to work well, and some customer-facing roles benefit from having access to meeting space at a moment's notice — but a lot of the work of modern companies does not fall into these categories. Knowledge workers can do their work anywhere — trust me, I've been working this way for more than fifteen years. Some of my most productive work has been done in airport lounges, not even in my fully equipped home office! With instant messaging, video calls, and collaboration tools, there is no real downside to working this way. Meanwhile, the upside is access to a global and distributed talent pool. When I did have to go into an office, it was so painful to be in an open-plan space with colleagues who were not on my actual team that I wore noise-cancelling headphones. If that's the situation, what's the point of commuting to an office?

This sort of reorganisation would admittedly not be great for the businesses that currently cluster around Citi offices and cater to the Citi employees working in those offices — but the flip side would be the massive benefits to businesses in those Citi employees' own home neighbourhoods. If you're not spending all your waking hours in Canary Wharf or Wall Street, you can do your dry cleaning at your local place, you can buy lunch around the corner instead of eating some over-priced plastic sandwich hunched over your desk, and you can get a better quality of life that way — maybe even in Málaga!

The only downside of working from home is that you have to pay for your own coffee and can't just get Goldman to foot the bill.


🖼️ Photos by Carles Rabada, Jonas Denil, and Tim Mossholder on Unsplash


  1. Not that the pandemic is quite over yet, but let's not get into that right now. 

  2. Never "return to work". This is a malicious rhetorical framing that implies we've all been slacking off at home. People are being asked to continue to work, and to return to the office to do so. They may want to pick up noise-cancelling headphones on their way in. 

Growing Pains

The iPad continues to (slowly, slowly) evolve into a Real Computer. My iPad Pro is my only personal computer — I don't have a Mac of my own, except for an ancient Mac Mini that is plugged into a TV and isn't really practical to use interactively. It's there to host various network services or display to that TV.

For reasons I don't feel like going into right now, I don't currently have a work Mac to plug into my desk setup, so I thought I'd try out the new Stage Manager feature in iPadOS 16.

So, the bottom line is that it does work, and it makes the iPad feel suddenly like a rather different machine.

Some setup is required. Of course Stage Manager needs iPadOS 16; I've been running the beta on my iPad all summer, and it seems pretty stable. The second display needs to connect via USB-C; I already have my CalDigit dock set up that way, so that part was no problem. Using Stage Manager with an external display also requires an external keyboard and mouse, and these have to be connected by Bluetooth; the USB keyboard connected to my dock was not recognised. Without those peripherals, the external display only works for screen mirroring, which is a bit pointless in my opinion. Mirroring the iPad's display to another screen makes sense if you are showing something to someone, but then, why would you need Stage Manager?

Anyway, once I had everything connected, the external display started working as a second display. I was able to arrange the two displays correctly from Settings; some new controls appeared under Display & Brightness to enable management of the second display.

It's interesting to see what does and does not work. The USB microphone plugged into the dock — and the analogue headphones daisy-chained from that — worked without any additional configuration, but the speakers connected to the dock's SPDIF port were not visible to iPadOS. Luckily these speakers also support Bluetooth, so I'm still able to use them; it’s just a bit of a faff to have to connect three Bluetooth devices (keyboard, mouse, and speakers) every time I want to sit at my desk. The Mac is way easier: one USB-C cable, and you’re done. The second desktop display does not show up at all, but that's fair enough; even the first generation of M1 Macs didn't support two external displays. External cameras also do not show up, and there's not even any control, so it's the iPad's built-in camera or nothing.

There's some other weird stuff that I assume and hope is due to the still-beta status of iPadOS 16.

  • The Settings app does not like being on the external display in the least, and appears all squashed. My display is an Ultrawide, but weirdly, the Settings window is squashed horizontally. Maybe the Settings app in iPadOS has not received much attention given the troubled gestation of the new Settings app in macOS Ventura?
  • Typing in Mail and a couple of other apps (Evernote, Messages, possibly others I haven’t encountered yet) sometimes lagged — or rather, the keystrokes were all being received, but they would not be displayed until I did something different, such as hitting backspace or clicking the mouse. At other times, keystrokes showed up normally.
  • The Music app goes straight into its full-screen display mode when it's playing, even when the window is not full-screen. The problem is that the touch control at the top of the window, which would normally return it to the usual display mode, does not work. Also, Music is one of the apps whose preview in the Stage Manager side area does not work, so it's always blank. This seems like an obvious place to display static cover art, even if we can't have live-updating song progression or whatever.
  • Sometimes apps jump from the external display to the iPad’s built-in, for instance if you open something in Safari from a different app.

What does work is that apps can be resized and rearranged, giving a lot more flexibility than the previous single-screen hover or side-by-side multitasking options. App windows can also be grouped to keep apps together in logical groups, such as the editor I'm typing this into and a Safari window to look up references. Again, this is something that I already did quite a lot with the pre-existing multitasking support in iPadOS, but it only really worked for two apps, plus one in a slide-over if you're really pushing it. Now, you can do a whole lot more.

I am glad that I came back to give Stage Manager another chance. I had played with the feature on my iPad without connecting it to anything, and found it unnecessarily complex. I do wonder how much of that is because I'm rocking an 11" rather than a 12.9"? Certainly, I can see this feature being much more useful on a Mac, even standalone. However, Stage Manager on iPadOS truly comes into its own with an external display. This is a big step on the way to the iPad becoming a real computer rather than merely a side device for a Mac or a bigger iPhone.

It's worth noting that Stage Manager only works with the very latest iPads that use Apple silicon: iPad Air (5th generation), 11-inch iPad Pro (2021), and 12.9-inch iPad Pro (2021). It's probably not the time to be buying a new iPad Pro, with rumours that it's due for a refresh soon, maybe to an M2, unless you really really want to try Stage Manager right now. However, if you have an iPad that can support it, and an external display, keyboard, and mouse, it's worth trying it out to get a better idea of the state of the iPadOS art.


🖼️ Photos by author, except Stage Manager screenshot from Apple

Sights From A Bike Ride

One of the positive aspects I often cite when talking up the place where I live is that I can be out in the fields within ten minutes' ride of my front door in the old town — as in, my windows look out onto the old city walls.1

Once out in the fields, though, you never know what you might find. Here are some scenes from my latest ride.

Roadside shrine to the Madonna della Notte, complete with offerings and ex-voto (thanks for successful prayers).

Not sure what's up with this old Lancia planted in a farm yard, but it looks cool!

Here I just liked the contrast between the red tomatoes waiting for the harvest and the teal frame of my Bianchi.

Bike rides are so great for getting out of my head, whether it’s a technical piece of single-track on my mountain bike where I have to concentrate so hard I can’t think of anything else, or a ride like this where I’m bowling along the flat with a podcast in my (bone-conduction) headphones. The trick is staying off main roads as much as possible — hence the gravel bike.


  1. Which are actually the newest city walls, dating from the sixteenth century CE, post-dating various earlier medieval and Roman walls of which only traces remain. These Renaissance walls were later turned into a linear park (pictures) known as the "Facsal", a distortion of London's famous Vauxhall gardens, among the first and best-known pleasure gardens in nineteenth-century Europe. In more modern times, the Facsal was part of the street circuit for the 1947 Grand Prix of Piacenza, famously the first race entered by a Ferrari car — although not the site of the Scuderia's first win.