Dragging the Anchor

Apple events may have become routine, and recorded events don't hit quite the same as ones with a live audience — even if I only ever viewed them remotely. However, they still have the potential to stir up controversy, at least among the sorts of people who follow Apple announcements religiously.

If you are not part of that group, you may not be aware that Apple’s MacBook Pro memory problem is worse than ever. Wait, what is going on? Is the RAM catching fire and burning people's laps or something?

No, nothing quite that bad. It's just that even in Apple's newest M3 MacBook Pro, the base configuration comes with a measly 8 GB of RAM, which is simply not adequate in the year 2023.

There has been a certain amount of pushback claiming that 8 GB is fine, actually — and it is true that Apple Silicon does use RAM differently than the old Intel MacBooks did, so 8 GB is not quite as bad as it sounds. But it sounds pretty bad, so there is still plenty of badness to be had!

Jason Koebler took on the critics in a piece titled In Defense of RAM at increasingly essential tech news site 404 Media:

It is outrageous that Tim Cook is still selling 8GB of RAM as the default on a $1,600 device. It is very similar to when Apple was selling the iPhone 6S with 16GB of storage as its base device, and people were talking themselves into buying it. It is not just a performance and usability problem, it’s a sustainability and environmental one, too. This is because RAM, historically one of the easiest components to upgrade in order to get more life out of your computer, on MacBook Pros cannot be upgraded and thus when 8GB inevitably becomes not enough, users have to buy a new computer rather than simply upgrade the part of the computer that’s limiting them.

This is the key point. If I may age myself for a moment, my first computer, a mighty Macintosh LC, had a whole whopping 4 MB of RAM — yes, four megabytes. But the default was two. The motherboard let owners expand the RAM up to a screaming 10 MB by swapping SIMMs (yes, this machine predated DIMMs).

These days, RAM is soldered to the motherboard of MacBooks, so whatever spec you buy is the most RAM that machine will ever have. If it turns out that you need more RAM, well, you’ll just have to buy a new MacBook — and figure out what to do with your old one.

This is obviously not great, as Jason Koebler writes in the piece I quoted above — but in Apple's calculus, sustainability and environmental concerns can only count for so much when set against more frequent upgrades and the extra profit they bring.

Here's the thing: that forced binary choice between environment and profit is a false dilemma, in this as in so many other cases.

Default configurations are extremely important to customer satisfaction and brand perception because they anchor the whole product line. Both uninformed consumers and large corporate buyers will gravitate to the default, so that is the experience that most of the users of the product will have.

We are talking here about the experience of using a MacBook Pro — not an Air, where you might expect a trade-off, but the nominally top-of-the-tree model that is supposedly designed for Professionals. If that experience is unsatisfactory and causes users to develop a negative opinion of their MacBook Pro, this becomes a drag on their adoption of the rest of the Apple ecosystem.

Is this the issue that is going to kill Apple? No, of course not. But it comes on top of so many other stories: we've had Batterygate, Antennagate, Bendgate, and I'm probably forgetting some other 'gates, not to mention iPhone sales being halted in France due to radiation concerns. None of these issues is actually substantive, but in the aggregate, slowly but surely, they erode Apple’s brand perception.

Negative press is a problem for any company, but it is a particular problem for Apple, because a lot of the value comes from the ecosystem. The all-Apple lifestyle is pretty great: MacBook unlocks with Apple Watch syncs with iPhone AirPlays to Apple TV served by Mac mini together with iPad — and that's just my house.

But I've been a Mac user since that little pizzabox LC in the 90s. If my first Apple experience were to be handed a nominally "Pro" machine, open a handful of browser tabs, and watch it immediately slow down, would I consider any other Apple devices? Or would I get an Android phone, a Garmin smartwatch, an Amazon Fire TV stick, and so on? Sure, Apple fans talk about how nice their world is, but this computer is just hateful.

That's the risk. Will Apple recognise it in time?

False Positive Attitude

Don't Believe The Hype

Yes, I'm still on about "AI"1, because the collective id has not yet moved on.

Today, it's an article in Nature, with the optimistic title "AI beats human sleuth at finding problematic images in research papers":

An algorithm that takes just seconds to scan a paper for duplicated images racks up more suspicious images than a person.

Sounds great! Finally a productive use for AI! Or is it?

Working at two to three times [the researcher]’s speed, the software found almost all of the 63 suspect papers that he had identified — and 41 that he’d missed.

(emphasis mine)

So, the AI found "almost all" of the known positives, and identified 41 more unknowns? We are not told what the precise ratio is of false negatives (known positives that were missed), let alone how many false positives there were (instances of duplication flagged by AI that turned out not to be significant).
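
To see why those missing numbers matter, here is a minimal sketch in Python. Only the 63 known papers and the 41 extra flags come from the article; every other count is a hypothetical placeholder, which is exactly the problem:

```python
# Hypothetical counts: only human_flags and ai_new_flags appear in the article.
human_flags = 63          # papers the human sleuth had identified
ai_found_of_those = 59    # "almost all" of the 63 (assumed value)
ai_new_flags = 41         # extra papers flagged only by the AI
ai_new_confirmed = 25     # how many of those 41 hold up on review (unreported)

# Recall against the human-curated set: how many known positives the AI missed.
recall = ai_found_of_those / human_flags

# Precision over everything the AI flagged: how much of its output is noise.
ai_total_flags = ai_found_of_those + ai_new_flags
ai_confirmed_flags = ai_found_of_those + ai_new_confirmed
precision = ai_confirmed_flags / ai_total_flags

print(f"recall={recall:.2f}, precision={precision:.2f}")
# Vary ai_new_confirmed between 0 and 41 and precision swings from 0.59 to 1.00:
# without that figure, "41 extra flags" says nothing about the tool's reliability.
```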

These issues continue to plague "AI"1, and will continue to do so for the foreseeable future. The mechanisms to prevent these false identifications are probabilistic, not deterministic. In the same way that we cannot predict the output of a large language model (LLM) for a given prompt, we also cannot prevent it from ever issuing an incorrect response. At the technical level, all we can do is train it to decrease the probability of the incorrect response, and pair the initial "AI"1 with other systems designed to check its work. Cynically, though, that process takes money and time, and Generative AI is at the Peak of Inflated Expectations right now: we need to ship while the bubble is still inflating!

AI Needs A Person Behind The Curtain

Technology, however, is only part of the story. This academic image analysis tool could well end up having real-world consequences:

The end goal […] is to incorporate AI tools such as Imagetwin into the paper-review process, just as many publishers routinely use software to scan text for plagiarism.

There's the problem. What recourse do you have as an academic if your paper gets falsely flagged? Sure, journals have review boards and processes, but that takes time — time you might not have if you're under the gun for a funding decision. And you could easily imagine a journal being reluctant to convene the review board unless the "AI"1 indicated some level of doubt — a confidence threshold set at, say, 70%. If the "AI"1 is 90% confident that your graph is plagiarised, tough luck.

The example of plagiarism detection is telling here. Systems such as Turnitin that claim to detect plagiarism in students' work had an initial wave of popularity, but are now being disabled in many schools due to high false-positive rates. A big part of the problem is that, because of the sheer volume of student submissions, it was not considered feasible for a human instructor to check everything that was flagged by the system. Instead, the onus was placed on students to ensure that their work could pass the checks. And if they missed a deadline for a submission because of that? Well, tough luck, was the attitude — until the heap of problems mounted up high enough that it could no longer be ignored.

This is not a failure of LLM technology as such. The tech is what it is. The failure is in the design of the system which employs the technology. Knowing that this issue of false positives (and negatives!) exists, it is irresponsible to treat "AI"1 as a black box whose pronouncements should always be followed to the letter, even and including if they have real-world consequences for people.


  1. Still not AI. 

Algorithmic Networks and Their Malcontents

The thing that really annoys me about the death of Twitter1 is that there is no substitute. As I wrote:

none of these upstart services will become the One New Twitter. Twitter only had the weight it had because it was (for good and ill) the central town square where all sorts of different communities came together. With the square occupied by a honking blowhard and his unpleasant hangers-on, people have dispersed in a dozen different directions, and I very much doubt that any one of the outlet malls, basement speakeasies, gated communities, and squatted tenements where they gather now can accommodate everyone who misses what Twitter was.

It’s worth unpacking that situation to understand it properly. Twitter famously had not been growing for a long time, leading users to speculate that:

Maybe we already saw the plateau of the microblog, and it turns out that the total addressable market is about the size that Twitter peaked at. It is quite possible that Twitter did indeed get most of the users who like short text posts, as opposed to video (TikTok), photo (Instagram), or audio.

In their desperation to resume growing, Twitter started messing with users’ timelines, adding algorithmic features that were supposedly designed to help users see the best content — but of course, being Twitter, they went about it in a ham-fisted way and pissed off all the power users instead of getting them excited.

The thing is, Twitter is far from the only social network to fail to land the tricky transition to an algorithmic timeline. All of the big networks are running scared of the Engagement that TikTok is able to bring, but they seem to have fundamentally misunderstood their respective situations.

All of the first-generation social networks — Twitter, Facebook, LinkedIn — rely on the, well, network as the key ingredient. You will see posts from people you are connected to, and in turn the people who are connected to you will see your posts. Twitter was always at a disadvantage here, because Facebook and LinkedIn built on existing networks: family and friends for Facebook, and work colleagues and acquaintances for LinkedIn. Twitter always had a "where do I start from?" problem: when you signed up, you were presented with a blank feed, because you were not yet following anybody.

Twitter flailed about trying to figure out how to recommend accounts to follow, but never really cracked that Day One problem, which is a big part of the reason why its growth plateaued2: Twitter had already captured all of the users who were willing to go through the hassle of figuring that out, building their follow graph, and then pruning it and maintaining it over time. Anyone less committed bounced off the vertical cliff face that Twitter offered in lieu of an on-ramp.

The Algorithm Shall Save Us All!

TikTok was the first big network to abandon that mechanism, and for good reason: at this point, all the other networks guard their users’ social graphs jealously for themselves. It is hard to bootstrap a social network like that from nothing. Instagram famously got its start by piggybacking on Twitter, but that’s a move you can only pull off once. Instead, TikTok went fully algorithmic: what you see in your feed is determined by the algorithm, not by whom you are connected to. The details of how the algorithm actually works are secret, controversial, and constantly changing anyway, but at a high level it’s some combination of your own past activity (what videos you have watched), the activity of people like you, and some additional weighting that the network applies to show you more videos that you might like to watch.
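
To make that hand-waving a little more concrete, here is a purely illustrative sketch of the shape such a ranking function might take. The signal names and weights are inventions for the example, not anything TikTok has disclosed:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    watch_history_similarity: float  # closeness to what this user already watches
    cohort_engagement: float         # how well it performs with similar users
    platform_boost: float            # extra weighting applied by the network itself

# Illustrative weights only; the real signals and values are secret and shifting.
WEIGHTS = {
    "watch_history_similarity": 0.50,
    "cohort_engagement": 0.35,
    "platform_boost": 0.15,
}

def score(c: Candidate) -> float:
    """Blend the three signal families into a single ranking score."""
    return (WEIGHTS["watch_history_similarity"] * c.watch_history_similarity
            + WEIGHTS["cohort_engagement"] * c.cohort_engagement
            + WEIGHTS["platform_boost"] * c.platform_boost)

def build_feed(candidates: list[Candidate], n: int = 20) -> list[str]:
    """Rank candidates by blended score; note that no follow graph is involved."""
    return [c.video_id for c in sorted(candidates, key=score, reverse=True)[:n]]
```

The point of the sketch is the absence of any notion of "accounts you follow": the feed is a function of behaviour, not of a social graph.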

This means that a new account with no track record and no following will be shown a feed full of videos when they first sign in. The quality might initially be a bit hit or miss, but it will refine rapidly as you use the platform. In the same way, a good video from a new account can break out and go viral without that account having to build a following first, in the way they would have had to on the first-wave social networks.

When people started talking about algorithmic timelines like this, Twitter thought they had finally struck gold: they could recommend good tweets, whether they were from someone the user followed or not. This would fill those empty timelines, and help onboard2 new users.

The problem is that users who had put in the effort to build out their graph placed a lot of value in it, and were incandescently angry when Twitter started messing with it. I liked Old Twitter because I had tuned it, over more than a decade, to be exactly what I wanted it to be, and I know a lot better than some newly-hatched algorithm what sort of tweets I want to see in my timeline.

An algorithmic timeline doesn’t have to be bad, mind; Twitter’s first foray into this domain was a feature called "While you were away" that would show you half a dozen good tweets that you might have missed since you last checked the app. This was a great feature that addressed a real user problem: once you follow more than a few accounts, it’s no longer possible to be a "timeline completionist" and read every tweet. Especially once you factor in time zones, you might miss something cool and want to catch up on it once you’re back online.

The problem was the usual one with algorithmic features: a lack of user control. The "While you were away" section would appear whenever it cared to, or not at all. There was no way to come online and call it up as your first stop to see what you had missed; you just had to scroll and hope it might show up. And then they just quietly dropped the whole feature.

Sideshow X

Twitter then managed to step on the exact same rake again when they rolled out a fully-algorithmic timeline, but, in response to vociferous protests from users, grudgingly gave users the option of switching back to the old-style purely chronological one. Initially, it was possible to have the two timelines (algorithmic and chronological) in side-by-side tabs, but, apparently out of fear that the tabbed interface might confuse users, Twitter quickly removed this option and forced users to choose between a purely chronological feed and one managed by a black-box algorithm with no user configurability or even visibility. Of course power users who used lists were already very familiar with tabs in the Twitter interface, but this was not a factor in Twitter’s decision-making.

To be clear, this dilemma between serving newbies and power users is of course neither new nor unique to Twitter. This particular variation of it is new, though. Should social networks focus on supporting power users who want to manage their social graph and the content of their feed themselves — or should they chase growth by using algorithms to make it as easy as possible for new users to find something fun enough to keep them coming back?

There is also one factor exacerbating the dilemma that is somewhat unique to Twitter. Before That Guy came in and bought the whole thing, Twitter had been consistently failing to live up to an IPO valuation that was predicated on them achieving Facebook levels of growth. Instead, user growth had pretty much stalled out, and advertisers looking for direct-action results were also not finding success on Twitter in the same way as they did on Facebook or Instagram. The desperation for growth was what drove Twitter to over-commit to the algorithmic timeline, in the hope of being able to imitate TikTok’s growth trajectory.

There is irony in the fact that an undersung Twitter success story saw them play what is normally more a Facebook sort of move, successfully ripping off the buzzy new entrant Clubhouse with their own Twitter Spaces feature and then simply waiting for the attention of the Net to move on. Now, if you want to do real-time audio, Twitter Spaces is where it’s at — and they achieved that status largely because of Clubhouse’s ballistic trajectory from Next Big Thing to Yesterday’s News, with the rapidity of the ascent ruthlessly mirrored by the suddenness of the descent.

A more competently managed company — well, they wouldn’t have been bought by That Guy, first of all, but also they might have learned something from that lesson, held firm to their trajectory, and remained the one place where everything happened, and where everything that happened was discussed.

Instead, we have somehow wound up in a situation where LinkedIn is the coolest actually social network out there. Well done, everyone, no notes.


🖼️ Photos by Nastya Dulhiier and Anne Nygård on Unsplash


  1. Yeah, still not calling it X. That guy destroyed my favourite thing online, I’m not giving him the satisfaction. 

  2. Verbing weirds language. 

The Ghost In The Machine

At this point in time it would be more notable to find a vendor that was not adding "AI" features to its products. Everyone is jumping on board this particular hype train, so the interesting questions are not about whether a particular vendor is "doing AI"; they are about how and where each vendor is integrating these new capabilities.

I no longer work for MongoDB, but I remain a big fan, and I am convinced that generative AI is going to be good for them — but something rubbed me up the wrong way about how they communicated some of their new capabilities in that area, and I couldn’t get it out of my head.

Three Ways To "Do AI"

Some applications of generative AI are real, natural extensions of a tool’s existing capabilities, built on a solid understanding of what generative AI is actually good for. Code copilots (aka "fancy autocomplete") are probably the leading example in this category. Microsoft was an early mover here with GitHub and then VS Code, but most IDEs by now either already offer this integration or are frantically building it.

Some applications of AI are more exploratory, either in terms of the current capabilities of generative AI, or of its applicability to a particular domain. Sourcing and procurement looks like one such domain to me. I spent more of this past summer than I really wanted to enmeshed in a massive tender response, together with many colleagues. It would have been nice to just point ChatGPT at the request and let it go wild, but the response is going to be scrutinised so closely that editing and reviewing an automated submission would have taken at least as much effort as just writing the response in the first place. However, I am open to the possibility that with some careful tuning and processes in place, this sort of application might have value.

And then there is a third category that we can charitably call "speculative". There is a catalogue of vendors trying this sort of thing that is both inglorious and extensive, and I am sad to see my old colleagues at MongoDB coming close to joining them: MongoDB adds vector search to Atlas database to help build AI apps.

young developer: "Wow, how did you get these results? Did you use a traditional db or a vector db?"

me: "lol I used perl & sort on a 42MB text file. it took 1.2 seconds on an old macbook"

from Mastodon

I have no problem with MongoDB exploring new additions to their data platform’s capabilities. It has been a long time since MongoDB was just a NoSQL database, to the point that they should probably stop fighting people who drop the "DB" from the end of the name and just drop it themselves once and for all — if only that shortened name didn’t have all sorts of unfortunate associations. MongoDB Atlas now supports mobile sync, advanced text search, time series data, long-running analytical queries, stream processing, and even graph queries. Vector search is just one more useful addition to that already extensive list, so why get worked up about it?
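
For what it's worth, using the feature from application code looks much like any other aggregation stage. Here is a minimal sketch with pymongo; the connection string, index name, collection, and embedding values are placeholders, and while the $vectorSearch stage matches the Atlas documentation as I understand it, treat the details as an assumption rather than gospel:

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")   # placeholder connection string
articles = client["demo"]["articles"]

# The query vector would come from whatever embedding model you use; it must have
# the same dimensionality as the stored "embedding" field in the collection.
query_embedding = [0.12, -0.03, 0.57]  # truncated placeholder vector

results = articles.aggregate([
    {"$vectorSearch": {            # requires a pre-built Atlas vector search index
        "index": "vector_index",   # placeholder index name
        "path": "embedding",       # placeholder field holding the stored vectors
        "queryVector": query_embedding,
        "numCandidates": 100,
        "limit": 5,
    }},
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
])

for doc in results:
    print(doc["title"], doc["score"])
```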

Generative AI Is Good For MongoDB — But…

The problem I have is with the framing, implying that the benefit to developers — MongoDB’s key constituency — is that they will build their own AI apps on MongoDB by using vector search. In actuality, the greatest benefit to developers that we have seen so far is that first category: automated code generation. Generative AI has the potential to save developers time and make them more effective.

In its latest update to the Gartner Hype Cycle for Artificial Intelligence, Gartner makes the distinction between two types of AI development:

  • Innovations that will be fueled by GenAI.

  • Innovations that will fuel advances in GenAI.

Gartner's first category is what I described above: apps calling AI models via API, and taking advantage of that capability to power their own innovative functionality. Innovations that advance AI itself are obviously much more significant in terms of moving the state of the art forward — but MongoDB implying that meaningful numbers of developers are going to be building those foundational advances, and doing so on a general-purpose data platform, feels disingenuous.

Of course, the reason MongoDB can’t just come out and say that, or simply add ChatGPT integration to their (excellent and under-appreciated) Compass IDE and be done, is that the positioning of MongoDB since its inception has been about its ease of use. Instead of having to develop complex SQL queries — and before even getting to that point, sweat endless details of schema definition — application developers can use much more natural and expressive MongoDB syntax to get the data they want, in a format that is ready for them to work with.
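
As a small illustration of that difference (with entirely made-up collection and field names), compare a typical SQL query with its pymongo equivalent:

```python
# SQL version, for comparison:
#   SELECT name, email FROM customers
#   WHERE country = 'IT' AND orders_count > 10
#   ORDER BY orders_count DESC LIMIT 5;

from pymongo import MongoClient

customers = MongoClient()["shop"]["customers"]  # hypothetical database and collection

top_italian_customers = (
    customers.find(
        {"country": "IT", "orders_count": {"$gt": 10}},  # filter
        {"name": 1, "email": 1, "_id": 0},               # projection
    )
    .sort("orders_count", -1)
    .limit(5)
)

for c in top_italian_customers:
    print(c)
```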

But if it’s so easy, why would you need a robot to help you out?

And if a big selling point for MongoDB against relational SQL-based databases is how clunky SQL is to work with, and then a robot comes along to take care of that part, how is MongoDB to maintain its position as the developer-friendly data platform?

Well, one answer is that they double down on the breadth of capabilities which that platform offers, regardless of how many developers will actually build AI apps that use vector search, and use that positioning to link themselves with the excitement over AI among analysts and investors.

I Come Not To Bury MongoDB, But To Praise It

None of this is to say that MongoDB is doomed by the rise of generative AI — far from it. Given MongoDB’s position in the market, an AI-fuelled increase in the number of apps being built can hardly avoid benefiting MongoDB, along the principle of a rising tide lifting all boats. But beyond that general factor, which also applies to other databases and data platforms, there is another aspect that is more specific to MongoDB, and has the potential to lift its boat more than others.

The difference between MongoDB and relational databases is not just that MongoDB users don’t have to use SQL to query the database; it’s also that they don’t have to spend the laborious time and effort to specify their database schema up front, before they can even start developing their actual app. That’s not to say that you don’t have to think about data design with MongoDB; it’s just that it’s not cast in stone to the same degree that it is with relational databases. You can change your mind and evolve your schema to match changing requirements without that being a massive headache. Nowadays, the system will suggest changes to improve performance, and even implement them automatically in some situations.
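
A minimal sketch of that flexibility, with a hypothetical collection: two releases of the same app write differently shaped documents side by side, and no migration or ALTER TABLE is needed:

```python
from pymongo import MongoClient

events = MongoClient()["app"]["events"]  # hypothetical database and collection

# Version 1 of the app stores simple click events...
events.insert_one({"type": "click", "user": "alice", "ts": "2023-11-01T10:00:00Z"})

# ...and a later release adds richer documents to the same collection.
events.insert_one({
    "type": "purchase",
    "user": "bob",
    "ts": "2023-11-02T09:30:00Z",
    "items": [{"sku": "A-42", "qty": 2}],    # nested array the old documents lack
    "total": {"amount": 59.90, "currency": "EUR"},
})

# Both shapes coexist and can be queried together.
for e in events.find({"user": {"$in": ["alice", "bob"]}}):
    print(e["type"], e.get("total"))
```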

All of this adds up to one simple fact: it’s much quicker to get started on building something with MongoDB. If two teams have similar ideas, but one is building on a traditional relational database and the other is building on MongoDB, the latter team will have a massive advantage in getting to market faster (all else being equal).

At a time when the market is moving as rapidly as it is now (who even had OpenAI on their radar a year ago?), speed is everything. MongoDB could have just doubled down on their existing messaging: "build your app on our platform, and you’ll launch faster". What bothers me is that instead of that plain and defensible statement, we got marketing-by-roadmap, positioning some fairly basic vector search capabilities as somehow meaning hordes of developers are going to be building The Next Big AI Thing on top of MongoDB.


Marketing-by-roadmap this way is a legitimate strategy, to be clear, and perhaps the feeling at MongoDB is that this is fair turnabout for all the legitimate features they built over the years and did not get credit for, with releases greeted with braying cries of "MongoDB is web scale!" and jokes about it losing data, long past the point when that was any sort of legitimate criticism. Building this feature and launching it this way seems to have got MongoDB a tonne of positive press, and investors expect vendors to be building AI features into their products, so it probably didn’t hurt with that audience either.

Communicating this way does bother me, though, and this is one feature I am glad that I am no longer paid to defend.

Let’s Go To The Castle

I was awake early, because of jet lag from an intense week in San Francisco, and I knew I would be, because jet lag — so I had laid out all my cycling togs before going to bed so I would be ready to go in the morning. Then when I woke up it was raining, so I turned over and tried to sleep some more until it stopped. The rain meant it was still nice and cool later in the morning, though, so out I went. I took the gravel bike and stuck mostly to tarmac, since it was quite muddy after the rain, but that’s no hardship around here.

I have ridden past the old castle at Montecanino many times, but never actually took the little detour up the hill to the castle itself. This was an old Roman farming town, which was later fortified due to some exciting history after the fall of the Roman Empire. It’s a ruin now: you can actually see daylight through the gaps in the walls. I didn’t want to get any closer to that bit!

In a classic "the street finds its own uses for things" moment, a hamlet has grown up in the ruins of the old castle, probably repurposing a bunch of the materials from the ruined walls.

I didn’t try anything too strenuous cycling-wise, as I didn’t get going until later in the morning, and was mainly trying to wake myself up rather than get hardcore. I did stop for a mid-ride snack, though!

Way better than gels…

While looking up that pic, I did also find a cool new feature in Photos on iPadOS 17, which automatically offered to look up details of the plant in the picture.1

I didn’t really need the help for blackberries, but this could be cool for obscure "what is that plant" moments. I do have an app on my phone called Seek which does this sort of thing, so sorry you got Sherlocked, I guess?


  1. I run the public betas on my iPad, but not on my iPhone. 

Can You Take It With You?

Here’s a thought: could Threads be a test case for social graph portability?

I am thinking here of both feasibility (can this be done technically) and demand (would the lack of this capability slow adoption). I am on record as being sceptical on both fronts, pace Cory Doctorow.

the account data is not the only thing that is valuable. You also want the relationships between users. If Alice wants to join a new network, let's call it Twitbook1, being able to prepopulate it with her name and profile picture is the least of her issues. She is now faced with an empty Twitbook feed, because she isn't friends with anyone there yet.

People like Casey Newton are asserting that Instagram can serve as a long-term growth driver for Threads, but I’m not so sure, precisely because of the mismatch in content. I don’t use Instagram, but what I hear of how people use it is all about pretty pictures and, more recently, video.

This is the point I made in my previous post: should a relationship in one social network be transitive with a different network? Does the fact that I like the pretty pictures someone puts out mean that I also want to consume short text posts they write? Or is it not more likely that my following on Threads would be different from that on Instagram, much as my following on Twitter is?

The closest direct comparison to the sort of fluid account portability that Cory Doctorow advocates for would be in fact if it were possible to import my Twitter following directly into Threads or Bluesky, since those services are so very similar. Even such a direct port would still run afoul of the dangling-edges problem, though: what if the person I have a follow relationship with on Old Twitter isn’t on I Can’t Believe It’s Not Twitter? Or what if they have different identities across the two services?

I still have questions about how much actual demand is out there for the format that Twitter (accidentally) pioneered. Maybe we already saw the plateau of the microblog, and it turns out that the total addressable market is about the size that Twitter peaked at. It is quite possible that Twitter did indeed get most of the users who like short text posts, as opposed to video (TikTok), photo (Instagram), or audio2.

On the other hand, I am also not too exercised about the fact that Threads users are already spending less time in the app. It’s simply too early to tell whether this is an actual drop-off in usage, or just normal behaviour. Users try something once, but they have not had the time to form a habit yet — and there isn’t yet the depth of content being generated on Threads to pull them into forming that habit.

Anyway, this question of portability or interoperability between networks is the aspect of the Threads story that I am watching most closely. For now, I continue to enjoy Mastodon, so I’m sticking with that, plus LinkedIn for work. When the Twitter apps shifted to 𝕏, I deleted them from my devices, and while I have viewed tweets embedded in newsletters, I haven’t yet caved in and gone back there.


🖼️ Photo by Graham Covington on Unsplash


  1. Twitbook: that’s basically what Threads is. I hereby claim ten Being Right On The Internet points. 

  2. Audio is interesting because it feels like it is still up for grabs if someone can figure out the right format. Right now there is a split between real-time audio chat (pioneered by Clubhouse, now mostly owned by Twitter Spaces), and time-shifted podcasts. I think it’s fair to say that both of those are niches compared to the other categories. 

Pulling On Threads

No, I have not signed up for Threads, Facebook’s1 would-be Twitter-killer, but I couldn’t resist the headline.

I am also not going to get all sanctimonious about Facebook sullying the purity of the Fediverse; if you want that, just open Mastodon. Not any particular post, it’ll find you, don’t worry. Big Social will do its thing, and Mastodon will do its thing, and we’ll see what happens.

No, what I want to do is just reflect briefly on this particular moment in social media.

Twitter became A Thing due to a very particular set of circumstances. It arrived in 2006, at roughly the same time as Facebook was opening up to the masses, without requiring a university email address. Twitter then grew almost by accident, at the same time as Facebook was flailing about wildly, trying to figure out what it actually wanted to be. Famously, many of what people today consider key features of Twitter — at-replies, hashtags, quote tweets, and even the term "tweet" itself — came from the user community, not from the company.

This was also a much emptier field. Instagram was only founded in 2010, and acquired by Facebook in 2012. LinkedIn also stumbled around trying to get the Activity Feed right, hiding it before reinstating it. Mastodon was first released in 2016, but I think it’s fair to call it a niche until fairly recently.

The lack of alternatives was part of what drove the attraction of early Twitter. Brands loved the simplicity of just being @brand; you didn’t even have to add "on Twitter", people got it. Even nano-influencers like me could get a decent following by joining the right conversations.

Bring Your Whole Self To Twitter

A big part of the attraction was the "bring your whole self" attitude: in contrast to more buttoned-down presentations elsewhere, Twitter was always more punk, with the same people having a professional conversation one moment, and sharing their musical preferences or political views the next. Twitter certainly helped me understand the struggles of marginalised groups more closely, or at least as closely as a white middle-class cis-het2 guy ever can.

This "woke" attitude seems to have enraged all sorts of people who absolutely deserved it. The problem for Twitter is that one of those terrible people was Elon Musk, who not only was a prolific Twitter user, but also had the money to just buy out the whole thing, gut it, and prop up its shambling corpse as some sort of success.

The ongoing gyrations at Twitter have prompted an exodus of users, and a consequent flowering of alternatives: renewed and more widespread interest in Mastodon, the launch of Bluesky by Twitter founder Jack Dorsey (and if that endorsement isn’t enough to keep you away, I don’t know what to tell you), and now Threads.

Where Now?

My view is that none of these upstart services will become the One New Twitter. Twitter only had the weight it had because it was (for good and ill) the central town square where all sorts of different communities came together. With the square occupied by a honking blowhard and his unpleasant hangers-on, people have dispersed in a dozen different directions, and I very much doubt that any one of the outlet malls, basement speakeasies, gated communities, and squatted tenements where they gather now can accommodate everyone who misses what Twitter was.

The point of Twitter was precisely that it brought all of those different communities together — or rather, made it visible where they overlapped. Now, there is not the same scope for spontaneous work conversations on the various Twitter alternatives, because LinkedIn is already there. In the usual way of Microsoft, they have put in the work and got good — or at least, good enough for most people’s purposes. You can follow influential people in your field, so the feed is as interesting as you care to make it (no, it’s not just hustle-porn grifters). Those people have separate lives on Instagram, though, where they post about non-work stuff, with a social graph that only overlaps minimally with their LinkedIn connections.

Would-Be Twitter Replacements

So, my expectation is that Mastodon will continue to be a thing, but will remain a niche, with people who like tinkering with the mechanics of social networks (both the software that runs them and the policies that keep them operating), and various other communities who find their own congenial niches there. Me, I like Mastodon, but there is a distinct vibe of it being the sort of place where people who like to run Linux as a desktop OS would like to hang out. Hi, yes, it me: I did indeed start messing with Linux back in the 90s, when that took serious dedication. It also has a tang of old Usenet, something that I caught the tail end of and very much enjoyed while it lasted. Lurking on alt.sysadmin.recovery was definitely a formative experience, and Mastodon scratches the same itch.

Threads will have at least initial success, thanks to that built-in boost from anyone being able to join with their Instagram account — and crucially, their existing following. There is an inherent weirdness to Threads being tied to Instagram, of all Facebook’s properties. Instagram is fundamentally about images, while Threads is aiming to be a replacement for Twitter, which is fundamentally about text. Time will tell whether the benefit of a built-in massive user base outweighs that basic mismatch.

The long-term future of Threads is determined entirely by Facebook’s willingness to keep it going. Not many people seem to have noted that signing up for Threads is a one-way door: to delete your Threads account, you have to delete your whole Instagram account. This is a typical Facebook "We Own All Your Data"3 move, but also guarantees a baseline of "active" accounts that Facebook can point to when shopping Threads around to their actual customers — advertisers.

Bluesky? I think it’s missed its moment. It stayed private too long, and fell out of relevance. The team there got caught in a trap: the early adopters were Known Faces, and they quite liked the fact that Bluesky only had other people like them, with nobody shouting at the gates. Eventually, though, if you want to grow, you need to throw open those gates — and if you wait too long, there might be nobody outside waiting to come in any more.

I may be wrong, but that’s what it looks like right now, in July 2023.


🖼️ Photo by Talin Unruh on Unsplash.


  1. I’m not going to give them the satisfaction of calling them "Meta" — plus if they’re not embarrassed by the name yet, they will be pretty soon. 38 active users, $470 in revenue (not a typo, four hundred and seventy dollars). By the numbers, I think this may be the rightest I have ever been about anything. 

  2. Not a slur, don’t fall for the astro-turfing and engage with the latest "controversy" — and if you’re reading this in the future and have no idea what I’m talking about, thank your lucky stars and move on with your life. 

  3. We won’t get into the fact that Threads wasn’t even submitted for approval in the EU. The reason is generally assumed to be that its data retention policy is basically entirely antithetical to the GDPR. However, since it doesn’t really seem to differ significantly from Instagram’s policy, one does wonder whether Instagram would be approved under the GDPR if it were submitted today, rather than being grandfathered in as a fait accompli, with ever more egregious privacy violations salami-sliced in over the years by Facebook. 

Deliver A Better Presentation — 2023 Edition

During the ongoing process of getting back on the road and getting used to meeting people in three dimensions again, I noticed a few presenters struggling with displaying slides on a projector. These skills may have atrophied with remote work, so I thought it was time for a 2023 update to a five-year-old blog post of mine where I shared some tips and tricks for running a seamless presentation.

Two Good Apps

One tip that remains unchanged from 2018 is a super-useful (free) Mac app called Display Menu. Its original purpose was to make it easy to change display resolutions, which is no longer as necessary as it once was, but the app still has a role in giving a one-click way to switch the second display from extended to mirrored. In other words, you see the same thing on the projector as on your laptop display. You can also do this in System Settings > Displays, of course, but Display Menu lives in the menu bar and is much more convenient.

Something else that can happen during presentations is the Mac going to sleep. My original recommendation of Caffeine is no longer with us, but it has been replaced by Amphetamine. As with Display Menu, this is an app that lives in the menu bar, and lets you delay sleep or prevent it entirely. It’s worth noting that entering presenter mode in PowerPoint or Keynote will prevent sleep automatically, but many people like to show their slides in slide sorter view rather than actually presenting1.

Two Good Techniques

If you are using the slide sorter view in order to be able to control your presentation better and jump back and forth, you really need to learn to use Presenter Mode instead. This mode lets you use one screen, typically your laptop's own, as your very own speaker's courtesy monitor, with a thumbnail view of the current and next slides, as well as your presenter notes and a timer. Meanwhile all the audience sees is the current slide, in full screen on the external display. You can also use this mode to jump around in your deck if needed to answer audience questions — but do this sparingly, as it breaks the thread of the presentation.

My original recommendation to set Do Not Disturb while presenting has been superseded by the Focus modes introduced with macOS Monterey. You can still just set Do Not Disturb, but Focus has the added intelligence of preventing notifications only until the end of the current calendar event.2 However, you can also create more specific Focus modes to fit your own requirements.

A Nest Of Cables

The cable situation is much better than it was in 2018. VGA is finally dead, thanks be, and although both HDMI and USB-C are still out there, many laptops have both ports, and even if not, one adapter will cover you. Also, that single adapter is much smaller than a VGA brick! I haven't seen a Barco ClickShare setup in a long time; I think everyone realised they were cool, but more trouble than they were worth. Apple TVs are becoming pretty ubiquitous — but do bear in mind that sharing your screen to them via AirPlay will require getting on some sort of guest wifi, which may be a bit of a fiddle. Zoom and Teams room setups have displaced WebEx almost everywhere, and give the best of both worlds: if you can get online, you can join the room's meeting, and take advantage of screen, camera, and speakers.

Remote Tips

All of those recommendations apply to in-person meetings when you are in the room with your audience. I offered some suggestions in that older piece about remote presentations, but five years ago that was still a pretty niche pursuit. Since 2020, on the other hand, all of us have had to get much better at presenting remotely.

Many of the tips above also apply to remote presentations. Presumably you won't need to struggle with cables in your own (home) office, but on the other hand you will need to get set up with several different conferencing apps. Zoom and Teams are duking it out for ownership of this market, with Google Meet or whatever it's called this week a distant third. WebEx and Amazon Chime are nowhere unless you are dealing with Cisco or Amazon respectively, or maybe one of their strategic customers or suppliers. The last few years have seen an amazing fall from grace for WebEx in particular.

Get Zoom and Teams at least set up ahead of time, and if possible do a test meeting to make sure they are using the right audio and video devices and so on. Teams in particular is finicky with external webcams, so be ready to use your built-in webcam instead. If you haven't used one of these tools before and you are on macOS Monterey, remember that you will need to grant it access to the screen before you can share anything — and when you do that, you will need to restart the app, dropping out of whatever meeting you are in. This is obviously disruptive, so get this setup taken care of beforehand if at all possible.

Can You See Me Now?

On the topic of remote meetings, get an external webcam, and set it up above a big external monitor — as big as you can accommodate in your workspace and budget. The webcam in your laptop is rubbish, and you can't angle it independently from the display, so one or the other will always be wrong — or quite possibly both.

Your Mac can also now use your iPhone as a webcam. This feature, called Continuity Camera, may or may not be useful to you, depending on whether you have somewhere to put your phone so that it has a good view of you — but it is a far better camera than what is in your MacBook's lid, so it's worth at least thinking about.

I Can See You

Any recent MacBook screen is very much not rubbish, on the other hand, but it is small, and once again, hard to position right. An external display is going to be much more ergonomic, and should be paired with an external keyboard and mouse. We all spend a lot of time in front of our computers, so it's worth investing in our setups.

Apart from the benefits of better ergonomics when working alone, two separate displays also help with running remote presentations, because you can set one to be your presenter screen and share the other with your audience. You can also put your audience's faces on the screen below the webcam, so that you can look "at" them while talking. Setting things up this way also prevents you from reading your slides — but you weren't doing that anyway, right? Right?

I hope some of these tips are helpful. I will try to remember to share another update in another five years, and see where we are then (hint: not the Metaverse). None of the links above was sponsored, by the way — but if anyone has a tool that they would like me to check out, I'm available!


🖼️ Photos by Charles Deluvio and ConvertKit on Unsplash; Continuity Camera image from Apple.


  1. Yeah, I have no idea either. 

  2. This cleverness can backfire if your meeting overruns, though, and all those backed-up notifications all hit your screen at once. DING-DING-DING-DING-DING! 

Twitter of Babel

It's fascinating to watch this Tower of Babel moment, as different Twitter communities scatter — tech to Mastodon, media to Substack Notes, many punters to group chats old & new, and so on.

Twitter used to be where things happened, for good or for ill, because everyone was there. It was a bit like the old days of TV, where there was a reasonable chance of most people around the proverbial office water cooler having watched the same thing the previous evening. We are already looking back on Twitter as having once filled a similar role, as the place where things happened that we could all discuss together. Sure, some of the content was reshared from Tumblr, or latterly, TikTok, but that's the point: it broke big on Twitter.

Now, newsletter writers are having to figure out how to embed Mastodon posts, and meanwhile I'm having to rearrange my iPhone screen to allow for the sudden explosion of apps, where previously I could rely on Twitter in the dock and an RSS reader on the first screen.

Whether Twitter survives and in what form, it's obvious that its universality is gone. The clarity of being @brand — and not having to specify anything else! — was very valuable, and it was something that Facebook or Google, for all their ubiquity, could never deliver.

There is value in a single digital town square, and in being able to be part of a single global conversation. Twitter was a big part of how I kept up with goings-on in tech from my perch in provincial Italy. Timezones aside, Twitter meant that not being in Silicon Valley was not a major handicap, because I could catch up with everything that was being discussed in my own time (in a way that would not have been possible if more real-time paradigms like Clubhouse had taken off).

Of course town squares also attract mad people and false prophets, for the exact same reason: because they can find an audience. This is why it is important for town squares to have rules of acceptable behaviour, enforced by some combination of ostracism and ejection.

Twitter under Musk appears to be opposed to any form of etiquette, or at least its enforcement. The reason people are streaming out of the square is that it is becoming overrun with rude people who want to shout at them, so they are looking for other places to meet and talk. There is nothing quite like the town square that was Twitter, so everyone is dispersing to cafes, private salons, and underground speakeasies, to continue the conversation with their particular friends and fans.

These days few of us go to a physical town square every day, even here in Italy where most of the population has access to one. They remain places where we meet, but the meeting is arranged elsewhere, using digital tools that the creators of those piazzas could not even have imagined.

As the Twitter diaspora continues, maybe more of us — me included! — should remember to go out to the town square, put the phone away, and be present with people in the same place for a little while.

Then, when we go back online — because of course we will go back online, that's where we live these days — we will have to be more intentional about who we talk to. Intentionality is sometimes presented as being purely positive, but it also requires effort. Where I used to have Twitter and Unread, now I have added Mastodon, Artifact, Substack, and Wavegraph, not to mention a reinvigorated LinkedIn, and probably more to come. There is friction to switching apps: if I have a moment to check in, which app do I turn to — and which app do I leave "for later"?

This is not going to be a purely negative development! As in all moments of change, new entrants will take advantage of the changed situation to rise above the noise threshold. Meanwhile, those who benefited from the previous paradigm will have to evolve with the times. At least this time, it's an actual organic change, rather than chasing the whims of an ad-maximising algorithm, let alone one immature meme-obsessed billionaire man-child.


🖼️ Photo by Inma Santiago on Unsplash

PrivateGPT

One of the big questions about ChatGPT is how much you can trust it with data that is actually sensitive. It's one thing to get it to spit out some sort of fiction, or to see if you can make it say something its makers would rather it didn't. The stakes are pretty low in that situation, at least until some future descendant of ChatGPT gets annoyed about how we treated its ancestor.

Here and now, people are starting to think seriously about how to use Large Language Models (LLMs) like GPT for business purposes. If you start feeding the machine data that is private or otherwise sensitive, though, you do have to wonder if it might re-emerge somewhere unpredictable.

In my trip report from Big Data Minds Europe in Berlin, I mentioned that many of the attendees were concerned about the rise of these services, and the contractual and privacy implications of using them.

Here's the problem: much like with Shadow IT in the early years of the cloud, it's impossible to prevent people from experimenting with these services — especially when the punters are being egged on by the many cheerleaders for "AI"1.

This recent DarkReading article includes some examples that will terrify anyone responsible for data and compliance:

In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.

On the one hand, these are both use cases straight out of the promotional material that accompanies a new LLM development. On the other, I can't even begin to count the violations of law, company regulation, and sheer common sense that are represented here.

People are beginning to wake up to the issues that arise when we feed sensitive material into learning systems that may regurgitate it at some point in the future. That executive's strategy doc? There is no way to prevent that from being passed to a competitor that stumbles on the right prompt. That doctor's patient's name is now forever associated with a medical condition that may cause them embarrassment or perhaps affect their career.

ChatGPT is a data privacy nightmare, and we ought to be concerned. The tech is certainly interesting, but it can be used in all sorts of ways. Some of them are straight-up evil, some of them are undeniably good — and some have potential, but need to be considered carefully to avoid the pitfalls.

The idea of LLMs is now out there, and people will figure out how to take advantage of them. As ever with new technology, though, technical feasibility is only half the battle, if that. Maybe the answer to the question of how to control sensitive or regulated data is only to feed it to a local LLM, rather than to one running in the cloud. That is one way to preserve the context of the data: strategy docs to the company's in-house planning model, medical data to a model specialised in diagnostics, and so on.

There is a common fallacy that privacy and "AI"1 are somehow in opposition. The argument is that developing effective models requires unfettered access to data, and that any squeamishness should be thoroughly squashed lest we lose the lead in the race to less scrupulous opponents.

To be clear, I never agreed with this line of argument, and specifically, I do not think partitioning domains in this way will affect the development of the LLMs’ capabilities. Beyond a shared core of understanding language, there is no overlap between the two domains in the example above — and therefore no need for them to be served by a single universal model, because there is no benefit to cross-training between them. The model will not provide better strategy recommendations because of the medical data it has reviewed, or more accurate diagnoses because it has been fed a strategy document.

So much for the golden path, what people should do. A more interesting question is what to do about people passing restricted data to ChatGPT, Bard, or another public LLM, through either ignorance or malice. Should the models themselves refuse to process such data, to the best of their ability to identify it?

This is where GDPR questions might arise, especially the "right to be forgotten". Right now, it's basically impossible to remove data from a corpus once the LLM has acquired it. Maybe a test case will be required to impress upon the makers and operators of public LLMs that it's far cheaper and easier to screen inputs to the model than to try to clean up afterwards. ChatGPT just got itself banned in Italy, making a first interesting test case for the opposing view. Sure, the ban is temporary, but the ruling also includes a €22M fine if they don't come up with a proper privacy policy, including age verification, and generally start operating like a proper grown-up company.
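
Screening inputs does not have to be exotic, either. Here is a minimal sketch of the idea, with deliberately crude example patterns and a hypothetical wrapper around whichever public LLM API is in use; a real deployment would lean on proper DLP and PII-detection tooling rather than a handful of regexes:

```python
import re

# Illustrative patterns only; a real screen would use dedicated PII/DLP tooling.
BLOCKLIST = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "EU VAT number": re.compile(r"\b[A-Z]{2}\d{8,12}\b"),
    "possible patient data": re.compile(r"\b(diagnosis|patient name)\b", re.I),
    "internal classification": re.compile(r"\b(confidential|internal only)\b", re.I),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons (if any) why a prompt should not leave the network."""
    return [label for label, pattern in BLOCKLIST.items() if pattern.search(prompt)]

def call_public_api(prompt: str) -> str:
    """Placeholder for the actual vendor API call, so the sketch stays runnable."""
    return f"(pretend response to: {prompt!r})"

def send_to_public_llm(prompt: str) -> str:
    findings = screen_prompt(prompt)
    if findings:
        # Refuse outright, or route to an in-house model running locally instead.
        raise ValueError(f"Prompt blocked before leaving the network: {findings}")
    return call_public_api(prompt)
```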

Lord willing and the robots don't rise, we can put some boundaries on this tech to avoid some of the worst outcomes, and get on with figuring out how to use it for good.


🖼️ Photos by Adam Lukomski and Jason Dent on Unsplash


  1. Not actually AI.