Deliver A Better Presentation — 2023 Edition

During the ongoing process of getting back on the road and getting used to meeting people in three dimensions again, I noticed a few presenters struggling with displaying slides on a projector. These skills may have atrophied with remote work, so I thought it was time for a 2023 update to a five-year-old blog post of mine where I shared some tips and tricks for running a seamless presentation.

Two Good Apps

One tip that remains unchanged from 2018 is a super-useful (free) Mac app called Display Menu. Its original purpose was to make it easy to change display resolutions, which is no longer as necessary as it once was, but the app still has a role in giving a one-click way to switch the second display from extended to mirrored. In other words, the projector shows exactly the same thing as your laptop display. You can also do this in Settings > Displays, of course, but Display Menu lives in the menu bar and is much more convenient.

Something else that can happen during presentations is the Mac going to sleep. My original recommendation of Caffeine is no longer with us, but it has been replaced by Amphetamine. As with Display Menu, this is an app that lives in the menu bar, and lets you delay sleep or prevent it entirely. It’s worth noting that entering presenter mode in PowerPoint or Keynote will prevent sleep automatically, but many people like to present from the slide sorter view rather than actually entering presenter mode1.

Two Good Techniques

If you are using the slide sorter view in order to be able to control your presentation better and jump back and forth, you really need to learn to use Presenter Mode instead. This mode lets you use one screen, typically your laptop's own, as your very own speaker's courtesy monitor, with a thumbnail view of the current and next slides, as well as your presenter notes and a timer. Meanwhile all the audience sees is the current slide, in full screen on the external display. You can also use this mode to jump around in your deck if needed to answer audience questions — but do this sparingly, as it breaks the thread of the presentation.

My original recommendation to set Do Not Disturb while presenting has been superseded by the Focus modes introduced with macOS Monterey. You can still just set Do Not Disturb, but Focus has the added intelligence of preventing notifications only until the end of the current calendar event.2 However, you can also create more specific Focus modes to fit your own requirements.

A Nest Of Cables

The cable situation is much better than it was in 2018. VGA is finally dead, thanks be, and although HDMI and USB-C are both still out there, many laptops have both ports, and even if not, one adapter will cover you. Also, that single adapter is much smaller than a VGA brick! I haven't seen a Barco ClickShare setup in a long time; I think everyone realised they were cool, but more trouble than they were worth. Apple TVs are becoming pretty ubiquitous — but do bear in mind that sharing your screen to them via AirPlay will require getting on some sort of guest wifi, which may be a bit of a fiddle. Zoom room setups have displaced WebEx almost everywhere, and give the best of both worlds: if you can get online, you can join the room's Zoom, and take advantage of its screen, camera, and speakers.

Remote Tips

All of those recommendations apply to in-person meetings when you are in the room with your audience. I offered some suggestions in that older piece about remote presentations, but five years ago that was still a pretty niche pursuit. Since 2020, on the other hand, all of us have had to get much better at presenting remotely.

Many of the tips above also apply to remote presentations. Presumably you won't need to struggle with cables in your own (home) office, but on the other hand you will need to get set up with several different conferencing apps. Zoom and Teams are duking it out for ownership of this market, with Google Meet or whatever it's called this week a distant third. WebEx and Amazon Chime are nowhere unless you are dealing with Cisco or Amazon respectively, or maybe one of their strategic customers or suppliers. The last few years have seen an amazing fall from grace for WebEx in particular.

Get Zoom and Teams at least set up ahead of time, and if possible do a test meeting to make sure they are using the right audio and video devices and so on. Teams in particular is finicky with external webcams, so be ready to use your built-in webcam instead. If you haven't used one of these tools before and you are on macOS Monterey, remember that you will need to grant it access to the screen before you can share anything — and when you do that, you will need to restart the app, dropping out of whatever meeting you are in. This is obviously disruptive, so get this setup taken care of beforehand if at all possible.

Can You See Me Now?

On the topic of remote meetings, get an external webcam, and set it up above a big external monitor — as big as you can accommodate in your workspace and budget. The webcam in your laptop is rubbish, and you can't angle it independently from the display, so one or the other will always be wrong — or quite possibly both.

Your Mac can also now use your iPhone as a webcam. This feature, called Continuity Camera, may or may not be useful to you, depending on whether you have somewhere to put your phone so that it has a good view of you — but it is a far better camera than what is in your MacBook's lid, so it's worth at least thinking about.

I Can See You

Any recent MacBook screen is very much not rubbish, on the other hand, but it is small, and once again, hard to position right. An external display is going to be much more ergonomic, and should be paired with an external keyboard and mouse. We all spend a lot of time in front of our computers, so it's worth investing in our setups.

Apart from the benefits of better ergonomics when working alone, two separate displays also help with running remote presentations, because you can set one to be your presenter screen and share the other with your audience. You can also put your audience's faces on the screen below the webcam, so that you can look "at" them while talking. Setting things up this way also prevents you from reading your slides — but you weren't doing that anyway, right? Right?

I hope some of these tips are helpful. I will try to remember to share another update in another five years, and see where we are then (hint: not the Metaverse). None of the links above was sponsored, by the way — but if anyone has a tool that they would like me to check out, I'm available!


🖼️ Photos by Charles Deluvio and ConvertKit on Unsplash; Continuity Camera image from Apple.


  1. Yeah, I have no idea either. 

  2. This cleverness can backfire if your meeting overruns, though, and all those backed-up notifications hit your screen at once. DING-DING-DING-DING-DING!

Twitter of Babel

It's fascinating to watch this Tower of Babel moment, as different Twitter communities scatter — tech to Mastodon, media to Substack Notes, many punters to group chats old & new, and so on.

Twitter used to be where things happened, for good or for ill, because everyone was there. It was a bit like the old days of TV, where there was a reasonable chance of most people around the proverbial office water cooler having watched the same thing the previous evening. We are already looking back on Twitter as having once filled a similar role, as the place where things happened that we could all discuss together. Sure, some of the content was reshared from Tumblr, or latterly, TikTok, but that's the point: it broke big on Twitter.

Now, newsletter writers are having to figure out how to embed Mastodon posts, and meanwhile I'm having to rearrange my iPhone screen to allow for the sudden explosion of apps, where previously I could rely on Twitter in the dock and an RSS reader on the first screen.

Whether Twitter survives and in what form, it's obvious that its universality is gone. The clarity of being @brand — and not having to specify anything else! — was very valuable, and it was something that Facebook or Google, for all their ubiquity, could never deliver.

There is value in a single digital town square, and in being able to be part of a single global conversation. Twitter was a big part of how I kept up with goings-on in tech from my perch in provincial Italy. Timezones aside, Twitter meant that not being in Silicon Valley was not a major handicap, because I could catch up with everything that was being discussed in my own time (in a way that would not have been possible if more real-time paradigms like Clubhouse had taken off).

Of course town squares also attract mad people and false prophets, for the exact same reason: because they can find an audience. This is why it is important for town squares to have rules of acceptable behaviour, enforced by some combination of ostracism and ejection.

Twitter under Musk appears to be opposed to any form of etiquette, or at least its enforcement. The reason people are streaming out of the square is that it is becoming overrun with rude people who want to shout at them, so they are looking for other places to meet and talk. There is nothing quite like the town square that was Twitter, so everyone is dispersing to cafes, private salons, and underground bars, to continue the conversation with their particular friends and fans.

These days few of us go to a physical town square every day, even here in Italy where most of the population has access to one. They remain places where we meet, but the meeting is arranged elsewhere, using digital tools that the creators of those piazzas could not even have imagined.

As the Twitter diaspora continues, maybe more of us — me included! — should remember to go out to the town square, put the phone away, and be present with people in the same place for a little while.

Then, when we go back online — because of course we will go back online, that's where we live these days — we will have to be more intentional about who we talk to. Intentionality is sometimes presented as being purely positive, but it also requires effort. Where I used to have Twitter and Unread, now I have added Mastodon, Artifact, Substack, and Wavegraph, not to mention a reinvigorated LinkedIn, and probably more to come. There is friction to switching apps: if I have a moment to check in, which app do I turn to — and which app do I leave "for later"?

This is not going to be a purely negative development! As in all moments of change, new entrants will take advantage of the changed situation to rise above the noise threshold. Meanwhile, those who benefited from the previous paradigm will have to evolve with the times. At least this time, it's an actual organic change, rather than chasing the whims of an ad-maximising algorithm, let alone one immature meme-obsessed billionaire man-child.


🖼️ Photo by Inma Santiago on Unsplash

PrivateGPT

One of the big questions about ChatGPT is how much you can trust it with data that is actually sensitive. It's one thing to get it to spit out some sort of fiction, or to see if you can make it say something its makers would rather it didn't. The stakes are pretty low in that situation, at least until some future descendant of ChatGPT gets annoyed about how we treated its ancestor.

Here and now, people are starting to think seriously about how to use Large Language Models (LLMs) like GPT for business purposes. If you start feeding the machine data that is private or otherwise sensitive, though, you do have to wonder if it might re-emerge somewhere unpredictable.

In my trip report from Big Data Minds Europe in Berlin, I mentioned that many of the attendees were concerned about the rise of these services, and the contractual and privacy implications of using them.

Here's the problem: much like with Shadow IT in the early years of the cloud, it's impossible to prevent people from experimenting with these services — especially when the punters are being egged on by the many cheerleaders for "AI"1.

This recent DarkReading article includes some examples that will terrify anyone responsible for data and compliance:

In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.

On the one hand, these are both use cases straight out of the promotional material that accompanies a new LLM development. On the other, I can't even begin to count the violations of law, company regulation, and sheer common sense that are represented here.

People are beginning to wake up to the issues that arise when we feed sensitive material into learning systems that may regurgitate it at some point in the future. That executive's strategy doc? There is no way to prevent that from being passed to a competitor that stumbles on the right prompt. That doctor's patient's name is now forever associated with a medical condition that may cause them embarrassment or perhaps affect their career.

ChatGPT is a data privacy nightmare, and we ought to be concerned. The tech is certainly interesting, but it can be used in all sorts of ways. Some of them are straight-up evil, some of them are undeniably good — and some have potential, but need to be considered carefully to avoid the pitfalls.

The idea of LLMs is now out there, and people will figure out how to take advantage of them. As ever with new technology, though, technical feasibility is only half the battle, if that. Maybe the answer to the question of how to control sensitive or regulated data is only to feed it to a local LLM, rather than to one running in the cloud. That is one way to preserve the context of the data: strategy docs to the company's in-house planning model, medical data to a model specialised in diagnostics, and so on.
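
To make the "local LLM" idea concrete, here is a minimal sketch using the Hugging Face transformers library with a small open model that runs entirely on your own machine. The model choice, file name, and prompt are placeholder assumptions, and a real deployment would need chunking for long documents and a far more capable model.

    from transformers import pipeline

    # A small instruction-tuned model, downloaded once and then run locally;
    # "google/flan-t5-base" is only an example that fits on a laptop CPU.
    summariser = pipeline("text2text-generation", model="google/flan-t5-base")

    # Hypothetical internal document that must never leave the building.
    with open("2023-strategy.txt") as f:
        strategy_doc = f.read()

    # The text is processed in this process only: no third-party service
    # ever sees it, so nothing can regurgitate it to a competitor later.
    # (A real document would need chunking; flan-t5-base has a short context.)
    result = summariser(
        "Summarise the key points of this strategy document:\n\n" + strategy_doc,
        max_new_tokens=200,
    )
    print(result[0]["generated_text"])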

There is a common fallacy that privacy and "AI"1 are somehow in opposition. The argument is that developing effective models requires unfettered access to data, and that any squeamishness should be thoroughly squashed lest we lose the lead in the race to less scrupulous opponents.

To be clear, I never agreed with this line of argument, and specifically, I do not think partitioning domains in this way will affect the development of the LLMs’ capabilities. Beyond a shared core of understanding language, there is no overlap between the two domains in the example above — and therefore no need for them to be served by a single universal model, because there is no benefit to cross-training between them. The model will not provide better strategy recommendations because of the medical data it has reviewed, or more accurate diagnoses because it has been fed a strategy document.

So much for the golden path, what people should do. A more interesting question is what to do about people passing restricted data to ChatGPT, Bard, or another public LLM, through either ignorance or malice. Should the models themselves refuse to process such data, to the best of their ability to identify it?

This is where GDPR questions might arise, especially the "right to be forgotten". Right now, it's basically impossible to remove data from a corpus once the LLM has acquired it. Maybe a test case will be required to impress upon the makers and operators of public LLMs that it's far cheaper and easier to screen inputs to the model than to try to clean up afterwards. ChatGPT just got itself banned in Italy, making a first interesting test case for the opposing view. Sure, the ban is temporary, but the ruling also includes a €22M fine if they don't come up with a proper privacy policy, including age verification, and generally start operating like a proper grown-up company.
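
As an illustration of the "screen inputs" idea, here is a deliberately naive sketch of a gatekeeper that sits in front of a public LLM. The patterns and behaviour are pure assumptions; real screening would use proper PII-detection and data-classification tooling rather than a handful of regexes.

    import re

    # Toy patterns, purely to illustrate screening prompts before they ever
    # reach a public model.
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US SSN-like numbers
        re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),   # IBAN-like strings
        re.compile(r"(?i)\b(confidential|internal only|patient)\b"),
    ]

    def screen_prompt(prompt: str) -> str:
        """Refuse prompts that look like they contain restricted data."""
        for pattern in SENSITIVE_PATTERNS:
            if pattern.search(prompt):
                raise ValueError("Prompt appears to contain restricted data; "
                                 "send it to an in-house model instead.")
        return prompt

    # Only prompts that pass the screen get forwarded to the public service.
    safe_prompt = screen_prompt("Write a limerick about office printers.")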

Lord willing and the robots don't rise, we can put some boundaries on this tech to avoid some of the worst outcomes, and get on with figuring out how to use it for good.


🖼️ Photos by Adam Lukomski and Jason Dent on Unsplash


  1. Not actually AI. 

Artificial Effluent

A lot of the Discourse around ChatGPT has focused on the question of "what if it works?". As is often the case with technology, though, it's at least as important to ask the question of "what if it doesn't work — but people use it anyway?".

ChatGPT has a failure mode where it "hallucinates" things that do not exist. Here are just a few examples of things it made up from whole cloth: links on websites, entire academic papers, software for download, and a phone lookup service. These "hallucinations" are nothing like the sorts of hallucinations that a human might experience, perhaps after eating some particularly exciting cheese, or maybe a handful of mushrooms. Instead, these fabrications are inherent in the nature of the language models as stochastic parrots: they don't actually have any conception of the nature of the reality they appear to describe. They are simply producing coherent text which resembles text they have seen before. If this process results in superficially plausible-seeming descriptions of things that do not exist and have never existed, that is a problem for the user.

Of course that user may be trying to generate fictional descriptions, but with the goal of passing off ChatGPT's creations as their own. Unfortunately "democratising the means of production" in this way triggers a race to the bottom, to the point that the sheer volume of spammy AI-generated submissions forced venerable SF publisher Clarkesworld to shut down submissions — temporarily, one hopes. None of the submitted material seems to have been any good, but all of it had to be opened and dealt with. And it's not just Clarkesworld being spammed with low-quality submissions, either: it's endemic:

The people doing this by and large don’t have any real concept of how to tell a story, and neither do any kind of A.I. You don’t have to finish the first sentence to know it’s not going to be a readable story.

Even now while the AI-generated submissions are very obvious, the process of weeding them out still takes time, and the problem will only get worse as newer generations of the models are able to produce more prima facie convincing fakes.

The question of whether AI-produced fiction that is indistinguishable from human-created fiction is still ipso facto bad is somewhat interesting philosophically, but that is not what is going on here: the purported authors of these pieces are not disclosing that they are at best "prompt engineers", or glorified "ideas guys". They want the kudos of being recognised as authors, without any of the hard work:

the people submitting chatbot-generated stories appeared to be spamming magazines that pay for fiction.

I might still quibble with the need for a story-writing bot when actual human writers are struggling to keep a roof overhead, but we are as yet some way from the point where the two can be mistaken for each other. The people submitting AI-generated fiction to these journals are pure grifters, hoping to turn a quick buck from a few minutes' work in ChatGPT, and taking space and money from actual authors in the process.1

Ted Chiang made an important prediction in his widely-circulated blurry JPEGs article:

But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large language models and lossy compression is useful. Repeatedly resaving a jpeg creates more compression artifacts, because more information is lost every time.

This is indeed going to be a problem for GPT-4, -5, -6, and so on: where will they find a pool of data that is not polluted with the effluent of their predecessors? And yes, I know OpenAI is supposedly working on ways to detect their own output, but we all know that is just going to be a game of cat and mouse, with new methods of detection always trailing the new methods of evasion and obfuscation.

To be sure, there are many legitimate uses for this technology (although I still don't want it in my search box). The key to most of them is that there is a moment for review by a competent and motivated human built into the process. The real failure for all of the examples above is not that the language model made something up that might or perhaps even should exist; that's built in. The problem is that human users were taken in by its authoritative tone and acted on the faulty information.

My concern is specifically that, in the post-ChatGPT rush for everyone to show that they are doing something — anything — with AI, doors will be opened to all sorts of negative consequences. These could be active abuses, such as impersonation, or passive ones, omitting safeguards that would prevent users from being taken in by machine hallucinations.

Both of these cases are abusive, and unlike purely technical shortcomings, it is far from being a given that these abuse vectors will be addressed at all, let alone simply by the inexorable march of technological progress. Indeed, one suspects that to the creators of ChatGPT, a successful submission to a fiction journal would be seen as a win, rather than the indictment of their entire model that it is. And that is the real problem: it is still far from clear what the endgame is for the creators of this technology, nor what (or whom) they might be willing to sacrifice along the way.


🖼️ Photo by Possessed Photography on Unsplash


  1. It's probably inevitable that LLM-produced fiction will appear sooner rather than later. My money is on the big corporate-owned shared universes. Who will care if the next Star Wars tie-in novel is written by a bot? As long as it is consistent with canon and doesn't include too many women or minorities, most fans will be just fine with a couple of hundred pages of extruded fiction product. 

Why The Best Recommendations Are The Worst

The saga of my mother-in-law's printer continues. Apparently it does not always reconnect to WiFi in a timely manner when waking from sleep? I'm not entirely sure because I haven't had a chance to go around there yet armed with a big stick and intimidate the printer into submission.

This whole sad story is yet another example of why you should not ask people who are deeply into some domain for recommendations.

This advice seems counterintuitive: surely you want the experts' advice? Don't they know best? Not necessarily, no.

Take my mother-in-law's printer (please!). I bought it for her, based on a few criteria: it was in budget, it fit within the physical dimensions of the space she has for it, and it's from a brand (HP) with which I have always had good experiences. My own home printer is an HP LaserJet, and is an absolute tank, with all the features I could possibly want and more. This new printer was supposed to be the baby version of that. Unfortunately, it seems to have been de-contented to such a degree that functionality is severely compromised, and I worry about durability too. In other words, I tried to compromise between the sort of printer I would buy for myself, and the sorts of concerns that ordinary people have. My mother-in-law would have been better served by just driving to an electronics shop and getting whatever inkjet multifunction thing they had on special that week and (this is the key part) never thinking about it again.

The same principle applies with WiFi: my home network runs on Ubiquiti gear, but if I tell non-IT people how much my access points cost, their jaws hit the floor. For most people, the access point they got from their ISP is Good Enough(tm) and they never think about it.

"Good enough" actually is good enough for most people, because they don't need extra features, will not subject the thing to whatever stress the pro version is engineered to withstand, and don't need or wouldn't notice the ultimate quality of the result. They simply need something that's, well, good enough.

I do try to practice what I just preached (honest!): when my washing machine died, instead of springing for the Miele one that can probably iron and fold the clothes as well as cleaning them, I got an LG for literally a third of the price that seems… fine? But then again, I never think about it, no matter how much people who work in that business rave about Miele build quality or whatever.

Then again, I did spend an enjoyable time researching exactly which bookshelf speakers to get for my home office, and ended up going with an Edifier set that is way overkill for my needs. But it makes me happy, and that's what I care about.

Don't ask me for advice, we'll both regret it: you when you wind up with something expensive, overbuilt, and finicky, and me when I have to keep coming around to fix it when something goes wrong.


🖼️ Photo by Richard Dykes on Unsplash

Printing Money

I spent more time than I should have yesterday installing my mother-in-law’s new HP printer, and while I dodged the more obvious scams, I was actually shocked at how bad the experience was. There is absolutely no way that a normal person without significant IT experience could do it. And the worst part is that HP are in my experience the best — okay, least bad — printer manufacturer out there.

I'm going to document what happened in exhaustive detail because I still can't bring myself to believe some of what happened. It's not going to be a fun post. Sorry. If you want a fun post about how terrible printers are, here's one from The Oatmeal.

  • The "quick start" guide only showed the physical steps (remove packaging, connect power cord, add paper) and then offered a QR code to scan to deploy an HP app that would supposedly take care of the rest of the process.
  • The QR code led to a URL that 404'd. In retrospect, this was the moment when I should have packed everything back up and shipped it back to HP.
  • Instead of following through on that much better plan and saving myself several hits to my sanity, I did some detective work to identify what the app should be and found it in the Google Play Store (my MiL's computer is a Chromebook; this will be significant later).
  • The app's "install new printer" workflow simply scans the local network for printers. Since the step I was trying to accomplish was connecting the printer to Wi-Fi (this model doesn't have an on-board control panel, only an embedded web server), this scan was not particularly helpful.
  • The app's next suggestion was to contact support. Thanks, app.
  • After having checked the box for any additional docs, and finding only reams of pointless legal paperwork documenting the printer's compliance to various standards and treaties, I gingerly loaded up the HP web site to search for something more detailed.
  • The HP website's search function resolutely denied all knowledge of the printer model.
  • A Google search scoped to the HP web site found the printer's product page, which included an actual manual.
  • The manual asked me to connect to the printer's management interface, but at no point included a step-by-step process. By piecing together various bits of information from the doc and some frantic Googling, I finally worked out that I needed to:
    • Connect to the printer's own ad-hoc Wi-Fi network;
    • Print a test page to get its IP address (this step involves holding down the paper feed button for 10 seconds);
    • Connect to that IP address;
    • Reassure the web browser that it's fine to connect to a website that is INSECURE!!1!
    • Not find the menu options from the doc, only some basic information about supplies;
    • Panic;
    • Note a tiny "Login" link hidden away in a corner;
    • Mutter "surely not…"
    • Fail to find any user credentials documented anywhere, or indeed any mention of a login flow;
    • Connect as "admin" with no password on a hunch;
    • Access the full management interface.
  • At this point I was finally able to authenticate the printer to the correct Wi-Fi network, at which point it promptly rebooted and then went catatonic for a worryingly long time before finally connecting.
  • But we're not done yet! The HP printer app claims to be able to set up the local printer on the Chromebook, but as far as I can tell, it doesn't even attempt to do this. However, we have a network connection, I can read out supply levels and what-not, how hard can this be?
  • Despite Google Cloud Print being enabled, nothing was auto-detected, so I created the printer manually as an IPP device (amazingly, this step is actually in the docs).
  • Time for a test print! The Chromebook's print queue showed the doc as PRINTED, but the printer didn’t produce anything, and as far as I could determine, it never hit the printer's own queue.
  • Hang head in hands.
  • Verified that my iPhone can see the printer (via AirPrint) and print to it. This worked first time.
  • Tried deleting the printer and re-creating it; somehow Google Cloud Print started working at this point, so the printer was auto-detected? The resulting config looked identical to what I created by hand, except with a port number specified instead of just an IP address.
  • Does it print now? HAHAHA of course not.
  • Repeat previous few steps with increasing muttering (can't swear or throw things because I am in my mother-in-law's home).
  • Decide to update software:
    • The Chromebook updates, reboots, no change.
    • The printer's product page does not show any firmware at all — unless you tell it you are looking for Windows software. There are official drivers for various Linux distros, but apparently they don't deserve firmware. There is nothing for macOS, because Apple wisely doesn't allow rando third-party printer drivers anywhere near their operating systems. And of course nothing for ChromeOS or "other", why would you ask?
    • Download the firmware from the Windows driver page, upload it to the printer's management UI — which quoth "firmware not valid".
    • Search for any checksum or other way to verify the download, and of course there is none.
    • Attempt to decode the version embedded in the file name, discover that it is almost impossible to persuade ChromeOS to display a file name that long.
    • Eventually decide that the installed and downloaded versions are probably the same, despite the installed one being over a year old.
  • Give up and run away, promising to return with new ideas, or possibly a can of petrol and a Zippo.

Good Robot

Last time I wrote about ChatGPT, I was pretty negative. Was I too harsh?

The reason I was so negative is that many of the early demos of ChatGPT focus on feats that are technically impressive ("write me a story about a spaceman in the style of Faulkner" or whatever), but whose actual application is at best unclear. What, after all, is the business model? Who will pay for a somewhat stilted story written by a bot, at least once the novelty value wears off? Actual human writers are, by and large, not exactly rolling in piles of dollars, so it's not as if there is a huge profit opportunity awaiting the first clever disrupter — quite apart from the moral consequences of putting a bunch of humans out of a job, even an ill-paying one.

Instead, I wanted to think about some more useful and positive applications of this technology, ones which also have the advantage that they are either not being done at all today, or can only be done at vast expense and not at scale or in real time. Bonus points if they avoid being actively abusive or enabling ridiculous grifts and rent-seeking. After all, with Microsoft putting increasing weight behind OpenAI, it's obvious that smart people smell money here somewhere.

Summarise Information (B2C)

It's more or less mandatory for new technology to come with a link to some beloved piece of SF. For once, this is not a Torment Nexus-style dystopia. Instead, I'm going right to the source, with Papa Bill's Neuromancer:

"Panther Moderns," he said to the Hosaka, removing the trodes. "Five minute precis."
"Ready," the computer said.

Here's a service that everyone wants, as evidenced by the success of the "five-minute explainer" format. Something hits your personal filter bubble, and you can tell there is a lot of back story; battle lines are already drawn up, people are several levels deep into their feuds and meta-positioning, and all you want is a quick recap. Just the facts, ma'am, all sorts of multimedia, with a unifying voiceover, and no more than five minutes.

There are also more business-oriented use cases for this sort of meta-textual analysis, such as "compare this quarter's results with last quarter's and YoY, with trend lines based on close competitors and on the wider sector". You could even link with Midjourney or Stable Diffusion to graph the results (without having to do all the laborious cutting and pasting to get the relevant numbers into a table first, and making sure they use the same units, currencies, and time periods).
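
As a sketch of what the precis use case might look like in code — using the 2023-era OpenAI Python client as one possible back end; the model name, API key, and input documents are all assumptions:

    import openai

    openai.api_key = "sk-..."  # assumes the 2023-era openai client (0.x series)

    def precis(documents: list[str], minutes: int = 5) -> str:
        """Ask a chat model for a short, neutral recap of a pile of sources."""
        joined = "\n\n---\n\n".join(documents)
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": f"Produce a {minutes}-minute precis: just the facts, "
                            "the main actors, and how the disagreement developed."},
                {"role": "user", "content": joined},
            ],
        )
        return response["choices"][0]["message"]["content"]

    # print(precis([thread_dump, linked_article, reply_chain]))  # hypothetical inputs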

Smarter Assistants (B2C)

One of the complaints that people have about voice assistants is that they appear to have all the contextual awareness of goldfish. Sure, you can go to a certain amount of effort to get Siri, Alexa, and their ilk to understand "my wife" without having to use the long-suffering woman's full name and surname on each invocation, but they still have all the continuity of an amnesiac hamster if you try to continue a conversation after the first interaction. Seriously, babies have a far better grasp of object permanence (peekaboo!). The robots simply have no way of keeping context between statements, outside of a few hard-coded showcase examples.

Instead, what we want is precisely that continuity: asking for appointments, being read a list, and then asking to "move the first one to after my gym class, but leave me enough time to shower and get over there". This is the sort of use case that explains why Microsoft is investing so heavily here: they are so far behind otherwise that why not? Supposedly Google has had this tech for a while and just couldn't figure out a way to introduce it without disrupting its cash-cow search business. And Apple never talks about future product directions until they are ready to launch (with the weird exception of Project Titan, of course), so it may be that they are already on top of this one. Certainly it was almost suspicious how quickly Apple trotted out specific support for Stable Diffusion.
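
Mechanically, the missing continuity is not exotic: with a chat-style model it mostly amounts to carrying the conversation history forward on every call. A minimal sketch, again assuming the 2023-era OpenAI client, with the calendar access left entirely imaginary:

    import openai

    openai.api_key = "sk-..."

    # The whole trick is keeping the history, so that "move the first one"
    # still refers to the list of appointments read out a turn earlier.
    # (The actual calendar integration is left to the imagination.)
    history = [{"role": "system",
                "content": "You are a calendar assistant with access to my appointments."}]

    def ask(user_text: str) -> str:
        history.append({"role": "user", "content": user_text})
        response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
        answer = response["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": answer})
        return answer

    # ask("What's on my calendar tomorrow?")
    # ask("Move the first one to after my gym class, but leave me time to shower.")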

Tier Zero Support (B2B)

Back in the day, I used to work in tech support. The classic division of labour in that world goes something like this:

  • Tier One, aka "the phone firewall": people who answer telephone or email queries directly. Most questions should be solved at this level.
  • Tier Two: these are more expert people, who can help with problems which cannot be resolved quickly at Tier One. Usually customers can’t contact Tier Two directly; their issues have to be escalated there. You don't want too many issues to get to this level, because it gets expensive.
  • Tier Three: in software organisations, these are usually the actual engineers working on the product. If you get to Tier Three, your problem is so structural, or your enhancement request is sufficiently critical, that it's no longer a question of helping you to do something or fixing an issue, but changing the actual functioning of the product in a pretty major way.

Obviously, there are increasing costs at each level. A problem getting escalated to Tier Two means burning the time of more senior and expert employees, who are ipso facto more expensive. Getting to Tier Three not only compounds the monetary cost, but also adds opportunity costs: what else are those engineers not doing, while they work on this issue? Therefore, tech support is all about making sure problems get solved at the lowest possible tier of the organisation. This focus has the happy side-effect of addressing the issue faster, and with fewer communications round-trips, which makes users happier too.

It's a classic win-win scenario — so why not make it even better? That's what the Powers That Be decided to do where I was. They added a "Tier Zero" of support, which was outsourced (to humans), with the idea that it would address the huge proportion of queries that could be answered simply by referring to the knowledge base1.

So how did this go? Well, it was such a disaster that my notoriously tight-fisted employers2 ended up paying to get out of the contract early. But could AI do better?

In theory, this is not a terrible idea. Something like ChatGPT should be able to answer questions based on a specific knowledge base, including past interactions with the bot. Feed it product docs, FAQs, and forum posts, and you get a reasonable approximation of a junior support engineer. Just make sure you have a way for a user to pull the rip-cord and get escalated to a human engineer when the bot gets stuck, and why not?
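
A rough sketch of what such a Tier Zero bot could look like: embed the knowledge base, retrieve the closest article for each question, answer only from that article, and pull the rip-cord when nothing matches. The similarity threshold, model names, and two-line knowledge base are all placeholder assumptions, not a recipe.

    import numpy as np
    import openai

    openai.api_key = "sk-..."

    # Product docs, FAQs, and forum posts would go here.
    KB = [
        "To reset your password, use the self-service portal at ...",
        "Licence keys can be re-issued from the admin console under ...",
    ]

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([item["embedding"] for item in resp["data"]])

    KB_VECTORS = embed(KB)

    def tier_zero(question: str) -> str:
        # Retrieve the closest knowledge-base article by cosine similarity.
        q = embed([question])[0]
        sims = KB_VECTORS @ q / (np.linalg.norm(KB_VECTORS, axis=1) * np.linalg.norm(q))
        best = int(np.argmax(sims))
        if sims[best] < 0.75:  # arbitrary threshold: the rip-cord to a human
            return "Let me escalate you to one of our engineers."
        answer = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Answer only from the provided article. If it does not "
                            "cover the question, say you will escalate to a human."},
                {"role": "user",
                 "content": f"Article:\n{KB[best]}\n\nQuestion: {question}"},
            ],
        )
        return answer["choices"][0]["message"]["content"]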

One word of caution: the way I moved out of tech support is that I would not only answer the immediate question from a customer, but I would go find the account manager afterwards and tell them their customer needed consulting, or training, or more licenses, or whatever it was. AI might not have the initiative to do that.

Another drawback: it's hard enough to give advice in a technical context, but at least there, a command will either execute or not; it will give the expected results, or not (and even then, there may be subtle bugs that only manifest over time). Some have already seized on other domains that feature lots of repetitive text as opportunities for ChatGPT. Examples include legal contracts, and tax or medical advice — but what about plausible-but-wrong answers? If your chatbot tells me to cure my cancer with cleanses and raw vegetables, can I (or my estate) sue you for medical malpractice? If your investor agreement includes a logic bug that exposes you to unlimited liability, do you have the right to refuse to pay out? Fun times ahead for all concerned.

Formulaic Text (B2B)

Another idea for automated text generation is to come up with infinite variations on known original text. In plain language, I am talking about A/B testing website copy in real time, rewriting it over and over to entice users to stick around, interact, and with any luck, generate revenue for the website operators.
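
Mechanically this is trivial. Here is a toy sketch of generating headline variants to rotate through — the model, prompt, and serving logic are all imaginary, and the actual A/B bookkeeping is left as an exercise:

    import random

    import openai

    openai.api_key = "sk-..."

    def headline_variants(original: str, n: int = 3) -> list[str]:
        """Ask a chat model for n rewrites of a headline, one per line."""
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user",
                       "content": f"Rewrite this headline in {n} different ways, "
                                  f"one per line, no numbering:\n{original}"}],
        )
        lines = response["choices"][0]["message"]["content"].splitlines()
        return [line.strip() for line in lines if line.strip()][:n]

    # Serve a random variant per visitor and record which one converts.
    variants = headline_variants("Sign up for our newsletter")
    chosen = random.choice(variants)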

Taken to the extreme, you get the evil version, tied in with adtech surveillance to tweak the text for each individual visitor, such that nobody ever sees the same website as anyone else. Great for plausible deniability, too, naturally: "of course we would never encourage self-harm — but maybe our bot responded to something in the user's own profile…".

This is the promise of personalised advertising, that is tweaked to be specifically relevant to each individual user. I am and remain sceptical of the data-driven approach to advertising; the most potent targeted ads that I see are the same examples of brand advertising that would have worked equally well a hundred years ago. I read Monocle, I see an ad for socks, I want those socks. You show me a pop-up ad for socks while I am trying to read something unrelated, I dismiss it so fast that I don't even register that it's trying to sell me socks. It's not clear to me that increasing smarts behind the adtech will change the parameters of that equation significantly.

De-valuing Human Labour

These are the use cases that seem to me to be plausible and defensible. There will be others that have a shorter shelf life, as illustrated in Market For Lemons:

What happens when every online open lobby multiplayer game is choked with cheaters who all play at superhuman levels in increasingly undetectable ways?

What happens when, from the perspective of the average guy, "every girl" on every dating app is a fiction driven by an AI who strings him along (including sending original and persona-consistent pictures) until it's time to scam money out of him?

What happens when, from the perspective of the average girl, "every guy" on the internet has become weirdly dismissive and hostile, because he's been conditioned to think that any girl that seems interested in him must be fake and trying to scam money out of him?

What happens when comments sections on every forum gets filled with implausibly large consensus-building hordes who are able to adapt in real time and carefully slip their brigading just below the moderator's rules?

What these AI-enabled "growth hacks" boil down to is taking advantage of a market that has already outsourced labour and creativity to (human) non-employees: multiplayer games, user-generated content, and social media in general. Instead of coming up with a storyline for your game, why not just make users pay to play with each other? Instead of paying writers, photographers, and video makers, why not just let them upload their content for free? And with social media, why not just enable users to live vicariously through the fantasy lives of others, while shilling them products that promise to let them join in?

Now computers can deliver against those savings even better — but only for a short while, until people get bored of dealing with poor imitations of fellow humans. We old farts already bailed on multiplayer games, because it's no fun spending my weekly hour of gaming just getting ganked repeatedly by some twelve-year-old who plays all day. I have increasingly bailed on UGC networks too: there is far more quantity than quality, and I would rather pay for a small amount of quality than have to sift through the endless quantity.

If the pre-teen players with preternaturally accurate aim are now actually bots, and the AI-enhanced influencers are now actually full-on AIs, those developments are hardly likely to draw me back to the platforms. Any application of AI tech that is simply arbitrage on the cost of humans without factoring in other aspects has a short shelf life.

Taken to its extreme, this trend leads to humans abdicating the web entirely, leaving the field to AIs creating content that will be ranked by other AIs, and with yet more AIs rewarding the next generation of paperclip-maximising content-producing AIs. A bleak future indeed.

So What's Next?

At this point, with the backing of major players like Microsoft and Apple, it seems that AI-enabled products are somewhat inevitable. What we can hope for is that, after some initial over-excitement, we see fewer chatbot psychologists, and more use cases that are concrete, practical, and helpful — to humans.


🖼️ Photos by Andrea De Santis and Charles Deluvio on Unsplash


  1. Also known as RTFM queries, which stands for Read The, ahem, Fine Manual. (We didn't always say "Fine", unless a customer or a manager was listening.) 

  2. We had to share rooms on business trips, leaving me with a wealth of stories, none of which I intend to recount in writing. 

Disappearing In The Hills

I have a bunch of stuff I need to talk to people in the US about, but I had forgotten that today was MLK Day, so everything will have to wait one more day. I had an early call with a startup in the Middle East I am consulting with, but then found myself at 10am with an empty schedule for the day — so why not hop on the bike and disappear up into the hills until lunch time?

I had been hoping to climb out of the fog, but it was persistent until I got quite high up — and then I found that it was grey and overcast above the fog anyway. The views were very atmospheric, though, including this Brigadoon-like situation with a village appearing and disappearing amid the shifting billows.

Any day on the bike is a good day, though. I am in a bit of a holding pattern, waiting to get started on a number of projects, so a long(ish — 60km) ride is great for keeping myself from fretting.

I was happy that my legs were cooperating, too, since I was also out on the bike on Sunday, for a group ride on the other side of the Po. This is a part of the world I have never visited, even though it’s only a half-hour drive from me; I just pass through on the train or motorway en route to Milan. It’s a different vibe over there, but they have some fun trails, and it was a good day out — chilly, overcast, and muddy in spots, but at least the forecast rain stayed off.

I especially liked the souvenir for the day — way better than yet another T-shirt!

There are no photos up on the event page yet, and being in a group, I didn’t want to stop to take pictures — but at least I have this wine bottle to remind me…

Pilgrimage

Today’s ride was along part of the old pilgrim route from England through France to Rome, the Via Francigena. There is still a ferry crossing for the use of pilgrims at the Guado di Sigerico — and yes, Italian speakers will be jumping up and down at this point because "guado" means ford, but there is no ford there in modern times, just the ferry. Sigeric himself was an Archbishop of Canterbury who made the pilgrimage down to Rome for his investiture, and someone in his party documented the return leg.

More than a thousand years after Sigeric, it’s still quite common to meet pilgrims walking or cycling the route; it’s also part of the Europe-wide Eurovelo cycling network, as route EV5, appropriately enough named Via Romea Francigena. Sensible pilgrims however avoid setting out at the fag-end of the year, with only single-digit temperatures and overcast skies to look forward to, so today I had the road to myself.

There are still a number of chapels along the route around here, although I suspect they were more for the benefit of farm-workers than of pilgrims; the latter tend to stop in towns and cities, just as they always did. Since the advent of mechanisation in farming, most of these field-side chapels are in poor repair. There are no longer armies of farm workers to gather for celebrations in the fields, just the odd tractor — and not even any of those at this time of year.

This particular chapel looks structurally sound from a distance, but as you approach the door, you realise that there is quite a lot of light making its way inside — more than those tiny grated windows could explain.

Sure enough, the roof fell in at some point. On the plus side, this means we can see the remains of the interior frescos a little better.

This is not the only lonely chapel I found today. Not needing the ferry across the Po, I left the pilgrim route at the mouth of the Tidone river and joined the Sentiero del Tidone, which runs along the banks of the eponymous river all the way from its source up in the Apennines down to its confluence into the Po, near the Guado di Sigerico.

This ruined farm-house had a little chapel beside it that someone had gone to some effort to clean up and revive.

This site is a little closer to a main road and still-occupied farms and hamlets, which maybe gives it just enough passing traffic to hang on as a just-about going concern? Or maybe the maintenance of this half-ruined chapel is one person’s project, giving the old building one last lease on life.

Of course on a late-December day these ruins could not help but look sad and brooding. They did not make the same melancholy impression on me when I last came through here in September, with a background that was green, growing, and sunlit, rather than muddy ploughed fields under lowering clouds.

Anyway, I got my miles in, and some thoughts out of my head, so I’m happy with that. Any ride is a good ride!

Information Push

My Twitter timeline, like most people's, is awash with people trying out the latest bot-pretending-to-be-human thing, ChatGPT. Everyone is getting worked up about what it can and cannot do, or whether the way it does it (speed-reading the whole of the Internet) exposes it to copyright claims, inevitable bias, or simply polluting the source that it drinks from so that its descendants will no longer be able to be trained from a pool of guaranteed human-generated content, unpolluted by bot-created effluent.

I have a different question, namely: why?

We do not currently suffer from any lack of low-quality, plausible-seeming information on the Internet; quite the opposite. The problem we have right now is one of too much information, leading to information overload and indigestion. On social media, it has not been possible for years to be a completist (reading every post) or to use a purely linear timeline. We require systems to surface information that is particularly interesting or relevant, whether on an automated algorithmic basis, or by manual curation of lists/circles/spaces/instances.

As is inevitably the case in this fallen world of ours, the solution to one problem inevitably begets new problems, and so it is in this case. Algorithmic personalisation and relevance filtering, whether of a social media timeline or the results of a query, soon leads to the question of: relevant to whom?

Back in the early days of Facebook, if you "liked" the page for your favourite band, you would expect to see their posts in your timeline alerting you of their tour dates or album release. Then Facebook realised that they could charge money for that visibility, so the posts by the band that you had liked would no longer show up in your timeline unless the band paid for them to do so.

In the early days of Google, it was possible to type a query into the search box and get a good result. Then people started gaming the system, triggering an arms race that laid waste to ever greater swathes of the internet as collateral damage.

Keyword stuffing meant that metadata in headers became worthless for cataloguing. Auto-complete will helpfully suggest all sorts of things. Famously, recipes now have to start with long personal essays to be marked as relevant by the all-powerful algorithm. Automated search results have become so bad that people append "reddit" to their queries to take advantage of human curation.

This development takes us full circle to the early rivalry between automated search engines like Google and human-curated catalogues like Yahoo's. As the scale of the Internet exploded, human curation could not keep up — but now, it’s the quality problem that is outpacing algorithms' ability to keep up. People no longer write for human audiences, but for robotic ones, in the hope of rising to the surface long enough to take advantage of the fifteen minutes of fame that Andy Warhol promised them.

And the best we can think of is to feed the output of all of this striving back into itself.

We are already losing access to information. We are less and less able to control our information intake, as the combination of adtech and opaque relevance algorithms pushes information to us which others have determined that we should consume. In the other direction, our ability to pull or query information we actually desire is restricted or missing entirely. It is all too easy for the controllers of these systems to enable soft censorship, not by deleting information, but simply by making it unsearchable and therefore unfindable. Harbingers of this approach might be Tumblr's on-again, off-again policy on allowing nudity on that platform, or Huawei phones deleting pictures of protests without the nominal owners of those devices getting any say in the matter.

How do we get out of this mess?

While some are fighting back, like Stack Overflow banning the use of GPT for answers, I am already seeing proposals just to give in and embrace the flood of rubbish information. Instead of trying to prevent students from using ChatGPT to write their homework, the thinking is that we should encourage them to submit their prompts together with the model's output and their own edits and curation of that raw output. Instead of trying to make an Internet that is searchable, we should abandon search entirely and rely on ChatGPT and its ilk to synthesise information for us.

I hate all of these ideas with a passion. I want to go in exactly the opposite direction. I want search boxes to include an "I know what I'm doing" mode, with Boolean logic and explicit quote operators that actually work. I do find an algorithmic timeline useful, but I would like to have a (paid) pro mode without trends or ads. And as for homework, simply get the students to talk through their understanding of a topic. When I was in school, the only written tests that required me to write pages of prose were composition exercises; tests of subjects like history involved a verbal examination, in which the teacher would ask me a question and I would be expected to expound on the topic. This approach will remain proof against technological cheating for some while yet.
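
Just to be clear about what I mean by operators that actually work, here is a toy sketch of that strict mode: quoted phrases must appear verbatim, bare words are ANDed, and a leading minus excludes — no synonym expansion, no relevance guessing. Entirely hypothetical, of course.

    import re

    def strict_search(query: str, documents: list[str]) -> list[str]:
        """Naive "I know what I'm doing" search over a list of documents."""
        phrases = re.findall(r'"([^"]+)"', query)
        words = re.sub(r'"[^"]+"', " ", query).split()
        required = [w for w in words if not w.startswith("-")]
        excluded = [w[1:] for w in words if w.startswith("-")]
        hits = []
        for doc in documents:
            text = doc.lower()
            if (all(p.lower() in text for p in phrases)
                    and all(w.lower() in text for w in required)
                    and not any(w.lower() in text for w in excluded)):
                hits.append(doc)
        return hits

    # strict_search('"boolean logic" search -ads', corpus)  # hypothetical corpus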

And once again: why are we building these systems, exactly? People appear to find it amusing to chat to them — but people are very easy to fool. ELIZA could do it without burning millions of dollars of GPU time. There is far more good, valuable text out there already, generated by actual interesting human beings, than I can manage to read. I cannot fathom how anyone can think it a good idea to churn out a whole lot more text that is mediocre and often incorrect — especially because, once again, there is already far too much of that being generated by humans. Automating and accelerating the production of even more textual pablum will not improve life for anyone.

The potential for technological improvement over time is no defence, either. So what if in GPT-4 (or -5 or -6) the text gets somewhat less mediocre and is wrong (or racist) a bit less often? Then what? In what way does the creation and development of GPT improve the lot of humanity? At least Facebook and Google could claim a high ideal (even if neither of them lived up to those ideals, or engaged seriously with their real-world consequences). The entities behind GPT appear to be just as mindless as their creation.


🖼️ Photo by Owen Beard on Unsplash