The Hard Work Of Success

There's a pattern to successful outcomes of IT projects — and it's not about who works the longest hours, or has the most robust infrastructure, or the most fashionable programming language.

Here is a recent specific example, which came to my attention because it mentions my current employer — although the trend is a general one: How Nationwide taps Kafka, MongoDB to guide financial decisions. And here is the key part that I am talking about:

A lot of organizations try and go for a big data approach — let’s throw everything into a data lake and try and capture everything and then work out what we’re going to do with it. It’s interesting, but actually it doesn’t solve the problem. And therefore, the approach we’ve taken is to start at the other end. Let’s look at the business problem that we’re trying to solve, rather than trying to solve the mess of data that organizations are typically trying to untangle.

It is indeed a common pitfall in IT to start with the technology first. You hear about some cool new thing, and you want to try it out in practice, so you go casting around for an excuse to do that. You'll notice, however, that very few of these decisions lead to the sort of success stories that get profiled in the media. The more probable outcome is that the project either dies a quiet death in a corner when it turns out that the shiny new tech wasn't quite ready for prime time, or if the business stakeholders are important/loud enough, it gets a vastly expensive emergency rewrite at the 11th hour into something more traditional.

Meanwhile all the success stories start with a concrete business requirement. Somebody needs to get something done, so they work out what their desired outcome is, and how they will know when it has been achieved. Only then do you start coding, or procuring services, or whatever it is you were planning to do.

This is not to say that it's not worth experimenting with the new tech. It's just that "playing around with new toys" is its own thing, a proof of concept or whatever. You absolutely should be running these sorts of investigations, so that when the business need arises, you will have enough basic familiarity with the various possibilities to pick one that has a decent chance of working out for you. To take the specific example of what Nationwide was doing, data lakes are indeed enormously useful things, and once you have one in place, new ways of using it will almost certainly emerge — but your first use case, the one that justifies starting the project at all, should be able to stand on its own, without hand-waving or references to a nebulous future.

This is also why it's probably not a good idea to tie yourself too closely to a specific technology, in business let alone in education. You don't know what the requirements are going to look like in the future, so being overly specific now is to leave gratuitous hostages to fortune. Instead, focus on a requirement you have right now.

Nationwide is facing competition from fintechs and other non-traditional players in banking, and one of the axes of competition is giving customers better insight into their spending. The use case Nationwide has picked is to help users achieve their financial goals:

We’re looking at how we create insight for our members that we can then expose to them through the app. So you’ll see this through some of the challenger banks that will show you how you’ve spent your money. Well, that’s interesting — we can do that today. But it isn’t quite as interesting as a bit of insight that says, "If you actually want to hit your savings target for the holiday that you want next year, then perhaps you could do better if you didn’t spend it on these things."

Once this capability is in place, other use cases will no doubt emerge.

But what is the education equivalent of this thinking? Saying "let's teach kids Python in school!" is not useful. Python is in vogue right now, but kids starting elementary school this September will emerge from university fifteen or twenty years from now. I am willing to place quite a large bet that, while Python will certainly still be around, something else, maybe even several somethings, will have eclipsed its current importance.

We should not focus narrowly on teaching coding, let alone specific programming languages — not least because the curriculum is already very packed. What are we dropping to make room for Python?

And another question: how are we actually going to deliver the instruction? In theory, my high school curriculum included Basic (no, not Visual; just plain Basic). In practice, it was taught by the maths and physics teacher, and those subjects (rightly!) took precedence. I think we got maybe half a dozen hours a year of Basic instruction, and it may well have been less; it's been a while since high school.

The current flare-up of the conversation about teaching IT skills at school has this in common with failed projects in business: it's been dreamed up in isolation by technologists, with no reference to anyone in actual education, whether teachers, students, or parents. None of these groups operate at Silicon Valley pace, but that's fine; this is not a problem that can be solved with a quick hackathon or a quarter-end sprint. Very few worthwhile problems can be, or they would not remain unsolved.

Don't confuse today's needs with universal requirements, and don't think that the tools you have on the shelf today are the only ones anyone will ever need. Take the time to think through what the actual requirement is, and make sure to include the people doing the work today in your planning.


🖼️ Photos by Alvaro Reyes and Ivan Aleksic on Unsplash

Easy Like A Sunday Morning

This Sunday morning was not a time for epic rides, not least because it was the day after a good friend's wedding… I took a 90-minute loop from my front door, up into the foothills and back down. This landscape has not changed much since Roman times, and probably before; people were tilling the land and making wine around here before the Romans showed up, at least back into the Bronze Age.

This is the gate of Rivalta, on the bank of the river Trebbia.

If the name of the river Trebbia is ringing a bell, you may be thinking of your classical history. This was the site of a major battle of the Second Punic War, in which Hannibal defeated the Romans. The battle is commemorated today by a statue of one of Hannibal's war elephants.

People never believe me when I tell them of the wildlife I encounter on my rides: rabbits, deer… elephants?

Living in the Past

In Piacenza, near where I used to live, there is an old building that looks like an abandoned garage or something along those lines. The area is right on the border between the upper town, with the palaces of the various noble families, and the lower town, which was historically more working-class.

This is a historical photo of the Muntà di Rat, courtesy of the local newspaper, Libertà. As you can see, "lower" is not a metaphor! I used to live on the street at the bottom, at a right angle to the focus of this photograph. While the area no longer floods, everything else is almost exactly the same.

Here is a more recent photo, this one taken by me. The Muntà di Rat is at the end of the street, while the mysterious old garage-like structure is on the left, where the lady is walking. I had always idly wondered what it was, and how come it was left abandoned in the middle of town — but then again, it was in good company. Between generational turnover and changes in the real-estate market, there were several abandoned buildings in the area. Some were parts of well-known, soap-opera-style tales of multigenerational family intrigue and inheritance disputes, while others were more prosaic stories of light industry moving out of a residential area.

Now a story in the Libertà finally resolves the mystery: it's an old depot for commercial products imported from Italy's colonies in Libya, Ethiopia, and Somalia.

Reading something like that is a punch in the gut. Italy had such a relatively minor history of colonisation in the 20th century that it is often entirely forgotten. In fact, many Italians consider themselves to have been "good guys", not like those other colonisers, despite some equally shameful actions:

It’s estimated that during the 60 years of Italian colonialism, almost 1 million people died due to war, deportations, and internment. In the 1920s, when the Italian Army started a military campaign to recapture the Libyan territories controlled by rebels, they resorted to widespread summary executions, torture, and mass incarceration. To crush the Libyan resistance, in 1930 the Italian general Rodolfo Graziani, nicknamed "the butcher of Fezzan," put the civilian population in concentration camps. In Ethiopia, the Fascists deployed chemical attacks. When Ethiopian rebels tried to kill him, in 1937, Graziani had 19,000 Ethiopian civilians executed in retaliation.

Italians understandably prefer to refer back to the glories of Rome, or perhaps the Renaissance. Anything from the unification of Italy (which only occurred in 1871!) onwards is brushed over in school and rarely referred to afterwards. The Fascist period is known, of course, and outside of some unfortunate fringes it remains a taboo subject.

But colonies? That's something other countries did.

A small example: there is a bar in Milan, unironically called the Colonial. It's meant to evoke a certain type of South-East Asian decor, perhaps along the lines of Raffles Hotel in Singapore — and evidently nobody ever thought twice about that name.

All of this forgetting was possible because Italy had never been a nation of immigrants, but rather of emigrants. The Scalabrinians are a religious order that was founded right here in Piacenza at around the same time as Italy was beginning its colonial adventure. The order's mission was specifically to support Italians emigrating to North and South America. Other emigrants went to France or to the coal-fields in Belgium, and to this day the summers see the roads in the hills fill back up with cars with French and Belgian number plates. If anyone in Italy thought of immigration in the mid- to late 20th century, it was in the context of people from Southern Italy moving to work in the factories of the North.

In the 90s, people started coming to Italy from Africa — and many of them settled down, and started having families. Now their children, the second generation, are here. They were born here, they went to school here, they speak correct and unaccented Italian, and they demand to be acknowledged.

It's not possible to ignore these proud new Italians, when we are surrounded by reminders that they did not manifest out of thin air, and that instead there is a long intertwined history. Now Italy finds itself wrestling with questions of identity — what does it mean to be Italian — which were already difficult in such a diverse1 and politically young country, and just got a whole lot more complicated.

That old warehouse stands (only just) as a reminder of the places and people that were broken so that cheap goods could be shipped to a depot in the centre of town. The past is not dead; it's barely even past yet.


  1. If you think "Italian" is a homogeneous identity, let me introduce you to the Italian word campanilismo, which means the belief that everything within the sound of the bells of the church you attend is the best. Italians will have hours-long arguments about the layers of historical insults in the mere existence of a different recipe for a favourite food, maybe twenty kilometres away. Italians only feel truly Italian among foreigners; at home, they are citizens of their own town — unless the national football team is playing, of course. 

Lessons in Hiring

Some of the most insightful and succinct commentary on the whole Antonio García Martínez debacle comes from an ungulate with a Classic Mac for a head:

the Macalope believes Apple should not have hired García Martínez only to fire him. He believes it never should have hired him in the first place.

I'm not going to go over all of the many (many, many) red flags about this person's opinions that should have at the very least triggered some additional scrutiny before hiring him. The reaction from Apple employees was entirely predictable and correct. Even if the misogynistic opinions expressed in his public writing were exaggerated for effect, as he now claims, there would always be a question mark around his interactions with female employees or those from minority backgrounds. At the very least, that would be enormously disruptive to the organisation.

Leaving that aspect aside for a moment: even if this had been someone with the most milquetoast opinions possible (and no NYT bestselling book in which to trumpet them), it's still not great that Apple was looking for someone with his specific professional experience — honed at Facebook.

This particular hire blew up in Apple's face — but it's extremely concerning for Apple users that they were actively recruiting for this type of experience in the first place.

I'll lay my cards on the table: I dislike the idea of search ads as a category, especially in the App Store. We can argue the merits of allowing apps to "jump the queue" of results for generic searches, but as it is today, you can buy yourself into a position ahead of your competitor even for direct searches on that competitor app's name. Where is the value to users in that?

Display ads in Apple News or Stocks, which are the other two Apple properties discussed, might be acceptable — as long as they are not too intrusive. I don't have as much of a philosophical issue as some do with Apple using first-party tracking data within iOS, precisely because those data are not available to other parties or to other platforms. It's easy to opt out of Apple's tracking, simply by not using those apps, and ads from there won't follow me around the rest of the web.

The lesson I hope that Apple takes away from this whole situation is not "don't hire people with big public profiles" but "users really hate sleazy adtech". I would hate for Apple to go the way of YouTube, which is becoming unusable due to ad load. I understand that Apple is trying to boost its Services revenue, and App Store search ads are a way to do that, but if it makes my user experience worse, that's a problem. Apple products command a premium in large part because of how nice they are for users; anything that undermines that niceness weakens the rationale for staying in the Apple camp.

Midweek Ride Through The Shire

I took a mental health day off and rode a (metric) century up into the hills. Unfortunately the more spectacular scenery was a) tiring to ride, so I didn't want to stop, and b) on a main road, so there wasn't always a good place to stop even if I had wanted to. These shots are from the earlier, flatter part of the ride.

Riding up the bank of the river Nure (on the left behind the trees)

Crossing the old railway bridge at Ponte dell'Olio

Old lime kilns at Ponte dell'Olio

Interoperable Friendship

Whenever the gravitational pull of social networks comes up, there is a tendency to offer a quick fix by "just" letting them integrate with each other, or offer export/import capability.

Cory Doctorow tells an emotional tale in Wired about his grandmother's difficult decision to leave all of her family and friends behind in the USSR, and concludes with this impassioned appeal:

Network effects are why my grandmother's family stayed behind in the USSR. Low switching costs are why I was able to roam freely around the world, moving to the places where it seemed like I could thrive.

Network effects are a big deal, but it's switching costs that really matter. Facebook will tell you that it wants to keep bad guys out – not keep users in. Funnily enough, that's the same thing East Germany's politburo claimed about the Berlin Wall: it was there to keep the teeming hordes of the west out of the socialist worker's paradise, not to lock in the people of East Germany.

Mr Zuckerberg, tear down that wall.

As appealing as that vision is, here is why interoperability won't and can't work.

Let's take our good friends Alice and Bob, from every cryptography example ever. Alice and Bob are friends on one social network, let's call it Facester. They chat, they share photos, they enter a bunch of valuable personal information. So far so good; information about each user is stored in a database, and it's pretty trivial to export user information, chat logs, and photographs from the system.

Here's the problem: the account data is not the only thing that is valuable. You also want the relationships between users. If Alice wants to join a new network, let's call it Twitbook, being able to prepopulate it with her name and profile picture is the least of her issues. She is now faced with an empty Twitbook feed, because she isn't friends with anyone there yet.1

Alice and Bob's relationship on Facester is stored in a data structure called a graph; each link between nodes in the graph is called an edge. While this structure can be exported in purely technical terms, this is where things start getting complicated.

What if Alice and Bob's sworn enemy, Eve, registers on Twitbook with Bob's name? Or maybe there's simply more than one Bob in the world. How can Twitbook meaningfully import that relationship from Facester?
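The mechanics are easy to sketch. Everything below — the network names, the ID schemes, the matching logic — is invented for illustration, but even a toy importer runs straight into the ambiguity: the exported edges reference Facester-internal IDs, and the only thing the destination network can match on is the name.

```python
# Hypothetical sketch: a Facester export is trivial to produce,
# but its edges only reference Facester-internal user IDs.
facester_export = {
    "users": {
        "fs-1001": {"name": "Alice"},
        "fs-1002": {"name": "Bob"},
    },
    "edges": [("fs-1001", "fs-1002")],  # the Alice <-> Bob friendship
}

# Twitbook has its own, unrelated IDs -- and two Bobs, one of
# whom may well be Eve squatting on the name.
twitbook_users = {
    "tb-77": {"name": "Alice"},
    "tb-88": {"name": "Bob"},
    "tb-99": {"name": "Bob"},
}

def import_edges(export, local_users):
    """Naive name-based matching: return (imported, ambiguous) edges."""
    by_name = {}
    for uid, info in local_users.items():
        by_name.setdefault(info["name"], []).append(uid)
    imported, ambiguous = [], []
    for a, b in export["edges"]:
        name_a = export["users"][a]["name"]
        name_b = export["users"][b]["name"]
        matches_a = by_name.get(name_a, [])
        matches_b = by_name.get(name_b, [])
        if len(matches_a) == 1 and len(matches_b) == 1:
            imported.append((matches_a[0], matches_b[0]))
        else:
            # Zero or multiple candidates: no safe way to pick one.
            ambiguous.append((name_a, name_b))
    return imported, ambiguous

imported, ambiguous = import_edges(facester_export, twitbook_users)
# "Bob" matches two Twitbook accounts, so nothing can be imported:
# imported == [], ambiguous == [("Alice", "Bob")]
```

Any real importer would need something stronger than names to match on — which is exactly where the terrible policy options below come in.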

There are various policies that you could come up with, ranging from terrible to more terrible.

If both Alice and Bob go to a certain amount of effort, entering their Facester profile info on Twitbook and vice versa, the export and reimport will be able to reconcile the data that way — but that's a lot of work and potential for error. What happens if even one of your friends hasn't done this, or gets it wrong? Should the import stop or continue? And does the destination network get to keep that dangling edge? Here in what we still call the real world, Facebook already creates "ghost profiles" for people who do not use its services, but whose existence they have inferred from their surveillance-driven adtech. These user records have value to FB because they can still be used for targeting and can have ads sold against them.

Alice and Bob's common friend Charlie has chosen not to register for Twitbook because they dislike that service's privacy policy. However, if either Alice or Bob imports their data from Facester into Twitbook, Charlie could still end up with one of these ghost profiles against their wishes. Contact data are not the property of the person who holds them. Back to the real world again, this is the problem that people have with the likes of Signal or Clubhouse, that prompt users to import their whole address book and then spam all of those people. This functionality is not just irritating, it's also actively dangerous as a vector for abuse.

Another terrible policy is to have some kind of global unique identifier for users, whether this means mandating the use of government-assigned real names, or some global register of user IDs. Real names are problematic for all sorts of reasons, whether it's for people who prefer to use pseudonyms or nicknames, or people who change their name legitimately. Facebook got into all sorts of trouble with their own attempt at a real-name policy, and that was just for one network; you could still be pseudonymous on Twitter, precisely because the two networks are not linked.

People do want to partition off different parts of their identity. Maybe on Facester Alice presents as a buttoned-up suburban housewife, but on Twitbook she lets her hair down and focuses on her death metal fandom. She would prefer not to have to discuss some of the imagery and lyrics that go with that music at the PTA, so she doesn't use the same name and keeps these two aspects of her personality on separate networks. Full interoperability between Facester and Twitbook would collapse these different identities, whatever Alice's feelings on the matter.

Some are invoking the right to data portability that is enshrined in GDPR, but this legislation has the same problem with definitions: whose data are we talking about, exactly?

The GDPR states (emphasis mine):

The right to data portability allows individuals to obtain and reuse their personal data for their own purposes across different services.

Applying this requirement to social networks becomes complicated, though, because Alice's "personal data" also encompasses data about her relationships with Bob and Charlie. Who exactly does that data belong to? Who can give consent to its processing?

GDPR does not really address the question of how or whether Alice should be allowed to obtain and reuse data about Bob and Charlie; it focuses only on the responsibility of Facester and Twitbook as data controllers in this scenario. Here are its suggestions about third parties’ data:

What happens if the personal data includes information about others?

If the requested information includes information about others (eg third party data) you need to consider whether transmitting that data would adversely affect the rights and freedoms of those third parties.

Generally speaking, providing third party data to the individual making the portability request should not be a problem, assuming that the requestor provided this data to you within their information in the first place. However, you should always consider whether there will be an adverse effect on the rights and freedoms of third parties, in particular when you are transmitting data directly to another controller.

If the requested data has been provided to you by multiple data subjects (eg a joint bank account) you need to be satisfied that all parties agree to the portability request. This means that you may have to seek agreement from all the parties involved.

However, all of this is pretty vague and does not impose any actual requirements. People have tens if not hundreds of connections within social networks; it is not realistic to expect every one of them to sign off on each request, in the way that might work for the GDPR's example of a joint bank account, which usually involves only two people. If this regulation were to become the model for regulating the import/export functionality of social networks, I think it's a safe bet that preemptive consent would be buried somewhere in the terms and conditions, and that would be that.

Tearing down the walls between social networks would do more harm than good. It's true that social networks rely on the gravity of the data they have about users and their connections to build their power, but even if the goal is tearing down that power, interoperability is not the way to do it.


UPDATE: Thanks to Cory Doctorow for pointing me at this EFF white paper after I tagged him on Twitter. As you might expect, it goes into a lot more detail about how interoperability should work than either a short Wired article or this blog post do. However, I do not feel it covers the specific point about the sort of explicit consent that is required between users before sharing each others' data with the social networks, and the sorts of information leaks and context collapse that such sharing engenders.


🖼️ Photos by NordWood Themes, Alex Iby, and Scott Graham on Unsplash


  1. Or she doesn't follow anyone, or whatever the construct is. Let's assume for the sake of this argument that the relationships are fungible across different social networks — which is of course not the case in the real world: my LinkedIn connections are not the same people I follow on Twitter. 

The Changing Value Of Mistakes

The simplest possible definition of experience would equate it to mistakes. In other words, experience means having made many mistakes — and with any luck, learned from them.

This transubstantiation of mistakes into experience does rely on one hidden assumption, though, which is that the environment does not change too much. Experience is only valid as long as the environment in which the mistakes are made remains fairly similar to the current one. If the environment changes enough, the experience learned from those mistakes becomes obsolete, and new mistakes need to be made in the changed conditions in order to build up experience that is valid in that situation.

This reflection is important because there is a cultural misunderstanding of Ops and SRE that I see over and over — twice just this morning, hence this post.

I Come Not To Bury Ops, But To Praise It

Criticising Ops people for timidity or lack of courage because they are unwilling to introduce change into the environment they are responsible for is to miss that they have a built-in cultural bias towards stability. Their role is as advocates against risk — and change is inherently risky. Ops people made mistakes as juniors, preferably but not always in test environments, and would rather not throw out all that hard-earned experience to start making mistakes all over again. The ultimate Ops nightmare is to do something that turns your employer into front-page news.1

If you’re selling or marketing a product that requires Ops buy-in, you need to approach that audience with an understanding of their mindset. Get Ops on-side by de-risking your proposal, which includes helping them to understand it to a point where they are comfortable with it.

And don’t expect them to be proactive on your behalf; the best you can expect is permission and maybe an introduction. On the other hand, they will be extremely credible champions after your proposal goes into production — assuming, of course, that it does what you claim it does!

Let's break down how that process plays out.

Moving On From The Way Things Were Always Done

A stable, mature way of doing things is widely accepted and deployed. The team in charge of it understand it intimately — both how it works, and crucially, how it fails. Understanding the failure modes of a system is key to diagnosing inevitable failures quickly, after all, as well as to mitigating their impact.

A new alternative emerges that may be better, but is not proven yet. The experts in the existing system scoff at the limitations of the new system, and refuse to adopt it until forced to.

On the one hand, this is a healthy mechanism. It’s not a good idea to go undermining something that’s working just to jump on the latest bandwagon. When you already have something in place that does what you need it to do, anyone suggesting changes has got to promise big benefits, and ideally bring some proof too. The Ops team are not (just) being curmudgeonly stick-in-the-muds; you are asking them to devalue a lot of their hard-won experience and expose themselves to mistakes while they learn the new system. You have to bring a lot of value, and prove your promises too, in order to make that trade-off worth their while.

The problem is when this healthy immune response is taken too far, and the resistance continues even once the new approach has proven itself. Excessive resistance to change leads inevitably downwards into obsolescence and stasis. There's an old joke in IT circles that the system is perfect, if it weren't for all those pesky users. After all, every failure involves user action, so it follows logically that if only there were no users, there would be no failures — right? Unfortunately a system without users is also not particularly useful.

Resistance to change can outlast its usefulness precisely because the Ops team's experience is the product of mistakes made over time. With each mistake that we make, we learn to avoid that particular mistake in the future. The experience that we gain this way is valuable precisely because it means that we are not constantly making mistakes – or at least, not the same obvious ones.

Learning By Making Mistakes

When I was still a wet-behind-the-ears sysadmin, I took the case off a running server to check something. I was used to PC-class hardware, where this sort of thing is not an issue. This time however, the whole machine shut down very abruptly, and the senior admin was not happy to have to spend a chunk of his time recovering the various databases that had been running on that machine. On the plus side, I never did it again…

We look for experts to run critical systems precisely because they have made mistakes elsewhere, earlier in their careers, and know to avoid them now. If we take an expert in one system and sit them down in front of a different system, however, they will have to make those early mistakes all over again before they can build their expertise back up.

Change devalues expertise because it creates scope for new mistakes that have not been experienced before, and which people have not yet learned to avoid.

Run The Book

Ops teams build runbooks for known situations. These are distillations of the team's experience, so that if a particular situation occurs, whoever is there when it all goes down does not have to start their diagnosis from first principles. They also don't need to call up the one lone expert on that particular system or component. Instead, they can rely on the runbook.

Historically, a runbook would have been a literal book: a big binder with printed instructions for all sorts of situations. These days, those instructions are probably automated scripts, but the idea is the same: the runbook is based on the experience of the team and their understanding of the system, and if the system changes enough, the runbook will have to be thrown out and re-written from scratch.
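The idea is easy to sketch in code. Everything here — the symptoms, the thresholds, the responses — is invented for illustration; a real runbook would shell out to monitoring and service-control tooling. The point is the shape: the team's diagnostic experience, encoded as an ordered list of known failure signatures, so the person on shift matches what they see rather than guessing.

```python
# Hypothetical sketch of a runbook as data plus a dispatcher.
# Each entry pairs a symptom check with the team's distilled response,
# in the order the team would try them; the last entry is the fallback.
RUNBOOK = [
    (lambda m: m["disk_free_pct"] < 5,
     "clear old logs, then re-check disk before anything else"),
    (lambda m: m["queue_depth"] > 10_000,
     "restart the worker pool; a stuck consumer is the usual culprit"),
    (lambda m: True,
     "no known signature: page the on-call engineer"),
]

def consult_runbook(metrics):
    """Return the first response whose symptom check matches."""
    for check, response in RUNBOOK:
        if check(metrics):
            return response

# Whoever is on shift feeds in what they observe, not what they guess:
print(consult_runbook({"disk_free_pct": 3, "queue_depth": 50}))
# (the full-disk signature matches first)
```

And the fragility is visible in the same sketch: change the system enough — say, move the queue to a managed service with different failure modes — and every entry in that list has to be re-derived from new mistakes.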

So how to square this circle and enable adoption of new approaches in a safe way that does not compromise the stability of running systems?

Make Small Mistakes

The best approach these days centres on agility, favouring many small projects over single big-bang multi-year monsters. This agile approach enables mistakes to be made – and learned from – on a small scale, with limited consequences. The idea is to limit the blast radius of those mistakes, building experience before moving up to the big business-critical systems that absolutely cannot fail.

New technologies and processes these days embrace this agility, enabling that staged adoption with easy self-serve evaluations, small starting commitments, and consumption-based models. This way, people can try out the new proposed approaches, understand what benefits they offer, and make their own decisions about when to make a more wholesale move to a new system.

Small Mistakes Enable Big Changes

The positive consequences of this piecemeal approach are not just limited to the intrinsic benefits of the new system – faster, easier, cheaper, or some combination of the three. There are also indirect benefits: by working with cutting-edge systems instead of old legacy technology, it will also become easier to recruit people who are eager to develop their own careers. Old systems are harder to make new mistakes in, so it's also harder to build experience. Lots of experts in mature technologies have already maxed out their XP and are camping the top rungs of the career ladder, so there's not much scope for growth there — but large-scale change resets the game.

On top of that, technological agility leads to organisational agility. These days, processes are implemented in software, and the speed with which software can move is a very large component in the delivery of new offerings. Any increase in the agility of IT delivery is directly connected to an increase in business agility – launching new offerings faster, expanding more quickly into new markets, responding to changing conditions.

Those business benefits also change the technological calculus: when all the mainframe did was billing, that was important, but doing it a little bit better than the next firm was not a game-changer. When software is literally running the entire business, even a small percentage increase in speed and agility there maps to major business-level differentiation.

Experience is learning from mistakes, but if the environment changes, new mistakes have to be made in order to learn. Agile processes and systems help minimise the impact of those changes, delivering the benefits of constant evolution.

Stasis on the technology side leads to stasis in the organisation. Don’t let natural caution turn into resistance to change for its own sake.


🖼️ Photos by Daniela Holzer, Varvara Grabova, Sear Greyson and John Cameron on Unsplash


  1. On the other hand, blaming such front-page news on "human error" is also a cop-out. Major failures are not the fault of an individual operator who fat-fingered one command: they are the ultimate outcome of strategic failures in process and system design that enabled that one mistake to have such far-reaching consequences. 

Omnichannel

I had been thinking vaguely about starting a newsletter, but never actually got around to doing anything about it — until Twitter bought Revue and integrated it right into their product.

I wrote about why exactly I felt the need to add this new channel in the first issue of the newsletter — so why don't you head over there and sign up?

And of course you should also sign up for Roll for Enterprise, the weekly podcast about enterprise IT (and more or less adjacent topics) that I record with some co-conspirators.


🖼️ Photo by Jon Tyson on Unsplash

Serendipity Considered Harmful

The internet is all about two things: making time and distance irrelevant, and making information freely available. Except right now, people are trying to reverse both of those trends, and I hate it.

Videogames used to deliver an isolated world that you could build or explore on your own. Multiplayer modes were only for certain categories of games, mostly those that inherited from arcades rather than from PC games. Then came MMORPGs and shared-world experiences, and now many top-shelf games don't even have a single-player mode at all. Instead, you play online, with groups of friends if you can arrange it, or with whoever’s there if not.

Clubhouse is an example of the same trend: you have to be there in the moment, with whoever is there when you are. If you miss a great conversation or an appearance by someone interesting, well, you missed it.

In case it wasn't clear, I don't like this model. I like my media to be available when I am. This may be because we didn't have a TV when I was growing up, so I never developed the reflex of arranging my day around watching a show at a certain time. My medium of choice is a book, and one of the things I love about books is that I can read a book that was published this year or two centuries ago with equal ease.

Computers seemed to be going my way — until they weren't.

The shift from individual experiences to ones that are shared in real-time is driven by changing constraints. A single-player game could be delivered on a physical disk before we had the bandwidth to download it, let alone stream it live — so it worked well in a pre-broadband era. Even then, there was a desire to play together. My first experience of this coming future was in my first year at university, where our fairly spartan rooms in the halls of residence nevertheless came with the unbelievable luxury of a 10 Mbps Ethernet port. As soon as we all got our PCs set up, epic deathmatches of Quake were the order of the day — not to mention a certain amount of media sharing. A couple of years later when I was living in a student house in town, we mounted a daring mission and strung Ethernet cable along the gutter to another student house a few doors down so that we could connect the two networks for the purpose of shooting each other in the face.

All of this is to say that I get the appeal of multiplayer games — but not to the exclusion of single-player ones. I stopped gaming partly because I started having children, but also because there were very few gaming experiences that attracted me any more. The combination is a familiar one: I have less free time overall, so when I want to play a game, it needs to be available right now — no finding who's online, assembling a team, waiting for opponents, and so on and so forth.1

I want offline games, and I need offline media.

All of these same constraints apply to Clubhouse2. I have these ten minutes while I shave or sort out the kitchen or whatever; I need something I can listen to ten minutes of right now, pause, and resume later in the day or the following week. The last thing I want is to spend time clicking around from room to room so I can listen to a random slice of someone's conversation that I won't even get to hear the end of.

I'm also not going to arrange my day to join some scheduled happening. If it's during the day, some work thing might come up — and if it's in the evening, which is probable given the West Coast bent of the early adopters, a family thing might. If neither of those conflicts happens, I still have a massive backlog of newsletters, books, blogs, and whatever to read and music and podcasts to listen to. Clubhouse is vying to displace some very established habits, and it has not shown me personally any compelling differentiation.

Plus, I just hate phone calls.

NFTs are part of this same trend, except made worse in every way by the addition of crypto. Some people wanted to reinvent rarity in a digital age, when the whole point of digital technology is that once something has been created, it can be duplicated and transmitted endlessly at essentially zero marginal cost.

This ease of duplication is of course a problem for artists, who would like to get paid for that one-time creation process. We are addressing this problem for music and video with streaming: we all decided collectively that managing local music libraries was too much of a faff, and that a small monthly fee was easier than piracy and less than what most of us spent on legal music anyway. Streaming is still not perfect, with the division of royalties in particular needing work, but at least it doesn't require us to burn an entire forest to release an album — or the receipt saying we own it.

With all of us living online for the past year and change, there is a renewed interest in marking time. Certainly I have noticed that where we had got used to TV series being dumped all at once for ease of bingeing, shows now seem to be back to the one-episode-per-week format. I find I quite like that, since it provides a marker in the week, something to look forward to — but the important fact is that the episode does not air once and then disappear; it's there for me to watch the next evening or whenever I can get to it.

The fuss about Clubhouse seems to be dying down a bit, and I have to think that lessening of interest is at least partly due to the prospect of loosening restrictions, at least in its core market of the Bay Area, so that people are less desperate for something — anything! — to look forward to, and more likely to have something else to do at the precise time Marc Andreessen (or whoever) is on Clubhouse.

Unfortunately I don't see the same slackening of interest in NFTs, or at least, not yet. The tokens feed on both art speculation and crypto-currencies, and the same pyramid-scheme, get-rich-quick mechanisms underlying both will not go away until the supply of new entrants to the market (rubes to fleece) is exhausted. Alternatively, more governments will follow Inner Mongolia's example and ban cryptocurrency mining.

Or the summer weather and loosening of restrictions will give us all better things to do.


🖼️ Photos by Sean Do and André François McKenzie on Unsplash


  1. The same factors, plus geography, led me to give up pencil & paper RPGs. Very few campaigns can survive a play schedule of "maybe once or twice a year". 

  2. I like this extrapolation of the likely future of Clubhouse. 

The Wrong Frame

The conversation about the proposed Australian law requiring Internet companies to pay for news continues (previously, previously).

Last time around, Google had agreed to pay A$60m to local news organisations, and had therefore been exempted from the ban. Facebook initially refused to cough up, and banned news in Australia — and Australian news sites entirely — but later capitulated and reversed their ban on news pages in Australia. They even committed to invest $1 billion in news.

One particular thread keeps coming up in this debate, which is that news publications benefit from the traffic that Facebook and Google send their way. This is of course true, which is why legislation that demands that FB & Google pay for links to news sites is spectacularly ill-conceived, easy to criticise, and certain to backfire if implemented.

Many cite the example of Spain, where Google shuttered the local Google News service after a sustained campaign — only for newspapers to call on European competition authorities to stop Google shutting its operation. However, it turns out that since the Google News shutdown in Spain, overall traffic to news sites went largely unchanged.

Getting the facts right in these cases is very important because the future of the web and of news media is at stake. The last couple of decades have in my opinion been a huge mistake, with the headlong rush after ever more data to produce ever more perfectly targeted advertising obscuring all other concerns. Leaving aside privacy as an absolute good, even on the utilitarian terms of effective advertising, this has been a very poor bargain. Certainly I have yet to see any targeted ads worth their CPM, despite the torrent of data I generate. Meanwhile, ads based off a single bit of information — "Dominic is reading Wired" (or evo, or Monocle) — have led me to many purchases.

The worst of it is that news media do not benefit at all from the adtech economy. Their role is to be the honeypot that attracts high-value users — but the premise of cross-site tracking is that once advertisers have identified those high-value users, they can go and advertise to them on sites that charge a lot less than top-tier newspapers or magazines. The New York Times found this out when they turned off tracking on their website due to GDPR — and saw no reduction in ad revenues.

Of course not every site has the cachet or the international reach of the NYT, but if you want local news, you read your local paper — say, the Sydney Morning Herald. Meanwhile, if you're an advertiser wanting to reach people in Sydney, you can either profile them and track them all over the web (or rather, pay FB & G to do it for you) — or just put your ad in the SMH.

Hard cases make bad law. The question of how to make news media profitable in the age of the Web where the traditional dynamics of that market have been completely upended is a hard and important one. This Australian law is not the right way to solve that question, even aside from the implications of this basically being a handout to Rupert Murdoch — and one which would end up being paid in the US, not even in Australia.

Let us hope that the next government to address this question makes a better job of it.


🖼️ Photo by AbsolutVision on Unsplash