Interoperable Friendship

Whenever the gravitational pull of social networks comes up, there is a tendency to offer a quick fix by "just" letting them integrate with each other, or offering export/import capability.

Cory Doctorow tells an emotional tale in Wired about his grandmother's difficult decision to leave all of her family and friends behind in the USSR, and concludes with this impassioned appeal:

Network effects are why my grandmother's family stayed behind in the USSR. Low switching costs are why I was able to roam freely around the world, moving to the places where it seemed like I could thrive.

Network effects are a big deal, but it's switching costs that really matter. Facebook will tell you that it wants to keep bad guys out – not keep users in. Funnily enough, that's the same thing East Germany's politburo claimed about the Berlin Wall: it was there to keep the teeming hordes of the west out of the socialist worker's paradise, not to lock in the people of East Germany.

Mr Zuckerberg, tear down that wall.

As appealing as that vision is, here is why interoperability won't and can't work.

Let's take our good friends Alice and Bob, from every cryptography example ever. Alice and Bob are friends on one social network, let's call it Facester. They chat, they share photos, they enter a bunch of valuable personal information. So far so good; information about each user is stored in a database, and it's pretty trivial to export user information, chat logs, and photographs from the system.

Here's the problem: the account data is not the only thing that is valuable. You also want the relationships between users. If Alice wants to join a new network, let's call it Twitbook, being able to prepopulate it with her name and profile picture is the least of her issues. She is now faced with an empty Twitbook feed, because she isn't friends with anyone there yet.1

Alice and Bob's relationship on Facester is stored in a data structure called a graph; each link between nodes in the graph is called an edge. In purely technical terms, that structure can be exported easily enough, but this is where things start getting complicated.

What if Alice and Bob's sworn enemy, Eve, registers on Twitbook with Bob's name? Or maybe there's simply more than one Bob in the world. How can Twitbook meaningfully import that relationship from Facester?
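
To make the ambiguity concrete, here is a minimal sketch of what a naive name-based import runs into. The export format, the IDs, and the matching logic are all invented for illustration; no real network exports exactly this.

```python
# A hypothetical Facester export: users are nodes, friendships are edges.
# The format, IDs, and names are invented for this sketch.
facester_export = {
    "users": [
        {"id": "fs-1001", "name": "Alice"},
        {"id": "fs-1002", "name": "Bob"},
    ],
    "edges": [
        ("fs-1001", "fs-1002"),  # Alice and Bob are friends
    ],
}

# Twitbook's existing user table: two people called Bob, one of whom
# may well be Eve registered under Bob's name.
twitbook_users = [
    {"id": "tb-9001", "name": "Alice"},
    {"id": "tb-9002", "name": "Bob"},
    {"id": "tb-9003", "name": "Bob"},
]

def import_edges(export, destination_users):
    """Naive import policy: match users across networks by display name."""
    by_name = {}
    for user in destination_users:
        by_name.setdefault(user["name"], []).append(user)

    names = {user["id"]: user["name"] for user in export["users"]}
    for src, dst in export["edges"]:
        src_matches = by_name.get(names[src], [])
        dst_matches = by_name.get(names[dst], [])
        if len(src_matches) == 1 and len(dst_matches) == 1:
            print(f"link {src_matches[0]['id']} <-> {dst_matches[0]['id']}")
        else:
            # Zero matches means a dangling edge; two or more means: which Bob?
            print(f"cannot safely import edge {names[src]} <-> {names[dst]}")

import_edges(facester_export, twitbook_users)
# prints: cannot safely import edge Alice <-> Bob
```

The only edge in the export cannot be imported safely, because Twitbook has no way of knowing which "Bob", if any, is the Bob that Alice is actually friends with.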

There are various policies that you could come up with, ranging from terrible to more terrible.

If both Alice and Bob go to a certain amount of effort, entering their Facester profile info on Twitbook and vice versa, the export and reimport will be able to reconcile the data that way — but that's a lot of work and potential for error. What happens if even one of your friends hasn't done this, or gets it wrong? Should the import stop or continue? And does the destination network get to keep that dangling edge? Here in what we still call the real world, Facebook already creates "ghost profiles" for people who do not use its services, but whose existence they have inferred from their surveillance-driven adtech. These user records have value to FB because they can still be used for targeting and can have ads sold against them.

Alice and Bob's common friend Charlie has chosen not to register for Twitbook because they dislike that service's privacy policy. However, if either Alice or Bob imports their data from Facester into Twitbook, Charlie could still end up with one of these ghost profiles against their wishes. Contact data are not the property of the person who holds them. Back to the real world again, this is the problem that people have with the likes of Signal or Clubhouse, which prompt users to import their whole address book and then spam all of those people. This functionality is not just irritating, it's also actively dangerous as a vector for abuse.

Another terrible policy is to have some kind of global unique identifier for users, whether this means mandating the use of government-assigned real names, or some global register of user IDs. Real names are problematic for all sorts of reasons, whether it's for people who prefer to use pseudonyms or nicknames, or people who change their name legitimately. Facebook got into all sorts of trouble with their own attempt at a real-name policy, and that was just for one network; you could still be pseudonymous on Twitter, precisely because the two networks are not linked.

People do want to partition off different parts of their identity. Maybe on Facester Alice presents as a buttoned-up suburban housewife, but on Twitbook she lets her hair down and focuses on her death metal fandom. She would prefer not to have to discuss some of the imagery and lyrics that go with that music at the PTA, so she doesn't use the same name and keeps these two aspects of her personality on separate networks. Full interoperability between Facester and Twitbook would collapse these different identities, whatever Alice's feelings on the matter.

Some are invoking the right to data portability that is enshrined in GDPR, but this legislation has the same problem with definitions: whose data are we talking about, exactly?

The GDPR states (emphasis mine):

The right to data portability allows individuals to obtain and reuse their personal data for their own purposes across different services.

Applying this requirement to social networks becomes complicated, though, because Alice's "personal data" also encompasses data about her relationships with Bob and Charlie. Who exactly does that data belong to? Who can give consent to its processing?

GDPR does not really address the question of how or whether Alice should be allowed to obtain and reuse data about Bob and Charlie; it focuses only on the responsibility of Facester and Twitbook as data controllers in this scenario. Here are its suggestions about third parties’ data:

What happens if the personal data includes information about others?

If the requested information includes information about others (eg third party data) you need to consider whether transmitting that data would adversely affect the rights and freedoms of those third parties.

Generally speaking, providing third party data to the individual making the portability request should not be a problem, assuming that the requestor provided this data to you within their information in the first place. However, you should always consider whether there will be an adverse effect on the rights and freedoms of third parties, in particular when you are transmitting data directly to another controller.

If the requested data has been provided to you by multiple data subjects (eg a joint bank account) you need to be satisfied that all parties agree to the portability request. This means that you may have to seek agreement from all the parties involved.

However, all of this is pretty vague and does not impose any actual requirements. People have tens if not hundreds of connections within social networks; it is not realistic to expect everybody to get on board with each request, in the way that would work for the GDPR's example of a joint bank account, which usually involves only two people. If this regulation were to become the model for regulating the import/export functionality of social networks, I think it's a safe bet that preemptive consent would be buried somewhere in the terms and conditions, and that would be that.

Tearing down the walls between social networks would do more harm than good. It's true that social networks rely on the gravity of the data they have about users and their connections to build their power, but even if the goal is tearing down that power, interoperability is not the way to do it.


UPDATE: Thanks to Cory Doctorow for pointing me at this EFF white paper after I tagged him on Twitter. As you might expect, it goes into a lot more detail about how interoperability should work than either a short Wired article or this blog post do. However, I do not feel it covers the specific point about the sort of explicit consent that is required between users before sharing each other's data with the social networks, and the sorts of information leaks and context collapse that such sharing engenders.


🖼️ Photos by NordWood Themes, Alex Iby, and Scott Graham on Unsplash


  1. Or she doesn't follow anyone, or whatever the construct is. Let's assume for the sake of this argument that the relationships are fungible across different social networks — which is of course not the case in the real world: my LinkedIn connections are not the same people I follow on Twitter. 

The Changing Value Of Mistakes

The simplest possible definition of experience would equate it to mistakes. In other words, experience means having made many mistakes — and with any luck, learned from them.

This transubstantiation of mistakes into experience does rely on one hidden assumption, though, which is that the environment does not change too much. Experience is only valid as long as the environment in which the mistakes are made remains fairly similar to the current one. If the environment changes enough, the experience learned from those mistakes becomes obsolete, and new mistakes need to be made in the changed conditions in order to build up experience that is valid in that situation.

This reflection is important because there is a cultural misunderstanding of Ops and SRE that I see over and over — twice just this morning, hence this post.

I Come Not To Bury Ops, But To Praise It

Criticising Ops people for timidity or lack of courage because they are unwilling to introduce change into the environment they are responsible for is to miss that they have a built-in cultural bias towards stability. Their role is as advocates against risk — and change is inherently risky. Ops people made mistakes as juniors, preferably but not always in test environments, and would rather not throw out all that hard-earned experience to start making mistakes all over again. The ultimate Ops nightmare is to do something that turns your employer into front-page news.1

If you’re selling or marketing a product that requires Ops buy-in, you need to approach that audience with an understanding of their mindset. Get Ops on-side by de-risking your proposal, which includes helping them to understand it to a point where they are comfortable with it.

And don’t expect them to be proactive on your behalf; the best you can expect is permission and maybe an introduction. On the other hand, they will be extremely credible champions after your proposal goes into production — assuming, of course, that it does what you claim it does!

Let's break down how that process plays out.

Moving On From The Way Things Were Always Done

A stable, mature way of doing things is widely accepted and deployed. The team in charge of it understand it intimately — both how it works, and crucially, how it fails. Understanding the failure modes of a system is key to diagnosing inevitable failures quickly, after all, as well as to mitigating their impact.

A new alternative emerges that may be better, but is not proven yet. The experts in the existing system scoff at the limitations of the new system, and refuse to adopt it until forced to.

On the one hand, this is a healthy mechanism. It’s not a good idea to go undermining something that’s working just to jump on the latest bandwagon. When you already have something in place that does what you need it to do, anyone suggesting changes has got to promise big benefits, and ideally bring some proof too. The Ops team are not (just) being curmudgeonly stick-in-the-muds; you are asking them to devalue a lot of their hard-won experience and expose themselves to mistakes while they learn the new system. You have to bring a lot of value, and prove your promises too, in order to make that trade-off worth their while.

The problem is when this healthy immune response is taken too far, and the resistance continues even once the new approach has proven itself. Excessive resistance to change leads inevitably downwards into obsolescence and stasis. There's an old joke in IT circles that the system is perfect, if it weren't for all those pesky users. After all, every failure involves user action, so it follows logically that if only there were no users, there would be no failures — right? Unfortunately a system without users is also not particularly useful.

Resistance to change can continue too long precisely because the Ops team's experience is the product of mistakes made over time. With each mistake that we make, we learn to avoid that particular mistake in the future. The experience that we gain this way is valuable precisely because it means that we are not constantly making mistakes – or at least, not the same obvious ones.

Learning By Making Mistakes

When I was still a wet-behind-the-ears sysadmin, I took the case off a running server to check something. I was used to PC-class hardware, where this sort of thing is not an issue. This time however, the whole machine shut down very abruptly, and the senior admin was not happy to have to spend a chunk of his time recovering the various databases that had been running on that machine. On the plus side, I never did it again…

We look for experts to run critical systems precisely because they have made mistakes elsewhere, earlier in their careers, and know to avoid them now. If we take an expert in one system and sit them down in front of a different system, however, they will have to make those early mistakes all over again before they can build their expertise back up.

Change devalues expertise because it creates scope for new mistakes that have not been experienced before, and which people have not yet learned to avoid.

Run The Book

Ops teams build runbooks for known situations. These are distillations of the team's experience, so that if a particular situation occurs, whoever is there when it all goes down does not have to start their diagnosis from first principles. They also don't need to call up the one lone expert on that particular system or component. Instead, they can rely on the runbook.

Historically, a runbook would have been a literal book: a big binder with printed instructions for all sorts of situations. These days, those instructions are probably automated scripts, but the idea is the same: the runbook is based on the experience of the team and their understanding of the system, and if the system changes enough, the runbook will have to be thrown out and re-written from scratch.
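
To give a flavour of what "runbook as code" can look like, here is a toy sketch. The scenario, checks, file paths, and commands are all invented for illustration rather than taken from any real team's runbook.

```python
"""A toy runbook for a hypothetical "web tier is returning 503s" scenario.
The checks, paths, and commands are invented for illustration; a real
runbook encodes a specific team's hard-won knowledge of its own system."""
import subprocess

RUNBOOK = [
    # (description, command to run)
    ("Check the load balancer health endpoint", ["curl", "-s", "http://localhost:8080/health"]),
    ("Check disk space on the app volume", ["df", "-h", "/var"]),
    ("Show the most recent application errors", ["tail", "-n", "20", "/var/log/app/error.log"]),
]

def run_step(description, command):
    print(f"== {description}: {' '.join(command)}")
    try:
        result = subprocess.run(command, capture_output=True, text=True, timeout=30)
        print(result.stdout or result.stderr)
    except (OSError, subprocess.SubprocessError) as exc:
        # Keep going: whoever is on call at 3am wants the whole picture,
        # not a script that bails out at the first missing tool.
        print(f"   step failed: {exc}")

if __name__ == "__main__":
    for description, command in RUNBOOK:
        run_step(description, command)
```

Every step in a real version of this encodes a past incident; change the system underneath it enough, and the script becomes just as obsolete as the printed binder it replaced.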

So how to square this circle and enable adoption of new approaches in a safe way that does not compromise the stability of running systems?

Make Small Mistakes

The best approach these days centres on agility: many small projects rather than a single big-bang, multi-year monster. This agile approach enables mistakes to be made – and learned from – on a small scale, with limited consequences. The idea is to limit the blast radius of those mistakes, building experience before moving up to the big business-critical systems that absolutely cannot fail.

New technologies and processes these days embrace this agility, enabling that staged adoption with easy self-serve evaluations, small starting commitments, and consumption-based models. This way, people can try out the new proposed approaches, understand what benefits they offer, and make their own decisions about when to make a more wholesale move to a new system.

Small Mistakes Enable Big Changes

The positive consequences of this piecemeal approach are not just limited to the intrinsic benefits of the new system – faster, easier, cheaper, or some combination of the three. There are also indirect benefits: working with cutting-edge systems instead of old legacy technology makes it easier to recruit people who are eager to develop their own careers. Old systems are harder to make new mistakes in, so it's also harder to build experience. Lots of experts in mature technologies have already maxed out their XP and are camping the top rungs of the career ladder, so there's not much scope for growth there — but large-scale change resets the game.

On top of that, technological agility leads to organisational agility. These days, processes are implemented in software, and the speed with which software can move is a very large component in the delivery of new offerings. Any increase in the agility of IT delivery is directly connected to an increase in business agility – launching new offerings faster, expanding more quickly into new markets, responding to changing conditions.

Those business benefits also change the technological calculus: when all the mainframe did was billing, that was important, but doing it a little bit better than the next firm was not a game-changer. When software is literally running the entire business, even a small percentage increase in speed and agility there maps to major business-level differentiation.

Experience is learning from mistakes, but if the environment changes, new mistakes have to be made in order to learn. Agile processes and systems help minimise the impact of those changes, delivering the benefits of constant evolution.

Stasis on the technology side leads to stasis in the organisation. Don’t let natural caution turn into resistance to change for its own sake.


🖼️ Photos by Daniela Holzer, Varvara Grabova, Sear Greyson and John Cameron on Unsplash


  1. On the other hand, blaming such front-page news on "human error" is also a cop-out. Major failures are not the fault of an individual operator who fat-fingered one command: they are the ultimate outcome of strategic failures in process and system design that enabled that one mistake to have such far-reaching consequences. 

Omnichannel

I had been thinking vaguely about starting a newsletter, but never actually got around to doing anything about it — until Twitter bought Revue and integrated it right into their product.

I wrote about why exactly I felt the need to add this new channel in the first issue of the newsletter — so why don't you head over there and sign up?

And of course you should also sign up for Roll for Enterprise, the weekly podcast about enterprise IT (and more or less adjacent topics) that I record with some co-conspirators.


🖼️ Photo by Jon Tyson on Unsplash

Serendipity Considered Harmful

The internet is all about two things: making time and distance irrelevant, and making information freely available. Except right now, people are trying to reverse both of those trends, and I hate it.

Videogames used to deliver an isolated world that you could build or explore on your own. Multiplayer modes were only for certain categories of games, mostly those that inherited from arcades rather than from PC games. Then came MMORPGs and shared-world experiences, and now many top-shelf games don't even have a single-player mode at all. Instead, you play online, with groups of friends if you can arrange it, or with whoever’s there if not.

Clubhouse is an example of the same trend: you have to be there in the moment, with whoever is there when you are. If you miss a great conversation or an appearance by someone interesting, well, you missed it.

In case it wasn't clear, I don't like this model. I like my media to be available when I am. This may be because we didn't have a TV when I was growing up, so I never developed the reflex of arranging my day around watching a show at a certain time. My medium of choice is a book, and one of the things I love about books is that I can read a book that was published this year or two centuries ago with equal ease.

Computers seemed to be going my way — until they weren't.

The shift from individual experiences to ones that are shared in real-time is driven by changing constraints. A single-player game could be delivered on a physical disk before we had the bandwidth to download it, let alone stream it live — so it worked well in a pre-broadband era. Even then, there was a desire to play together. My first experience of this coming future was in my first year at university, where our fairly spartan rooms in the halls of residence nevertheless came with the unbelievable luxury of a 10 Mbps Ethernet port. As soon as we all got our PCs set up, epic deathmatches of Quake were the order of the day — not to mention a certain amount of media sharing. A couple of years later when I was living in a student house in town, we mounted a daring mission and strung Ethernet cable along the gutter to another student house a few doors down so that we could connect the two networks for the purpose of shooting each other in the face.

All of this is to say that I get the appeal of multiplayer games — but not to the exclusion of single-player ones. I stopped gaming partly because I started having children, but also because there were very few gaming experiences which attracted me any more. The combination is a familiar one: I have less free time overall, so when I want to play a game, it needs to be available right now — no finding who's online, assembling a team, waiting for opponents, and so on and so forth.1

I want offline games, and I need offline media.

All of these same constraints apply to Clubhouse2. I have these ten minutes while I shave or sort out the kitchen or whatever; I need something I can listen to ten minutes of right now, pause, and resume later in the day or the following week. The last thing I want is to spend time clicking around from room to room so I can listen to a random slice of someone's conversation that I won't even get to hear the end of.

I'm also not going to arrange my day to join some scheduled happening. If it's during the day, some work thing might come up — and if it's in the evening, which is probable given the West Coast bent of the early adopters, a family thing might. If neither of those conflicts happens, I still have a massive backlog of newsletters, books, blogs, and whatever to read, and music and podcasts to listen to. Clubhouse is vying to displace some very established habits, and it has not shown me personally any compelling differentiation.

Plus, I just hate phone calls.

NFTs are part of this same trend, except made worse in every way by the addition of crypto. Some people wanted to reinvent rarity in a digital age, when the whole point of digital technology is that once something has been created, it can be duplicated and transmitted endlessly at essentially zero marginal cost.

This ease of duplication is of course a problem for artists, who would like to get paid for that one-time creation process. We addressed this problem for music and video with streaming, when we all collectively decided that managing local music libraries was too much of a faff, and that a small monthly fee was easier than piracy and less than what most of us spent on legal music anyway. Streaming is still not perfect, with the division of royalties in particular needing work, but at least it doesn't require us to burn an entire forest to release an album — or the receipt saying we own it.

With all of us living online for the past year and change, there is a renewed interest in marking time. Certainly I have noticed that we had got used to TV series being dumped all at once for ease of bingeing, but now shows seem to be back to the one-episode-per-week format. I find I quite like that, since it provides a marker in the week, something to look forward to — but the important fact is that the episode does not air once and then disappear; it's there for me to watch the next evening or whenever I can get to it.

The fuss about Clubhouse seems to be dying down a bit, and I have to think that lessening of interest is at least partly due to the prospect of loosening restrictions, at least in its core market of the Bay Area, so that people are less desperate for something — anything! — to look forward to, and more likely to have something else to do at the precise time Marc Andreessen (or whoever) is on Clubhouse.

Unfortunately I don't see the same slackening of interest in NFTs, or at least, not yet. The tokens feed on both art speculation and crypto-currencies, and the same pyramid-scheme, get-rich-quick mechanisms underlying both will not go away until the supply of new entrants to the market (rubes to fleece) is exhausted. Alternatively, more governments will follow Inner Mongolia's example and ban cryptocurrency mining.

Or the summer weather and loosening of restrictions will give us all better things to do.


🖼️ Photos by Sean Do and André François McKenzie on Unsplash


  1. The same factors, plus geography, led me to give up pencil & paper RPGs. Very few campaigns can survive a play schedule of "maybe once or twice a year". 

  2. I like this extrapolation of the likely future of Clubhouse.

The Wrong Frame

The conversation about the proposed Australian law requiring Internet companies to pay for news continues (previously, previously).

Last time around, Google had agreed to pay A$60m to local news organisations, and had therefore been exempted from the ban. Facebook initially refused to cough up, and banned news in Australia — and Australian news sites entirely — but later capitulated and reversed the ban. They even committed to invest $1 billion in news.

One particular thread keeps coming up in this debate, which is that news publications benefit from the traffic that Facebook and Google send their way. This is of course true, which is why legislation that demands that FB & Google pay for links to news sites is spectacularly ill-conceived, easy to criticise, and certain to backfire if implemented.

Many cite the example of Spain, where Google shuttered the local Google News service after a sustained campaign — only for newspapers to call on European competition authorities to stop Google shutting its operation. However, it turns out that since the Google News shutdown in Spain, overall traffic to news sites has remained largely unchanged.

Getting the facts right in these cases is very important because the future of the web and of news media is at stake. The last couple of decades have in my opinion been a huge mistake, with the headlong rush after ever more data to produce ever more perfectly targeted advertising obscuring all other concerns. Leaving aside privacy as an absolute good, even on the utilitarian terms of effective advertising, this has been a very poor bargain. Certainly I have yet to see any targeted ads worth their CPM, despite the torrent of data I generate. Meanwhile, ads based off a single bit of information — "Dominic is reading Wired" (or evo, or Monocle) — have led me to many purchases.

The worst of it is that news media do not benefit at all from the adtech economy. Their role is to be the honeypot that attracts high-value users — but the premise of cross-site tracking is that once advertisers have identified those high-value users, they can go and advertise to them on sites that charge a lot less than top-tier newspapers or magazines. The New York Times found this out when they turned off tracking on their website due to GDPR — and saw no reduction in ad revenues.

Of course not every site has the cachet or the international reach of the NYT, but if you want local news, you read your local paper — say, the Sydney Morning Herald. Meanwhile, if you're an advertiser wanting to reach people in Sydney, you can either profile them and track them all over the web (or rather, pay FB & G to do it for you) — or just put your ad in the SMH.

Hard cases make bad law. The question of how to make news media profitable in the age of the Web, where the traditional dynamics of that market have been completely upended, is a hard and important one. This Australian law is not the right way to answer that question, even aside from the implications of this basically being a handout to Rupert Murdoch — and one which would end up being paid in the US, not even in Australia.

Let us hope that the next government to address this question makes a better job of it.


🖼️ Photo by AbsolutVision on Unsplash

The Framing Continues

The framing of Australia's battle against Google and Facebook continues in a new piece with the inflammatory title Australian law could make internet ‘unworkable’, says World Wide Web inventor Tim Berners-Lee.

Here's what Sir Timothy had to say:

"Specifically, I am concerned that that code risks breaching a fundamental principle of the web by requiring payment for linking between certain content online"

This is indeed the concern: I am not a lawyer, nor do I play one on the internet, so I won't comment on the legalities of the Australian situation — but any requirement to pay for links would indeed break the Web (not the Internet!) as we know it. That is not actually what is at stake, though, despite Google's attempts to frame the situation that way (emphasis mine):

Google contends the law does require it to pay for clicks. Google regional managing director Melanie Silva told the same Senate committee that read Berners-Lee’s submission last month she is most concerned that the code "requires payments simply for links and snippets."

As far as I can tell, the News Media and Digital Platforms Mandatory Bargaining Code does not actually clarify one way or the other whether it applies to links or snippets. This lack of clarity is the problem with regulations drafted to address tech problems created by the refusal of tech companies to engage in good-faith negotiations. Paying for links, such as the links throughout this blog post, is one thing — and that would indeed break the Web. Paying for snippets, where the whole point is that Google or Facebook quote enough of the article, including scraping images, that readers may not feel they need to click through to the original source, is something rather different.

Lazily conflating the two only helps unscrupulous actors hide behind respected names like Tim Berners-Lee's to frame the argument their own way. In law and in technology, details matter.

And of course you can't trust anything Facebook says, as they have once again been caught over-inflating their ad reach metrics:

According to sections of a filing in the lawsuit that were unredacted on Wednesday, a Facebook product manager in charge of potential reach proposed changing the definition of the metric in mid-2018 to render it more accurate.

However, internal emails show that his suggestion was rebuffed by Facebook executives overseeing metrics on the grounds that the "revenue impact" for the company would be "significant", the filing said.

The product manager responded by saying "it’s revenue we should have never made given the fact it’s based on wrong data", the complaint said.

The proposed Australian law is a bad law, and it is bad because it is based on a misapprehension of the problem it aims to solve.

In The Frame

Google and Facebook have been feuding with the Australian government for a while, because in our cyberpunk present, that's what happens: transnational megacorporations go toe-to-toe with governments. The news today is that Google capitulated, and will pay a fee to continue accessing Australian news, while Facebook very much did not capitulate. This is what users are faced with, whether sharing a news item from an Australian source, or sharing an international source into Australia:

[Image: the notice Facebook displayed to users attempting to share news links]

I see a lot of analysis and commentary around this issue that is simply factually wrong, so here's a quick explainer. Google first, because I think it's actually the more interesting of the two.

The best way to influence the outcome of an argument is to apply the right framing from the beginning. If you can get that framing accepted by other parties — opponents, referees, and bystanders in the court of public opinion — you’re home free. For a while there, it looked like Google had succeeded in getting their framing accepted, and in the longer run, that may still be enough of a win for them.

The problem that news media have with Google is not with whether or not Google links to their websites. After all, 95% of Australian search traffic goes to Google, so that’s the way to acquire readers. The idea is that Google users search for some topic that’s in the news, click through to a news article, and there they are, on the newspaper’s website, being served the newspaper’s ads.

The difficulty arises if Google does not send the readers through to the newspaper’s own site, but instead displays the text of the article in a snippet on its own site. Those readers do not click through to the newspaper’s site, do not get served ads by the newspaper, and do not click around to other pages on the newspaper’s site. In fact, as far as the newspaper is concerned, those readers are entirely invisible, not even counted as visitors to swell its market-penetration data.

This scenario is not some far-fetched hypothetical; this exact sequence of events played out with a site called CelebrityNetWorth. The site was founded on the basis that people would want to know how rich a given famous person was, and all was well — until Google decided that, instead of sending searches on to CelebrityNetWorth, they would display the data themselves, directly in Google. CelebrityNetWorth's traffic cratered, together with their ad revenue.

That is the scenario that news media want to avoid.

Facebook does the same sort of thing, displaying a preview of the article directly in the Facebook News Feed. However, the reason why Google have capitulated to Australia's demands and Facebook have not is that Facebook is actively trying to get out of dealing with news. It's simply more trouble than it's worth, netting them accusations from all quarters: they are eviscerating the news media, while also radicalising people by creating filter bubbles that only show a certain kind of news. I would not actually be surprised if they used the Australian situation as an experiment prior to phasing out news more generally (it's already only 4% of the News Feed, apparently).

There has also been some overreach on the Australian side, to be sure. In particular, early drafts of the bill would have required that tech companies give their news media partners 28 days’ notice before making any changes that would affect how users interact with their content.

The reason these algorithms are important is that for many years websites — and news media sites are no exception — have had to dance to the whims of Facebook and Google's algorithms. In the early naive days of the web, you could describe your page by simply putting relevant tags in the META elements of the page source. Search engines would crawl and index these, and a search would find relevant pages. However, people being people, unscrupulous web site operators quickly began "tag stuffing", putting all sorts of tags in their pages that were not really relevant but would boost their search ranking.
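
For the historically minded, here is roughly what that naive mechanism looked like. This is a sketch with made-up page content, using Python's standard-library HTML parser the way an early, trusting indexer might have.

```python
from html.parser import HTMLParser

# Two made-up pages: one describing itself honestly, one "tag-stuffed".
HONEST_PAGE = """<html><head>
<meta name="keywords" content="cycling, gravel bikes, winter gloves">
</head><body>An article about cycling gloves.</body></html>"""

STUFFED_PAGE = """<html><head>
<meta name="keywords" content="cycling, cheap flights, celebrities, weight loss, cycling, cycling">
</head><body>An article about cycling gloves.</body></html>"""

class KeywordExtractor(HTMLParser):
    """Collect the contents of <meta name="keywords"> tags, the way a
    naive 1990s-style indexer might have done."""
    def __init__(self):
        super().__init__()
        self.keywords = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "keywords":
            self.keywords += [k.strip() for k in attrs.get("content", "").split(",")]

for label, page in [("honest", HONEST_PAGE), ("stuffed", STUFFED_PAGE)]:
    extractor = KeywordExtractor()
    extractor.feed(page)
    print(label, extractor.keywords)
```

An indexer that trusted the keywords field would happily return the stuffed page for searches about cheap flights or weight loss, which is exactly why that field eventually stopped being trusted.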

And so began an arms race between search engines trying to produce better results for users, and "dark SEO" types trying to game the algorithm.

Then on top of that come social networks like Facebook, which track users' engagement with the platform and attempt to present users with content that will drive them to engage further. A simplistic (but not untrue) extrapolation is that inflammatory content does well in that environment because people will be driven to interact with it, share it, comment on it, and flame other commenters.

So we have legitimate websites (let's generously assume that all news media are legit) trying to figure out this constantly changing landscape, dancing to the platforms' whims. They have no insight into the workings of the algorithms; after all, the platforms cannot publish those details without the scammers also taking advantage of them. Even the data that is provided is not trustworthy; famously, Facebook vastly over-inflated its video metrics, leading publications to "pivot to video", only to see little to no return on their investments. Some of us, of course, pointed out at the time that not everyone wants video — but publications desperate for any SEO edge went in big, and regretted it.1

Who decides what we see? The promise of "new media" was that we would not be beholden to the whims of a handful of (pale, male and stale) newspaper editors. Instead, we now have a situation in which it is not even clear what is news and what is not, with everybody — users and platforms — second-guessing each other.

And so we find ourselves running an experiment in Australia: is it possible to make news pay? Or will users not miss it once it's gone? Either way, it's going to be interesting. For now, the only big loser seems to be Bing, who had hoped to swoop in and take the Australian web search market from Google. The deal Google signed with News Corporation runs for three years, which should be enough time to see some results.


🖼️ Photo by Markus Winkler on Unsplash


  1. Another Facebook metric that people relied on was Potential Reach; now it emerges that Facebook knowingly allowed customers to rely on vastly over-inflated Potential Reach numbers.

Who Needs Alps Anyway

Booked a day off work today because 2021 has done a number on me — and I really lucked out, with a lovely warm day for a 100km ride up into the hills. My legs are hurting now, but it was oh so worth it!

I also got to use my new Hestra Nimbus Split Mitts for the first time. These things are not gloves, but over-gloves; you wear them over your normal cycling gloves. They are completely unpadded and pretty unstructured, but that's the point; they are only there to protect your hands from the elements. The idea is that, on a ride like today's that spans from the low single-digits (Celsius) to the mid-high-teens, you can start off with the mitts, but then as you and the atmosphere warm up, you can peel them off and stuff them in a jersey pocket, while still having your usual gel-padded cycling gloves that you were wearing underneath.

I jumped on these mitts based on a recommendation from The Cycling Independent because I have hot hands, so there's a gap between the sort of weather where I want my heaviest gloves, which could masquerade as ski gloves in a pinch — basically sub-freezing — and the sort where I'm comfortable in plain finger-gloves without quilting on the backs. It felt a bit ridiculous to buy a whole other pair of gloves just for those in-between days, plus I'd never know which gloves to wear and would probably get it wrong all the time, so this combo of glove and over-glove works perfectly.

At least so far, they definitely work as advertised; they kept my hands warm as I pedalled through the fog, and then I took them off when I stopped for this pic, just before the serious climbing started. This ride spanned from 65m to over 900m, and it wasn't just one climb, either; there was plenty of up & down, as my legs will attest.

Clubhouse — But Why?

Everyone is talking about Clubhouse, and I just can't get excited about it.

Part of the reason people are excited about Clubhouse is that everyone is always on the lookout for the next big thing. The problem is that the Next Big Things that actually catch on tend to be the ones that are fun and even look like toys at the beginning — TikTok, or Snapchat before it. A floating conference call full of California techbros bigging each other's jobs up? Honestly, I'd pay good money to get out of that.

Clubhouse is not like TikTok in some important ways — and I'm talking about more than just the average age of their respective user bases. TikTok's innovation is its algorithm, which means that TikTok does not rely on existing social networks. Clubhouse is the polar opposite, piggybacking on users' social networks — and even their actual contact lists. Yes, it does that thing everyone hates where it tells you that somebody whose contact info you'd forgotten you had is on the new app you just joined — and worse, it tells them too.

Is this the next thing after podcasts? After all, podcasts are very one-directional; there is no inline interaction. The way my own Roll for Enterprise podcast works is, we record an episode, we clean it up and put it out, and people download it and listen to it. If you want to comment on something we said, you can message us on Twitter or LinkedIn — or of course start up your own podcast, and correct the record there.

The biggest reason I'm not convinced by Clubhouse, though, is that there seems to be an assumption that most users are going to listen passively and in real time to what is effectively an unmoderated radio phone-in panel. I listen to a number of podcasts, but I listen on my own schedule. The whole point is the offline nature of podcasting: episodes are waiting for me when I'm ready for them, not vice versa. When it's time to shave or wash the dishes, I have a library of new episodes I can listen to. I don't have to worry about whether my favourite podcasters are streaming live right now; I have the recording, nicely cleaned-up and edited for my listening pleasure.

The whole podcast model is that once it's recorded, it's done and unchangeable. Clubhouse is not that; in fact it's the opposite of that. It's not even possible to record Clubhouse rooms from inside the app (although apparently they do retain recordings for their own purposes). This is where the problems start. Because right now Clubhouse seems to be just Silicon Valley insiders talking to each other, about each other, in their own time, basically nobody outside the West Coast of the US can join in. Evening in California is too late even for New York, let alone Europe.

Or is this going for the Pacific market? People in Tokyo or Sydney spending their lunch break listening to American after-work chatter?

I've been wrong about social networks before, so I'm not saying this thing doesn't have a future. I'm saying it definitely isn't for me. If you disagree, you should come on the Roll for Enterprise podcast and tell us all what we're missing.


🖼️ Photo by Josh Rose on Unsplash

That Feeling When…

You know that feeling when you realise you may be a little bit outside the design envelope for your gear? That.

I was on the Bianchi, my gravel bike, not my full-sus fat-tyre MTB, when I ran into a stretch of uncleared road. I thought it was just two corners' worth, but it turned out to be quite a bit more than that, and icy underneath the snow.

Not bad for the last ride of the year!