There has been an interesting shift in coverage of Silicon Valley companies, with increasing scepticism informing what had previously been reliable hero-worship. Case in point: this fascinating polemic by John Battelle about the oft-ignored human externalities of “disruption” (scare quotes definitely intended).
Battelle starts from a critique of Amazon Go, the new cashier-less stores Amazon is trialling. I think it’s safe to say that he’s not a fan:
My first take on Amazon Go is this: F*cking A, do we really want eggplants and cuts of meat reduced to parameterized choices spit onto algorithmized shelves? Ick. I like the human confidence I get when a butcher considers a particular rib eye, then explains the best way to cook that one cut of meat. Sure, technology could probably deliver me a defensibly "better" steak, perhaps even one tailored to my preferences as expressed through reams of data collected through means I’ll probably never understand.
But come on.
Sometimes you just want to look a guy in the eye and sense, at that moment, that THIS rib eye is perfect for ME, because I trust that butcher across the counter. We don’t need meat informed by data and butchered by bloodless algorithms. We want our steak with a side of humanity. We lose that, we lose our own narrative.
Battelle then goes on to extrapolate that "ick" out to a critique of the whole Silicon Valley model:
It’s this question that dogs me as I think about how Facebook comports itself: We know what’s best for you, better than you do in fact, so trust us, we’ll roll the code, you consume what we put in front of you.
But… all interactions of humanity should not be seen as a decision tree waiting to be modeled, as data sets that can be scanned for patterns to inform algorithms.
Cut Down The Decision Tree For Firewood
I do think there is some merit to this critique. Charlie Stross has previously characterised corporations as immortal hive organisms which pursue the three corporate objectives of growth, profitability, and pain avoidance:
We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden. Individual atomized humans are thus either co-opted by these entities (you can live very nicely as a CEO or a politician, as long as you don't bite the feeding hand) or steamrollered if they try to resist.
In short, we are living in the aftermath of an alien invasion.
These alien beings do not quite understand our human reactions and relations, and they try to pin them down and quantify them in their models. Searching for understanding through modelling is value-neutral in general, but problems start to appear when the model is taken as authoritative, with any real-life deviation from the model considered an error to be rectified – by correcting the real-life discrepancy.
Fred Turner describes the echo chamber these corporations inhabit, and the circular reasoning it leads to, in this interview:
About ten years back, I spent a lot of time inside Google. What I saw there was an interesting loop. It started with, "Don't be evil." So then the question became, "Okay, what's good?" Well, information is good. Information empowers people. So providing information is good. Okay, great. Who provides information? Oh, right: Google provides information. So you end up in this loop where what's good for people is what's good for Google, and vice versa. And that is a challenging space to live in.
We all live in Google’s space, and it can indeed be challenging, especially if you disagree with Google about how information should be gathered and disseminated. We are all grist for its mighty Algorithm.
This presumption of infallibility on the part of the Algorithm, and of the world view that it implements, is dangerous, as I have written before. Machines simply do not see the world as we do. Building our entire financial and governance systems around them risks some very unwelcome consequences.
But What About The Supermarket?
Back to Battelle for a moment, zooming back in on Amazon and its supermarket efforts:
But as they pursue the crack cocaine of capitalism — unmitigated growth — are technology platforms pushing into markets where perhaps they simply don’t belong? When a tech startup called Bodega launched with a business plan nearly identical to Amazon’s, it was laughed off the pages of TechCrunch. Why do we accept the same idea from Amazon? Because Amazon can actually pull it off?
The simple answer is that Bodega falls into the uncanny valley of AI assistance, trying to mimic a human interaction instead of embracing its new medium. A smart vending machine that learns what to stock? That has value - for the sorts of products that people like to buy from vending machines.
This is Amazon’s home turf, where the Everything Store got its start, shipping the ultimate undifferentiated good. A book is a book is a book; it doesn’t really get any less fresh, at least not once it has undergone its metamorphosis from newborn hardback to long-lived paperback.
In this context, nappies/diapers or bottled water are a perfect fit, and something that Amazon Prime has already been selling for a long time, albeit at a larger remove. Witness those ridiculous Dash buttons: single-purpose IoT devices that you can place around your home so that, when you see you’re low on laundry powder or toilet paper, you can press the button and the product will appear miraculously on your next Amazon order.
Steaks or fresh vegetables are a different story entirely. I have yet to see the combination of sensors and algorithms that can figure out that a) these avocados are close to over-ripe, but b) that’s okay because I need them for guacamole tonight, or c) these bananas are too green to eat any time soon, and d) that’s exactly what I need because they’re for the kids’ after-school snack all next week.
People Curate, Algorithms Deliver
Why get rid of the produce guy in the first place?
Why indeed? But why make me deal with a guy for my bottled water?
I already do cashier-less shopping; I use a hand-held scanner, scan products as I go, and swipe my credit card (or these days, my phone) on my way out. The interaction with the cashier was not the valuable one. The valuable interaction was with the people behind the various counters - fish, meat, deli - who really were, and still are, giving me personalised service. If I want even more personalised service, I go to the actual greengrocer, where the staff all know me and my kids, and will actively recommend produce for us and our tastes.
All of that personalisation would be overkill, though, if all I needed were to stock up on kitchen rolls, bottled milk, and breakfast cereal. These are routine, undifferentiated transactions, and the more human effort we can remove from those, the better. Interactions with humans are costly activities, in time (that I spend dealing with a person instead of just taking a product off the shelf) and in money (someone has to pay that person’s salary, healthcare, taxes, and so on). They should be reserved for situations where there is a proportionate payoff: the assurance that my avos will be ripe, my cut of beef will be right for the dish I am making, and my kids’ bananas will not have gone off by the time they are ready to eat them.
We are cyborgs, every day a little bit more: humans augmented by machine intelligence, with new abilities that we are only just learning to deal with. The idea of a cashier-less supermarket does not worry me that much. In fact, I suspect that if anything, by taking the friction out of shopping for undifferentiated goods, we will actually create more demand for, and appreciation of, the sort of "curated" (sorry) experience that only human experts can provide.
Photos by Julian Hanslmaier and Anurag Arora on Unsplash