
The Road To Augmented Intelligence

A company called Babylon has been in the news, claiming that its chatbot can pass a standard medical exam with higher scores than most human candidates. Naturally, the medical profession is not overjoyed with this result:

No app or algorithm will be able to do what a GP does.

On the surface, this looks like just the latest front in the ongoing replacement of human professionals with automation. It has been pointed out that supporters of the automation of blue-collar jobs become hypocritically defensive when it looks like their own white-collar jobs may be next in the firing line, and this reaction from the RCGP seems to be par for that course.

For what it’s worth, I don’t think that is what is going on here. As I have written before, automation takes over tasks, not jobs. That earlier wave of automation of blue-collar jobs was enabled by the fact that the jobs in question had already been refined down to single tasks on an assembly line. It was this subdivision which made it practical for machinery to take over those discrete tasks.

Most white-collar jobs are not so neatly subdivided, consisting as they do of many different tasks. Automating away one task should, all things being equal, help people focus on other parts of the job. GPs – General Practitioners – by definition have jobs that encompass many tasks and require significant human empathy. While I do therefore agree with the RCGP that there is no immediate danger to GPs’ jobs, that is not to say automation has no impact on jobs at all; I’d hate to be a travel agent right now, for instance.

Here is a different example, still in the medical field: a neural network is apparently able to identify early signs of tumours in X-ray images. So does that mean there is no role for doctors here either? Well, no; spotting the tumour is just one task for oncologists, and should this technology live up to its promise (as yet unproven), it would become one more tool that doctors could use, removing the bottleneck of reliance on a few overworked X-ray technicians.

Augmenting Human Capabilities

These situations, where some tasks are automated within the context of a wider-scoped job, can be defined as augmented intelligence: AI and machine-learning enabling new capabilities for people, not replacing them.

Augmented intelligence is not a get-out-of-jail-free card, though. There are still impacts from automation, and not just to the X-ray technicians whose jobs might be endangered. Azeem Azhar writes in his essential Exponential View newsletter about a different sort of impact from automation, citing that RCGP piece I linked to earlier:

Babylon’s services were more likely to appeal to the young, healthy, educated and technology-savvy, allowing Babylon to cherry pick low-cost patients, leaving the traditional GPs with more complex, older patients. This is a real concern, if only because older patients often have multiple co-morbidities and are vulnerable in many ways other than their physical health. The nature of health funding in the UK depends, in some ways, on pooling patients of different risks. In other words, that unequal access to technology ends up benefiting the young (and generally more healthy) at the cost of those who aren’t well served by the technology in its present state.

Exponential View has repeatedly flagged the risks of unequal access to technology because these technologies are, whatever you think of them, literally the interface to the resources we need to live in the societies of today and tomorrow.

My rose-tinted view of the future is that making one type of patient cheaper to care for frees up more resources to devote to caring for other patients. On the other hand, I am sure some Pharma Bro 2.0 is even now writing up a business plan for something even worse than Theranos, powered by algorithms and possibly – why not? – some blockchain for good measure.1

Ethical concerns are just some of the many reasons I don’t work in healthcare. As a general rule, IT comes with far fewer moral dilemmas. In IT, in fact, we are actively encouraged to cull the weak and the sick, and indeed to do so on a purely algorithmic basis.

It is, however, extremely important that we don’t forget which domain we are operating in. An error in a medical diagnosis, whether false-positive or false-negative, can have devastating consequences, as can any system which relies on (claims of) infallible technology, such as autonomous vehicles.

A human in the loop can help correct these imbalances, such as when a GP is able to, firstly, interpret the response of the algorithm analysing X-ray images, and secondly, break the news to a patient in a compassionate way. For this type of augmentation to work, though, the process must also be designed correctly. It is not sufficient to have a human sitting in the driver’s seat who is expected to take control at any time, with only seconds’ notice. Systems and processes must be designed in such a way as to take advantage of the capabilities of both participants – humans and machines.
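
To make that concrete, here is a minimal sketch of one way such a process might be designed, assuming a diagnostic model that reports a confidence score. The thresholds, field names, and routing labels are all invented for illustration; the point is that the machine screens the bulk of the images, while anything positive or uncertain is routed to a human.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    image_id: str
    tumour_probability: float  # output of the (hypothetical) X-ray model

def triage(finding: Finding,
           clear_negative: float = 0.05,
           clear_positive: float = 0.95) -> str:
    """Route each model output to the right participant."""
    if finding.tumour_probability < clear_negative:
        return "archive"            # machine-only: confident negative
    if finding.tumour_probability > clear_positive:
        return "doctor-review"      # urgent human review of a confident positive
    return "doctor-review-queue"    # uncertain: human judgement required

# Example: three images, only one of which skips the human entirely.
for f in [Finding("xr-001", 0.02), Finding("xr-002", 0.40), Finding("xr-003", 0.98)]:
    print(f.image_id, "->", triage(f))
```

The design decision lives in the thresholds: they determine how the work is divided between the two participants, not whether the human is present at all.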

Maybe this is something the machines can also help us with? The image above shows a component as designed by human engineers on the left, side-by-side with versions of the same component designed by neural networks.

What might our companies’ org charts look like if they were subjected to the same process? What about our economies and governments? It would be fascinating for people to use these new technologies to find out.


Photo from EVG Photos via Pexels


  1. One assumes that any would-be emulator of Martin Shkreli has at least learned not to disrespect the Wu-Tang Clan.

Privacy Versus AI

There is a widespread assumption in tech circles that privacy and (useful) AI are mutually exclusive. Apple is assumed to be behind Amazon and Google in this race because of its choice to do most data processing locally on the phone, instead of uploading users’ private data in bulk to the cloud.

A recent example of this attitude comes courtesy of The Register:

Predicting an eventual upturn in the sagging smartphone market, [Gartner] research director Ranjit Atwal told The Reg that while artificial intelligence has proven key to making phones more useful by removing friction from transactions, AI required more permissive use of data to deliver. An example he cited was Uber "knowing" from your calendar that you needed a lift from the airport.

I really, really resent this assumption that connecting these services requires each and every one of them to have access to everything about me. I might not want information about my upcoming flight shared with Uber – where it could be accessed improperly, letting someone know I am away from home and plan a burglary at my house. Instead, I want my phone to know that I have an upcoming flight, and offer to call me an Uber to the airport. At that point, of course, I am sharing information with Uber, but I am also getting value out of it. Otherwise, the only one getting value is Uber: they get to see how many people in a particular geographical area received a suggestion to take an Uber and declined it, so they can then target those people with special offers or other marketing to persuade them to use Uber next time they have to get to the airport.
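
As a sketch of what that could look like, consider the logic running entirely on the phone, with data crossing to the ride service only after the user says yes. The calendar entry, function names, and consent flow here are all hypothetical; the point is where the boundary sits.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical calendar entry; in this design it never leaves the phone.
upcoming = {"title": "Flight BA123",
            "start": datetime(2018, 7, 1, 9, 30),
            "location": "Airport"}

def suggest_ride(event: dict, now: datetime) -> Optional[str]:
    """Runs entirely on-device: decide whether a ride is worth suggesting."""
    if "flight" in event["title"].lower() and event["start"] - now < timedelta(hours=3):
        return f"Book a ride to {event['location']}?"
    return None

prompt = suggest_ride(upcoming, now=datetime(2018, 7, 1, 7, 0))
user_accepted = prompt is not None  # stand-in for a real consent dialog
if user_accepted:
    # Only now does anything cross the boundary to the ride-hailing
    # service - and only the destination, not the whole calendar.
    payload_for_service = {"destination": upcoming["location"]}
    print("sharing with service:", payload_for_service)
```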

I might be happy sharing a monthly aggregate of my trips with the government – so many by car, so many on foot, or by bicycle, public transport, or ride sharing service – which they could use for better planning. I would absolutely not be okay with sharing details of every trip in real time, or giving every busybody the right to query my location in real time.

So much of the debate is taken up with these unproductive binary choices that real progress is being prevented. I have written about this concept of granular privacy controls before:

The government sets up an IDDB which has all of everyone's information in it; so far, so icky. But here's the thing: set it up so that individuals can grant access to specific data in that DB - such as the address. Instead of telling various credit card companies, utilities, magazine companies, Amazon, and everyone else my new address, I just update it in the IDDB, and bam, those companies' tokens automatically update too - assuming I don't revoke access in the mean time.

This could also be useful for all sorts of other things, like marital status, insurance, healthcare, and so on. Segregated, granular access to the information is the name of the game. Instead of letting government agencies and private companies read all the data, each gets access only to those data they need to do their jobs.
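
A toy version of that idea, to show its shape: each company holds a token scoped to named fields, reads always return the current value, and revocation is a single deletion. Everything here - the field names, the token format, the API - is invented for illustration, not a real system.

```python
# Scoped access to a central identity record, in miniature.
iddb = {"alice": {"address": "1 Old Road", "marital_status": "married"}}
grants = {}  # token -> (person, set of fields the token may read)

def grant(person: str, fields: set, token: str) -> None:
    grants[token] = (person, fields)

def read(token: str, field: str):
    person, fields = grants[token]
    if field not in fields:
        raise PermissionError(f"token not scoped for {field!r}")
    return iddb[person][field]  # always the *current* value

grant("alice", {"address"}, token="amazon-123")
print(read("amazon-123", "address"))       # 1 Old Road
iddb["alice"]["address"] = "2 New Street"  # update once, everywhere
print(read("amazon-123", "address"))       # 2 New Street
del grants["amazon-123"]                   # revoke: that company's access ends
```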

Unfortunately, we are stuck in a stale all-or-nothing discussion: either you surround yourself with always-on internet-connected microphones and cameras, or you might as well retreat to a shack in the woods. There is a middle ground, and I wish more people (besides Apple) recognised that.


Photo by Kyle Glenn on Unsplash

War of the World Views

There has been this interesting shift going on in coverage of Silicon Valley companies, with increasing scepticism informing what had previously been reliable hero-worshipping. Case in point: this fascinating polemic by John Battelle about the oft-ignored human externalities of “disruption” (scare quotes definitely intended).

Battelle starts from a critique of Amazon Go, the new cashier-less stores Amazon is trialling. I think it’s safe to say that he’s not a fan:

My first take on Amazon Go is this: F*cking A, do we really want eggplants and cuts of meat reduced to parameterized choices spit onto algorithmized shelves? Ick. I like the human confidence I get when a butcher considers a particular rib eye, then explains the best way to cook that one cut of meat. Sure, technology could probably deliver me a defensibly "better" steak, perhaps even one tailored to my preferences as expressed through reams of data collected through means I’ll probably never understand.
But come on.
Sometimes you just want to look a guy in the eye and sense, at that moment, that THIS rib eye is perfect for ME, because I trust that butcher across the counter. We don’t need meat informed by data and butchered by bloodless algorithms. We want our steak with a side of humanity. We lose that, we lose our own narrative.

Battelle then goes on to extrapolate that "ick" out to a critique of the whole Silicon Valley model:

It’s this question that dogs me as I think about how Facebook comports itself: We know what’s best for you, better than you do in fact, so trust us, we’ll roll the code, you consume what we put in front of you.
But… all interactions of humanity should not be seen as a decision tree waiting to be modeled, as data sets that can be scanned for patterns to inform algorithms.

Cut Down The Decision Tree For Firewood

I do think there is some merit to this critique. Charlie Stross has previously characterised corporations as immortal hive organisms which pursue the three corporate objectives of growth, profitability, and pain avoidance:

We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals. They have enormous media reach, which they use to distract attention from threats to their own survival. They also have an enormous ability to support litigation against public participation, except in the very limited circumstances where such action is forbidden. Individual atomized humans are thus either co-opted by these entities (you can live very nicely as a CEO or a politician, as long as you don't bite the feeding hand) or steamrollered if they try to resist.
In short, we are living in the aftermath of an alien invasion.

These alien beings do not quite understand our human reactions and relations, and they try to pin them down and quantify them in their models. Searching for understanding through modelling is value-neutral in general, but problems start to appear when the model is taken as authoritative, with any real-life deviation from the model treated as an error to be rectified – by correcting the real-life discrepancy.

Fred Turner describes the echo chamber these corporations inhabit, and the circular reasoning it leads to, in this interview:

About ten years back, I spent a lot of time inside Google. What I saw there was an interesting loop. It started with, "Don't be evil." So then the question became, "Okay, what's good?" Well, information is good. Information empowers people. So providing information is good. Okay, great. Who provides information? Oh, right: Google provides information. So you end up in this loop where what's good for people is what's good for Google, and vice versa. And that is a challenging space to live in.

We all live in Google’s space, and it can indeed be challenging, especially if you disagree with Google about how information should be gathered and disseminated. We are all grist for its mighty Algorithm.

This presumption of infallibility on the part of the Algorithm, and of the world view that it implements, is dangerous, as I have written before. Machines simply do not see the world as we do. Building our entire financial and governance systems around them risks some very unwelcome consequences.

But What About The Supermarket?

Back to Battelle for a moment, zooming back in on Amazon and its supermarket efforts:

But as they pursue the crack cocaine of capitalism — unmitigated growth — are technology platforms pushing into markets where perhaps they simply don’t belong? When a tech startup called Bodega launched with a business plan nearly identical to Amazon’s, it was laughed off the pages of TechCrunch. Why do we accept the same idea from Amazon? Because Amazon can actually pull it off?

The simple answer is that Bodega falls into the uncanny valley of AI assistance, trying to mimic a human interaction instead of embracing its new medium. A smart vending machine that learns what to stock? That has value - for the sorts of products that people like to buy from vending machines.
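
What might “learning what to stock” amount to, mechanically? Possibly something as simple as the sketch below, assuming daily sales counts: an exponential moving average per product, with the machine’s limited slots allocated proportionally to estimated demand. All numbers and names are made up.

```python
def update_demand(demand: dict, sales_today: dict, alpha: float = 0.2) -> dict:
    """Blend today's sales into a running per-product demand estimate."""
    return {p: (1 - alpha) * demand.get(p, 0.0) + alpha * sales_today.get(p, 0)
            for p in set(demand) | set(sales_today)}

def restock_plan(demand: dict, slots: int) -> dict:
    """Allocate slots proportionally to estimated demand.
    Naive rounding; a real machine would reconcile the total."""
    total = sum(demand.values()) or 1.0
    return {p: round(slots * d / total) for p, d in demand.items()}

demand = {"water": 10.0, "crisps": 5.0, "gum": 1.0}
demand = update_demand(demand, {"water": 14, "crisps": 3, "gum": 0})
print(restock_plan(demand, slots=16))
```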

This is Amazon’s home turf, where the Everything Store got its start, shipping the ultimate undifferentiated good. A book is a book is a book; it doesn’t really get any less fresh, at least not once it has undergone its metamorphosis from newborn hardback to long-lived paperback.

In this context, nappies/diapers1 or bottled water are a perfect fit, and something that Amazon Prime has already been selling for a long time, albeit at a larger remove. Witness those ridiculous Dash buttons, those single-purpose IoT devices that you can place around your home so that when you see you’re low on laundry powder or toilet paper you can press the button and the product will appear miraculously on your next Amazon order.

Steaks or fresh vegetables are a different story entirely. I have yet to see the combination of sensors and algorithms that can figure out that a) these avocados are close to over-ripe, but b) that’s okay because I need them for guacamole tonight, or c) these bananas are too green to eat any time soon, and d) that’s exactly what I need because they’re for the kids’ after-school snack all next week.

People Curate, Algorithms Deliver

Why get rid of the produce guy in the first place?

Why indeed? But why make me deal with a guy for my bottled water?2

I already do cashier-less shopping; I use a hand-held scanner, scan products as I go, and swipe my credit card (or these days, my phone) on my way out. The interaction with the cashier was not the valuable one. The valuable interaction was with the people behind the various counters - fish, meat, deli - who really were, and still are, giving me personalised service. If I want even more personalised service, I go to the actual greengrocer, where the staff all know me and my kids, and will actively recommend produce for us and our tastes.

All of that personalisation would be overkill, though, if all I needed were to stock up on kitchen rolls, bottled milk, and breakfast cereal. These are routine, undifferentiated transactions, and the more human effort we can remove from those, the better. Interactions with humans are costly activities, in time (that I spend dealing with a person instead of just taking a product off the shelf) and in money (someone has to pay that person’s salary, healthcare, taxes, and so on). They should be reserved for situations where there is a proportionate payoff: the assurance that my avos will be ripe, my cut of beef will be right for the dish I am making, and my kids’ bananas will not have gone off by the time they are ready to eat them.

We are cyborgs, every day a little bit more: humans augmented by machine intelligence, with new abilities that we are only just learning to deal with. The idea of a cashier-less supermarket does not worry me that much. In fact, I suspect that if anything, by taking the friction out of shopping for undifferentiated goods, we will actually create more demand for, and appreciation of, the sort of "curated" (sorry) experience that only human experts can provide.


Photos by Julian Hanslmaier and Anurag Arora on Unsplash


  1. Delete as appropriate, depending on which side of the Atlantic you learned your English. 

  2. I like my water carbonated, so sue me. I recycle the plastic bottles, if that helps. Sometimes I even refill them from the municipal carbonated-water taps. No, I’m not even kidding; those are a thing around here (link in Italian). 

ML Joke

A machine learning algorithm walked into a bar.

The bartender asked, "What would you like to drink?"

The algorithm replied, "What’s everyone else having?"

Think Outside The Black Box

AI and machine-learning (ML) are the hot topic of the day. As is usually the case when something is on the way up to the Peak of Inflated Expectations, wild proclamations abound of how this technology is going to either doom or save us all. Going by past experience, the results will probably be more mundane – it will be useful in some situations, less so in others, and may be harmful where actively misused or negligently implemented. However, it can be hard to see that stable future from inside the whirlwind.

In that vein, I was reading an interesting article which gets a lot right, but falls down by conflating two issues which, while related, should remain distinct.

there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: The programmers that built it don’t know why AI makes one decision over another.

The black-box nature of AI comes with the territory. The whole point is that, instead of having to write extensive sets of deterministic rules (IF this THEN that ELSE whatever) to cover every possible contingency, you feed data to the system and get results back. Instead of building rules, you train the system by telling it which results are good and which are not, until it starts being able to identify good results on its own.

This is great, as developing those rules is time-consuming and not exactly riveting, and maintaining them over time is even worse. There is a downside, though: rules are easy to debug in a way that trained models are not. With rules, if you want to know why something happened, you can step through execution one instruction at a time, set breakpoints so that you can dig into what is going on at a precise moment, and generally have a good mechanical understanding of how the system works - or how it is failing.
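
To make the contrast concrete, here is a toy version of both approaches to the same problem - deciding whether a support message is urgent. The rule version can be stepped through in a debugger line by line; the trained version (a scikit-learn pipeline here, with made-up training data) can only be steered by changing its examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def urgent_by_rules(text: str) -> bool:
    # Deterministic: you can put a breakpoint on any of these lines.
    if "outage" in text.lower():
        return True
    if "down" in text.lower() and "production" in text.lower():
        return True
    return False

# Trained: no rules, just labelled examples of good and bad outcomes.
texts = ["total outage in region 1", "production db is down",
         "please update my billing address", "feature request: dark mode"]
labels = [1, 1, 0, 0]
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(urgent_by_rules("production is down"))      # traceable: True
print(model.predict(["the whole site is down"]))  # a prediction, not an explanation
```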

I spend a fair amount of my time at work dealing with prospective customers of our own machine-learning solution. There are two common objections I hear, which fall at opposite ends of the same spectrum, but both illustrate just how unfamiliar users find these new techniques.

Yes, there is an XKCD for every occasion

The first group of doubters ask to “see the machine learning”. Whatever results are presented are dismissed as “just statistics”. This is a common problem in AI research, where there is a general public perception of a lack of progress over the last fifty years. It is certainly true that some of the overly-optimistic predictions by the likes of Marvin Minsky have not worked out in practice, but there have been a number of successes over the years. The problem is that each time, the definition of AI has been updated to exclude the recent achievement.

Something of the calibre of Siri or Alexa would absolutely have been considered AI a few decades ago, but now their failure to understand exactly what is meant in every situation is taken to mean that they are not AI. Certainly Siri is not conscious in any way, just a smart collection of responses, but neither is it entirely deterministic in the way that something like Eliza is.1

This leads us to the second class of objection: “how can I debug it?” People want to be able to pause execution and inspect the state of variables, or to have some sort of log that explains exactly the decision tree that led to a certain outcome. Unfortunately, machine learning simply does not work that way. Its results are what they are, and the only way to influence them is to flag which are good and which are bad.

This is where the confusion I mentioned above comes in. When these techniques are applied in a purely technical domain - in my case, enterprise IT infrastructure - the results are fairly value-neutral. If a monitoring event gets mis-classified, the nature of Big Data (yay! even more buzzwords!) means that the overall issue it is a symptom of will probably still be caught, because enough other related events will be classified correctly. If, however, the object of mis-categorisation happens to be a human being, then even one failure could affect that person’s job prospects, romantic success, or even their criminal record.
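
A sketch of why the technical domain is more forgiving, with invented event shapes and thresholds: when related events are assessed together, a single mis-classification is simply outvoted by its neighbours.

```python
def issue_detected(related_events: list, quorum: float = 0.5) -> bool:
    """Flag an issue if enough correlated events were classed anomalous."""
    flagged = sum(1 for e in related_events if e["anomalous"])
    return flagged / len(related_events) >= quorum

# Five symptoms of the same outage; one is mis-classified as normal.
events = [{"host": f"web-{i}", "anomalous": i != 3} for i in range(5)]
print(issue_detected(events))  # True: the issue is still caught
```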

The black-box nature of AI and ML is why very great care must be taken to ensure that ML is a safe and useful technique in each case, especially in legal matters. The code of law is about as deterministic as it is possible to be; edge cases tend to get worked out in litigation, but the code itself generally aims for clarity. It is also mostly easy to debug: the points of law behind a judicial decision are documented and available for review.

None of these constraints apply to ML. If a faulty facial-recognition algorithm places you at the heart of a riot, it’s going to be tough to explain to your spouse or boss why you are being hauled off in handcuffs. Even if your name is ultimately cleared, there may still be long-term damage done, to your reputation or perhaps to your front door.

It’s important to note that, despite the potential for draconian consequences, the law is actually in some ways a best case. If an algorithm kicks you off Google and all its ancillary services (or Facebook or LinkedIn or whatever your business relies on), good luck getting that decision reviewed, certainly in any sort of timely manner.

The main fear that we should have when it comes to AI is not “what if it works and tries to enslave us all”, but “what if it doesn’t work but gets used anyway”.


Photo by Ricardo Gomez Angel via Unsplash


  1. Yes, it is noticeable that all of these personifications of AI just happen to be female. 

Algorithmic Reality

What if all of those earnest post-Matrix philosophical discussions were more on point than we knew?

One of the central conceits of the Matrix films is that the machines simulate a late-twentieth-century environment for their human “batteries”…

Oh. Spoiler warning, I guess? Do we still need that for a film that came out in 1999? I’m calling it - anything from last century is now fair game.

As we were: all the humans live in a simulated late-90s world, complete with all sorts of weird and wonderful mobile phones, before we decided collectively that all phones should look like smooth rectangles of black glass.

This of course had nothing whatsoever to do with the fact that the late 90s were contemporaneous with when the films were being made, and therefore cheap to film, and everything to do with the late 90s apparently being recognised as the pinnacle of human civilisation.

Here’s the thing: what if the Wachowskis were right?

The twenty-first century is no longer the domain of a purely human civilisation. We are now a hybrid, cyborg civilisation, where baseline humans are augmented by artificial systems. I don’t think we are heading towards a Matrix-style takeover by the machines, but this is going to be a significant change, and one that is hard to fully comprehend from the inside, while it is happening. Also, once the change has happened, what came before will be fundamentally incomprehensible to anyone who comes of age in that future world.

The world they will inhabit will have bots and algorithms the way we baseline humans today have commensal bacteria in our guts. Our guts have enormous structures of neurons, second only to the brain itself:

Why is our gut the only organ in our body that needs its own "brain"? Is it just to manage the process of digestion? Or could it be that one job of our second brain is to listen in on the trillions of microbes residing in the gut?

Algorithms will begin to take part in this process too, as more and more of our cognition occurs outside our own biological minds. These off-board exo-selves will feel as much a part of us as our “gut feel” does today, but they will fundamentally change what it means - and how it feels - to be human.

We can see the beginnings of this process already: we drive where the algorithms tell us to drive, we exercise the way the algorithms tell us to exercise, and we even date whom the algorithms tell us to date. We buy films, music, and books that the algorithms recommend, go on holiday where they suggest, and take jobs that they set us up with. In the future, what other decisions will we hand over to algorithms - unquestioning and unconcerned?

The algorithms and bots may not be out to enslave us, but they do see things dramatically differently than we do. For an example, take a look at this map:

This is a snapshot of a map of the continental US during the recent solar eclipse. The traffic algorithm has no idea what an eclipse is, but it does know that something weird is happening: people are stopping their cars in the middle of roads across a wide strip of the US.
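
The mechanism needs no concept of an eclipse, just a baseline. A minimal sketch, with made-up speeds: compare each road segment’s current average speed to its history and flag large deviations, cause unknown.

```python
import statistics

# Historical average speeds (mph) per road segment, and current readings.
history = {"I-75 north": [62, 65, 60, 63, 64],
           "US-26 west": [55, 57, 54, 56, 58]}
current = {"I-75 north": 61, "US-26 west": 4}  # everyone has pulled over

for segment, speeds in history.items():
    mean = statistics.mean(speeds)
    stdev = statistics.stdev(speeds)
    z = (current[segment] - mean) / stdev
    if abs(z) > 3:
        print(f"{segment}: anomalous (z={z:.1f}) - cause unknown")
```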

Famously, an algorithm figured out a teenage girl was pregnant before her dad did:

An angry man went into a Target outside of Minneapolis, demanding to talk to a manager:
"My daughter got this in the mail!" he said. "She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?"
The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again.
On the phone, though, the father was somewhat abashed. "I had a talk with my daughter," he said. "It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology."

And that’s not even the creepiest thing algorithms can do. They can identify your face, even when you hide it with a scarf to go to a protest (unless of course they can’t), and they can tell your sexual orientation from a photograph.

This is why DRM, privacy, and user control in general are such important topics: we are talking about our own future exoselves here. There are perfectly legitimate reasons not to want to broadcast your identity and all your particulars to all and sundry, especially in a world which is unfortunately still filled with prejudices against anyone who doesn’t fit in with the majority. And if something that is guiding your actions and your very thoughts belongs to a corporation that makes money from people who want to influence your actions and your thoughts, where does that leave you? About as enslaved as those human batteries in the Matrix, I’d say.

I’m a straight white middle-class dude, cis-het or whatever, and basically so square I’m practically cubic, so all of this is very far from affecting me personally. I’m at the very bottom of Niemöller’s poem - but I have friends and relatives who are much higher up, so I have both personal and selfish reasons for wanting to make sure this is done right. Personal, because don’t mess with my friends, and selfish, because as the Reverend Martin wrote, if we don’t fix it early, by the time it gets to causing problems for me, it will be way too late to do anything about it.

And of course there are all sorts of other aspects of this new future that we are building which all too few people are thinking about. Future historians will refer to these decades as "Digital Dark Ages": our history will be lost behind gratuitously incompatible file formats and DRM to which no living entity (human or corporate) has the keys any more. I was able to flip through my grandparents’ pictures and read a great-uncle’s book; as things stand, my grandchildren will not be able to have this experience.

The late twentieth century may indeed go down as the high-water mark of the purely human civilisation. The technologies that would make up the new world already existed - I played a full VR game, with goggles, 3D mouse, and a subwoofer in a backpack rig, in 1998 - but they were not yet fully joined up, and vanishingly few people appreciated what would happen once they were all connected.

I have no intention of standing athwart history, yelling Stop - but we do need to think carefully about what kind of future we are building, and where it will take us. If the first couple of decades of this scary new century have taught us anything, it’s that the defences of “oh, that’ll never work” and “nobody would ever do that” are no defence at all, in cryptology, civil liberties, or anywhere else - if, indeed, they ever were.