A company called Babylon has been in the news, claiming that its chatbot can pass a standard medical exam with higher scores than most human candidates. Naturally, the medical profession is not overjoyed with this result:

No app or algorithm will be able to do what a GP does.

On the surface, this looks like just the latest front in the ongoing replacement of human professionals with automation. It has been pointed out that supporters of automating blue-collar jobs become hypocritically defensive when their own white-collar jobs look to be next in the firing line, and this reaction from the RCGP (the Royal College of General Practitioners) seems to be par for the course.

For what it’s worth, I don’t think that is what is going on here. As I have written before, automation takes over tasks, not jobs. That earlier wave of automation of blue-collar jobs was enabled by the fact that the jobs in question had already been refined down to single tasks on an assembly line. It was this subdivision which made it practical for machinery to take over those discrete tasks.

Most white-collar jobs are not so neatly subdivided; they consist of many different tasks. Automating away one task should, all things being equal, help people focus on other parts of the job. GPs – General Practitioners – by definition have jobs that encompass many tasks, many of which require significant human empathy. While I do therefore agree with the RCGP that there is no immediate danger to GPs’ jobs, that is not to say automation has no impact on jobs; I’d hate to be a travel agent right now, for instance.

Here is a different example, still in the medical field: a neural network is apparently able to identify early signs of tumours in X-ray images. So does that mean there is no role for doctors here either? Well, no; spotting the tumour is just one task for oncologists, and should this technology live up to its promise (as yet unproven), it would become one more tool that doctors could use, removing the bottleneck of reliance on a few overworked X-ray technicians.
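To make the shape of that tool concrete, here is a minimal sketch of how such a classifier might be invoked. To be clear, this is my own illustration: the model file, the image, and the single-probability output are all assumptions on my part, not details from the study.

```python
# Minimal sketch: scoring an X-ray with a pretrained binary classifier.
# "tumour_classifier.pt" and "scan_0042.png" are hypothetical; a real
# clinical tool would also need DICOM handling, calibration, and
# regulatory validation.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.jit.load("tumour_classifier.pt")  # assumed pretrained model
model.eval()

image = preprocess(Image.open("scan_0042.png")).unsqueeze(0)
with torch.no_grad():
    probability = torch.sigmoid(model(image)).item()  # single logit assumed

print(f"Estimated probability of an early-stage tumour: {probability:.2f}")
```

Note that the output is a probability, not a verdict – which is exactly why it slots in as one tool among many rather than a replacement for the doctor.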

Augmenting Human Capabilities

These situations, where some tasks are automated within the context of a wider-scoped job, can be defined as augmented intelligence: AI and machine learning enabling new capabilities for people, not replacing them.

Augmented intelligence is not a get-out-of-jail-free card, though. There are still impacts from automation, and not just on the X-ray technicians whose jobs might be endangered. Azeem Azhar writes in his essential Exponential View newsletter about a different sort of impact from automation, citing that RCGP piece I linked to earlier:

Babylon’s services were more likely to appeal to the young, healthy, educated and technology-savvy, allowing Babylon to cherry pick low-cost patients, leaving the traditional GPs with more complex, older patients. This is a real concern, if only because older patients often have multiple co-morbidities and are vulnerable in many ways other than their physical health. The nature of health funding in the UK depends, in some ways, on pooling patients of different risks. In other words, that unequal access to technology ends up benefiting the young (and generally more healthy) at the cost of those who aren’t well served by the technology in its present state.

Exponential View has repeatedly flagged the risks of unequal access to technology because these technologies are, whatever you think of them, literally the interface to the resources we need to live in the societies of today and tomorrow.

My rose-tinted view of the future is that making one type of patient cheaper to care for frees up more resources to devote to caring for other patients. On the other hand, I am sure some Pharma Bro 2.0 is even now writing up a business plan for something even worse than Theranos, powered by algorithms and possibly – why not? – some blockchain for good measure.1

Ethical concerns are just some of the many reasons I don’t work in healthcare. As a general rule, IT comes with far fewer moral dilemmas. In IT, in fact, we are actively encouraged to cull the weak and the sick, and indeed to do so on a purely algorithmic basis.
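Lest that sound abstract, here is a tongue-in-cheek sketch of the kind of loop every fleet orchestrator runs. The names and thresholds are invented for illustration; this is not any real orchestrator’s API.

```python
# Toy reconciliation loop: probe each instance and "cull" any that fail
# three health checks in a row. All names here are illustrative.
import random
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    consecutive_failures: int = 0

def probe(instance: Instance) -> bool:
    # Stand-in for a real check, e.g. an HTTP GET against /healthz.
    return random.random() > 0.2  # simulate a mostly-healthy fleet

def reconcile(fleet: list[Instance], failure_threshold: int = 3) -> None:
    for instance in fleet:
        if probe(instance):
            instance.consecutive_failures = 0
        else:
            instance.consecutive_failures += 1
        if instance.consecutive_failures >= failure_threshold:
            print(f"Culling unhealthy instance {instance.name}")
            instance.consecutive_failures = 0  # pretend it was re-provisioned

fleet = [Instance(f"web-{i}") for i in range(5)]
for _ in range(10):  # a few reconciliation rounds
    reconcile(fleet)
```

This is, in caricature, what a Kubernetes liveness probe does when it restarts a failing container – and nobody convenes an ethics board over it.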

It is, however, extremely important that we don’t forget which domain we are operating in. An error in a medical diagnosis, whether a false positive or a false negative, can have devastating consequences, as can any system that relies on (claims of) infallible technology, such as autonomous vehicles.

A human in the loop can help correct these imbalances: a GP can, firstly, interpret the response of the algorithm analysing X-ray images and, secondly, break the news to a patient in a compassionate way. For this type of augmentation to work, though, the process must also be designed correctly. It is not sufficient to have a human sitting in the driver’s seat, expected to take control at any time with only seconds’ notice. Systems and processes must be designed to take advantage of the capabilities of both participants – humans and machines.
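What might such a design look like? Here is a minimal sketch, with thresholds I have invented for illustration: the system auto-clears only cases where the model is confidently negative, and routes everything else – including confident positives – to a human who interprets the result and delivers the news.

```python
# Sketch of designed-in human review: auto-handle only the confident
# extremes, and hand the ambiguous middle to a clinician. The thresholds
# are invented; in practice they would come from calibration studies.
from typing import NamedTuple

class Finding(NamedTuple):
    case_id: str
    tumour_probability: float

def route(finding: Finding,
          clear_below: float = 0.05,
          review_above: float = 0.95) -> str:
    if finding.tumour_probability < clear_below:
        return "auto-clear"        # confidently negative
    if finding.tumour_probability > review_above:
        return "priority-review"   # confidently positive; a GP confirms
                                   # and breaks the news compassionately
    return "human-review"          # the ambiguous middle goes to a human

print(route(Finding("case-001", 0.42)))  # -> human-review
```

The point of the design is that the human gets the cases where human judgement adds the most value, with time to exercise it, rather than being asked to override the machine at a moment’s notice.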

Maybe this is something the machines can also help us with? The image above shows a component as designed by human engineers on the left, side-by-side with versions of the same component designed by neural networks.

What might our companies’ org charts look like if they were subjected to the same process? What about our economies and governments? It would be fascinating for people to use these new technologies to find out.


Photo from EVG Photos via Pexels


  1. One assumes that any would-be emulator of Martin Shkreli has at least learned not to disrespect the Wu-Tang Clan.