Which Algorithms Will Watch The Algorithms?

This week’s AI-powered scandal is the news that Amazon scrapped a "secret" AI recruiting tool that showed bias against women.

Amazon.com Inc’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The AI, of course, has no opinion about women one way or the other. Amazon HR's recruitment tool was not “biased” in the sense that a human recruiter might be; it was simply unearthing existing bias:

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

If you train your neural networks with biased data, you are just re-encoding and reinforcing the bias! This was not news even in 2015 when Amazon started this experiment. However, it illustrates a general problem with AI, namely its users’ naive tendency to take the algorithms’ output at face value. As with any automated system, human experts will be needed to make intelligent evaluations of the output of the system.
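To see how the re-encoding happens, here is a deliberately tiny sketch, nothing like Amazon's actual system: a naive log-odds word model trained on invented historical hiring decisions. Because the word "women's" (as in "women's chess club") appears mostly in past rejections, the model learns to penalise it, even though gender was never an explicit feature.

```python
from collections import Counter
import math

# Invented toy data: past hiring outcomes skewed by who applied and
# by biased past human decisions. Not real Amazon data.
hired = [
    "python java leadership chess club captain",
    "java c++ systems leadership",
    "python distributed systems",
]
rejected = [
    "python women's chess club captain",
    "java women's coding society leadership",
]

def word_weights(pos_docs, neg_docs, smoothing=1.0):
    """Laplace-smoothed log-odds of each word in hired vs rejected resumes."""
    pos = Counter(w for d in pos_docs for w in d.split())
    neg = Counter(w for d in neg_docs for w in d.split())
    vocab = set(pos) | set(neg)
    n_pos = sum(pos.values()) + smoothing * len(vocab)
    n_neg = sum(neg.values()) + smoothing * len(vocab)
    return {w: math.log((pos[w] + smoothing) / n_pos)
             - math.log((neg[w] + smoothing) / n_neg)
            for w in vocab}

weights = word_weights(hired, rejected)

def score(resume):
    return sum(weights.get(w, 0.0) for w in resume.split())

# Two otherwise-identical resumes: the one mentioning "women's"
# scores lower. The model has re-encoded the historical bias.
print(score("python chess club captain"))
print(score("python women's chess club captain"))
```

No weight here refers to gender directly; the bias arrives entirely through correlated vocabulary in the training data, which is exactly why "just remove the gender field" does not fix such systems.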

Train The Guardians

The need for secondary evaluation is only going to increase as these systems proliferate. For instance, Canada now plans to use AI to decide immigration cases. If your request for a visa is rejected by an expert system, what recourse do you have? What if the basis of rejection is simply that claims like yours were usually denied in the past?

These concerns will only become more pressing as AI tools go mainstream.

"The computer says no" has always been the jobsworth’s go-to excuse. But who programmed the computer to say "no"? The fact that the computers are now programming themselves to say "no" does not absolve organisations of responsibility. The neural networks are simply reflecting the inputs they are given. It is on us to give them good inputs.

After all, we are educating our own children.

Privacy Versus AI

There is a widespread assumption in tech circles that privacy and (useful) AI are mutually exclusive. Apple is assumed to be behind Amazon and Google in this race because of its choice to do most data processing locally on the phone, instead of uploading users’ private data in bulk to the cloud.

A recent example of this attitude comes courtesy of The Register:

Predicting an eventual upturn in the sagging smartphone market, [Gartner] research director Ranjit Atwal told The Reg that while artificial intelligence has proven key to making phones more useful by removing friction from transactions, AI required more permissive use of data to deliver. An example he cited was Uber "knowing" from your calendar that you needed a lift from the airport.

I really, really resent this assumption that connecting these services requires each and every one of them to have access to everything about me. I might not want information about my upcoming flight shared with Uber – where it can be accessed improperly, leading to someone knowing I am away from home and planning a burglary at my house. Instead, I want my phone to know that I have an upcoming flight, and offer to call me an Uber to the airport. At that point, of course I am sharing information with Uber, but I am also getting value out of it. Otherwise, the only one getting value is Uber. They get to see how many people in a particular geographical area received a suggestion to take an Uber and declined it, so they can then target those people with special offers or other marketing to persuade them to use Uber next time they have to get to the airport.
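The flow described above can be sketched as code. This is a toy illustration of the author's proposed design, not any real API; all names (`Flight`, `suggest_ride`, `request_ride`) are invented. The key property is that the calendar never leaves the device: the ride service is contacted only after the user accepts, and then only with the minimum it needs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Flight:
    departs: datetime
    airport: str

def suggest_ride(flight, now, lead_time=timedelta(hours=3)):
    """Runs entirely on the phone; no data leaves the device."""
    if now >= flight.departs - lead_time:
        return f"Book a ride to {flight.airport}?"
    return None

def on_user_accepts(flight, request_ride):
    # Only NOW does anything reach the ride service, and only the
    # minimum needed: destination and a pickup deadline -- not the
    # calendar, the booking reference, or the rest of the itinerary.
    return request_ride(destination=flight.airport,
                        arrive_by=flight.departs - timedelta(hours=2))
```

Until the user taps "yes", the ride service learns nothing at all, not even that a suggestion was declined.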

I might be happy sharing a monthly aggregate of my trips with the government – so many by car, so many on foot, or by bicycle, public transport, or ride sharing service – which they could use for better planning. I would absolutely not be okay with sharing details of every trip in real time, or giving every busybody the right to query my location in real time.

So much of the debate is taken up with this unproductive all-or-nothing framing that real progress stalls. I have written about this concept of granular privacy controls before:

The government sets up an IDDB which has all of everyone's information in it; so far, so icky. But here's the thing: set it up so that individuals can grant access to specific data in that DB - such as the address. Instead of telling various credit card companies, utilities, magazine companies, Amazon, and everyone else my new address, I just update it in the IDDB, and bam, those companies' tokens automatically update too - assuming I don't revoke access in the mean time.

This could also be useful for all sorts of other things, like marital status, insurance, healthcare, and so on. Segregated, granular access to the information is the name of the game. Instead of letting government agencies and private companies read all the data, each consumer gets access only to the data it needs to do its job.
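A minimal sketch of that token scheme, with all names invented: each company holds a token scoped to specific fields, the citizen can revoke it at any time, and an update to the underlying record is immediately visible through every still-valid token.

```python
import secrets

class IdentityDB:
    """Toy model of the granular-access IDDB idea from the quoted post."""

    def __init__(self):
        self._records = {}   # person_id -> {field: value}
        self._grants = {}    # token -> (person_id, allowed fields)

    def set_field(self, person_id, field, value):
        self._records.setdefault(person_id, {})[field] = value

    def grant(self, person_id, fields):
        """Citizen issues a token limited to the named fields."""
        token = secrets.token_hex(16)
        self._grants[token] = (person_id, frozenset(fields))
        return token

    def revoke(self, token):
        self._grants.pop(token, None)

    def read(self, token, field):
        """A company reads a field -- only if its token covers it."""
        if token not in self._grants:
            raise PermissionError("token revoked or unknown")
        person_id, allowed = self._grants[token]
        if field not in allowed:
            raise PermissionError(f"token not scoped for {field!r}")
        return self._records[person_id][field]

db = IdentityDB()
db.set_field("alice", "address", "1 Old Street")
db.set_field("alice", "marital_status", "single")

# The utility company is granted the address and nothing else.
token = db.grant("alice", ["address"])
print(db.read(token, "address"))

# Alice moves house once, in the IDDB; the token sees the update.
db.set_field("alice", "address", "2 New Road")
print(db.read(token, "address"))

# And when she switches suppliers, she simply revokes access.
db.revoke(token)
```

A production system would obviously need authentication, auditing, and cryptographic protection of the tokens, but the shape of the access model is the point: scoped, revocable, and updated at a single source.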

Unfortunately, we are stuck in a stale all-or-nothing discussion: either you surround yourself with always-on, internet-connected microphones and cameras, or you might as well retreat to a shack in the woods. There is a middle ground, and I wish more people (besides Apple) recognised that.


Photo by Kyle Glenn on Unsplash