Which Algorithms Will Watch The Algorithms?
This week’s AI-powered scandal is the news that Amazon scrapped a "secret" AI recruiting tool that showed bias against women.
Amazon.com Inc’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.
The AI, of course, has no opinion about women one way or the other. Amazon HR's recruitment tool was not "biased" in the sense that a human recruiter might be; it was simply unearthing existing bias:
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
If you train your neural networks on biased data, you are just re-encoding and reinforcing that bias! This was not news even in 2015, when Amazon started this experiment. But it illustrates a general problem with AI: its users’ naive tendency to take the algorithms’ output at face value. As with any automated system, human experts will still be needed to evaluate its output intelligently.
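To make the point concrete, here is a toy sketch of the mechanism. It uses entirely synthetic data and an off-the-shelf scikit-learn classifier, not Amazon's actual system: a model is trained on historical hiring decisions in which resumes containing a gender-correlated phrase were hired less often, even though the phrase says nothing about skill.

```python
# Toy illustration: a classifier trained on historically skewed hiring
# outcomes re-encodes that skew. Synthetic data only; not Amazon's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two features per resume: a genuine skill score, and a flag for a
# gender-correlated phrase (e.g. "women's chess club") unrelated to skill.
skill = rng.normal(size=n)
womens_keyword = rng.random(n) < 0.15  # appears on ~15% of resumes

# Historical hiring decisions: driven by skill, but resumes carrying the
# gender-correlated phrase were hired far less often. This is the bias
# baked into the training data.
hired = (skill + rng.normal(scale=0.5, size=n) - 1.5 * womens_keyword) > 0.5

X = np.column_stack([skill, womens_keyword.astype(float)])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:          %+.2f" % model.coef_[0][0])
print("coefficient on gender keyword: %+.2f" % model.coef_[0][1])
# The second coefficient comes out strongly negative: the model has learned
# to penalise the phrase itself, faithfully reproducing the historical bias.
```

Note that simply deleting the explicit keyword column does not fix the problem: any other feature correlated with gender will leak the same signal back into the model's predictions.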
Train The Guardians
The need for secondary evaluation is only going to increase as these systems proliferate. For instance, Canada now plans to use AI to decide immigration cases. If your request for a visa is rejected by an expert system, what recourse do you have? What if the basis of rejection is simply that claims like yours were usually denied in the past?
These concerns will only become more pressing as AI tools go mainstream.
"The computer says no" has always been the jobsworth’s go-to excuse. But who programmed the computer to say "no"? The fact that the computers are now programming themselves to say "no" does not absolve organisations of responsibility. The neural networks are simply reflecting the inputs they are given. It is on us to give them good inputs.
After all, these algorithms are our own children, and we are the ones educating them.