AI has come a long way. Just after the Second World War, there was a widespread preconception that developing artificial intelligence would lead to something like an 'Attack of the Zombie Robots', and that AI could only be a bad thing for humanity. Fortunately, we have moved well beyond that old sci-fi view of AI, and robots are now even used in surgery. Still, there is a lingering feeling that AI and robotics are threatening in some way, and one of those ways is 'bias'.
AI is very much part of the fourth industrial revolution, which also includes cyber-physical systems powered by technologies like machine learning, blockchain, genome editing and decentralised governance. The challenges we face in developing our use of AI are, for the most part, tricky ethical ones that need a sensitive approach.
So, what is the issue? As James Warner writes in his article on AI and bias,
“AI is aiding the decision making from all walks of life. But, the point is that the foundation of AI is laid by the way humans function.” And as we all know, humans have biases, unfortunately. Warner says, “It is the result of a mandatory limited view of the world that any single person or group can achieve. This social bias just as it reflects in humans in sensitive areas can also reflect in artificial intelligence.”
And this human bias, as it cascades down into AI, can be dangerous to you and me. For example, Warner writes: “the investigative news site Pro Publica found that a criminal justice algorithm used in Florida mislabelled African American defendants as high risk. This was at twice the rate it mislabelled white defendants.” Facial recognition has already been highlighted as an area with shocking ethnic bias, as well as recognition errors.
IBM suggests that researchers are quickly learning how to handle human bias as it infiltrates AI. It has said that researchers are learning to deal with this bias and coming up with new solutions that control it and may ultimately free AI systems from bias altogether.
The ways in which bias can creep in are numerous, but researchers are developing algorithms that can assist with detecting and mitigating hidden biases in the data. Furthermore, scientists at IBM have devised an independent bias rating system with the ability to determine the fairness of an AI system.
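To make the idea of "detecting hidden bias" concrete, here is a minimal sketch of one common fairness check: comparing a model's false positive rate across demographic groups, the kind of disparity the ProPublica investigation reported. This is illustrative only; it is not IBM's actual rating system, and the group labels and toy data are hypothetical.

```python
# Sketch of a simple fairness check: does the model flag one group's
# actual negatives as "high risk" more often than another group's?
# Hypothetical data and group labels, for illustration only.

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives (y_true == 0) that were predicted positive."""
    preds_on_negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    if not preds_on_negatives:
        return 0.0
    return sum(preds_on_negatives) / len(preds_on_negatives)

def fpr_by_group(y_true, y_pred, groups, group):
    """False positive rate restricted to members of one group."""
    pairs = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group]
    return false_positive_rate([t for t, _ in pairs], [p for _, p in pairs])

# Toy data: 1 = flagged "high risk", 0 = not flagged; all defendants
# here are actual negatives (did not reoffend).
y_true = [0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

fpr_a = fpr_by_group(y_true, y_pred, groups, "A")  # 0.5
fpr_b = fpr_by_group(y_true, y_pred, groups, "B")  # 0.25
print(fpr_a, fpr_b)  # group A is wrongly flagged at twice group B's rate
```

Real auditing toolkits compute many such metrics (disparate impact, equalised odds, and so on), but they all reduce to comparisons like this one across groups.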
One outcome of all this may be that we discover more about how human biases are formed and how we apply them throughout our lives. Some biases are obvious to us, but others tend to sneak around unnoticed until somebody else points them out. Perhaps we will find that AI can teach us how to handle a variety of biased opinions, and to be fairer ourselves.