Coded Bias: a film exploring AI

Have you seen Coded Bias, a new film on Netflix? If you are at all interested in artificial intelligence (AI), I recommend you find a way to watch it, even if you don’t have a Netflix subscription. It takes a deep dive into the state of artificial intelligence, and as Aparna Dhinakaran writes at Forbes, “the issues it confronts are uncomfortably relevant.” She is referring to facial recognition programmes that show a severe algorithmic bias against women and people of colour.

MIT Media Lab researcher Joy Buolamwini says in the film, “The systems weren’t as familiar with faces like mine (as those of mostly men and lighter-skinned people),” and her research showed that Microsoft, IBM, and Amazon’s facial recognition services all have common inaccuracies.

The Netflix film examines how these inaccuracies can spread discrimination across all spheres of daily life. It also touches on other areas of AI use where bias occurs, such as “who gets hired, what kind of medical treatment someone receives, who goes to college, the financial credit we get, and the length of a prison term someone serves.” This is not what AI should be doing; it should enhance opportunities and advancement for all people, particularly those most vulnerable to discrimination.

Coded Bias tries to show how these issues with AI can be removed, arguing that the answer lies in examining how AI leverages data and how it is developed, deployed, and used. The film’s director, Shalini Kantayya, said, “This is a moment where we are all in a lot of pain. This moment is asking us to drop into a deeper place in our humanity to lead.” She also shines a light on why better AI solutions should focus on protecting the communities that stand to be harmed the most.

One way forward is to look at innovations in AI and machine learning (ML). These will change how AI models are observed and measured, to ensure they are fair, ethical, and free of bias. There is also a need to deliver AI systems with better tools for accountability.

We live in a time when socio-economic inequities based on ethnicity are in the spotlight; therefore, we need AI that makes it easier for marginalised populations to benefit from improved technology, rather than technology that pushes them further into the margins. When that happens, we will all experience a better world.

Is AI more dangerous than nukes?

Elon Musk says AI (artificial intelligence) is far more dangerous than nuclear weapons, but then he is known for making controversial statements that take us all by surprise. In what ways might AI be dangerous, and what should we be aware of?

Glenn Gow, an expert consultant on AI strategy for businesses, says that firms can use AI to reduce risk to “business partners, employees and customers” before regulatory bodies step in to force them to take specific steps. As he says, most firms “have regulations to govern how we manage risks to ensure safety,” yet while there are regulations about the use of AI in relation to privacy, there are very few concerning its safety. There is therefore a need to find ways to manage the risk presented by AI systems.

For example, Singapore has created a Model AI Governance Framework that is a good starting place for understanding the risks in relation to two key issues:

1. The level of human involvement in AI;
2. The potential harm caused by AI.

The level of human involvement

Where are the dangers here? First, we have to remember that sometimes AI works alone, and at other times it requires human input. When there is no human involvement, the AI runs on its own and a human cannot override it. When a human is involved, the AI only offers suggestions, as in medical diagnostics and treatment. The third type of interaction is AI that is designed to “let the human intercede if the human disagrees or determines the AI has failed.” AI-based traffic prediction systems are an example of this.

In the case of the third example, which Gow calls ‘human-over-the-loop’, the probability of harm is low, but the severity of harm is high.

In a ‘human-in-the-loop’ situation, the risk of both probability and severity of harm is high. Gow gives the following example: “Your corporate development team uses AI to identify potential acquisition targets for the company. Also, they use AI to conduct financial due diligence on the various targets. Both the probability and severity of harm are high in this decision.”

When humans are not involved at all, the probability of harm is high, but the severity of harm is low.
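The three oversight levels and their risk profiles, as summarised above, can be sketched as a simple lookup table. This is an illustrative encoding of the article’s summary, not an official tool from Gow or the Singapore framework, and the use-case names are invented examples:

```python
# Illustrative sketch only: the oversight-level risk matrix described above,
# encoded as a lookup. The (probability, severity) pairings follow the text.
OVERSIGHT_RISK = {
    # AI runs on its own; a human cannot override it
    "human-out-of-the-loop": {"probability": "high", "severity": "low"},
    # AI only offers suggestions; a human makes the final call
    "human-in-the-loop": {"probability": "high", "severity": "high"},
    # AI acts, but a human can intercede if it fails
    "human-over-the-loop": {"probability": "low", "severity": "high"},
}

def assess(use_case: str, oversight: str) -> str:
    """Return a one-line risk summary for an AI use case."""
    risk = OVERSIGHT_RISK[oversight]
    return (f"{use_case} ({oversight}): "
            f"probability of harm {risk['probability']}, "
            f"severity of harm {risk['severity']}")

print(assess("traffic prediction", "human-over-the-loop"))
print(assess("M&A due diligence", "human-in-the-loop"))
```

A board could extend a table like this with concrete mitigations per cell, which is essentially what the framework asks management to do.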

As Gow suggests, the Model AI Governance Framework gives boards and management a starting place to manage risk in AI projects. Whilst AI could be dangerous in several scenarios, by managing when and how humans remain in control, we can greatly reduce a company’s risk factors and ensure AI is safer than nukes.

Can XAI in banking help small businesses?

Small and medium-sized enterprises (SMEs) are no longer as well served by traditional banks, yet this is one niche sector where banks have an opportunity to shine.

To date, banks have provided SMEs with a mix of retail and corporate services; however, as a Finextra blog explains, “this no longer fits the evolving needs of small businesses.”

Providers serving this business sector need to think about more holistic solutions. These may include greater collaboration with a range of digital service providers if banks are to retain the confidence of SME clients by addressing their pressing needs.

Temenos, a firm specialising in enterprise software for banks and financial services, has been reimagining how banks could better serve SMEs using the available technology. For example, “banks can implement innovative design-centric and data-driven products, as well as services that can transform the SME customer experience.”

The customer’s digital experience is now critical, as is the use of data, because these will be the driving force in future SME banking services. And this is where artificial intelligence (AI) can be of enormous help. It can enable banks to leverage data from multiple sources “to make faster, and more accurate decisions and provide individualised, frictionless customer experiences.”

Utilising XAI (explainable AI) would be another major step, primarily because “one of the key issues for banks using AI applications [is] that there is little if any discernible insight into how they reach their decisions.” Transparency is required for customer confidence, especially concerning lending.

If banks looked at more than an SME’s credit score and took a more holistic approach by viewing a range of attributes, they would be able to make more “nuanced and fully explainable decisions that lead to 20% more positive credit decisions and fewer false positives.” Furthermore, this can be done in real time using APIs to connect to third-party data sources.
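To make the idea of an explainable decision concrete, here is a minimal sketch of a transparent, attribute-weighted credit decision. The attributes, weights, and approval threshold are invented for illustration; no real bank’s model works from four hand-picked numbers like this, but the principle of surfacing each attribute’s contribution is the point:

```python
# Hypothetical sketch of an explainable credit decision: instead of a single
# opaque score, each attribute contributes a weighted amount that can be shown
# to the customer. Attribute names, weights, and the threshold are assumptions.
WEIGHTS = {
    "credit_score":   0.4,
    "cash_flow":      0.3,
    "years_trading":  0.2,
    "sector_outlook": 0.1,
}
APPROVAL_THRESHOLD = 0.6

def decide(attributes: dict[str, float]) -> tuple[bool, dict[str, float]]:
    """Score an SME on normalised 0-1 attributes; return the decision together
    with each attribute's contribution, so the outcome is explainable."""
    contributions = {k: WEIGHTS[k] * attributes[k] for k in WEIGHTS}
    approved = sum(contributions.values()) >= APPROVAL_THRESHOLD
    return approved, contributions

approved, why = decide({"credit_score": 0.5, "cash_flow": 0.9,
                        "years_trading": 0.8, "sector_outlook": 0.7})
# The bank can now show *why*: a middling credit score offset by strong cash flow.
for attribute, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{attribute}: +{contribution:.2f}")
print("approved" if approved else "declined")
```

A decision made this way can be walked through with the customer line by line, which is exactly the transparency the XAI argument calls for.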

Banks using XAI can show how a decision was made and then suggest alternative products or provide advice about how to improve the chances of getting a loan. With the Covid-19 pandemic having negatively affected so many small businesses, the need for SME loans has increased. Banks need to support this with more digitisation and smarter decision-making, and using XAI seems like a good place to start.

Mastercard introduces AI-powered cybersecurity

Cybersecurity remains one of the hottest topics around. While browsing today’s media, I noted one article claiming that cyber attacks rose by 250% during the pandemic. Apparently, it was the perfect time for scammers and hackers to wield their weapons.

This may be one of the things that prompted Mastercard to launch Cyber Secure, “a first-of-its-kind, AI-powered suite of tools that allows banks to assess cyber risk across their ecosystem and prevent potential breaches.”


It all comes down to the fact that the digital economy is expanding rapidly and becoming more complex. Alongside this positive news comes the less appealing revelation that such growth creates a vulnerability that some are delighted to take advantage of. For example, it is estimated that one business will fall victim to a ransomware attack every 11 seconds by next year.


Ajay Bhalla, president of Cyber & Intelligence at Mastercard, said:

“The world today faces a $5.2 trillion cyber breach problem. This is one of the biggest threats to consumer trust. At Mastercard, we aim to stay ahead of fraudsters and to continually evolve and enhance our protection of cyber environments for our bank and merchant customers. With Cyber Secure, we have a suite of AI-powered cyber capabilities that allows us to do just that, ensuring trust across every experience, for businesses and consumers.” 


Cyber Secure will enable banks “to continuously monitor and track their cyber posture,” writes Polly Harrison. It will allow banks to be more proactive in managing and preventing data compromise, as well as protecting the integrity of the payment ecosystem and consumer data. It should also, of course, prevent financial loss caused by attacks.

Mastercard has based its new product on the AI capabilities of RiskRecon, which it purchased in 2020. It uses advanced AI for risk assessment, evaluating multiple public and proprietary data sources and checking them against 40 security and infrastructure criteria.
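The shape of a criteria-based assessment like that can be sketched very simply: observations gathered from many data sources are checked against a fixed list of criteria and aggregated into a score. The criteria names and findings below are invented for illustration; RiskRecon’s actual 40 criteria and methodology are proprietary:

```python
# Minimal sketch of a criteria-based cyber risk assessment. Criteria names and
# pass/fail findings are invented; the real product checks 40 criteria against
# data from multiple public and proprietary sources.
CRITERIA = ["tls_configuration", "dns_security", "patching_cadence",
            "open_ports", "leaked_credentials"]

def risk_score(findings: dict[str, bool]) -> float:
    """Fraction of criteria failed across the observed findings (0.0 = clean)."""
    failed = sum(1 for c in CRITERIA if not findings.get(c, True))
    return failed / len(CRITERIA)

observed = {"tls_configuration": True, "dns_security": False,
            "patching_cadence": True, "open_ports": False,
            "leaked_credentials": True}
print(f"risk score: {risk_score(observed):.2f}")  # 2 of 5 criteria failed
```

In practice, a bank would track a score like this continuously across every partner in its ecosystem, which is the “cyber posture” monitoring Harrison describes.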

Harrison writes, “In 2019, Mastercard saved stakeholders $20bn of fraud through its AI-enabled cyber systems,” so it is to be hoped that Cyber Secure prevents even more theft in 2021 and beyond.