The financial sector needs AI

How can artificial intelligence (AI) support banking services, and do banks view it favourably? These are two of the questions that Insider Intelligence’s AI in Banking report sets out to answer. On the second question, it appears that 80% of banks are highly aware of the potential benefits presented by AI.

The scope of possible uses for AI and machine learning in finance stretches across business functions and sectors. At present, the technology is widely used to upgrade customer services, to segment clients in new ways so that more bespoke products can be offered, and to support fraud prevention and loan assessment, and there are many more opportunities to expand its use.

In customer services, banks now use AI-based chatbots to provide customer services and support on a 24/7 basis. The bots, as many of you have probably experienced, have been ‘taught’ to answer basic customer questions via an instant messenger interface. They are able to provide fast and relevant information and support to each user and drive tailored interactions, and as the chatbots grow more sophisticated, customer satisfaction is expected to rise in tandem.
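At its simplest, the ‘teaching’ behind such a bot amounts to mapping customer phrases onto intents with canned replies. The sketch below shows that idea in miniature; the intents, keywords, and replies are invented for illustration and are not any bank’s real bot.

```python
# Minimal keyword-based intent matching, the simplest form of the
# question-answering a banking chatbot performs. All intents and
# replies here are hypothetical.

INTENTS = {
    "balance": ("balance", "how much", "funds"),
    "card_lost": ("lost", "stolen", "missing card"),
    "opening_hours": ("open", "hours", "closing"),
}

REPLIES = {
    "balance": "You can check your balance in the app under Accounts.",
    "card_lost": "I've frozen your card. A replacement is on its way.",
    "opening_hours": "Our branches are open 9am-5pm, Monday to Friday.",
}

def answer(message: str) -> str:
    """Return the reply for the first intent whose keywords match."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return REPLIES[intent]
    return "Sorry, I didn't understand. Let me connect you to an agent."
```

Real chatbots replace the keyword lookup with trained language models, but the shape of the interaction, message in, intent recognised, tailored reply out, is the same.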

Client segmentation is an interesting one. It divides bank customers into groups based on common characteristics, such as demographics or behaviours. Here, AI can seek out patterns within client data quickly and on a huge scale, creating outputs that would otherwise be unachievable through manual means, or at least would take an exceedingly long time to process if humans were performing the task.
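The pattern-seeking described above can be illustrated with a toy clustering example. This sketch groups invented customers by two hypothetical features (age and average monthly spend) using a bare-bones k-means; real systems would use many more features and a proper library, but the idea of machines finding groups humans would take far longer to find is the same.

```python
# Toy k-means clustering over invented 2-D customer data
# (age, average monthly spend). Illustration only.
import math

def kmeans(points, k, iters=50):
    """Cluster 2-D points into k groups; returns (centroids, labels)."""
    centroids = points[:k]                      # naive initialisation
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        # move each centroid to the mean of its assigned points
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, labels

customers = [(22, 300), (25, 350), (24, 280),     # young, low spenders
             (58, 2200), (61, 2500), (55, 2100)]  # older, high spenders
centroids, labels = kmeans(customers, k=2)
```

Here the algorithm separates the two customer groups on its own; at bank scale, with millions of clients and dozens of features, that is exactly the output that would be unachievable by hand.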

In loan assessment and fraud prevention, AI applies the same pattern recognition to search out irregular transactions that would otherwise go unnoticed by humans but may indicate the presence of fraud. In this respect, AI is a great tool for banks to assess loan risks, detect and prevent payments fraud and improve processes for anti-money laundering. For example, Mastercard’s Identity Check solution, developed in Dublin, uses machine learning capabilities to verify more than 150 variables as part of the transaction process to help reduce fraud, thus giving merchants more confidence.
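A stripped-down version of that irregular-transaction spotting is a statistical outlier test against a customer’s own history. The sketch below flags amounts unusually far from the mean; production systems like the Mastercard example weigh far more variables with machine learning, and the transaction amounts here are invented.

```python
# Minimal anomaly flagging for transactions: a z-score test
# against the customer's history. Amounts are invented.
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transactions unusually far from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

history = [42.0, 55.5, 38.0, 61.0, 47.5, 53.0, 4999.0]  # one outlier
suspicious = flag_anomalies(history, threshold=2.0)
```

Even this crude rule isolates the odd transaction; the gain from machine learning is doing the same across 150-plus variables at once, where no single-variable rule would suffice.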

These examples are just the beginning of how AI can benefit finance, although as use increases, it is imperative that controls on how AI is set up and applied are put in place to ensure systems are robust, fair and safe. For example, an AI tool that has not received proper guidance and training can output responses that lead to unwittingly biased decisions, with potentially damaging consequences.

Responsible governance of solutions plays an essential role in the successful deployment of AI. Only by keeping models tight to their tasks and free of bias and error can banks be sure of the best results for all.

Will a boom follow Covid-19?

Many people must be wondering what the rest of this decade might look like after such a disastrous start to the 2020s. Can we look back at history and see a trend? For example, the 1920s that followed World War I and the Spanish flu epidemic were a Golden Age when economic growth surged, society relaxed a lot of its restrictions and women cut their hair short for the first time, all of it captured and portrayed in F. Scott Fitzgerald’s ‘The Great Gatsby’. Now there’s a book that has never gone out of fashion.

This week I read an article by Rich Karlgaard in Forbes that is written from the optimist’s viewpoint. He believes there are four possible reasons that the 2020s might be another 1920s, although he makes his case with caution.

Digital tech will accelerate

There is no doubt that the pandemic made all things digital vastly more important. Otherwise, we wouldn’t have seen so much change in such a short time. Karlgaard first points to the fact that back in 2017, Diane Greene, then the Google Cloud CEO, told a Forbes audience that the rate of digital technology progress was accelerating rapidly. But that does not necessarily bring productivity along with it, because the business model needs to change for that to happen. Then along came Covid-19, which forced a rapid business change. Microsoft CEO Satya Nadella has said that five years of digital transformation took place in six months, all because businesses needed to work smarter, faster and be more nimble.

Artificial Intelligence is now scalable

A number of digital technologies reached maturity at the same time: cheaper cloud computing, universal digital computing and faster telecoms with the arrival of 5G. And then there is Artificial Intelligence (AI). Karlgaard says this will be the decade of enterprise AI, and that’s spot on. Prior to 2020, using AI was hard, labour-intensive, expensive work, but now it is very much easier to use and it is going to transform many areas of industry, from logistics to customer service.

We’re awash with capital

Although it may not always be obvious, the world is swimming in capital right now, which makes it easier for start-ups to get the funding they need. Investors have their eyes on digital technology and AI products because, as Karlgaard says, “they know they are game changers.” He adds, “These will disrupt business models and markets, and power enormous fortunes.”

Revolutions in the physical world

We are not talking about political revolutions here, but about revolutions in the way aspects of the physical world are being changed by technology. For example, autonomous trucks don’t need to take rest breaks, drone cameras can improve agricultural crop yields, and gene sequencing combined with AI can create personalised medical treatments.

However, whilst all these may lead to a boom after what feels like a bust, if we look back at the 1920s, we must note that while cities grew and grew, rural areas were not invited to the party, creating a divide that lingers to this day. The 1920s also began with a sharp downturn in 1920-21 that almost buried the decade, and ended with the more famous Wall Street crash of 1929. So, even the Golden Age had its downsides. Still, after our globally shared experience of 2020-21, let’s focus on optimism and the way in which the Covid-19 pandemic has accelerated digital technology for our benefit and forced us to be more agile.

Coded Bias: a film exploring AI

Have you seen Coded Bias, a new film from Netflix? If you are at all interested in artificial intelligence (AI), I recommend you find a way to watch it, even if you don’t have a Netflix subscription. It takes a deep dive into the state of artificial intelligence, and as Aparna Dhinakaran writes at Forbes, “the issues it confronts are uncomfortably relevant.” What she is referring to is the facial recognition programmes that have a severe algorithmic bias against women and people of colour.

MIT Media Lab researcher Joy Buolamwini says in the film, “The systems weren’t as familiar with faces like mine (as those of mostly men and lighter-skinned people),” and her research showed that Microsoft, IBM, and Amazon’s facial recognition services all have common inaccuracies.
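Findings like Buolamwini’s come from disaggregated evaluation: rather than reporting one overall accuracy number, accuracy is computed separately for each demographic group. The sketch below shows the idea with invented records, where a healthy-looking overall score hides a much weaker result for one group.

```python
# Disaggregated evaluation: accuracy per demographic group rather
# than one overall number. All records below are invented.

def accuracy_by_group(records):
    """records: (group, predicted, actual) triples -> {group: accuracy}."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / totals[g] for g in totals}

results = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"), ("group_a", "match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "no_match", "no_match"),
]
per_group = accuracy_by_group(results)
# Overall accuracy is 75%, yet group_b sees only 50%.
```

This is precisely why an aggregate benchmark can declare a facial recognition system “accurate” while it fails badly for women and people of colour.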

The Netflix film examines how these inaccuracies can spread discrimination across all spheres of daily life. It also touches on other spheres of AI use where bias occurs, such as “who gets hired, what kind of medical treatment someone receives, who goes to college, the financial credit we get, and the length of a prison term someone serves.” This is not the aim of AI, which should be used to enhance opportunities and advancement for all people, particularly those most vulnerable to discrimination.

Coded Bias tries to show us how these issues with AI can be removed, with the onus lying on how AI leverages data and how it is developed, deployed, and used. The film’s director, Shalini Kantayya, said, “This is a moment where we are all in a lot of pain. This moment is asking us to drop into a deeper place in our humanity to lead.” She also attempts to shine a light on why better AI solutions should focus on protecting those communities that stand to be harmed the most by it.

One way forward is to look at the innovations in AI/ML (ML is machine learning). These will change how AI models are observed and measured to ensure they are fair, ethical, and free of bias. There is also a need to deliver AI systems with better tools for accountability.

We live in a time when socio-economic inequities based on ethnicity are in the spotlight, so we need AI that makes it easier for marginalized populations to benefit from improved technology, rather than the technology pushing them further into the margins. When that happens, we will all experience a better world.

Is AI more dangerous than nukes?

Elon Musk says AI (artificial intelligence) is far more dangerous than nuclear weapons, but then he is known for making controversial statements that take us all by surprise. In what ways might AI be dangerous, though, and what should we be aware of?

Glenn Gow, an expert consultant on AI Strategy for businesses, says that firms can use AI to reduce risk to “business partners, employees and customers,” before regulatory bodies step in to force them to take specific steps. As he says, most firms “have regulations to govern how we manage risks to ensure safety,” yet there are very few regulations around the use of AI regarding safety, although there are regulations about its use in relation to privacy. Therefore, there is a need to find ways to manage the risk presented by AI systems.

For example, Singapore has created a Model AI Governance Framework that is a good starting place for understanding the risks in relation to two key issues:

1. The level of human involvement in AI;

2. The potential harm caused by AI.

The level of human involvement

Where are the dangers here? First, we have to remember that sometimes AI works alone, and at other times it requires human input. When there is no human involvement, the AI runs on its own and a human cannot override it. When a human is involved, the AI only offers suggestions, as in medical diagnostics and treatment. The third type of interaction is AI that is designed to “let the human intercede if the human disagrees or determines the AI has failed.” AI-based traffic prediction systems are an example of this.

In the case of the third example, which Gow calls ‘human-over-the-loop’, the probability of harm is low, but the severity of harm is high.

In a ‘human-in-the-loop’ situation, both the probability and the severity of harm are high. Gow gives the following example: “Your corporate development team uses AI to identify potential acquisition targets for the company. Also, they use AI to conduct financial due diligence on the various targets. Both the probability and severity of harm are high in this decision.”

When humans are not involved at all, the probability of harm is high, but the severity of harm is low.
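The three oversight modes above can be tabulated as a small two-axis risk matrix, which is essentially what the framework asks boards to do. The sketch below encodes the probability/severity pairings Gow describes; the dictionary and ranking function are only an illustration of how such a tabulation might look, not part of the framework itself.

```python
# A two-axis view of the three oversight modes, using the
# probability/severity pairings described above. Illustration only.

RISK_PROFILE = {
    # oversight mode: (probability of harm, severity of harm)
    "human-out-of-the-loop": ("high", "low"),
    "human-in-the-loop": ("high", "high"),
    "human-over-the-loop": ("low", "high"),
}

def riskiest(modes):
    """Order oversight modes from most to least concerning,
    counting each 'high' rating as one point."""
    def score(mode):
        return sum(level == "high" for level in RISK_PROFILE[mode])
    return sorted(modes, key=score, reverse=True)

ranked = riskiest(list(RISK_PROFILE))
```

On this simple tally, the human-in-the-loop acquisition scenario ranks as the most concerning, matching Gow’s assessment that both axes are high there.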

As Gow suggests, the Model AI Governance Framework gives boards and management a starting place for managing risk in AI projects. Whilst AI could be dangerous in several scenarios, by managing when and how humans remain in control, we can greatly reduce a company’s risk factors, and ensure AI stays safer than nukes.