Will a boom follow Covid-19?

Many people must be wondering what the rest of this decade might look like after such a disastrous start to the 2020s. Can we look back at history and see a trend? For example, the 1920s, which followed World War I and the Spanish flu pandemic, were a Golden Age when economic growth surged, society relaxed many of its restrictions, and women cut their hair short for the first time, all of it captured and portrayed in F. Scott Fitzgerald’s ‘The Great Gatsby’. Now there’s a book that has never gone out of fashion.

This week I read an article by Rich Karlgaard in Forbes written from the optimist’s viewpoint. He believes there are four possible reasons the 2020s might turn out to be another 1920s, although he makes the case with caution.

Digital tech will accelerate

There is no doubt that the pandemic made all things digital vastly more important; otherwise, we wouldn’t have seen so much change in such a short time. Karlgaard first points out that back in 2017, Diane Greene, then the Google Cloud CEO, told a Forbes audience that the rate of digital technology progress was accelerating rapidly. But that does not necessarily bring productivity along with it, because the business model needs to change for that to happen. Then along came Covid-19, which forced rapid business change. Microsoft CEO Satya Nadella has said that five years of digital transformation took place in six months, all because businesses needed to work smarter and faster and to be more nimble.

Artificial Intelligence is now scalable

A number of digital technologies reached maturity at the same time: cheaper cloud computing, universal digital computing, and faster telecoms with the arrival of 5G. And then there is Artificial Intelligence (AI). Karlgaard says this will be the decade of enterprise AI, and that’s spot on. Prior to 2020, using AI was hard, labour-intensive and expensive work; now it is far easier to use, and it is going to transform many areas of industry, from logistics to customer service.

We’re awash with capital

Although it may not always be obvious, the world is swimming in capital right now, which makes it easier for start-ups to get the funding they need. Investors have their eyes on digital technology and AI products because, as Karlgaard says, “they know they are game changers.” He adds, “These will disrupt business models and markets, and power enormous fortunes.”

Revolutions in the physical world

We are not talking about physical revolutions here, but revolutions in the way aspects of the physical world are being changed by technology. For example, autonomous trucks don’t need to take rest breaks, drone cameras can improve agricultural crop yields, and gene sequencing combined with AI can create personalised medical treatments.

However, whilst all of these may lead to a boom after what feels like a bust, a look back at the 1920s reminds us that while cities grew and grew, rural areas were not invited to the party, creating a divide that lingers to this day. The decade also experienced a stock market crash in 1921 that almost buried it, followed by the more famous Wall Street crash of 1929. So even the Golden Age had its downsides. Still, after our globally shared experience of 2020-21, let’s focus on optimism and the way in which the Covid-19 pandemic has accelerated digital technology for our benefit and forced us to be more agile.

Coded Bias: a film exploring AI

Have you seen Coded Bias, a new film on Netflix? If you are at all interested in artificial intelligence (AI), I recommend you find a way to watch it, even if you don’t have a Netflix subscription. It takes a deep dive into the state of artificial intelligence, and as Aparna Dhinakaran writes in Forbes, “the issues it confronts are uncomfortably relevant.” What she is referring to is facial recognition programmes that show severe algorithmic bias against women and people of colour.

MIT Media Lab researcher Joy Buolamwini says in the film, “The systems weren’t as familiar with faces like mine (as those of mostly men and lighter-skinned people),” and her research showed that Microsoft, IBM, and Amazon’s facial recognition services all have common inaccuracies.

The Netflix film examines how these inaccuracies can spread discrimination across all spheres of daily life. It also touches on other areas of AI use where bias occurs, such as “who gets hired, what kind of medical treatment someone receives, who goes to college, the financial credit we get, and the length of a prison term someone serves.” This is not the aim of AI, which should be used to enhance opportunities and advancement for all people, particularly those most vulnerable to discrimination.

Coded Bias tries to show us how these issues with AI can be removed, with the onus on examining how AI leverages data and how it is developed, deployed, and used. The film’s director, Shalini Kantayya, said, “This is a moment where we are all in a lot of pain. This moment is asking us to drop into a deeper place in our humanity to lead.” She also attempts to shine a light on why better AI solutions should focus on protecting those communities that stand to be harmed the most by it.

One way forward is to look at innovations in AI/ML (machine learning). These will change how AI models are observed and measured, to ensure they are fair, ethical, and free of bias. There is also a need to deliver AI systems with better tools for accountability.
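As a rough, hypothetical illustration of what observing and measuring a model for bias can look like in practice, the short sketch below compares error rates across demographic groups; the group labels, data and disparity check are invented for illustration and are not taken from the film or from any vendor’s system.

```python
# Hypothetical sketch: comparing a classifier's error rates across demographic
# groups -- one simple way an AI/ML team can "observe and measure" bias.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy, made-up results standing in for a facial-recognition evaluation set.
results = [
    ("darker-skinned women", "no_match", "match"),
    ("darker-skinned women", "match", "match"),
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "match", "match"),
]

rates = error_rates_by_group(results)
print(rates)  # e.g. {'darker-skinned women': 0.5, 'lighter-skinned men': 0.0}

# A large gap between the best- and worst-served groups is a signal that the
# model needs re-examination before deployment.
print("disparity:", max(rates.values()) - min(rates.values()))
```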

We live in a time when socio-economic inequities based on ethnicity are in the spotlight, so we need AI that makes it easier for marginalised populations to benefit from improved technology, rather than technology that pushes them further into the margins. When that happens, we will all experience a better world.

Is AI more dangerous than nukes?

Elon Musk says AI (artificial intelligence) is far more dangerous than nuclear weapons, but then he is known for making controversial statements that take us all by surprise. So in what ways might AI be dangerous, and what should we be aware of?

Glenn Gow, an expert consultant on AI strategy for businesses, says firms can act now to reduce the risk AI poses to “business partners, employees and customers,” before regulatory bodies step in to force them to take specific steps. As he says, most firms “have regulations to govern how we manage risks to ensure safety,” yet there are very few regulations around the safe use of AI, although there are regulations about its use in relation to privacy. There is, therefore, a need to find ways to manage the risk presented by AI systems.

For example, Singapore has created a Model AI Governance Framework that is a good starting place for understanding the risks in relation to two key issues:

1. The level of human involvement in AI;

2. The potential harm caused by AI.

The level of human involvement

Where are the dangers here? First, we have to remember that sometimes AI works alone, and at other times it requires human input. When there is no human involvement, the AI runs on its own and a human cannot override it. When a human is in the loop, the AI only offers suggestions and the human makes the decision, as in medical diagnostics and treatment. The third type of interaction is AI that is designed to “let the human intercede if the human disagrees or determines the AI has failed.” AI-based traffic prediction systems are an example of this.

In the case of the third example, which Gow calls ‘human-over-the-loop’, the probability of harm is low, but the severity of harm is high.

In a ‘human-in-the-loop’ situation, both the probability and the severity of harm are high. Gow gives the following example: “Your corporate development team uses AI to identify potential acquisition targets for the company. Also, they use AI to conduct financial due diligence on the various targets. Both the probability and severity of harm are high in this decision.”

When humans are not involved at all (‘human-out-of-the-loop’), the probability of harm is high, but the severity of harm is low.
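To make the two axes concrete, here is a minimal sketch of how a team might record these assessments in code, assuming Gow’s three categories of human involvement; the class names, enum values and the simple oversight rule are illustrative assumptions, not part of the Model AI Governance Framework itself.

```python
# Minimal sketch of the two-axis risk check described above: level of human
# involvement versus probability and severity of harm. The names and the simple
# "needs extra oversight" rule are illustrative assumptions, not part of the
# Model AI Governance Framework itself.
from dataclasses import dataclass
from enum import Enum

class HumanInvolvement(Enum):
    OUT_OF_THE_LOOP = "human-out-of-the-loop"  # AI acts alone, no override
    IN_THE_LOOP = "human-in-the-loop"          # AI suggests, human decides
    OVER_THE_LOOP = "human-over-the-loop"      # AI acts, human can intercede

class Level(Enum):
    LOW = 1
    HIGH = 2

@dataclass
class AIUseCase:
    name: str
    involvement: HumanInvolvement
    probability_of_harm: Level
    severity_of_harm: Level

    def needs_extra_oversight(self) -> bool:
        # Flag cases where harm is both likely and severe, or where no human
        # is able to override the AI at all.
        return (
            (self.probability_of_harm is Level.HIGH
             and self.severity_of_harm is Level.HIGH)
            or self.involvement is HumanInvolvement.OUT_OF_THE_LOOP
        )

# Gow's acquisition-screening example: human-in-the-loop, with both the
# probability and the severity of harm high.
deal_screening = AIUseCase(
    name="acquisition target screening",
    involvement=HumanInvolvement.IN_THE_LOOP,
    probability_of_harm=Level.HIGH,
    severity_of_harm=Level.HIGH,
)
print(deal_screening.needs_extra_oversight())  # True -> keep humans in control
```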

As Gow suggests, the Model AI Governance Framework gives boards and management a starting place for managing risk with AI projects. Whilst AI could be dangerous in several scenarios, by managing when and how humans will be in control we can greatly reduce a company’s risk factors, and ensure AI is safer than nukes.

Can XAI in banking help small businesses?

Small businesses (SMEs) are no longer as well served by traditional banks, yet this is one niche sector where banks have an opportunity to shine.

To date, banks have provided SMEs with a mix of retail and corporate services; however, as a Finextra blog explains, “this no longer fits the evolving needs of small businesses.”

Providers serving this business sector need to think about more holistic solutions. These may include more collaboration with a range of digital service providers if banks are to retain the confidence of SME clients by addressing their pressing needs.

Temenos, a firm specialising in enterprise software for banks and financial services, has been reimagining how banks could better serve SMEs using the available technology. For example, “banks can implement innovative design-centric and data-driven products, as well as services that can transform the SME customer experience.”

The customer’s digital experience is now critical, as is the use of data, because these will be the driving force in future SME banking services. And this is where artificial intelligence (AI) can be of enormous help. It can enable banks to leverage data from multiple sources “to make faster, and more accurate decisions and provide individualised, frictionless customer experiences.”

Utilising XAI (explainable AI) would be another major step, primarily because “one of the key issues for banks using AI applications is that there is little if any discernible insight into how they reach their decisions.” Transparency is required for customer confidence, especially concerning lending.

If banks looked at more than an SME’s credit score, and took a more holistic approach by viewing a range of attributes, they would be able to make more “nuanced and fully explainable decisions that lead to 20% more positive credit decisions and fewer false positives.” Furthermore, this can be done in real time using APIs to connect to third-party data sources.
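To give a feel for the kind of transparency XAI is aiming at, here is a toy sketch of a credit decision whose score can be decomposed attribute by attribute; the attribute names, weights and threshold are invented for illustration and are not Temenos’s model or any real bank’s scoring logic.

```python
# Toy sketch of an "explainable" credit decision: a simple linear score whose
# per-attribute contributions can be shown back to the customer. The attribute
# names, weights and threshold are invented for illustration only.

WEIGHTS = {
    "months_trading": 0.02,
    "avg_monthly_cash_flow": 0.0004,
    "existing_debt_ratio": -1.5,
    "on_time_supplier_payments_pct": 0.01,
}
APPROVAL_THRESHOLD = 1.0

def explain_decision(applicant: dict) -> dict:
    # Contribution of each attribute = weight * value, so the decision can be
    # decomposed and explained rather than hidden in a black box.
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= APPROVAL_THRESHOLD,
        "score": round(score, 2),
        "contributions": {k: round(v, 2) for k, v in contributions.items()},
    }

# Hypothetical applicant data, the sort that might come from third-party APIs.
applicant = {
    "months_trading": 30,
    "avg_monthly_cash_flow": 1200,
    "existing_debt_ratio": 0.4,
    "on_time_supplier_payments_pct": 85,
}
print(explain_decision(applicant))
```

Because every contribution is visible, a negative decision can be accompanied by concrete advice, for example that reducing the debt ratio would have changed the outcome.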

Banks using XAI can show how a decision was made and then suggest alternative products or provide advice about how to improve the chances of getting a loan. With the Covid-19 pandemic having negatively affected so many small businesses, the need for SME loans has increased, and banks need to support this with more digitisation and smarter decision-making. Using XAI seems like a good place to start.