Is AI more dangerous than nukes?

Elon Musk says artificial intelligence (AI) is far more dangerous than nuclear weapons, though he is known for making controversial statements that take us all by surprise. In what ways might AI be dangerous, and what should we be aware of?

Glenn Gow, an expert consultant on AI strategy for businesses, says that firms can use AI to reduce risk to “business partners, employees and customers” before regulatory bodies step in to force them to take specific steps. As he says, most firms “have regulations to govern how we manage risks to ensure safety,” yet while there are regulations covering AI and privacy, there are very few governing its safe use. There is therefore a need to find ways to manage the risk presented by AI systems.

For example, Singapore has created a Model AI Governance Framework that is a good starting place for understanding the risks in relation to two key issues:

1. The level of human involvement in AI;

2. The potential harm caused by AI.

The level of human involvement

Where are the dangers here? First, we have to remember that sometimes AI works alone, and at other times it requires human input. When there is no human involvement, the AI runs on its own and a human can’t override it. When a human is involved, the AI only offers suggestions, as in medical diagnostics and treatment. The third type of interaction is AI designed to “let the human intercede if the human disagrees or determines the AI has failed.” AI-based traffic prediction systems are an example of this.

In the case of the third example, which Gow calls ‘human-over-the-loop’, the probability of harm is low, but the severity of harm is high.

In a ‘human-in-the-loop’ situation, both the probability and the severity of harm are high. Gow gives the following example: “Your corporate development team uses AI to identify potential acquisition targets for the company. Also, they use AI to conduct financial due diligence on the various targets. Both the probability and severity of harm are high in this decision.”

When humans are not involved at all (‘human-out-of-the-loop’), the probability of harm is high, but the severity of harm is low.
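
To make the framework concrete, here is a minimal sketch in Python of how a review board might encode these two dimensions as a simple risk matrix. The ratings come from Gow’s examples above; the code itself is purely illustrative.

```python
from enum import Enum

class Oversight(Enum):
    """Level of human involvement, per the Model AI Governance Framework."""
    HUMAN_OUT_OF_THE_LOOP = "AI acts alone; no human override"
    HUMAN_IN_THE_LOOP = "AI only recommends; a human decides"
    HUMAN_OVER_THE_LOOP = "AI acts, but a human can intercede"

# (probability, severity) ratings taken from Gow's examples above.
RISK_MATRIX = {
    Oversight.HUMAN_OUT_OF_THE_LOOP: ("high", "low"),
    Oversight.HUMAN_IN_THE_LOOP:     ("high", "high"),
    Oversight.HUMAN_OVER_THE_LOOP:   ("low", "high"),
}

def assess(project_name: str, oversight: Oversight) -> str:
    """Return a one-line risk summary for an AI project."""
    probability, severity = RISK_MATRIX[oversight]
    return (f"{project_name}: probability of harm {probability}, "
            f"severity of harm {severity} ({oversight.name})")

print(assess("M&A due-diligence model", Oversight.HUMAN_IN_THE_LOOP))
print(assess("Traffic prediction system", Oversight.HUMAN_OVER_THE_LOOP))
```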

As Gow suggests, the Model AI Governance Framework gives boards and management a starting place to manage risk with AI projects. Whilst AI could be dangerous in several scenarios, by managing when and how humans remain in control, we can greatly reduce a company’s risk factors and ensure AI is safer than nukes.

Can XAI in banking help small businesses?

Small businesses (SMEs) are no longer as well served by traditional banks, yet this is one niche sector where banks have an opportunity to shine.

To date, banks have provided SMEs with a mix of retail and corporate services; however, as a Finextra blog explains, “this no longer fits the evolving needs of small businesses.”

Providers serving this sector need to think about more holistic solutions, which may include collaborating with a range of digital service providers, if they are to retain the confidence of SME clients by addressing their pressing needs.

Temenos, a firm specialising in enterprise software for banks and financial services, has been reimagining how banks could better serve SMEs using the available technology. For example, “banks can implement innovative design-centric and data-driven products, as well as services that can transform the SME customer experience.”

The customer’s digital experience is now critical, as is the use of data, because these will be the driving forces in future SME banking services. And this is where artificial intelligence (AI) can be of enormous help. It can enable banks to leverage data from multiple sources “to make faster, and more accurate decisions and provide individualised, frictionless customer experiences.”

Utilising XAI (explainable AI) would be another major step, primarily because “one of the key issues for banks using AI applications [is] that there is little if any discernible insight into how they reach their decisions.” Transparency is required for customer confidence, especially concerning lending.

If banks looked at more than an SME’s credit score and took a more holistic approach, viewing a range of attributes, they would be able to make more “nuanced and fully explainable decisions that lead to 20% more positive credit decisions and fewer false positives.” Furthermore, this can be done in real time using APIs to connect to third-party data sources.
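
As a rough illustration of what such an explainable decision could look like, here is a minimal Python sketch of a reason-code-style scorecard. The attributes, weights and threshold are hypothetical and are not Temenos’s or any bank’s actual model.

```python
# A minimal sketch of reason-code style explainability in a lending decision.
# All attribute names, weights and the threshold below are hypothetical.

WEIGHTS = {
    "credit_score_norm":  0.35,  # traditional bureau score, scaled to 0..1
    "cash_flow_health":   0.30,  # e.g. derived from open-banking transaction data
    "years_trading_norm": 0.20,
    "sector_outlook":     0.15,  # e.g. from a third-party data API
}
APPROVAL_THRESHOLD = 0.6  # hypothetical cut-off

def decide(applicant: dict) -> dict:
    """Score an SME loan applicant and explain each attribute's contribution."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= APPROVAL_THRESHOLD else "decline"
    # Reason codes: rank the attributes that helped the decision most.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return {"decision": decision, "score": round(score, 3), "reasons": ranked}

# An applicant with a middling credit score can still be approved on the
# strength of other attributes, which is the point of the holistic approach.
sme = {"credit_score_norm": 0.55, "cash_flow_health": 0.8,
       "years_trading_norm": 0.7, "sector_outlook": 0.6}
print(decide(sme))
```

For a declined applicant, the same ranked contributions double as the kind of advice discussed below on improving the chances of a loan.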

Banks using XAI can show how a decision was made and then suggest alternative products or advise on how to improve the chances of getting a loan. With the Covid-19 pandemic having hurt so many small businesses, the need for SME loans has grown, and banks need to support it with more digitisation and smarter decision-making. Using XAI seems like a good place to start.

Mastercard introduces AI-powered cybersecurity

Cybersecurity remains one of the hottest topics around. While browsing today’s media, I noted one article claiming that cyber attacks rose by 250% during the pandemic. Apparently it was the perfect time for scammers and hackers to wield their weapons.

This may be one of the things that prompted Mastercard to launch Cyber Secure, “a first-of-its-kind, AI-powered suite of tools that allows banks to assess cyber risk across their ecosystem and prevent potential breaches.”


It all comes down to the fact that the digital economy is expanding rapidly and becoming more complex. Alongside this positive news comes the less appealing revelation that this growth creates vulnerabilities that some are delighted to take advantage of. For example, it is estimated that by next year one business will fall victim to a ransomware attack every 11 seconds (roughly 2.9 million attacks a year).


Ajay Bhalla, president, Cyber & Intelligence at Mastercard, said:

“The world today faces a $5.2 trillion cyber breach problem. This is one of the biggest threats to consumer trust. At Mastercard, we aim to stay ahead of fraudsters and to continually evolve and enhance our protection of cyber environments for our bank and merchant customers. With Cyber Secure, we have a suite of AI-powered cyber capabilities that allows us to do just that, ensuring trust across every experience, for businesses and consumers.” 


Cyber Secure will enable banks “to continuously monitor and track their cyber posture,” writes Polly Harrison. It will allow banks to be more proactive in managing and preventing data compromise, as well as protecting the integrity of the payment ecosystem and consumer data. It should also, of course, prevent financial loss caused by attacks.

Mastercard has based its new product on the AI capabilities of RiskRecon, which it purchased in 2020. It uses advanced AI for risk assessment, evaluating multiple public and proprietary data sources and checking them against 40 security and infrastructure criteria.
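
Mastercard has not published the internals of Cyber Secure, but as a rough sketch, aggregating findings across criteria of this kind might look something like the following Python, with entirely hypothetical criteria, scores and thresholds:

```python
# A minimal sketch of multi-criteria cyber-risk aggregation. The criterion
# names, scoring scale and thresholds are hypothetical illustrations, not
# the actual internals of Cyber Secure or RiskRecon.
from statistics import mean

# Findings per criterion, 0.0 (clean) to 1.0 (critical), gathered from
# public and proprietary sources. A real system checks ~40 such criteria.
findings = {
    "unpatched_software":   0.7,
    "open_network_ports":   0.4,
    "expired_certificates": 0.2,
    "breach_history":       0.6,
}
CRITICAL = 0.8  # hypothetical threshold for an immediate alert

def cyber_posture(findings: dict) -> dict:
    """Aggregate per-criterion findings into a single posture score."""
    score = mean(findings.values())
    alerts = [c for c, v in findings.items() if v >= CRITICAL]
    grade = "at risk" if score >= 0.5 else "acceptable"
    return {"score": round(score, 2), "grade": grade, "alerts": alerts}

print(cyber_posture(findings))
```

Continuous monitoring of this sort is what would let a bank act on a deteriorating score before a breach, rather than after.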

Harrison writes, “In 2019, Mastercard saved stakeholders $20bn of fraud through its AI-enabled cyber systems,” so it is to be hoped that Cyber Secure prevents even more theft in 2021 and beyond.

Will AI be superior to humans? Elon Musk thinks so!

Elon Musk, the maverick entrepreneur behind Tesla and SpaceX, has made yet another of his predictions. He says that artificial intelligence (AI) will be superior to humans within five years.

His prediction is also a warning. Musk has been outspoken about the dangers of AI in the past: in 2018 he claimed AI could become “an immortal dictator from which we would never escape” and even said he thought the technology was more dangerous than nuclear weapons.

Ryan Daws, writing for AI News, refers to a recent New York Times interview in which Musk said current trends in AI suggested it would “overtake” humans by 2025. Musk then added: “That doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.” Well, we’ve already experienced that in 2020, and it had nothing to do with AI.

Ray Kurzweil, an eminent futurist, has previously estimated that machine intelligence will overtake human intelligence around 2045.

It is perhaps ironic that Musk’s companies are all heavy users of AI, but as Daws says, Musk isn’t against the technology: he simply thinks it should be more regulated – an ethical AI, if you like.

Indeed, Musk formed OpenAI in 2015 to research and promote ethical artificial intelligence, although he left it in 2018 due to internal disagreements. In February this year, he said that OpenAI should be more “open” and that all organisations “developing advanced AI should be regulated, including Tesla.”

AI and fake news

It was interesting to read in Daws’ article that OpenAI had developed a text generator but decided not to release it, citing its dangers in a world that is already struggling with the surge in fake news. However, two graduates created something similar, claiming it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”

Now OpenAI has allowed a select band of researchers to try out its AI text technology. Called GPT-3, it has been grabbing attention due to the “incredible things it can achieve with limited input.” For example, one researcher tweeted: “Playing with GPT-3 feels like seeing the future. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”
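
To illustrate the “limited input” point, here is a minimal sketch of few-shot prompting against the legacy OpenAI Completion API those early researchers used; the engine name, prompt and parameters are illustrative only, and the modern API differs.

```python
# A minimal sketch of few-shot prompting, the "limited input" technique the
# researchers describe. Uses the legacy OpenAI Completion API of the GPT-3
# era; the engine name, prompt and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A couple of worked examples are often enough for GPT-3 to infer the task.
prompt = (
    "Turn product notes into a press-release headline.\n"
    "Notes: battery life doubled, same price\n"
    "Headline: Twice the Battery Life, Not a Penny More\n"
    "Notes: new app syncs notes across devices\n"
    "Headline:"
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base engine
    prompt=prompt,
    max_tokens=20,
    temperature=0.7,
    stop="\n",          # stop at the end of the generated headline
)
print(response.choices[0].text.strip())
```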

That all sounds very exciting, although songwriters and PR agencies may not feel the same level of thrill, at least until they discover how much easier it makes their work. Will human intelligence be overtaken by 2025? It’s more likely that Musk’s prediction is simply attention-grabbing, something he excels at. Perhaps he’s wondering whether AI might overtake his ability to stay in the headlines?