How AI can tackle the fraudsters in your inbox

You, and probably a large percentage of your friends, have likely received an email from someone in Africa who needs your help getting millions out of the country, promising that you will receive payment for your services. This scam is old, but it is persistent; you have to give the scammers that. There are plenty more emails of this type, some more subtle than others, such as the ones purporting to be from PayPal or Amazon that look like the real thing. You have to look closely to realise they aren't from those companies at all, but from impostors.

$670 million lost in crypto fraud

There is also a new breed of fraud perpetrated by crypto scammers, who have so far relied on the fact that "short cons carried out using crypto are hard to detect and almost impossible to trace," as Jonas Karlberg writes on Medium. He also reveals that an estimated $670 million was lost to crypto fraud in the first quarter of 2018 alone, which shows the extent of the problem.

The most common way crypto cons work is through phishing emails. An old tool for a new game, you might say. In one example, a victim is sent to a cloned version of a crypto project's social media account, where they are enticed to open their wallet and send a payment in return for an incentive, such as free tokens. The victim eventually realises that nothing has arrived in return for their payment, but by then the funds have gone, and so have the scammers.
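The cloned-account trick usually hinges on look-alike links. Purely as an illustration, here is a minimal sketch of the kind of heuristic a protector bot might run on a posted URL; the trusted-domain list and the substring rule are invented for the example, not anyone's real detection logic:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of genuine domains (invented for this example)
TRUSTED_DOMAINS = {"example-token.org", "paypal.com", "amazon.com"}

def looks_like_clone(url: str) -> bool:
    """Flag URLs whose domain merely resembles a trusted one.

    A domain is suspicious if it is not trusted itself but contains a
    trusted brand name as a substring, e.g. 'paypal-gift.net'.
    """
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in TRUSTED_DOMAINS:
        return False
    return any(name.split(".")[0] in host for name in TRUSTED_DOMAINS)

print(looks_like_clone("https://paypal.com/login"))             # genuine domain
print(looks_like_clone("https://paypal-gift.net/free-tokens"))  # look-alike
```

Real defences are far more elaborate (certificate checks, reputation databases, visual similarity models), but the shape of the check is the same: compare what the link claims to be against what it actually is.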

AI provides an army of protector bots

However, Artificial Intelligence (AI) offers ways to fight this online fraud. One company, AmaZix, is using bots to fight the 'con' bots. These bots can delete content and ban users before the public has even spotted them. Karlberg describes the moderators managing this ongoing battle in online crypto communities as "generals presiding over enormous AI battles," with the 'good' bots defending users against the scambots.
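To make the "bots fighting bots" idea concrete, here is a hedged sketch of what such a moderator bot's core loop might look like. The message format, the scam patterns and the delete/ban actions are all hypothetical; AmaZix has not published its implementation:

```python
import re

# Hypothetical scam patterns (invented for this example)
SCAM_PATTERNS = [
    re.compile(r"free\s+tokens", re.IGNORECASE),
    re.compile(r"send\s+\d+(\.\d+)?\s*(ETH|BTC)\b.*receive", re.IGNORECASE),
]

def moderate(messages):
    """Return (deleted message ids, banned users) for scam-like messages."""
    deleted, banned = [], set()
    for msg in messages:  # each msg: {"id": ..., "user": ..., "text": ...}
        if any(p.search(msg["text"]) for p in SCAM_PATTERNS):
            deleted.append(msg["id"])
            banned.add(msg["user"])
    return deleted, banned

feed = [
    {"id": 1, "user": "alice", "text": "When is the next release?"},
    {"id": 2, "user": "scambot", "text": "Send 0.5 ETH and receive 5 ETH back! Free tokens!"},
]
print(moderate(feed))  # ([2], {'scambot'})
```

A production bot would of course learn its patterns rather than hard-code them, and would act through a chat platform's moderation API, but the pipeline — scan, match, delete, ban — is the essence of what Karlberg describes.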

AI is growing in power and complexity, enabling cybersecurity firms to trawl ever larger areas of digital space. The people operating the scambots cannot match the funds that security firms put into developing protector bots, which gives the good guys an advantage. Of course, nobody in this sector can ever rest, because the con men will always be looking for a new way to break through the battlements. But as blockchain technology gains mass adoption, the AI will become more sophisticated and powerful, which is good news for the public and bad news for fraudsters.

AI: the force that is with us

Artificial Intelligence (AI) is one of the most important 'tools' currently being developed. Sundar Pichai, the CEO of Google, believes it is as important to us as the discovery of fire or electricity, and just as with fire and electricity, we have to learn how to handle its dangerous elements.

AI isn't just about creating robots, although that is a common misconception. Its uses range from recommendation algorithms to self-driving cars. It is already part of our reality, and you have probably used services with an AI component without being aware of it.

Your smartphone, for example, and other devices you use daily contain AI. Governments are pouring billions into researching its potential, and some scientists believe that once AI reaches a certain level, the machines will "have similar survival drives as we do." Imagine a time when Siri or Alexa suddenly refuses to obey your commands because it is too tired. It's a science fiction scenario, but it is the kind of thing some AI experts discuss over coffee, and if AI ever develops a survival instinct, it's not too far-fetched.

AI in advertising

AI is extremely useful to advertisers. They use it to understand what consumers like and are looking for, and then serve them the relevant content. You searched for information about Sicily on Google yesterday? Today, every website you open that carries ads is showing you holidays in Sicily. It used to feel spooky when this happened, but now that we know what it is, the 'spookiness' is gone. From the advertisers' point of view, it's a benefit, because they are reaching a more targeted audience and achieving better campaign results. Other areas of development for the advertising industry include advertising automation and optimisation, and chatbots for customer service and sales assistance.

AI is also in content creation

AI hasn’t started blogging or producing investigative journalism yet, but Associated Press, Fox News and Yahoo! are using AI to construct data-driven stories such as financial and sports score summaries.

Where next?

There are many possibilities, but here are a few already in the pipeline. The UK's Channel 4 recently revealed the world's first AI-driven TV advertising technology, which enables the broadcaster to place a brand's ads next to relevant scenes in a linear TV show; it will be tested later this year. And within the next decade, "machines might well be able to diagnose patients with the learned expertise of not just one doctor but thousands," says Julian Verder of AdYouLike, or "make jury recommendations based on vast datasets of legal decisions and complex regulations."

Both of these should give us pause for thought. It is hard to imagine these scenarios right now, and it is easy to fear them, but one day we will look back and wonder how we managed without AI — and we’ll feel the same way about it as we do about fire and electricity.

Can AI solve cybersecurity issues?

I was struck by a recent article by Martin Giles published on Medium. In it he looks at the risks, as well as the apparent benefits, of using AI and machine learning in the cybersecurity industry. It's an interesting conundrum: on the one hand it seems perfectly logical that AI should play a role in protecting against hacker attacks, so what do we need to be mindful of?

As Martin Giles recounts, he met many companies at a cybersecurity conference who were "boasting about how they are using machine learning and artificial intelligence to help make the world a safer place." However, as he also points out, others are less convinced. Indeed, he spoke to the head of security firm Forcepoint, who said: "What's happening is a little concerning, and in some cases even dangerous." Of course, what we want to know is why he thinks that.

The risks with AI and cybersecurity

There is huge demand for algorithms that can combat cyber attacks, but there is also a shortage of skilled cybersecurity workers at every level. Using AI and machine learning helps plug this skills gap, and many firms believe it is a better approach than developing new software.

Giles also reveals that a significant number of firms are launching new AI products for this sector because there is an audience that has “bought into the AI hype.” He goes on to say, “And there’s a danger that they will overlook ways in which the machine-learning algorithms could create a false sense of security.”

And then there are the actions of hackers to consider. What can they do to security systems that use AI? According to Giles, an AI algorithm might miss some attacks because "companies use training information that hasn't been thoroughly scrubbed of anomalous data points." There is also the danger that "hackers who get access to a security firm's systems could corrupt data by switching labels so that some malware examples are tagged as clean code."
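The label-switching attack Giles describes is easy to demonstrate on a toy dataset. Below is a purely illustrative sketch: the single-number "feature", the samples and the nearest-centroid "detector" are all invented for the example, but they show how flipping even one training label changes what the model learns:

```python
def nearest_centroid(train, x):
    """Classify x by the nearest class centroid. train: list of (value, label)."""
    centroids = {}
    for label in {lbl for _, lbl in train}:
        vals = [v for v, lbl in train if lbl == label]
        centroids[label] = sum(vals) / len(vals)
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

# Toy feature, e.g. a count of suspicious API calls: malware scores high.
clean_data = [(1, "clean"), (2, "clean"), (8, "malware"), (9, "malware")]
poisoned   = [(1, "clean"), (2, "clean"), (8, "clean"),   (9, "malware")]  # one label flipped

print(nearest_centroid(clean_data, 6))  # -> 'malware'
print(nearest_centroid(poisoned, 6))    # -> 'clean': the attack shifted the boundary
```

With the flipped label, the "clean" centroid drifts towards the malware region, so a borderline sample that the honest model would flag now slips through as clean, silently, which is exactly the danger Giles is pointing at.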

There is also the issue of relying on a single master algorithm, which can quite easily be compromised without giving any sign that something untoward has happened. The only way to combat this is to use a series of diverse algorithms. And there are other issues, as explained in this MIT Technology Review article.

None of this means we shouldn't be using AI for security purposes at all, just that we need to monitor and minimise the accompanying risks more carefully. The challenge is to find, or train, people with the skills needed to use AI in this increasingly challenging corner of the cyber sphere.

The search for Artificial General Intelligence

We already have some fine examples of AI, but all of them are limited to performing a specific task. The ultimate goal, however, is Artificial General Intelligence (AGI): effectively a complete, 'human-like' AI system, one that can think as a human does. But, as Dan Robitski wrote in futurism.com, "No amount of optimizing systems to get better at a particular task would ever lead to AGI." In other words, he doesn't think we're going to simply stumble across it, because the companies working on AI systems have a narrow focus.

What would be the benefits of AGI?

If we had a system capable of abstract reasoning and everything that goes with it, including creativity, we could make major leaps in solving problems in space exploration, economics, healthcare and much more. There are many aspects of our lives that AGI could positively affect.

AGI would also be a massively interesting, and very valuable, investment. However, Robitski believes the money would need to come from governments rather than private investors and venture capital, simply because the way private funds are structured means a development platform would never be able to raise enough.

Why private investors are avoiding AGI

Marian Gazdik, Managing Director of Startup Grind Europe, said: "Investors only fund something when they see the end of the tunnel, and in AI it's very far." Hence the need for government funds. Speaking alongside Gazdik at The Joint Multi-Conference on Human-Level Artificial Intelligence in Prague last week, Tak Lo, a partner at Zeroth.ai, an Asian accelerator that invests in technology startups, commented: "I very much like General AI as an intellectual, but as an investor not as much."

Venture capitalists like Lo prefer to “invest in companies with great business models that use AI to solve a big problem, or companies that got their hands on a large, valuable dataset for training algorithms.”

The problem with AGI is that a workable solution lies so far in the future that private investors simply can't see a time when they'll get a return. They think in terms of five to ten years, and we may have to wait longer than that for AGI to become a reality. A government, on the other hand, isn't tied to the same timetable: it can put money into a project that serves the greater good without worrying too much about when the end product is delivered. But first it has to decide that AGI is a project worth getting behind, and so far no major government has made that commitment. Until one does, AGI will remain a dream.