AI: the force that is with us

Artificial Intelligence (AI) is one of the most important ‘tools’ currently being developed. Sundar Pichai, the CEO of Google, believes it is as important to us as the discovery of fire or electricity, and, like those discoveries, we will have to learn how to handle its dangerous elements, just as we learned to be careful with fire and electricity.

AI isn’t just about creating robots, although that is a common misconception. Its uses range from recommendation algorithms to self-driving cars. It is already part of our reality and is being used in many ways, including in products you may have used yourself without being aware that they have an AI component.

Your smartphone, for example, and other devices you use daily have AI. Governments are pouring billions into researching its potential, and some scientists believe that once AI has reached a certain level, the machines will “have similar survival drives as we do.” Imagine a time when Siri or Alexa suddenly refused to obey your commands because they were too tired. It sounds like a science fiction scenario, but it is the kind of thing some AI experts discuss over coffee, and if AI ever develops a survival instinct, it may not be so far-fetched.

AI in advertising

AI is extremely useful to advertisers. They use it to understand what consumers like and are looking for, and then serve them relevant content. You searched for information about Sicily on Google yesterday? Today, every website you open that carries ads is showing you ads for holidays in Sicily. It used to feel spooky when this happened, but now that we know what it is, the ‘spookiness’ is gone. From the advertiser’s point of view it’s a clear benefit: they reach a more targeted audience and achieve better campaign results. Other areas of development for the advertising industry include advertising automation and optimisation, and chatbots for customer service and sales support.
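As a toy illustration of the targeting idea described above (the searches, ads and keywords here are invented, and real ad platforms use far more sophisticated models), an ad server might score each candidate ad by how many of its keywords overlap the user’s recent searches:

```python
# Hypothetical sketch of search-driven ad targeting: pick the ad whose
# keyword set best overlaps the words in the user's recent queries.

RECENT_SEARCHES = ["cheap flights to sicily", "sicily beach hotels"]

ADS = {
    "Sicily holiday packages": {"sicily", "holiday", "flights", "hotels"},
    "Winter ski deals": {"ski", "snow", "alps"},
    "Garden furniture sale": {"garden", "furniture", "patio"},
}

def pick_ad(searches, ads):
    """Return the ad whose keywords best overlap the user's search terms."""
    terms = {word for query in searches for word in query.split()}
    return max(ads, key=lambda name: len(ads[name] & terms))

print(pick_ad(RECENT_SEARCHES, ADS))  # -> "Sicily holiday packages"
```

Searching for Sicily yesterday really does make the Sicily ad win today: it shares three terms with the queries, while the others share none.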

AI in content creation

AI hasn’t started blogging or producing investigative journalism yet, but the Associated Press, Fox News and Yahoo! are using it to construct data-driven stories such as financial and sports-score summaries.

Where next?

There are so many possibilities, but here are a few already in the pipeline. The UK’s Channel 4 recently revealed the world’s first AI-driven TV advertising technology, which enables the broadcaster to place a brand’s ads next to relevant scenes in a linear TV show; it will be tested later this year. And within the next decade, “machines might well be able to diagnose patients with the learned expertise of not just one doctor but thousands,” says Julian Verder of AdYouLike, or “make jury recommendations based on vast datasets of legal decisions and complex regulations.”

Both of these should give us pause for thought. It is hard to imagine these scenarios right now, and it is easy to fear them, but one day we will look back and wonder how we managed without AI — and we’ll feel the same way about it as we do about fire and electricity.

Can AI solve cybersecurity issues?

I was very struck by a recent article by Martin Giles, published on Medium, in which he looks at the risks, as well as the apparent benefits, of using AI and machine learning in the cybersecurity industry. It’s an interesting conundrum: on the one hand, it seems perfectly logical that AI should play a role in protecting against hacker attacks; on the other, there are things we need to be mindful of.

As Martin Giles recounts, he met many companies at a cybersecurity conference that were “boasting about how they are using machine learning and artificial intelligence to help make the world a safer place.” However, as he also points out, others are less convinced. Indeed, he spoke to the head of security firm Forcepoint, who said: “What’s happening is a little concerning, and in some cases even dangerous.” Of course, what we want to know is why he thinks that.

The risks with AI and cybersecurity

There is huge demand for algorithms that can combat cyber attacks, but there is also a shortage of skilled cybersecurity workers at all levels. Using AI and machine learning helps to plug this skills gap, and many firms believe it is a better approach than developing new software.

Giles also reveals that a significant number of firms are launching new AI products for this sector because there is an audience that has “bought into the AI hype.” He goes on to say, “And there’s a danger that they will overlook ways in which the machine-learning algorithms could create a false sense of security.”

And then there are the actions of hackers to consider. What can they do to security systems that use AI? According to Giles, an AI algorithm might miss some attacks because “companies use training information that hasn’t been thoroughly scrubbed of anomalous data points.” There is also the risk that, in his words, “hackers who get access to a security firm’s systems could corrupt data by switching labels so that some malware examples are tagged as clean code.”
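To see why label-switching is so damaging, here is a deliberately simplified sketch, entirely my own illustration rather than anything from Giles’s article: a toy nearest-neighbour “malware detector” trained on one numeric feature per sample (say, a file-entropy score). Flipping just two training labels lets a malicious sample slip through:

```python
# Toy k-nearest-neighbour "malware detector" (k = 3) over one invented
# numeric feature per sample. Labels: 1 = malware, 0 = clean.

def knn_classify(feature, training, k=3):
    """Majority label among the k training samples nearest to `feature`."""
    nearest = sorted(training, key=lambda s: abs(s[0] - feature))[:k]
    labels = [lbl for _, lbl in nearest]
    return max(set(labels), key=labels.count)

# Clean training set: malware clusters near 9.0, clean code near 2.0.
clean_training = [(1.8, 0), (2.0, 0), (2.5, 0), (8.7, 1), (9.0, 1), (9.3, 1)]
print(knn_classify(8.9, clean_training))  # -> 1 (flagged as malware)

# An attacker with access to the training data flips two malware labels
# to "clean", exactly the label-switching Giles describes.
poisoned = [(f, 0) if f in (8.7, 9.0) else (f, lbl) for f, lbl in clean_training]
print(knn_classify(8.9, poisoned))        # -> 0 (the malware now slips through)
```

The detector itself is untouched; only its training data was tampered with, which is what makes this kind of attack hard to spot.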

There is also the issue of relying on a single master algorithm, which can quite easily become compromised without giving any sign that something untoward has happened. The best defence against this is to use a series of diverse algorithms. And there are other issues, as explained in this MIT Technology Review article.
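A minimal sketch of the diverse-algorithms idea (the detectors, thresholds and sample data are all hypothetical): several independent checks vote on each sample, so a single compromised algorithm cannot silently wave malware through, and disagreement between detectors is itself a warning sign worth investigating:

```python
# Three independent, deliberately simple detectors vote on each sample.

KNOWN_BAD_HASHES = {"deadbeef"}

def entropy_check(sample):      # hypothetical detector 1: packed/encrypted code
    return sample["entropy"] > 7.0

def signature_check(sample):    # hypothetical detector 2: known-bad hash list
    return sample["sha256"] in KNOWN_BAD_HASHES

def behaviour_check(sample):    # hypothetical detector 3: suspicious syscalls
    return "registry_write" in sample["syscalls"]

DETECTORS = [entropy_check, signature_check, behaviour_check]

def vote(sample):
    """Return (majority verdict, disagreement flag)."""
    flagged = sum(d(sample) for d in DETECTORS)
    # Disagreement may mean one detector has been tampered with.
    return flagged >= 2, 0 < flagged < len(DETECTORS)

suspicious = {"entropy": 7.8, "sha256": "deadbeef", "syscalls": ["registry_write"]}
print(vote(suspicious))      # (True, False): unanimous malware verdict

odd_one_out = {"entropy": 7.8, "sha256": "cafebabe", "syscalls": []}
print(vote(odd_one_out))     # (False, True): only one detector fires, flag for review
```

An attacker now has to subvert a majority of the diverse algorithms at once, and partial disagreement surfaces as a signal rather than vanishing silently.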

None of this means that we shouldn’t be using AI for security purposes at all, just that we need to monitor and minimise the risks that come with it more carefully. The challenge is to find, or train, people with the skills needed to use AI in this increasingly challenging corner of the cyber sphere.

The search for Artificial General Intelligence

We already have some pretty fine examples of AI, but all of them are limited to performing a specific task. The ultimate goal, however, is Artificial General Intelligence (AGI): a complete, ‘human-like’ AI system that can, in effect, think like a human. But, as Dan Robitski wrote, “No amount of optimizing systems to get better at a particular task would ever lead to AGI.” In other words, he doesn’t think we’re going to just stumble across it, because the companies working on AI systems have a narrow focus.

What would be the benefits of AGI?

If we had a system capable of abstract reasoning and everything that goes with it, including creativity, we could make major leaps in solving problems in space exploration, economics, healthcare and much more. There are many aspects of our lives on which AGI could have a positive impact.

AGI would also be a massively interesting, and very valuable, investment. However, Robitski believes the money would need to come from governments rather than private investors and venture capital, simply because of the way private funds structure their businesses: a development platform with such a distant payoff would never be able to raise enough from them.

Why private investors are avoiding AGI

Marian Gazdik, Managing Director of Startup Grind Europe, said: “Investors only fund something when they see the end of the tunnel, and in AI it’s very far.” Hence the need for government funds. Tak Lo, a partner at an Asian accelerator that invests in technology startups, speaking alongside Gazdik at The Joint Multi-Conference on Human-Level Artificial Intelligence in Prague last week, commented: “I very much like General AI as an intellectual, but as an investor not as much.”

Venture capitalists like Lo prefer to “invest in companies with great business models that use AI to solve a big problem, or companies that got their hands on a large, valuable dataset for training algorithms.”

The problem with AGI is that a workable solution is so far in the future that private investors simply can’t see when they would get a return. They think in terms of five to ten years, and we may have to wait longer than that before AGI becomes a reality. A government, on the other hand, isn’t tied to the same timetable: it can put money into a project that serves the greater good without worrying too much about when the end product is delivered. But first it has to decide that AGI is a project worth getting behind, and so far no major government has made that commitment. Until one does, AGI will remain a dream.

AI enhanced humans

Could you use an extra hand?

Somebody may have said to you at some time: “I’ve only got two hands,” meaning that whatever you asked them to do just isn’t possible. But in the field of prosthetics and so-called “enhanced humans”, it may now be possible to have additional limbs to carry out tasks.

We have devices that wearers can control with their thoughts — these help people with prosthetic limbs to feel more whole, but now researchers are setting out to see if such devices could make humans more than whole.

Researchers at the Advanced Telecommunications Research Institute in Japan wanted to know whether giving someone a supernumerary robotic limb (SRL), a mind-controlled robotic limb that works alongside the person’s two biological ones, could give that person multitasking abilities beyond those of the average human.

The study

The researchers asked 15 volunteers to sit in a chair with an SRL positioned as if it were a third arm extending from their own body. Each volunteer wore a cap that tracked the brain’s electrical activity and transmitted it to a computer, which turned it into movement in the SRL. All the volunteers had to do to make the SRL work was think about what they wanted it to do.
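The control loop described above can be sketched roughly as follows. This is a hypothetical illustration, not the researchers’ actual software: the threshold decoder, signal values and command names are all invented for the example.

```python
# Invented sketch of a think-to-move loop: a window of EEG readings is
# decoded into a discrete intent, which maps to an SRL motor command.

def decode_intent(eeg_window):
    """Toy decoder: classify a window of EEG samples by its mean amplitude."""
    mean = sum(eeg_window) / len(eeg_window)
    if mean > 0.6:
        return "grasp"
    if mean < -0.6:
        return "release"
    return "idle"

SRL_COMMANDS = {"grasp": "close_gripper", "release": "open_gripper", "idle": "hold"}

def control_step(eeg_window):
    """One iteration of the loop: readings in, motor command out."""
    return SRL_COMMANDS[decode_intent(eeg_window)]

print(control_step([0.7, 0.8, 0.9]))   # -> "close_gripper"
print(control_step([0.0, 0.1, -0.1]))  # -> "hold"
```

Real EEG decoding uses trained classifiers over many electrode channels rather than a single threshold, but the pipeline shape, signal in, intent, command out, is the same.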

The next stage was to ask the volunteers to complete two tasks. The first, balancing a ball on a board, they did using their natural arms and hands. The second, picking up and putting down a bottle, they did using the SRL. The volunteers were then asked to do the two tasks both separately and simultaneously.

The results

Across 20 trials, the volunteers completed both tasks successfully using the three limbs about 75 percent of the time. In other words, they managed to do two things simultaneously that they could not have done if limited to their natural limbs.

The researchers also believe that operating this kind of brain-machine interface may train the brain itself, and their future research will try to establish whether we can enhance our minds by temporarily enhancing our bodies.