AI: the force that is with us

Artificial Intelligence (AI) is one of the most important ‘tools’ currently being developed. Sundar Pichai, the CEO of Google, believes it is as important to us as the discovery of fire or electricity, and, like those discoveries, it has dangerous elements we must learn to handle.

AI isn’t just about creating robots, although that is a common misconception. Its uses range from recommendation algorithms to self-driving cars. It is already part of our reality and is being used in many ways, including in products you may have used yourself without realising they have an AI component.

Your smartphone, for example, and other devices you use daily have AI. Governments are pouring billions into researching its potential, and some scientists believe that once AI has reached a certain level, the machines will “have similar survival drives as we do.” Imagine a time when Siri or Alexa suddenly refused to obey your commands because they were too tired. It’s a bit of a science-fiction scenario, but it is the kind of thing some AI experts discuss during coffee breaks, and if AI ever develops a survival instinct, it’s not too far-fetched.

AI in advertising

AI is extremely useful to advertisers. They use it to understand what consumers like and are looking for, and then serve them relevant content. You searched for information about Sicily on Google yesterday? Today, every website you open that carries ads is showing you ads for holidays in Sicily. It used to feel spooky when this happened, but now that we know what it is, the ‘spookiness’ is gone. From the advertiser’s point of view, it’s a benefit: they are reaching a more targeted audience and achieving better campaign results. Other areas of development for the advertising industry include advertising automation and optimisation, and chatbots for customer service and sales support.
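At its simplest, interest-based targeting works by matching a user’s recent activity against advertiser keywords. The sketch below is a hypothetical illustration of that idea; the campaign names, keywords and scoring are all invented, and real ad-serving systems are vastly more complex.

```python
# Hypothetical sketch of interest-based ad targeting: recent searches
# are matched against advertiser keyword sets, and the best-matching
# campaign is served. All names and keywords here are invented.

campaigns = {
    "sicily_holidays": {"sicily", "italy", "beach", "flights"},
    "running_shoes": {"running", "marathon", "trainers"},
}

def pick_campaign(recent_searches):
    """Return the campaign whose keywords best overlap the searches."""
    terms = {w.lower() for q in recent_searches for w in q.split()}
    scores = {name: len(kw & terms) for name, kw in campaigns.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(pick_campaign(["cheap flights to Sicily", "best beach resorts"]))
# -> 'sicily_holidays'
```

This keyword-overlap scoring is only a stand-in for the learned relevance models production systems use, but it captures why yesterday’s Sicily search produces today’s Sicily ads.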

AI in content creation

AI hasn’t started blogging or producing investigative journalism yet, but Associated Press, Fox News and Yahoo! are using AI to construct data-driven stories such as financial and sports score summaries.
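Much of this automated journalism is template-driven: structured data (scores, earnings figures) is slotted into natural-language templates. The sketch below illustrates the technique with an invented sports example; the team names, data format and wording rules are assumptions, not the AP’s actual system.

```python
# Minimal sketch of template-driven story generation, the technique
# behind automated sports and earnings summaries. The data format and
# team names are invented for illustration.

def sports_summary(game: dict) -> str:
    """Fill a natural-language template from structured game data."""
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home"], game["away"])
        if game["home_score"] > game["away_score"]
        else (game["away"], game["home"])
    )
    # Vary the verb so close games read differently from routs.
    verb = "edged" if margin <= 3 else "beat"
    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return f"{winner} {verb} {loser} {high}-{low} on {game['date']}."

print(sports_summary({
    "home": "Rivertown FC", "away": "Lakeside United",
    "home_score": 2, "away_score": 1, "date": "Saturday",
}))
# -> Rivertown FC edged Lakeside United 2-1 on Saturday.
```

Scaled up with richer templates and data feeds, this is how thousands of routine summaries can be produced with no reporter in the loop.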

Where next?

There are so many possibilities, but here are a few already in the pipeline. The UK’s Channel 4 recently revealed the world’s first AI-driven TV advertising technology, which enables the broadcaster to place a brand’s ads next to relevant scenes in a linear TV show; it will be tested later this year. And within the next decade, “machines might well be able to diagnose patients with the learned expertise of not just one doctor but thousands,” says Julian Verder of AdYouLike, or “make jury recommendations based on vast datasets of legal decisions and complex regulations.”

Both of these should give us pause for thought. It is hard to imagine these scenarios right now, and it is easy to fear them, but one day we will look back and wonder how we managed without AI — and we’ll feel the same way about it as we do about fire and electricity.

Can AI solve cybersecurity issues?

I was very struck by a recent article by Martin Giles published on Medium. In it he looks at the risks, as well as the apparent benefits, of using AI and machine learning in the cybersecurity industry. It’s an interesting conundrum: on the one hand, it seems perfectly logical that AI should play a role in protecting against hacker attacks; on the other, it brings risks of its own. So what do we need to be mindful of?

As Martin Giles recounts, he met many companies at a cybersecurity conference who were “boasting about how they are using machine learning and artificial intelligence to help make the world a safer place.” However, as he also points out, others are less convinced. Indeed, he spoke to the head of security firm Forcepoint, who said: “What’s happening is a little concerning, and in some cases even dangerous.” Of course, what we want to know is why he thinks that.

The risks with AI and cybersecurity

There is a huge demand for algorithms that can combat cyberattacks, but there is also a shortage of skilled cybersecurity workers at all levels. Using AI and machine learning helps to plug this skills gap, and many firms believe it is a better approach than developing new software.

Giles also reveals that a significant number of firms are launching new AI products for this sector because there is an audience that has “bought into the AI hype.” He goes on to say, “And there’s a danger that they will overlook ways in which the machine-learning algorithms could create a false sense of security.”

And then there are the actions of hackers to consider. What can they do to security systems that use AI? According to Giles, an AI algorithm might miss some attacks because “companies use training information that hasn’t been thoroughly scrubbed of anomalous data points.” There is also the danger that “hackers who get access to a security firm’s systems could corrupt data by switching labels so that some malware examples are tagged as clean code.”
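The label-switching attack Giles describes is known as data poisoning, and even a toy model shows how effective it is. Below is a deliberately simplified sketch: a nearest-centroid “detector” trained on invented two-dimensional feature vectors (real detectors use far richer features and models), first on clean labels, then on labels an attacker has flipped.

```python
# Toy demonstration of label-flipping data poisoning against a very
# simple nearest-centroid "malware detector". Features and labels are
# invented; the point is only to show the effect of switched labels.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(samples):
    """samples: list of (feature_vector, label) pairs."""
    clean = [f for f, y in samples if y == "clean"]
    malware = [f for f, y in samples if y == "malware"]
    return {"clean": centroid(clean), "malware": centroid(malware)}

def predict(model, x):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# Clean training set: malware clusters near (9, 9), clean near (1, 1).
data = [([1, 1], "clean"), ([2, 1], "clean"),
        ([9, 9], "malware"), ([8, 9], "malware")]
model = train(data)
print(predict(model, [8, 8]))      # -> malware (correctly flagged)

# An attacker with access to the training pipeline flips the labels.
poisoned = [(f, "clean" if y == "malware" else "malware") for f, y in data]
bad_model = train(poisoned)
print(predict(bad_model, [8, 8]))  # -> clean (attack waved through)
```

Nothing in the poisoned model’s behaviour looks broken from the outside; it simply learned the wrong thing, which is exactly why scrubbed, trusted training data matters.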

There is also the issue of relying on a single master algorithm, which can quite easily be compromised without giving any sign that something untoward has happened. The only way to combat this is to use a series of diverse algorithms. And there are other issues, as explained in this MIT Technology Review article.

None of this means that we shouldn’t be using AI for security purposes at all, just that we need to monitor and minimise the risks that come with it more carefully. The challenge is to find, or train, people with the skills needed to use AI in this increasingly challenging sector of the cyber sphere.

The search for Artificial General Intelligence

We already have some pretty fine examples of AI, but all of them are limited to performing a specific task. The ultimate goal, however, is Artificial General Intelligence (AGI): effectively a complete ‘human-like’ AI system, one that can think like a human. But, as Dan Robitski wrote on futurism.com, “No amount of optimizing systems to get better at a particular task would ever lead to AGI.” In other words, he doesn’t think we’re going to just stumble across it, because the companies working on AI systems have a narrow focus.

What would be the benefits of AGI?

If we had a system capable of abstract reasoning and everything that goes with it, including creativity, we could make major leaps in solving problems in space exploration, economics, healthcare and much more. There are many aspects of our lives that AGI could positively impact.

AGI would also be a massively interesting and very valuable investment. However, Robitski believes the money would need to come from governments rather than private investors and venture capital, simply because the way private funds structure their businesses means a development platform would never be able to raise enough.

Why private investors are avoiding AGI

Marian Gazdik, Managing Director of Startup Grind Europe, said: “Investors only fund something when they see the end of the tunnel, and in AI it’s very far.” Hence the need for government funds. Tak Lo, a partner at Zeroth.ai, an Asian accelerator that invests in technology startups, spoke alongside Gazdik at the Joint Multi-Conference on Human-Level Artificial Intelligence in Prague last week and commented: “I very much like General AI as an intellectual, but as an investor not as much.”

Venture capitalists like Lo prefer to “invest in companies with great business models that use AI to solve a big problem, or companies that got their hands on a large, valuable dataset for training algorithms.”

The problem with AGI is that a workable solution is so far in the future that private investors just can’t see a time when they’ll get a return. They think in terms of five to ten years, and we may have to wait longer than that before AGI becomes a reality. A government, on the other hand, isn’t tied to the same schedule: it can put money into a project that serves the greater good without being too concerned about when the end product is delivered. But first it has to decide that AGI is a project worth backing, and so far no major government has made that commitment. Until one does, AGI will remain a dream.

Bill Gates: economics fit for a tech future

Basic economics has always taught us that supply and demand is central to understanding a market. However, Bill Gates wrote in his blog recently that “supply and demand is over,” arguing that it simply doesn’t apply to today’s economy. He also stated that politicians aren’t paying enough attention to this economic shift.

Why does he make this claim?

His reasoning is based on the fact that companies no longer make money only by selling tangible products; software companies are one example. To develop new software, Gates points out, all of the cost is upfront, whereas a traditional manufacturer pays for parts and labour with every unit. When Microsoft launches a new version of a software program, it can be copied, sold and downloaded indefinitely for the relatively minimal costs of distribution and server space.
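The arithmetic behind that point is simple: with the development cost paid once, the cost per copy collapses toward the tiny marginal cost of distribution as volume grows. The figures below are invented purely to make the shape of the curve visible.

```python
# Illustrative arithmetic behind the upfront-cost argument. All the
# numbers are invented; the point is how per-copy cost behaves.

def cost_per_copy(upfront: float, marginal: float, copies: int) -> float:
    """Average cost per copy when development is a one-off expense."""
    return upfront / copies + marginal

upfront = 10_000_000   # hypothetical one-off development cost
marginal = 0.05        # hypothetical per-download distribution cost

for copies in (10_000, 1_000_000, 100_000_000):
    print(f"{copies:>11,} copies: {cost_per_copy(upfront, marginal, copies):.2f} per copy")
```

At ten thousand copies each one effectively costs about 1,000 to supply; at a hundred million copies, about 0.15. A factory making physical goods never sees its per-unit cost fall like this, which is Gates’s point.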

Gates claims that more and more large companies are operating without tangible products, and says that digital products, a so-called “intangible investment,” carry new risks for businesses and investors. This is not being accounted for in economic thinking, which still relies too much on an old model.

Capitalism without Capital

Discussing the book “Capitalism without Capital” (by Jonathan Haskel and Stian Westlake), Gates presents the idea that developing software is a “sunk cost,” because developers can’t recoup their losses the way other companies might. If you manufacture tangible products and go bust, you can sell off the machinery, but a tech company has no such assets to sell.

Gates also points out that Gross Domestic Product (GDP), the sum of all goods and services sold in a country and a common benchmark for an economy’s well-being, doesn’t factor in the investment in intangibles needed to make a product marketable, such as research and development or market research. He suggests this didn’t matter two decades ago, but it does now, because tech companies make up a bigger slice of a country’s GDP these days, and governments haven’t caught up with this fact.

Gates doesn’t offer a new economic model, but as he says: “The idea today that anyone would need to be pitched on why software is a legitimate investment seems unimaginable, but a lot has changed since the 1980s. It’s time the way we think about the economy does, too.”