The everyday uses of AI

When it comes to Artificial Intelligence (AI), many of the people I talk to think that it is either something that is coming in the future, or that interest in it is limited to geeks. Some see it as a negative tool that will destroy employment. And they are surprised when I tell them that they are probably using AI in their everyday lives already; they just aren’t aware that something like a Google search is AI-based. And those adverts you keep seeing on social media because one day last week you searched for ‘holidays in the Maldives’? That’s all down to AI.

Here are some of the everyday uses of AI that you may not be aware of. They have been compiled by 12 experts from Forbes Technology Council.

1. Customer Service

Data analytics and AI help brands anticipate what their customers want and deliver more intelligent customer experiences, certainly better than the old call centre model.

2. Personalised Shopping

When you shop online and look at a product, you may find you suddenly get recommendations for similar products. That’s AI.

3. Protecting Finances

For credit card companies and banks, AI is indispensable, especially in detecting fraudulent activity on your account. It saves all of us from a great deal of pain.

4. Driving Safer

You don’t need a self-driving car to use AI. For example, lane-departure warnings, adaptive cruise control and automated emergency braking are all AI functions.

5. Improving Agriculture

Agriculture is an essential part of our lives, because we all want and need to eat. AI is improving this sector in several ways: satellites scanning farm fields to monitor crop and soil health; machine learning models that track and predict environmental impacts, like droughts; and big data to differentiate between plants and weeds for pesticide control.

6. Our Trust in Information

Trust in information is one of the most critical issues of our current times. We are bombarded with images and articles, and often we just don’t know whether they are telling the truth or not. Experts say that AI will change how we learn and the level of trust we place in information. AI will help us identify deepfakes and all those other methods of sharing ‘fake’ information, and that is very important.

The ways in which we use AI are growing all the time — and if you think you’re not using it, you almost certainly already are.

The problem with AI bias

AI has come a long way. Just after WW2, there was a preconception that developing Artificial Intelligence would lead to something like an ‘Attack of the Zombie Robots’, and that AI could only be a bad thing for humanity. Fortunately we have moved on from the old sci-fi view of AI, and we even have robotics used in surgery, but there is still a lingering feeling that AI and robotics are threatening in some way. One of those ways is ‘bias’.

AI is very much part of the fourth industrial revolution, which also includes cyber-physical systems powered by technologies like machine learning, blockchain, genome editing and decentralised governance. The challenges we face in developing our use of AI are, for the most part, tricky ethical ones that need a sensitive approach.

So, what is the issue? As James Warner writes in his article on AI and bias,

“AI is aiding the decision making from all walks of life. But, the point is that the foundation of AI is laid by the way humans function.” And as we all know — humans have bias, unfortunately. Warner says, “It is the result of a mandatory limited view of the world that any single person or group can achieve. This social bias just as it reflects in humans in sensitive areas can also reflect in artificial intelligence.”

And this human bias, as it cascades down into AI, can be dangerous to you and me. For example, Warner writes: “the investigative news site ProPublica found that a criminal justice algorithm used in Florida mislabelled African American defendants as high risk. This was at twice the rate it mislabelled white defendants.” Facial recognition has already been highlighted as an area with shocking ethnic bias, as well as recognition errors.
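To make that kind of finding concrete, here is a minimal Python sketch of how such a disparity can be measured. The data is invented purely for illustration (it is not the ProPublica/COMPAS data): we compute the false positive rate, the share of people who did not reoffend but were still labelled high risk, separately for each group and compare the two.

```python
# Purely illustrative: invented toy data, not the ProPublica/COMPAS dataset.
# Each record: (group, actually_reoffended, predicted_high_risk)
records = [
    ("group_a", False, True),  ("group_a", False, False), ("group_a", False, True),
    ("group_a", True,  True),  ("group_a", False, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, True),  ("group_b", False, False),
    ("group_b", True,  True),  ("group_b", False, False), ("group_b", False, False),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still labelled high risk."""
    did_not_reoffend = [r for r in rows if not r[1]]
    wrongly_flagged = [r for r in did_not_reoffend if r[2]]
    return len(wrongly_flagged) / len(did_not_reoffend)

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))

# With this toy data, group_a is wrongly flagged at twice group_b's rate:
# the kind of gap the ProPublica investigation reported.
```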

IBM suggests that researchers are quickly learning how to handle human bias as it infiltrates AI, and that they are coming up with new solutions to control this bias and ultimately free AI systems of it.

The ways in which bias can creep in are numerous, but researchers are developing algorithms that can assist with detecting and mitigating hidden biases in the data. Furthermore, scientists at IBM have devised an independent bias rating system with the ability to determine the fairness of an AI system.
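As a flavour of what detecting and mitigating hidden bias in data can look like in practice, here is a minimal sketch of one widely used idea, not IBM’s actual rating system: measure how often each group receives the favourable outcome, summarise the gap as a single ratio, and then reweight the disadvantaged group’s examples before retraining. The groups, numbers and weighting rule below are invented for illustration.

```python
# Minimal sketch of one bias-detection/mitigation idea; illustrative only.
# Each example: (group, got_favourable_outcome)
examples = (
    [("group_a", True)] * 30 + [("group_a", False)] * 20
    + [("group_b", True)] * 15 + [("group_b", False)] * 35
)

def favourable_rate(group: str) -> float:
    """Share of a group's examples that received the favourable outcome."""
    rows = [e for e in examples if e[0] == group]
    return sum(1 for e in rows if e[1]) / len(rows)

rate_a, rate_b = favourable_rate("group_a"), favourable_rate("group_b")
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Disparate impact ratio: {disparate_impact:.2f}")  # 1.0 would mean the groups are treated alike

# One simple mitigation (a crude form of 'reweighing'): give the disadvantaged
# group's favourable examples more weight, so a model retrained on this data
# sees a more balanced picture.
weights = [rate_a / rate_b if e == ("group_b", True) else 1.0 for e in examples]
```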

One outcome of all this may be that we discover more about how human biases are formed and how we apply them throughout our lives. Some biases are obvious to us, but others tend to sneak around unnoticed until somebody else points them out. Perhaps we will find that AI can teach us how to handle a variety of biased opinions, and to be fairer ourselves.

What is the point of a robot tax?

While browsing articles on Artificial Intelligence, I stumbled across a piece by Milton Ezrati at Forbes discussing the possibility of a robot tax. The idea had been proposed by Bill de Blasio before he gave up his bid for the Democratic presidential nomination. Ezrati thinks it is a dreadful idea, but he is aware that both Silicon Valley leaders and current government progressives are quite keen on it.

According to the article, a robot tax would have four parts. First, it would apply to any company introducing labour-saving automation. Second, it would insist that the employer either find new jobs for the displaced workers at the same pay level or pay them severance. Third, the tax would require a new federal agency, the Federal Automation and Worker Protection Agency (FAWPA). And fourth, it would require Washington to eliminate all tax incentives for any innovation that leads to automation.

The assumption appears to be that workers displaced by automation will never again find work at a comparable wage. Elon Musk, Bill Gates and Mark Zuckerberg are amongst those who are worried about this, as is Democratic candidate Andrew Yang, who suggests the introduction of a universal basic income “to substitute, he claims, for the incomes lost to robots and artificial intelligence generally.”

However, it is not proven that the introduction of AI and robots will disadvantage workers so substantially. As Ezrati says, “innovation, if it initially displaces some workers, always eventually creates many more new jobs even as it boosts overall productivity and increases output.”

And, as he also points out, “since the industrial revolution began more than 250 years ago, business and industry have actively applied wave after wave of innovation and yet economies have nonetheless continued to employ on average some 95 percent of those who want to work.”

In my opinion, and in this respect I am in agreement with Ezrati, we have focused far too much on what will be lost with the introduction of more robotics, and not sufficiently on what is to be gained. His analogy about the effect of email and the Internet on typists’ jobs illustrates this. Whilst those working in admin, messenger departments and typing pools lost their existing jobs, new forms of employment emerged for them.

Similarly, when the introduction of automatic teller machines threatened to throw thousands of bank clerks out of work, the machines created profits that meant banks could employ more tellers, and these tellers, with the assistance of different technologies, could do more interesting, complex, and valuable jobs at higher pay than they received before the ATMs were put in place.

A robot tax would be counter-productive and stunt growth in innovation, hampering the possibility of finding new types of jobs and improving living standards. It’s a proposed tax that simply doesn’t make sense.

Are neuromorphic chips the future of AI and blockchain?

There is no doubt that artificial intelligence (AI) is the driver of a revolution in automation akin to the influence of coal and factory machines on previous industrial revolutions. Jayshree Pandya, writing for Forbes, makes a very interesting point when she suggests that the increasing importance of AI goes hand in hand with a need for more computing power.

She suggests, “There are indicators that raw computing power is on its way to replacing fossil fuels and will be the most valued fuel in the rapidly emerging intelligence age.” The question of course is: where will that computing power come from?

The need for more computing power

AI also needs massive amounts of data to produce useful tools. One of the sources of both power and data is potentially the blockchain. Alongside the much-needed power, blockchain technology can add structure and accountability to AI algorithms, “and may help in much-needed areas like security, quality, and integrity of the intelligence AI produces,” Pandya says.

What we are really talking about is Big Data. It is the fuel of AI, and blockchain produces that fuel, so it is entirely logical that the two have a future together.

However, there is another important question to be answered. Can the current blockchain technology infrastructure support the needs of AI, when it appears to be struggling to meet its own needs?

Prof. Irving Wladawsky-Berger, a Research Affiliate at MIT Sloan School of Management, offers some insights into the situation. He points to the environmental concerns about the amount of electric power blockchain technology uses because of its core process: securing the chain requires participants to expend large amounts of computation to validate every block written to it. He believes the amount of computing power the blockchain requires is unsustainable, and that it is one of the most critical challenges facing the industry.
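To see why that core process is so power-hungry, here is a deliberately simplified sketch of proof-of-work mining, the mechanism behind Bitcoin-style blockchains (the difficulty setting here is invented and far easier than any real network’s): a miner has to try hash after hash until one happens to fall below the difficulty target, and every failed attempt is electricity spent.

```python
import hashlib

def mine_block(block_data: str, difficulty_prefix: str = "0000") -> tuple[int, str]:
    """Brute-force a nonce until the block's hash starts with the required prefix.

    The difficulty prefix is tiny here for illustration; real networks demand far
    more leading zeros, so real miners burn through vastly more attempts (and power).
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce, digest
        nonce += 1  # every failed guess is wasted computation

nonce, digest = mine_block("example transactions")
print(f"Found a valid hash after {nonce + 1} attempts: {digest[:16]}...")
```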

But it isn’t only blockchain that is fuelling the need for more computing power: it is AI and all emerging technologies. As these evolve, there needs to be a solution to this issue. As Wladawsky-Berger says, “there is a need to not only process computation more efficiently but also to evolve both hardware and software to meet the demand for increased computing power.” The solution he points to “is a clear need to move away from traditional blockchain chips to low energy, scalable, and sustainable chips.”

Neuromorphic chips

The answer may be neuromorphic chips. These do all their processing locally, without having to send messages back and forth to the cloud. In fact, they function in a similar way to the human brain, conserving energy by only working when needed. Wladawsky-Berger believes, “neuromorphic computing chips will likely be the future of not only artificial intelligence but also of the blockchain, as they give us an ability to develop low energy consuming cryptocurrency as well as distributed systems.”
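Neuromorphic hardware is built around spiking, event-driven computation. The toy sketch below illustrates the general idea rather than any particular chip: a single leaky integrate-and-fire neuron sits idle while nothing arrives, accumulates charge as inputs come in, and only ‘fires’, triggering downstream work, when a threshold is crossed. That is where the energy saving comes from.

```python
# Toy leaky integrate-and-fire neuron: an illustration of event-driven
# (neuromorphic-style) computation, not a model of any specific chip.

class LIFNeuron:
    def __init__(self, threshold: float = 1.0, leak: float = 0.9):
        self.potential = 0.0       # accumulated 'charge'
        self.threshold = threshold
        self.leak = leak           # fraction of charge retained each time step

    def step(self, input_current: float) -> bool:
        """Process one time step; return True only when the neuron fires."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after firing
            return True
        return False               # no spike, so almost no downstream work

neuron = LIFNeuron()
inputs = [0.0, 0.0, 0.4, 0.5, 0.6, 0.0, 0.0, 0.9, 0.3]  # mostly quiet, occasional activity
spikes = [t for t, current in enumerate(inputs) if neuron.step(current)]
print("Spikes at time steps:", spikes)  # downstream work happens only at these moments
```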

What he is also suggesting is that in recent years there has been more emphasis on developing software than hardware. He says, “Neuromorphic computing and chips bring the much-needed evolution in computer hardware,” and that if we follow through with developing this, then AI and blockchain can have a sustainable future together.

We know that the demand for AI is increasing rapidly, and we need to find a power source to feed that demand. It seems the answer is neuromorphic chips!