Siri is witty, but knows her limits!


Back in 1956, John McCarthy coined the term AI, for artificial intelligence. However, it is only in recent years that we have personally witnessed the benefits of AI and its adoption at scale by larger enterprises. One of the things that has encouraged the use of AI is the need to understand data patterns: companies want to know much more about their target audience, and AI allows them to gain useful insights into consumer behaviour.

There is much to be gained by understanding AI, starting with the fact that it is commonly divided into ‘weak’ and ‘strong’ categories.

WEAK AI
Weak AI is also known as narrow AI. It covers systems set up to accomplish simple tasks or solve specific problems. Weak AI works according to the rules that are set for it and is bound by them. However, just because it is labelled ‘weak’ doesn’t mean it is inferior: it is extremely good at the tasks it is made for. Siri is an example of ‘weak AI’. Siri is able to hold conversations, sometimes even quite witty ones, but essentially it operates in a predefined manner. And you can experience its ‘narrowness’ when you try to make it perform a task it is not programmed to do.

Company chatbots are similar. They respond appropriately and accurately when customers ask questions, and they can even manage extremely complex situations, but their intelligence is restricted to providing solutions to problems that are already programmed into the system.
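To make that ‘bound by rules’ point concrete, here is a minimal sketch in Python of how a simple rule-based chatbot behaves. The keywords and responses are entirely made up, and this is not how Siri or any particular product is built; it simply illustrates why a narrow system copes well inside its rules and falls over outside them.

```python
# A toy rule-based assistant: hypothetical keywords and canned answers.
# Real chatbots are far more sophisticated, but the principle stands:
# the system only handles what it was programmed to handle.

RULES = {
    "opening hours": "We are open from 9am to 5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
    "delivery": "Standard delivery takes 2 to 3 working days.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # Anything outside the predefined rules exposes the 'narrowness'.
    return "Sorry, I can't help with that. Let me connect you to a human."

print(reply("What are your opening hours?"))   # handled by a rule
print(reply("Can you write me a poem?"))       # outside the rules
```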
STRONG AI
As you can imagine, ‘strong AI’ has much more potential, because it aims to mimic the human brain. The idea is a system whose actions and decisions are indistinguishable from those of a human being, complete with genuine understanding and even consciousness.

However, the difficulty lies in defining intelligence accurately. For strong AI it is extremely hard to determine what success looks like or where the boundaries of intelligence lie. That is why people still prefer the ‘weak’ version: it does not attempt to encompass intelligence fully, but focuses on completing the particular task it is assigned. As a result it has become tremendously popular in the finance industry.
Finance and AI
The finance industry has benefited more than most from the introduction of AI. It is used in risk assessment, fraud detection, financial advice, investment trading, and finance management.

Artificial Intelligence can also be used to audit financial transactions and to analyse complicated tax changes.
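As an illustration of how fraud detection can work in practice, here is a small sketch using scikit-learn’s IsolationForest to flag transactions that deviate from a customer’s normal spending. The figures and features are invented for the example; production systems use far richer data and models.

```python
# A minimal anomaly-detection sketch: flag transactions that look unusual
# compared with a customer's normal spending. All numbers are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount in pounds, hour of day]
normal_spending = np.array([
    [12.50, 9], [30.00, 13], [45.20, 18], [8.99, 12], [22.40, 17],
    [15.75, 10], [60.00, 19], [27.30, 14], [9.50, 8], [33.10, 20],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_spending)

new_transactions = np.array([
    [25.00, 15],    # in line with past behaviour
    [2400.00, 3],   # large amount at 3am, likely to be flagged
])

# predict() returns 1 for 'normal' and -1 for 'anomalous'
for tx, label in zip(new_transactions, model.predict(new_transactions)):
    status = "flagged for review" if label == -1 else "looks fine"
    print(f"£{tx[0]:.2f} at {int(tx[1])}:00 -> {status}")
```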

In the future, we may find companies basing business decisions on AI, forecasting consumer behaviour and adapting to changes at a much faster pace.

Artificial Intelligence is going to help people and businesses make smarter decisions, but as always we need to remain mindful of finding the right balance between humans and machines.

AI and information crime

Artificial Intelligence (AI) is moving at speed into the mainstream. Almost every day we are learning about new uses for it, and discovering the ways in which it is already a part of our daily lives. Just this week medical researchers have revealed that the use of AI in testing for breast cancer is more effective than the work of expert radiologists. As one doctor interviewed said, this use of AI frees up radiologists to do other important work. And that’s a great benefit, as it takes roughly 10 years to train in radiology.

On the other hand, people are concerned that AI may take over almost every area of our lives, from self-driving cars to fighting wars. And it may do our laundry as well. Basically it comes down to this — will AI replace humans? That’s the great fear, but one which is largely exaggerated. As Kathleen Walch writes: “However, it’s becoming increasingly clear that AI is not a job killer, but rather, a job category killer.” I have also written about this aspect of AI before, pointing to the fact that “jobs are not destroyed, but rather employment shifts from one place to another and entirely new categories of employment are created.”

Indeed, as Walch says, “companies will be freed up to put their human resources to much better, higher value tasks instead of taking orders, fielding simple customer service requests or complaints, or data entry related tasks.” What businesses must do is have honest conversations with their employees about the use of AI and show how it can allow humans to be more creative by giving the dull, routine tasks to AI.

The one area where AI is causing us real problems is the generation of fake news in a range of formats. It is already almost impossible to tell whether an image is real or AI-generated, or whether you’re talking to a bot or a real person. AI-generated ‘disinformation’ is not necessarily produced by criminals: as we all now know, state actors are perhaps the worst offenders, and we have plenty of examples to look at from Russia, the USA and the UK. Lies are fed to citizens through social media accounts that appear to be reputable government sources, and the social media companies collude with these sources, as Facebook has shown us. Walch says, “Now all it takes is a few malicious actors spreading false claims to traumatically alter public opinion and quickly shift the public’s view.” Brexit and the election of Trump are good examples of this in play.

And it is in this area that we must consider the ethics of using AI most closely right now. As Walch says, “Governments and corporations alike will have to think about how they will reign in the potential damage done by AI-enabled content creation,” and she adds, “we must encourage companies and governments to consider fake content to be as malicious as cybersecurity threats and respond appropriately.”

What we are talking about is essentially propaganda. There are those of us who can see through the smoke and mirrors, but many can’t, and these citizens need protection from the malicious acts of the information criminals.

The everyday uses of AI

When it comes to Artificial Intelligence (AI), many of the people I talk to think that it is either something still to come in the future, or that interest in it is limited to geeks. Some see it as a negative tool that will destroy people’s jobs. And they are surprised when I tell them that they are probably using AI in their everyday lives already — they just aren’t aware that something like a Google search is AI-based. And those adverts you keep seeing on social media because one day last week you searched for ‘holidays in the Maldives’ — that’s all down to AI.

Here are some of the everyday uses of AI that you may not be aware of. They have been compiled by 12 experts from Forbes Technology Council.

1. Customer Service

Data analytics and AI help brands anticipate what their customers want and deliver more intelligent customer experiences — certainly better than the old call centre model.

2. Personalised Shopping

When you shop online and look at a product, you may find that you suddenly get recommendations for similar products — that’s AI.

3. Protecting Finances

For credit card companies and banks, AI is indispensable, especially in detecting fraudulent activity on your account. It saves all of us a great deal of pain.

4. Driving Safer

You don’t need a self-driving car to use AI. For example, lane-departure warnings, adaptive cruise control and automated emergency braking are all AI functions.

5. Improving Agriculture

Agriculture is an important element of our lives, because we all want and need to eat. AI is improving this important sector in several ways: satellites scanning farm fields to monitor crop and soil health; machine learning models that track and predict environmental impacts, such as droughts; and big data used to differentiate between plants and weeds for pesticide control.

6. Our Trust in Information

Trust in information is one of the most critical issues of our times. We are bombarded with images and articles, and we often have no way of knowing whether they are telling the truth. Experts say that AI will change how we learn and the level of trust we place in information. AI will help us identify deep fakes and the other methods of sharing ‘fake’ information, and that is very important.

The ways in which we use AI are growing all the time — and if you think you’re not using it, you almost certainly already are.

The problem with AI bias

AI has come a long way. Just after WW2, there was a preconception that developing Artificial Intelligence would lead to something like an ‘Attack of the Zombie Robots’, and that AI could only be a bad thing for humanity. Fortunately we have moved on from that old sci-fi view of AI, and we even have robotics used in surgery, but there is still a lingering feeling that AI and robotics are threatening in some way, and one of those ways is ‘bias’.

AI is very much part of the fourth industrial revolution, which also includes cyber-physical systems powered by technologies like machine learning, blockchain, genome editing and decentralised governance. The challenges we face in developing our use of AI are, for the most part, tricky ethical ones that need a sensitive approach.

So, what is the issue? As James Warner writes in his article on AI and bias, “AI is aiding the decision making from all walks of life. But, the point is that the foundation of AI is laid by the way humans function.” And as we all know, humans have bias, unfortunately. Warner says, “It is the result of a mandatory limited view of the world that any single person or group can achieve. This social bias just as it reflects in humans in sensitive areas can also reflect in artificial intelligence.”

And this human bias, as it cascades down into AI, can be dangerous to you and me. For example, Warner writes: “the investigative news site Pro Publica found that a criminal justice algorithm used in Florida mislabelled African American defendants as high risk. This was at twice the rate it mislabelled white defendants.” Facial recognition has already been highlighted as an area with shocking ethnic bias, as well as recognition errors.

IBM suggests that researchers are quickly learning how to handle human bias as it infiltrates AI. It says researchers are coming up with new solutions that control bias and ultimately free AI systems from it.

The ways in which bias can creep in are numerous, but researchers are developing algorithms that can assist with detecting and mitigating hidden biases in the data. Furthermore, scientists at IBM have devised an independent bias rating system with the ability to determine the fairness of an AI system.
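To show what a bias check of this kind can look like, here is a toy example that compares false positive rates between two groups, the sort of disparity the Pro Publica investigation described. The data is entirely made up, and real auditing toolkits compute many more metrics, but the underlying arithmetic is this simple.

```python
# Toy bias audit: compare how often truly low-risk people in each group
# are wrongly labelled high-risk. All records below are invented.

def false_positive_rate(records):
    """Share of people who did not reoffend but were labelled high-risk."""
    low_risk = [r for r in records if not r["reoffended"]]
    flagged = [r for r in low_risk if r["predicted_high_risk"]]
    return len(flagged) / len(low_risk) if low_risk else 0.0

predictions = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": True},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": True},
]

for group in ("A", "B"):
    rate = false_positive_rate([r for r in predictions if r["group"] == group])
    print(f"Group {group}: false positive rate = {rate:.0%}")

# A large gap between the two rates is exactly the kind of disparity
# a bias rating system is designed to surface.
```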

One outcome of all this may be that we discover more about how human biases are formed and how we apply them throughout our lives. Some biases are obvious to us, but others tend to sneak around unnoticed until somebody else points them out. Perhaps we will find that AI can teach us how to handle a variety of biased opinions, and to be fairer ourselves.