Siri is witty, but knows her limits!


Back in 1956, a man called John McCarthy coined the term AI for artificial intelligence. However, it is only in recent years that we have personally witnessed the benefits of AI and its mass-scale adoption by larger enterprises. One of the things that has encouraged the use of AI is the need to understand data patterns: companies want to know much more about their target audience, and AI allows them to gain useful insights into consumer behaviour.

There is much to be gained by understanding AI, including the fact that it is divided into ‘weak’ and ‘strong’ categories.

WEAK AI
Weak AI is also known as Narrow AI. This covers systems set up to accomplish simple tasks or solve specific problems. Weak AI works according to the rules that are set and is bound by them. However, just because it is labelled ‘weak’ doesn’t mean it is inferior: it is extremely good at the tasks it is made for. Siri is an example of ‘Weak AI’. Siri is able to hold conversations, sometimes even quite witty ones, but essentially it operates in a predefined manner. And you can experience its ‘narrowness’ when you try to make it perform a task it is not programmed to do.

Company chatbots are similar. They respond appropriately when customers ask questions, and they are accurate. The AI is even capable of managing situations that are extremely complex, but the intelligence level is restricted to providing solutions to problems that are already programmed into the system.
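To make the ‘narrowness’ concrete, here is a minimal sketch of a rules-based chatbot of the kind described above. The intents and replies are invented for illustration; a real company chatbot would have far richer intent matching, but the limitation is the same: anything outside the programmed rules gets a fallback.

```python
# Minimal sketch of a rules-based ("weak AI") chatbot.
# The keywords and replies below are hypothetical examples.

RULES = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
    "contact": "You can reach support at support@example.com.",
}

def respond(message: str) -> str:
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # The 'narrowness' in action: anything outside the rules
    # falls through to a canned escalation message.
    return "Sorry, I can't help with that. Let me connect you to an agent."
```

However complex the rule set grows, the system never answers a question its designers did not anticipate, which is exactly the boundary between weak and strong AI.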
STRONG AI
As you can imagine, ‘Strong AI’ has much more potential, because it is set up to mimic the human brain. It is so powerful that the actions performed by the system closely resemble the actions and decisions of a human being. It is also expected to possess understanding and consciousness.

However, the difficulty lies in defining intelligence accurately. It is extremely difficult, if not impossible, to determine success or set boundaries for intelligence as far as strong AI is concerned. And that is why people still prefer the ‘weak’ version: it does not attempt to encompass intelligence fully, but instead focuses on completing the particular task it is assigned. As a result it has become tremendously popular in the finance industry.
FINANCE AND AI
The finance industry has benefited more than most from the introduction of AI. It is used in risk assessment, fraud detection, giving financial advice, investment trading, and finance management.

Artificial Intelligence can be used in processes that involve auditing financial transactions, and it can analyse complicated tax changes.
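As a toy illustration of the auditing idea, and not any specific product’s method, a simple statistical check can flag unusual transactions for human review. The threshold and amounts here are arbitrary; real fraud-detection models are far more sophisticated.

```python
# Hedged sketch: flag transactions whose amount is a statistical
# outlier, a simple stand-in for the anomaly detection used in
# automated transaction auditing.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transactions more than `threshold`
    standard deviations from the mean amount."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]
```

A run of ordinary payments with one very large outlier would flag only that outlier, which is the kind of candidate an auditor (human or machine) then examines more closely.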

In the future, we may find companies basing business decisions on AI, as well as forecasting consumer behaviour and adapting a business to those changes at a much faster pace.

Artificial Intelligence is going to help people and businesses make smarter decisions, but as always we need to remain mindful of finding the right balance between humans and machines.

Free phones – but NO privacy!


When I spotted an article in Forbes by Thomas Brewster, I was immediately intrigued. The headline is U.S. Funds Program With Free Android Phones For The Poor — But With Permanent Chinese Malware. It surely strikes anyone reading it as a case of giving with one hand and taking away with the other. So I had to check out what it was about.

As I live outside the USA, I was not aware that low income households in the States have been able to get cheap cell service and even free smartphones via the U.S. government-funded Lifeline Assistance program. And there is one provider of this service called Assurance Wireless that offers a free Android device along with free data, texts and minutes. It sounds good on the face of it.

But according to security researchers at Malwarebytes there is a significant drawback to this largesse. The Android phones come with preinstalled Chinese malware, which effectively opens a backdoor onto the device and endangers the users’ private data. And the researchers say that one of the types of malware is impossible to remove.

Malwarebytes informed Assurance Wireless, which, as a matter of interest, is a Virgin Mobile company, about the issue. So far Malwarebytes has not received a response from the service provider, so users should be aware that their devices are vulnerable. Interestingly, after Forbes published the article a spokesperson for Sprint, which owns Virgin Mobile and Assurance Wireless, said: “We are aware of this issue and are in touch with the device manufacturer Unimax to understand the root cause. However, after our initial testing we do not believe the applications described in the media are malware.”

The FCC, which runs Lifeline Assistance, confirmed to Forbes that the law requires “its fund not be used by partner carriers for spending on devices.”

As a result, questions are being asked. Senator Ron Wyden asked the FCC why these phones are being distributed to low-income citizens: “It is outrageous that taxpayer money may be going to companies providing insecure, malware-ridden phones to low-income families. I’ll be asking the FCC to ensure Americans that depend on Lifeline Assistance aren’t paying the price with their privacy and security.”

According to the Forbes article, the affected device is a UMX phone shipped by Assurance Wireless, and one of the bits of malware is the creation of a Chinese entity known as Adups. It auto-installs apps, and the user has no way of controlling that. Furthermore, Adups tools have been caught siphoning off private data in the past, including the full body of text messages, contact lists and call histories with full telephone numbers.

All this raises the question that Thomas Brewster asks: is privacy only for the rich?

How can Asset Intelligence improve cybersecurity?

CIOs are always looking for ways to improve network security. And according to a recent article by Louis Columbus in Forbes, they are “finding new ways to further improve network security by capitalizing on each IT assets’ intelligence.”

IT assets ideally need to capture real-time data, because that is what organisations rely on as they grow. CIOs and their teams, “are energized by the opportunity to create secured perimeterless networks that can flex in real-time as their businesses grow,” Columbus says, and “having a persistent connection to every device across an organizations’ constantly changing perimeter provides invaluable data for achieving this goal.”

What we are all aiming for is real-time, persistent connections to every device in a network, because that is the foundation of a strong endpoint security strategy. But how do we achieve this?

1. Track lost or stolen devices within an organisation’s network and disable them.

2. Enable every endpoint to autonomously self-heal.

3. Set the data foundation for achieving always-on persistence by tracking every device’s unique attributes, identifiers, communication log history and more.

4. Have a real-time connection to every device on a perimeterless network.

5. Build more Asset Intelligence in an organisation, so that teams can better predict and detect malware intrusion attempts, block them, and restore any damage to any device on their perimeter.

6. Geofencing is a must-have for every organisation now, especially those with a global presence. IT and cybersecurity teams need to track device location, usage and compliance in real-time.

7. Automate customer and regulatory audits, as well as improve compliance by using Asset Intelligence. This will save time for the IT team.

8. Asset Intelligence creates cleaner data systems and this has a direct effect on the customer experience. As Columbus says, “Improving data hygiene is essential for IT to keep achieving their incentive plans and earning bonuses.”
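Point 6 above, geofencing, can be illustrated with a minimal sketch. The coordinates, fence radius and device records below are invented for the example; a production system would consume location data from the asset-intelligence platform’s own APIs rather than a hand-built list.

```python
# Hedged sketch of a geofence compliance check: flag devices
# reporting a location outside an allowed radius around a site.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def out_of_bounds(devices, centre, radius_km):
    """Return IDs of devices whose last reported location falls
    outside the geofence circle."""
    clat, clon = centre
    return [d["id"] for d in devices
            if haversine_km(d["lat"], d["lon"], clat, clon) > radius_km]
```

A device checking in from another country would be flagged immediately, which is the real-time location and compliance tracking the list describes.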

The key thing to remember is ‘data hygiene’, because that is where the improvements to security are to be found. And the organisations that are most efficient at implementing it will be the winners of public trust, a very important thing these days.

AI and information crime

Artificial Intelligence (AI) is moving at speed into the mainstream. Almost every day we are learning about new uses for it, and discovering the ways in which it is already a part of our daily lives. Just this week, medical researchers revealed that the use of AI in testing for breast cancer is more effective than the work of expert radiologists. As one doctor interviewed said, this use of AI frees up radiologists to do other important work. And that’s a great benefit, as it takes roughly 10 years to train in radiology.

On the other hand, people are concerned that AI may take over almost every area of our lives, from self-driving cars to fighting wars. And it may do our laundry as well. Basically it comes down to this — will AI replace humans? That’s the great fear, but one which is largely exaggerated. As Kathleen Walch writes: “However, it’s becoming increasingly clear that AI is not a job killer, but rather, a job category killer.” I have also written about this aspect of AI before, pointing to the fact that “jobs are not destroyed, but rather employment shifts from one place to another and entirely new categories of employment are created.”

Indeed, as Walch says, “companies will be freed up to put their human resources to much better, higher value tasks instead of taking orders, fielding simple customer service requests or complaints, or data entry related tasks.” What businesses must do is have honest conversations with their employees about the use of AI and show how it can allow humans to be more creative by giving the dull, routine tasks to AI.

The one area where AI is causing us problems is in the generation of fake news in a range of formats. It is already almost impossible to tell whether an image is real or AI-generated, or whether you’re talking to a bot or a real person. AI-generated ‘disinformation’ is not necessarily generated by criminals: as we all now know, state actors are perhaps the worst offenders, and we have plenty of examples to look at coming from Russia, the USA and the UK. Lies are fed to citizens using social media accounts that appear to be reputable government sources, and the social media companies collude with these sources, as Facebook has shown us. Walch says, “Now all it takes is a few malicious actors spreading false claims to traumatically alter public opinion and quickly shift the public’s view.” Brexit and the election of Trump are good examples of this in play.

And it is in this area that we must consider the ethics of using AI most closely right now. As Walch says, “Governments and corporations alike will have to think about how they will reign in the potential damage done by AI-enabled content creation,” and she adds, “we must encourage companies and governments to consider fake content to be as malicious as cybersecurity threats and respond appropriately.”

What we are talking about is essentially propaganda. There are those of us who can see through the smoke and mirrors, but many can’t, and these citizens need protection from the malicious acts of the information criminals.