How can Asset Intelligence improve cybersecurity?

CIOs are always looking for ways to improve network security. And according to a recent article by Louis Columbus in Forbes, they are “finding new ways to further improve network security by capitalizing on each IT assets’ intelligence.”

IT assets ideally need to capture real-time data, because that is what organisations need as they grow. CIOs and their teams, “are energized by the opportunity to create secured perimeterless networks that can flex in real-time as their businesses grow,” Columbus says, and “having a persistent connection to every device across an organizations’ constantly changing perimeter provides invaluable data for achieving this goal.”

What we are all aiming for is real-time, persistent connections to every device in a network, because that is the foundation of a strong endpoint security strategy. But how do we achieve this?

1. Track lost or stolen devices within an organisation’s network and disable them.

2. Enable every endpoint to autonomously self-heal.

3. Set the data foundation for achieving always-on persistence by tracking every device’s unique attributes, identifiers, communication log history and more.

4. Have a real-time connection to every device on a perimeterless network.

5. Build more Asset Intelligence across the organisation: the more intelligence it has, the better it can predict and detect malware intrusion attempts, block them and repair any damage to devices on its perimeter.

6. Geofencing is a must-have for every organisation now, especially those with a global presence. IT and cybersecurity teams need to track device location, usage and compliance in real time (a minimal sketch of such a check follows after this list).

7. Use Asset Intelligence to automate customer and regulatory audits and to improve compliance. This will save the IT team time.

8. Asset Intelligence creates cleaner data systems and this has a direct effect on the customer experience. As Columbus says, “Improving data hygiene is essential for IT to keep achieving their incentive plans and earning bonuses.”
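To make the geofencing point above a little more concrete, here is a minimal sketch of the kind of location-compliance check an asset-intelligence platform might run against its device inventory. The device records, the bounding-box geofence and the flag_out_of_bounds helper are all hypothetical and invented for illustration; real platforms expose this through their own APIs and use proper geographic polygons rather than simple boxes.

```python
# Hypothetical sketch: flag devices last seen outside their permitted geofence.
# All device records and regions below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Geofence:
    name: str
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        # Simple bounding-box check; real geofences are usually polygons.
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)

@dataclass
class Device:
    device_id: str
    last_lat: float
    last_lon: float
    fence: Geofence

def flag_out_of_bounds(devices: list[Device]) -> list[str]:
    """Return the IDs of devices last seen outside their assigned geofence."""
    return [d.device_id for d in devices
            if not d.fence.contains(d.last_lat, d.last_lon)]

if __name__ == "__main__":
    emea = Geofence("EMEA region", 48.0, 60.0, -11.0, 25.0)
    fleet = [
        Device("laptop-001", 51.5, -0.1, emea),   # London: inside the fence
        Device("laptop-002", 40.7, -74.0, emea),  # New York: outside the fence
    ]
    print(flag_out_of_bounds(fleet))  # ['laptop-002']
```

In practice a check like this would run continuously against the live device feed, feeding alerts into the same compliance reporting described in points 6 and 7.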

The key phrase to remember is ‘data hygiene’, because that is where the improvements to security are to be found. And the organisations that are most efficient at implementing it will be the winners of public trust, which is a very important thing these days.

AI and information crime

Artificial Intelligence (AI) is moving at speed into the mainstream. Almost every day we are learning about new uses for it, and discovering the ways in which it is already a part of our daily lives. Just this week medical researchers have revealed that the use of AI in testing for breast cancer is more effective than the work of expert radiologists. As one doctor interviewed said, this use of AI frees up radiologists to do other important work. And that’s a great benefit, as it takes roughly 10 years to train in radiology.

On the other hand, people are concerned that AI may take over almost every area of our lives, from self-driving cars to fighting wars. And it may do our laundry as well. Basically it comes down to this — will AI replace humans? That’s the great fear, but one which is largely exaggerated. As Kathleen Walch writes: “However, it’s becoming increasingly clear that AI is not a job killer, but rather, a job category killer.” I have also written about this aspect of AI before, pointing to the fact that “jobs are not destroyed, but rather employment shifts from one place to another and entirely new categories of employment are created.”

Indeed, as Walch says, “companies will be freed up to put their human resources to much better, higher value tasks instead of taking orders, fielding simple customer service requests or complaints, or data entry related tasks.” What businesses must do is have honest conversations with their employees about the use of AI and show how it can allow humans to be more creative by giving the dull, routine tasks to AI.

The one area where AI is causing us problems is in the generation of fake news in a range of formats. It is already almost impossible to tell whether an image is real or AI-generated, or whether you’re talking to a bot or a real person. AI-generated ‘disinformation’ is not necessarily generated by criminals: as we all now know, state actors are perhaps the worst offenders, and we have plenty of examples to look at coming from Russia, the USA and the UK. Lies are fed to citizens using social media accounts that appear to be reputable government sources, and the social media companies collude with these sources, as Facebook has shown us. Walch says, “Now all it takes is a few malicious actors spreading false claims to traumatically alter public opinion and quickly shift the public’s view.” Brexit and the election of Trump are good examples of this in play.

And it is in this area that we must consider the ethics of using AI most closely right now. As Walch says, “Governments and corporations alike will have to think about how they will rein in the potential damage done by AI-enabled content creation,” and she adds, “we must encourage companies and governments to consider fake content to be as malicious as cybersecurity threats and respond appropriately.”

What we are talking about is essentially propaganda. There are those of us who can see through the smoke and mirrors, but many can’t, and these citizens need protection from the malicious acts of the information criminals.

The everyday uses of AI

When it comes to Artificial Intelligence (AI), many of the people I talk to think that it is either something that is coming in the future, or that interest in it is limited to geeks. Some see it as a negative tool that will destroy employment for people. And they are surprised when I tell them that they are probably using AI in their everyday lives already — they just aren’t aware that something like a Google search is AI-based. And those adverts you keep seeing on social media because one day last week you searched for ‘holidays in the Maldives’ — that’s all down to AI.

Here are some of the everyday uses of AI that you may not be aware of. They have been compiled by 12 experts from Forbes Technology Council.

1. Customer Service

Data analytics and AI help brands anticipate what their customers want and deliver more intelligent customer experiences, which certainly beats the old call-centre approach.

2. Personalised Shopping

When you shop online and look at a product, you may find that you suddenly get recommendations for similar products. That’s AI at work.
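As a rough illustration of what sits behind those suggestions, here is a minimal sketch of item-to-item co-occurrence, one common way of generating ‘people who viewed this also viewed’ recommendations. The browsing sessions and product names below are made up for illustration; real retail systems use far richer signals and models.

```python
# Hypothetical sketch: recommend items that are frequently browsed together.
# The browsing sessions and product names are invented for illustration.
from collections import defaultdict
from itertools import combinations

sessions = [
    ["beach towel", "sunscreen", "snorkel"],
    ["sunscreen", "sunglasses"],
    ["beach towel", "sunscreen", "sunglasses"],
]

# Count how often each pair of products appears in the same session.
co_counts = defaultdict(int)
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        co_counts[(a, b)] += 1

def recommend(product: str, top_n: int = 2) -> list[str]:
    """Return the products most often seen alongside `product`."""
    scores = {}
    for (a, b), count in co_counts.items():
        if a == product:
            scores[b] = count
        elif b == product:
            scores[a] = count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("sunscreen"))  # ['beach towel', 'sunglasses']
```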

3. Protecting Finances

For credit card companies and banks, AI is indispensable, especially in detecting fraudulent activity on your account. It saves all of us from a great deal of pain.

4. Driving Safer

You don’t need a self-driving car to use AI. For example, lane-departure warnings, adaptive cruise control and automated emergency braking all rely on AI.

5. Improving Agriculture

Agriculture is an important element of our lives, because we all want and need to eat. AI is improving this important sector in several ways: satellites scanning farm fields to monitor crop and soil health; machine learning models that track and predict environmental impacts, such as droughts; and big data used to differentiate between plants and weeds for pesticide control.

6. Our Trust in Information

Trust in information is one of the most critical issues of our times. We are bombarded with images and articles, and we often just don’t know whether they are telling the truth or not. Experts say that AI will change how we learn and the level of trust we place in information. AI will help us identify deepfakes and all the other methods of sharing ‘fake’ information, and that is very important.

The ways in which we use AI are growing all the time — and if you think you’re not using it, you almost certainly already are.

The problem with AI bias

AI has come a long way. Just after WW2, there was a preconception that developing Artificial Intelligence would lead to something like an ‘Attack of the Zombie Robots’, and that AI could only be a bad thing for humanity. Fortunately we have moved on from that old sci-fi view of AI, and we even have robotics used in surgery, but there is still a lingering feeling that AI and robotics are threatening in some way. One of those ways is ‘bias’.

AI is very much part of the fourth industrial revolution, which also includes cyber-physical systems powered by technologies like machine learning, blockchain, genome editing and decentralised governance. The challenges that we face in developing our use of AI are, for the most part, tricky ethical ones that need a sensitive approach.

So, what is the issue? As James Warner writes in his article on AI and bias, “AI is aiding the decision making from all walks of life. But, the point is that the foundation of AI is laid by the way humans function.” And as we all know — humans have bias, unfortunately. Warner says, “It is the result of a mandatory limited view of the world that any single person or group can achieve. This social bias just as it reflects in humans in sensitive areas can also reflect in artificial intelligence.”

And this human bias, as it cascades down into AI, can be dangerous to you and me. For example, Warner writes: “the investigative news site Pro Publica found that a criminal justice algorithm used in Florida mislabelled African American defendants as high risk. This was at twice the rate it mislabelled white defendants.” Facial recognition has already been highlighted as an area with shocking ethnic bias, as well as recognition errors.

IBM suggests that researchers are quickly learning how to handle human bias as it infiltrates AI, and that they are coming up with new solutions to control it and ultimately free AI systems of bias.

The ways in which bias can creep in are numerous, but researchers are developing algorithms that can assist with detecting and mitigating hidden biases in the data. Furthermore, scientists at IBM have devised an independent bias rating system with the ability to determine the fairness of an AI system.
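To give a flavour of what such a bias check might look like in practice, here is a minimal sketch that compares a model’s false-positive rates across two groups, in the spirit of the disparity ProPublica reported. The records, group labels and numbers below are invented purely for illustration and do not come from any real system or from IBM’s rating tool.

```python
# Hypothetical sketch: measure whether a classifier's false-positive rate
# (labelling someone "high risk" who does not reoffend) differs by group.
# All records below are invented for illustration.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("group_a", True, False), ("group_a", True, True),
    ("group_a", False, False), ("group_a", True, False),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", False, False), ("group_b", True, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` who were still labelled high risk."""
    negatives = [(pred, actual) for g, pred, actual in records
                 if g == group and not actual]
    if not negatives:
        return 0.0
    return sum(pred for pred, _ in negatives) / len(negatives)

fpr_a = false_positive_rate("group_a")
fpr_b = false_positive_rate("group_b")
print(f"group_a FPR: {fpr_a:.2f}, group_b FPR: {fpr_b:.2f}")
print(f"disparity ratio: {fpr_a / fpr_b:.1f}x")  # a large gap flags a biased model
```

A rating system of the kind IBM describes would presumably run many such comparisons across different groups and fairness definitions, rather than the single metric shown here.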

One outcome of all this may be that we discover more about how human biases are formed and how we apply them throughout our lives. Some biases are obvious to us, but others tend to sneak around unnoticed until somebody else points them out. Perhaps we will find that AI can teach us how to handle a variety of biased opinions, and to be fairer ourselves.