6 Tech Predictions for 2020

The tech world is constantly changing, and as we enter 2020 and a new decade, we will see even greater shifts than we saw over the previous ten years. Tech experts with their fingers on the pulse have been discussing the key changes at conference events worldwide, and I’ve selected six predictions that I think are the most interesting and significant for those working in tech.

We want more privacy

Privacy has been a major issue over the last two years, highlighted by the Facebook/Cambridge Analytica scandal. Consumers are going to pay more attention to how their data is collected, and how it is stored. As a consequence, more businesses will be looking for cloud-based solutions with privacy features that fully comply with the law and fair consumer practices.

Biometrics will produce more wearables

People are already wearing Fitbits, but in the next few years we will probably see more interactive data tracking using heart rate and brainwaves, for example, and using them to power personal experiences. One suggestion is that when you lower your heart rate, you’ll see a scene on your screen change colour and sharpness. Positive thoughts may do the same. In other words, we will be using more augmented or virtual reality. Sarah Hill, CEO at HEALium, sees it as a new form of meditation. She says, “These new kinds of meditation are harnessing the power of your body’s own electricity via your wearables to allow the user to feel content in ways that have never been done before.”

Recession-proofed credit

Few people will ever forget the last recession, so as rumours of another one filter through, more people are trying to avoid slipping into a bad credit rating by using credit-building fintech tools to bolster their credit scores in advance.

More AI in publishing

Monetising content has always been an issue for online publishers, and this decade should present them with new solutions, such as using machine learning and AI to predict readers’ specific interests and how likely they are to subscribe.

The advance of 5G

Many agree that this is going to be a 5G decade. The technology will probably evolve rapidly, we will see more enterprise applications, and investment in 5G will rise significantly.

The assistant in your car

Niko Vuori, CEO of Drivetime, says, “It is estimated that there will be eight billion digital voice assistants in use by 2023. As voice assistants continue to dominate the home, the in-vehicle usage has remained relatively limited to navigation, despite being one of the only environments that truly requires a hands-free experience.” Expect to have much more voice-assistant technology in your car.

There are many more tech changes to come. What prediction have you seen that appeals to you the most?

Siri is witty, but knows her limits!


Back in 1956, a man called John McCarthy coined the term AI, for artificial intelligence. However, it is only in recent years that we have personally witnessed the benefits of AI and its mass-scale adoption by larger enterprises. One of the things that has encouraged the use of AI is the need to understand data patterns: companies want to know much more about their target audience, and AI allows them to gain useful insights into consumer behaviour.

There is much to be gained by understanding AI, including the fact that it is divided into ‘weak’ and ‘strong’ categories.

WEAK AI
Weak AI is also known as Narrow AI. This covers systems set up to accomplish simple tasks or solve specific problems. Weak AI works according to the rules that are set and is bound by them. However, just because it is labelled ‘weak’ doesn’t mean it is inferior: it is extremely good at the tasks it is made for. Siri is an example of ‘weak AI’. Siri is able to hold conversations, sometimes even quite witty ones, but essentially it operates in a predefined manner. And you can experience its ‘narrowness’ when you try to make it perform a task it is not programmed to do.

Company chatbots are similar. They respond appropriately when customers ask questions, and they are accurate. The AI is even capable of managing situations that are extremely complex, but the intelligence level is restricted to providing solutions to problems that are already programmed into the system.
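That predefined, rule-bound behaviour is easy to picture in code. Here is a minimal sketch of a rule-based responder in the spirit of weak AI — the topics and replies are entirely hypothetical, and real chatbots are far more sophisticated, but the structure makes the ‘narrowness’ concrete:

```python
# A toy rule-based ("weak AI") chatbot: it can only answer
# the questions it has been explicitly programmed to handle.
RESPONSES = {
    "opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
    "contact": "You can reach support via the form on our website.",
}

def reply(message: str) -> str:
    """Match the message against known topics; fall back otherwise."""
    text = message.lower()
    for topic, answer in RESPONSES.items():
        if topic in text:
            return answer
    # The bot's 'narrowness' in action: anything outside its rules
    # gets the same canned fallback, however reasonable the request.
    return "Sorry, I can only help with opening hours, refunds or contact details."

print(reply("What are your opening hours?"))
print(reply("Tell me a joke"))
```

Ask it anything outside its dictionary of topics and, like Siri faced with an unprogrammed task, it simply hits its limits.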
STRONG AI
As you can imagine, ‘strong AI’ has much more potential, because it is set up to try to mimic the human brain. It is so powerful that the actions performed by the system closely resemble the actions and decisions of a human being. In theory, it would also possess understanding and consciousness.

However, the difficulty lies in defining intelligence accurately. It is extremely difficult to determine success or to set boundaries to intelligence as far as strong AI is concerned. And that is why people still prefer the ‘weak’ version: it does not attempt to encompass intelligence fully; instead it focuses on completing the particular task it is assigned. As a result it has become tremendously popular in the finance industry.
Finance and AI
The finance industry has benefited more than most from the introduction of AI. It is used in risk assessment, fraud detection, financial advice, investment trading, and finance management.

Artificial Intelligence can be used in processes that involve auditing financial transactions, and it can analyse complicated tax changes.
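To give a flavour of what fraud detection involves at its simplest: much of it comes down to spotting transactions that deviate sharply from a customer’s normal pattern. The sketch below is a deliberately simplified, hypothetical stand-in for that idea (real systems use far richer models than a single statistical threshold):

```python
import statistics

def flag_outliers(amounts, threshold=2.0):
    """Flag transaction amounts that deviate strongly from the mean --
    a toy stand-in for the anomaly detection used in fraud screening."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Six routine card payments and one suspiciously large one.
txns = [25.0, 30.0, 27.5, 22.0, 28.0, 26.0, 950.0]
print(flag_outliers(txns))  # the 950.0 transaction is flagged
```

A flagged transaction would then be passed to a human analyst or a more sophisticated model, rather than blocked outright.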

In the future, we may find companies basing business decisions on AI, as well as forecasting consumer behaviour and adapting a business to those changes at a much faster pace.

Artificial Intelligence is going to help people and businesses make smarter decisions, but as always we need to remain mindful of finding the right balance between humans and machines.

How can Asset Intelligence improve cybersecurity?

CIOs are always looking for ways to improve network security. And according to a recent article by Louis Columbus in Forbes, they are “finding new ways to further improve network security by capitalizing on each IT assets’ intelligence.”

IT assets ideally need to capture real time data, as that is how organisations grow. CIOs and their teams, “are energized by the opportunity to create secured perimeterless networks that can flex in real-time as their businesses grow,” Columbus says, and “having a persistent connection to every device across an organizations’ constantly changing perimeter provides invaluable data for achieving this goal.”

What we are all aiming for is real-time, persistent connections to every device in a network, because that is the foundation of a strong endpoint security strategy. But how do we achieve this?

1. Track lost or stolen devices within an organisation’s network and disable them.

2. Enable every endpoint to autonomously self-heal

3. Set the data foundation for achieving always-on persistence by tracking every device’s unique attributes, identifiers, communication log history and more.

4. Have a real-time connection to every device on a perimeterless network.

5. Build more Asset Intelligence into the organisation, so that teams can better predict and detect malware intrusion attempts, block them, and repair any damage to devices on their perimeter.

6. Geofencing is a must-have for every organisation now, especially those with a global presence. IT and cybersecurity teams need to track device location, usage and compliance in real-time.

7. Automate customer and regulatory audits, as well as improve compliance by using Asset Intelligence. This will save time for the IT team.

8. Asset Intelligence creates cleaner data systems and this has a direct effect on the customer experience. As Columbus says, “Improving data hygiene is essential for IT to keep achieving their incentive plans and earning bonuses.”
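The geofencing point above is one of the easiest to make concrete. At its core it is a distance check: is a device’s reported location inside an allowed zone? The sketch below is illustrative only — the coordinates and zone are hypothetical, and production geofencing tools handle polygons, spoofing and compliance policies on top of this:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius ~6371 km

def inside_geofence(device, centre, radius_km):
    """True if the device's reported position lies inside the allowed zone."""
    return haversine_km(device[0], device[1], centre[0], centre[1]) <= radius_km

office = (51.5074, -0.1278)   # illustrative zone centre (central London)
laptop = (51.5155, -0.0922)   # a device's reported location
print(inside_geofence(laptop, office, 5.0))  # True: within 5 km of the office
```

A device that reports a location outside the fence would trigger an alert, or the kind of remote disabling described in point 1.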

The key thing to remember is ‘data hygiene’, because that is where the improvements to security are to be found. And the organisations that are most efficient at implementing it will be the winners of public trust; a very important thing these days.

AI and information crime

Artificial Intelligence (AI) is moving at speed into the mainstream. Almost every day we are learning about new uses for it, and discovering the ways in which it is already a part of our daily lives. Just this week medical researchers have revealed that the use of AI in testing for breast cancer is more effective than the work of expert radiologists. As one doctor interviewed said, this use of AI frees up radiologists to do other important work. And that’s a great benefit, as it takes roughly 10 years to train in radiology.

On the other hand, people are concerned that AI may take over almost every area of our lives, from self-driving cars to fighting wars. And it may do our laundry as well. Basically it comes down to this — will AI replace humans? That’s the great fear, but one which is largely exaggerated. As Kathleen Walch writes: “However, it’s becoming increasingly clear that AI is not a job killer, but rather, a job category killer.” I have also written about this aspect of AI before, pointing to the fact that “jobs are not destroyed, but rather employment shifts from one place to another and entirely new categories of employment are created.”

Indeed, as Walch says, “companies will be freed up to put their human resources to much better, higher value tasks instead of taking orders, fielding simple customer service requests or complaints, or data entry related tasks.” What businesses must do is have honest conversations with their employees about the use of AI and show how it can allow humans to be more creative by giving the dull, routine tasks to AI.

The one area where AI is causing us issues is in the generation of fake news in a range of formats. It is already almost impossible to tell whether an image is real or AI-generated, or whether you’re talking to a bot or a real person. AI-generated ‘disinformation’ is not necessarily generated by criminals: as we all now know, state actors are perhaps the worst offenders, and we have plenty of examples to look at coming from Russia, the USA and the UK. Lies are fed to citizens using social media accounts that appear to be reputable government sources, and the social media companies collude with these sources, as Facebook has shown us. Walch says, “Now all it takes is a few malicious actors spreading false claims to traumatically alter public opinion and quickly shift the public’s view.” Brexit and the election of Trump are good examples of this in play.

And it is in this area that we must consider the ethics of using AI most closely right now. As Walch says, “Governments and corporations alike will have to think about how they will reign in the potential damage done by AI-enabled content creation,” and she adds, “we must encourage companies and governments to consider fake content to be as malicious as cybersecurity threats and respond appropriately.”

What we are talking about is essentially propaganda. There are those of us who can see through the smoke and mirrors, but many can’t, and these citizens need protection from the malicious acts of the information criminals.