How hackers steal millions from bank accounts

The latest information from IBM Security Trusteer’s mobile security research team indicates that hackers have been using ‘mobile emulators’ to steal millions from financial institutions in Europe and the USA.

How did they do it?

They set up a network of mobile device emulators that spoofed thousands of devices, which were then used to access thousands of compromised accounts. A set of mobile device identifiers was used to spoof an actual account holder’s device, and in each case the credentials had likely been harvested by malware or collected via phishing.

Armed with the victim’s username and password, the hackers use an automated process to “script the assessment of account balances.” They can then automate large numbers of fraudulent transfers, each kept small enough to avoid triggering bank scrutiny at the time.
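Why do many small transfers slip past the banks? A minimal sketch may help: a naive per-transaction review threshold never fires on the individual transfers, while an aggregate “velocity” check over a time window does. All thresholds, amounts, and function names below are hypothetical illustrations, not details from IBM’s report.

```python
# Illustrative sketch (hypothetical thresholds and data): why per-transaction
# limits miss a string of small fraudulent transfers, and how an aggregate
# velocity check over a rolling window catches the pattern.
from datetime import datetime, timedelta

PER_TXN_LIMIT = 10_000        # naive single-transfer review threshold
WINDOW = timedelta(hours=24)  # rolling look-back window
AGGREGATE_LIMIT = 15_000      # total allowed within the window

# (timestamp, amount) pairs for one compromised account
transfers = [
    (datetime(2020, 12, 1, 9, 0), 4_000),
    (datetime(2020, 12, 1, 11, 30), 4_500),
    (datetime(2020, 12, 1, 14, 15), 4_200),
    (datetime(2020, 12, 1, 16, 45), 4_800),
]

def flags_per_txn(txns):
    """Naive check: flag only transfers above the single-transaction limit."""
    return [t for t in txns if t[1] > PER_TXN_LIMIT]

def flags_velocity(txns):
    """Aggregate check: flag a transfer when the rolling-window total exceeds the limit."""
    flagged = []
    for i, (ts, _) in enumerate(txns):
        total = sum(a for t, a in txns[: i + 1] if ts - t <= WINDOW)
        if total > AGGREGATE_LIMIT:
            flagged.append(txns[i])
    return flagged

# Each transfer is under the per-transaction limit, so the naive check sees nothing...
print(flags_per_txn(transfers))   # []
# ...but the cumulative total (17,500) trips the velocity check on the fourth transfer.
print(flags_velocity(transfers))  # [(datetime(2020, 12, 1, 16, 45), 4800)]
```

The point of the sketch is simply that detection needs to aggregate across transfers and time, which is exactly the blind spot the attackers exploited by keeping each individual transfer small.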

How does an emulator work?

An emulator mimics the characteristics of a real mobile device. Emulators are often used by developers to test applications, but in the wrong hands they become a crime tool.

According to Finextra: “IBM Trusteer says that the scale of the operation is one that has never been seen before, in some cases, over 20 emulators were used in the spoofing of well over 16,000 compromised devices.”

IBM added: “The attackers use these emulators to repeatedly access thousands of customer accounts and end up stealing millions of dollars in a matter of just a few days in each case. After one spree, the attackers shut down the operation, wipe traces, and prepare for the next attack.”

IBM Trusteer’s intelligence team has also observed a trending fraud-as-a-service offer in underground venues, promising access to this type of operation to anyone willing to pay for it, with or without the required skill.

“This lowers the entry bar for would-be criminals or those who plan to transition into the mobile fraud realm,” says IBM, and the practice is likely to become a growing trend amongst cybercriminals.

AI and information crime

Artificial Intelligence (AI) is moving at speed into the mainstream. Almost every day we are learning about new uses for it, and discovering the ways in which it is already a part of our daily lives. Just this week medical researchers have revealed that the use of AI in testing for breast cancer is more effective than the work of expert radiologists. As one doctor interviewed said, this use of AI frees up radiologists to do other important work. And that’s a great benefit, as it takes roughly 10 years to train in radiology.

On the other hand, people are concerned that AI may take over almost every area of our lives, from self-driving cars to fighting wars. And it may do our laundry as well. Basically it comes down to this — will AI replace humans? That’s the great fear, but one which is largely exaggerated. As Kathleen Walch writes: “However, it’s becoming increasingly clear that AI is not a job killer, but rather, a job category killer.” I have also written about this aspect of AI before, pointing to the fact that “jobs are not destroyed, but rather employment shifts from one place to another and entirely new categories of employment are created.”

Indeed, as Walch says, “companies will be freed up to put their human resources to much better, higher value tasks instead of taking orders, fielding simple customer service requests or complaints, or data entry related tasks.” What businesses must do is have honest conversations with their employees about the use of AI and show how it can allow humans to be more creative by giving the dull, routine tasks to AI.

The one area where AI is causing us real problems is in the generation of fake news in a range of formats. It is already almost impossible to tell whether an image is real or AI-generated, or whether you are talking to a bot or a real person. AI-generated ‘disinformation’ is not necessarily generated by criminals: as we all now know, State actors are perhaps the worst offenders, and we have plenty of examples to look at coming from Russia, the USA and the UK. Lies are fed to citizens using social media accounts that appear to be reputable government sources, and the social media companies collude with these sources, as Facebook has shown us. Walch says, “Now all it takes is a few malicious actors spreading false claims to traumatically alter public opinion and quickly shift the public’s view.” Brexit and the election of Trump are good examples of this in play.

And it is in this area that we must consider the ethics of using AI most closely right now. As Walch says, “Governments and corporations alike will have to think about how they will reign in the potential damage done by AI-enabled content creation,” and she adds, “we must encourage companies and governments to consider fake content to be as malicious as cybersecurity threats and respond appropriately.”

What we are talking about is essentially propaganda. There are those of us who can see through the smoke and mirrors, but many can’t, and these citizens need protection from the malicious acts of the information criminals.