Free phones – but NO privacy!


When I spotted an article in Forbes by Thomas Brewster, I was immediately intrigued. The headline is U.S. Funds Program With Free Android Phones For The Poor — But With Permanent Chinese Malware. It surely strikes anyone reading it as a case of giving with one hand and taking away with the other. So, I had to check out what it was about.

As I live outside the USA, I was not aware that low income households in the States have been able to get cheap cell service and even free smartphones via the U.S. government-funded Lifeline Assistance program. And there is one provider of this service called Assurance Wireless that offers a free Android device along with free data, texts and minutes. It sounds good on the face of it.

But according to security researchers at Malwarebytes there is a significant drawback to the distribution of this largesse. The Android phones come with preinstalled Chinese malware, which effectively opens up a backdoor onto the device and endangers the users’ private data. And the researchers say that one of the types of malware is impossible to remove.

Malwarebytes informed Assurance Wireless about the issue. Assurance, as a matter of interest, is a Virgin Mobile company. So far Malwarebytes has not received a response from the service provider, so users should be aware that their devices are vulnerable. Interestingly, after Forbes published the article a spokesperson for Sprint, which owns Virgin Mobile and Assurance Wireless, said: “We are aware of this issue and are in touch with the device manufacturer Unimax to understand the root cause. However, after our initial testing we do not believe the applications described in the media are malware.”

The FCC, which runs Lifeline Assistance, confirmed to Forbes that the law requires “its fund not be used by partner carriers for spending on devices.”

As a result, questions are being asked. Senator Ron Wyden asked the FCC why these phones are being distributed to low-income citizens: “It is outrageous that taxpayer money may be going to companies providing insecure, malware-ridden phones to low-income families. I’ll be asking the FCC to ensure Americans that depend on Lifeline Assistance aren’t paying the price with their privacy and security.”

According to the Forbes article, the affected device is a UMX phone shipped by Assurance Wireless, and one piece of the malware is the creation of a Chinese entity known as Adups. It basically auto-installs apps, and the user has no way of controlling that. Furthermore, Adups tools have been caught siphoning off private data in the past, including the full bodies of text messages, contact lists and call histories with full telephone numbers.
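For readers who want to see what ships preinstalled on a handset, one quick way is to list the system packages over adb and look for unfamiliar vendor apps. The short Python sketch below simply wraps that adb command; it assumes the Android platform tools (adb) are installed and USB debugging is enabled on the phone, and it is not taken from the Malwarebytes research.

```python
# A minimal sketch (not part of the Malwarebytes research): list the
# preinstalled system packages on an Android handset over adb, so a
# concerned user can look for unfamiliar vendor apps. It assumes the
# Android platform tools (adb) are installed and USB debugging is enabled.
import subprocess

def list_system_packages():
    # "pm list packages -s" restricts the listing to system (preinstalled) apps.
    result = subprocess.run(
        ["adb", "shell", "pm", "list", "packages", "-s"],
        capture_output=True, text=True, check=True,
    )
    # Each line looks like "package:com.example.app"; strip the prefix.
    return sorted(line.removeprefix("package:")
                  for line in result.stdout.splitlines() if line)

if __name__ == "__main__":
    for pkg in list_system_packages():
        print(pkg)
```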

All this raises the question that Thomas Brewster asks – is privacy only for the rich?

AI and information crime

Artificial Intelligence (AI) is moving at speed into the mainstream. Almost every day we are learning about new uses for it, and discovering the ways in which it is already a part of our daily lives. Just this week medical researchers have revealed that the use of AI in testing for breast cancer is more effective than the work of expert radiologists. As one doctor interviewed said, this use of AI frees up radiologists to do other important work. And that’s a great benefit, as it takes roughly 10 years to train in radiology.

On the other hand, people are concerned that AI may take over almost every area of our lives, from self-driving cars to fighting wars. And it may do our laundry as well. Basically it comes down to this — will AI replace humans? That’s the great fear, but one which is largely exaggerated. As Kathleen Walch writes: “However, it’s becoming increasingly clear that AI is not a job killer, but rather, a job category killer.” I have also written about this aspect of AI before, pointing to the fact that “jobs are not destroyed, but rather employment shifts from one place to another and entirely new categories of employment are created.”

Indeed, as Walch says, “companies will be freed up to put their human resources to much better, higher value tasks instead of taking orders, fielding simple customer service requests or complaints, or data entry related tasks.” What businesses must do is have honest conversations with their employees about the use of AI and show how it can allow humans to be more creative by giving the dull, routine tasks to AI.

The one area where AI is causing us real problems is in the generation of fake news in a range of formats. It is already almost impossible to tell whether an image is real or AI-generated, or whether you’re talking to a bot or a real person. AI-generated ‘disinformation’ is not necessarily generated by criminals: as we all now know, State actors are perhaps the worst offenders, and we have plenty of examples to look at coming from Russia, the USA and the UK. Lies are fed to citizens using social media accounts that appear to be reputable government sources, and the social media companies collude with these sources, as Facebook has shown us. Walch says, “Now all it takes is a few malicious actors spreading false claims to traumatically alter public opinion and quickly shift the public’s view.” Brexit and the election of Trump are good examples of this in play.

And it is in this area that we must consider the ethics of using AI most closely right now. As Walch says, “Governments and corporations alike will have to think about how they will rein in the potential damage done by AI-enabled content creation,” and she adds, “we must encourage companies and governments to consider fake content to be as malicious as cybersecurity threats and respond appropriately.”

What we are talking about is essentially propaganda. There are those of us who can see through the smoke and mirrors, but many can’t, and these citizens need protection from the malicious acts of the information criminals.

The problem with AI bias

AI has come a long way. Just after WW2, there was a preconception that developing Artificial Intelligence would lead to something like an ‘Attack of the Zombie Robots’, and that AI could only be a bad thing for humanity. Fortunately we have moved on from that old sci-fi view of AI, and we even have robotics used in surgery, but there is still a lingering feeling that AI and robotics are threatening in some way, and one of those ways is ‘bias’.

AI is very much part of the fourth industrial revolution, which also includes cyber-physical systems powered by technologies like machine learning, blockchain, genome editing and decentralised governance. The challenges we face in developing our use of AI are, for the most part, tricky ethical ones that need a sensitive approach.

So, what is the issue? As James Warner writes in his article on AI and bias, “AI is aiding the decision making from all walks of life. But, the point is that the foundation of AI is laid by the way humans function.” And as we all know — humans have bias, unfortunately. Warner says, “It is the result of a mandatory limited view of the world that any single person or group can achieve. This social bias just as it reflects in humans in sensitive areas can also reflect in artificial intelligence.”

And this human bias, as it cascades down into AI, can be dangerous to you and me. For example, Warner writes: “the investigative news site ProPublica found that a criminal justice algorithm used in Florida mislabelled African American defendants as high risk. This was at twice the rate it mislabelled white defendants.” Facial recognition has already been highlighted as an area with shocking ethnic bias, as well as recognition errors.
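The disparity ProPublica measured is essentially a gap in false positive rates: how often people who did not in fact reoffend were nevertheless flagged as high risk, broken down by group. Here is a minimal sketch, with made-up numbers, of how that kind of check works:

```python
# A minimal sketch, with made-up numbers, of the kind of check behind the
# ProPublica finding: comparing false positive rates across groups.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) - hypothetical data.
records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

def false_positive_rates(records):
    flagged = defaultdict(int)    # non-reoffenders labelled high risk, per group
    negatives = defaultdict(int)  # all non-reoffenders, per group
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

# Here group A's rate is twice group B's - the kind of gap the investigation reported.
print(false_positive_rates(records))
```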

IBM suggests that researchers are quickly learning how to handle human bias as it infiltrates AI, and that they are coming up with new solutions that control and ultimately free AI systems from bias.

The ways in which bias can creep in are numerous, but researchers are developing algorithms that can assist with detecting and mitigating hidden biases in the data. Furthermore, scientists at IBM have devised an independent bias rating system with the ability to determine the fairness of an AI system.
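To give a flavour of what such mitigation algorithms do, here is a minimal sketch of one widely used technique, reweighing: each training example is given a weight so that group membership and outcome look statistically independent in the weighted data before a model is trained. This illustrates the general idea only; it is not IBM's specific rating system, and the data is hypothetical.

```python
# A minimal sketch of one common mitigation technique, "reweighing": give each
# training example a weight so that group membership and outcome look
# statistically independent in the weighted data. This illustrates the general
# idea only; it is not IBM's specific bias rating system.
from collections import Counter

def reweighing_weights(samples):
    """samples: list of (group, label) pairs; returns a weight per (group, label)."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    pair_counts = Counter(samples)
    weights = {}
    for (group, label), count in pair_counts.items():
        # Count we would expect if group and label were independent.
        expected = group_counts[group] * label_counts[label] / n
        weights[(group, label)] = expected / count
    return weights

# Hypothetical data: group "A" receives the favourable label (1) less often than "B".
data = [("A", 0)] * 6 + [("A", 1)] * 2 + [("B", 0)] * 3 + [("B", 1)] * 5
print(reweighing_weights(data))  # under-represented pairs get weights above 1.0
```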

One outcome of all this may be that we discover more about how human biases are formed and how we apply them throughout our lives. Some biases are obvious to us, but others tend to sneak around unnoticed until somebody else points them out. Perhaps we will find that AI can teach us how to handle a variety of biased opinions, and to be fairer ourselves.

What has the Internet of Things changed?

First of all, let me give you a definition of the Internet of Things. Wikipedia describes it thus: “The Internet of Things (IoT) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.”

The IoT has many applications, including the smart home and care of the elderly, as well as healthcare, transport, manufacturing, agriculture and even military battlefields. The ‘smart city’ is another IoT-driven creation. Drones are an IoT baby, as are some of the latest artificial organs. The possibilities are seemingly endless, but let’s take a look at some of the areas where it has already had an impact.

Healthcare

The Proteus Pill tracks each pill taken: the time, the contents, and the body’s specific reaction. It allows doctors to discover which medications work, or don’t, with individual patients, making for more accurate prescribing.

Logistics

International courier company DHL uses IoT tools to track and monitor deliveries. It uses sensors to track shipment containers, protect them, and collect data on workers and the tools they use. As a result, the company is more efficient and costs are reduced.

Transport

Virgin Atlantic launched an IoT connection with its Boeing 787 aircraft to predict possible equipment problems and improve flight safety. It shouldn’t be too long before other airlines adopt it.

Agriculture

Drones have great potential in agriculture, and they are among the most multifunctional and reliable Internet of Things technologies. In particular, they are capable of taking pictures of huge areas of land, and can analyse soil composition and watering problems, as well as detecting plant diseases. Some believe that the use of IoT in agriculture will be one of its most important uses.

Education

Smart learning is on its way thanks to IoT. From adjusting the space within a university campus to creating a personalized study plan, IoT in combination with AI and machine learning changes the level of satisfaction with learning significantly.

Wildlife Conservation

LionGuardians is an example of IoT at work in nature. Its technology is an open source wildlife tracking collar system designed specifically for saving animals threatened with extinction. Currently in use in southern Kenya, it aims to protect and save the 2,000 lions left in the area by tracking their location and sending SMS notifications to coordinators when assistance is needed.
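To give a flavour of how such a collar-to-coordinator flow might work, here is a hypothetical sketch: each GPS fix is checked against a protected-area geofence, and an SMS alert goes out when an animal strays outside it. The coordinates, radius and send_sms stub are illustrative only and are not taken from the LionGuardians system.

```python
# A hypothetical sketch of the collar-to-coordinator flow described above:
# check each GPS fix against a protected-area geofence and send an SMS alert
# when an animal strays outside it. The coordinates, radius and send_sms stub
# are illustrative only and are not taken from the LionGuardians system.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

RESERVE_CENTRE = (-2.65, 37.26)  # hypothetical point in southern Kenya
RESERVE_RADIUS_KM = 25.0         # hypothetical geofence radius

def send_sms(number, message):
    # Placeholder: a real collar would hand this to a GSM modem or SMS gateway.
    print(f"SMS to {number}: {message}")

def check_fix(animal_id, lat, lon, coordinator):
    distance = haversine_km(lat, lon, *RESERVE_CENTRE)
    if distance > RESERVE_RADIUS_KM:
        send_sms(coordinator, f"{animal_id} is outside the reserve ({distance:.1f} km from centre)")

check_fix("lion-042", -2.41, 37.10, "+254700000000")  # triggers an alert in this example
```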

Cities

The Smart City is another fascinating use of IoT. Barcelona is already on board, with 500 km of optical fibre network, Wi-Fi routed through street lighting, air quality monitoring and water consumption sensors, smart parking and smart waste management. It makes life more comfortable for citizens and more cost effective as well.
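As an illustration of how a single city sensor might report a reading, here is a minimal sketch using MQTT, a lightweight protocol commonly used for this kind of telemetry. The broker address, topic and values are hypothetical, and the example assumes the paho-mqtt Python package; it is not a description of Barcelona's actual infrastructure.

```python
# A minimal sketch of a smart-city sensor publishing one reading over MQTT,
# a lightweight protocol commonly used for this kind of telemetry. The broker
# address, topic and values are hypothetical; assumes the paho-mqtt package.
import json
import time

import paho.mqtt.publish as publish

reading = {
    "sensor_id": "air-quality-eixample-03",  # hypothetical device ID
    "timestamp": int(time.time()),
    "pm25_ug_m3": 12.4,                      # fine particulate concentration
    "no2_ug_m3": 38.0,                       # nitrogen dioxide concentration
}

publish.single(
    "city/air-quality/eixample-03",          # hypothetical topic
    json.dumps(reading),
    qos=1,
    hostname="mqtt.example-city.org",        # placeholder broker
    port=1883,
)
```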

The IoT is already changing our world, and it has much further to go.