Google offers $25 million for AI challenge


It’s today’s big story: Google is offering $25 million in grants to nonprofits, universities and other organisations working on AI projects that will benefit society, as part of its AI for Social Good initiative. Google will open the application process this coming Monday and announce the winners next spring at its annual I/O developer conference.

Details of the challenge explain that Google.org is issuing an open call to organizations around the world to submit their ideas for how they could use AI to help address societal challenges. Selected organizations will receive customized support to help bring their ideas to life: coaching from Google’s AI experts, Google.org grant funding from a $25M pool, credit and consulting from Google Cloud, and more.

Google says the programme is meant to help tackle the world’s most pressing problems in areas such as crisis relief, environmental conservation and the fight against sex trafficking.

However, it is also clear that this ‘competition’ comes at a time when Google’s own use of artificial intelligence is under increasing scrutiny, including “in controversial military work or reported efforts to build a censored search engine in China,” as CNET says. There was also Project Maven, a U.S. Defense Department initiative aimed at developing better AI for the military, which prompted a rebellion by Google’s own employees: some 4,000 of them petitioned executives to stop the project, which the company duly did, promising not to engage in similar projects again.

At the press announcement, Google’s head of AI, Jeff Dean, avoided discussing these issues, although he did mention Google’s ethical principles that outline how it will and will not use the technology.

Yossi Matias, vice president of engineering, said in an interview last week, “The gist of the program is to encourage people to leverage our technology. Google can’t work on everything. There are many problems out there we may not even be aware of.”

It is going to be interesting to see what initiatives come out of this global challenge. Hopefully we will see a diverse range of ideas for using AI to improve a world so badly in need of repair.


How companies use machine learning

The machine learning market is growing apace: according to Research and Markets, it should reach $40 billion by 2025. It has already passed the $1 billion mark, but reaching that estimate will require a major leap in growth.

What will cause it to grow? Companies will start using machine learning once they have identified a use case, and finding one is a major barrier to adoption at the moment. We can, however, learn from the ways in which major companies are already using it.

Apple

Apple is working on a cross-device personalisation tool and has already applied for a patent. The rumour is that it will allow your Apple Watch to connect to your iTunes playlist and find a piece of music to match your heart rate.
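The idea behind the rumoured feature can be sketched very simply: pick the track whose tempo is closest to the wearer’s current heart rate. This is purely illustrative, since the patent’s actual method is not public; the function name and the idea of matching by BPM are assumptions.

```python
# Hypothetical sketch: choose the playlist track whose tempo (in BPM)
# is closest to the current heart rate. Apple's actual approach is
# not public; all names here are invented for illustration.

def match_track_to_heart_rate(heart_rate_bpm, playlist):
    """Return the (title, tempo_bpm) tuple closest in tempo to the heart rate."""
    return min(playlist, key=lambda track: abs(track[1] - heart_rate_bpm))

playlist = [("Slow Ballad", 70), ("Steady Run", 120), ("Sprint Mix", 160)]
print(match_track_to_heart_rate(128, playlist))  # ('Steady Run', 120)
```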

Twitter

Twitter is working on visibility problems with thumbnail images. It is using neural networks to find a scalable, cost-effective way to crop users’ photos into compelling, low-resolution preview images.
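Twitter’s production system uses a trained neural network to predict which regions of a photo are visually salient; the cropping step itself can be sketched without the network. Assuming a saliency map is already given, a fixed-size window is slid over it and the window with the highest total saliency becomes the crop. This is a minimal sketch of that last step only, not Twitter’s actual code.

```python
# Minimal sketch of saliency-driven cropping. In Twitter's pipeline a
# neural network produces the saliency map; here it is simply given.
# We slide a crop_h x crop_w window over the map and keep the window
# with the highest total saliency.

def best_crop(saliency, crop_h, crop_w):
    """Return (row, col) of the top-left corner of the most salient crop."""
    rows, cols = len(saliency), len(saliency[0])
    best_score, best_pos = float("-inf"), (0, 0)
    for r in range(rows - crop_h + 1):
        for c in range(cols - crop_w + 1):
            score = sum(saliency[r + i][c + j]
                        for i in range(crop_h) for j in range(crop_w))
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

saliency = [
    [0, 0, 0, 0],
    [0, 1, 9, 0],
    [0, 2, 8, 0],
    [0, 0, 0, 0],
]
print(best_crop(saliency, 2, 2))  # (1, 1)
```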

Alibaba

This Chinese retail giant has 500 million customers, each of whom uses the store in a distinct way, so Alibaba uses machine learning to track every customer’s journey. All of Alibaba’s online storefronts are customised for each shopper, and searches bring customers the products they want to see. There is also a chatbot that handles most spoken and written customer-service inquiries. Every element of Alibaba’s business has been built for engagement with the shopper, and every action the shopper takes teaches the machine more about what the shopper wants. It is extremely effective.
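The “every action teaches the machine” idea can be illustrated with a toy co-purchase recommender: suggest the item most often bought alongside what the shopper already owns. Alibaba’s real pipeline is far more sophisticated; this sketch only shows how past behaviour feeds recommendations, and all the data is invented.

```python
# Toy sketch of behaviour-driven personalisation: recommend the item
# that co-occurs most often with the shopper's purchase history.
# Invented data; Alibaba's actual system is proprietary.
from collections import Counter

def recommend(history, all_orders):
    """Suggest the item most frequently co-purchased with the history."""
    counts = Counter()
    for order in all_orders:
        if set(history) & set(order):          # order shares an item with history
            for item in order:
                if item not in history:        # don't re-recommend owned items
                    counts[item] += 1
    return counts.most_common(1)[0][0] if counts else None

orders = [["phone", "case"], ["phone", "case"], ["phone", "charger"]]
print(recommend(["phone"], orders))  # case
```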

Target

American retailing giant Target is using machine learning to reach and respond to its pregnant customers. In fact, Target’s model is so precise that it can reliably guess which trimester a pregnant woman is in based on what she has bought.

Typically, companies have been driven by the seasons, but machine learning can help businesses respond to ‘seasons’ in people’s lives. For example, a person who has just bought a car doesn’t want to see car ads, but motor insurance ads are appropriate. Machine learning can pick up on those rhythms, helping companies recommend their products to customers when the timing is just right.
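The trimester-guessing idea can be sketched as a simple basket-scoring rule: score a shopping basket against items associated with each stage and pick the stage with the most overlap. Target’s real model is proprietary and statistical; the stage-indicative items below are invented for illustration.

```python
# Illustrative only: score a basket against hand-picked items loosely
# associated with pregnancy stages and guess the stage with the
# highest overlap. Target's actual model is proprietary.

STAGE_ITEMS = {
    "first":  {"prenatal vitamins", "ginger tea"},
    "second": {"maternity jeans", "cocoa butter lotion"},
    "third":  {"crib", "newborn diapers"},
}

def guess_trimester(basket):
    """Return the stage whose indicative items overlap the basket most."""
    scores = {stage: len(items & basket)
              for stage, items in STAGE_ITEMS.items()}
    return max(scores, key=scores.get)

basket = {"cocoa butter lotion", "maternity jeans", "bread"}
print(guess_trimester(basket))  # second
```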

Pepper the Robot talks to UK parliament


The title does not refer to the ‘Maybot’, the well-worn nickname for the UK’s current prime minister, but to Pepper, a robot that will be the first non-human to testify before the British parliament about the fourth industrial revolution.

Pepper, created by SoftBank Robotics, will explain topics such as AI and robotics to the Commons Education Select Committee. Robert Halfon, Chair of the Committee, told the Times Educational Supplement: “If we’ve got the march of the robots, we perhaps need the march of the robots to our select committee to give evidence.” He added, “The fourth industrial revolution is possibly the most important challenge facing our nation over the next 10, 20 or 30 years.”

The Education Select Committee wants to understand exactly what the impact of this fourth industrial revolution is likely to be, both positive and negative. When robots and artificial intelligence are discussed, the talk usually comes around to how the new technology will affect people’s jobs. As many AI critics have pointed out, low-skilled workers are most likely to be hit by the introduction of robotics in the workplace, particularly on factory production lines with repetitive processes. Although, as I have discussed in a previous blog post, there are strong arguments that AI will free those workers up to do more meaningful tasks.

The Committee hopes that by allowing Pepper to have the floor and talk to members, they will gain some insights into the future. Halfon responded to those who think this ‘performance’ is a bit of a gimmick by saying, “This is not about someone bringing an electronic toy robot and doing a demonstration; it’s about showing the potential of robotics and artificial intelligence and the impact it has on skills.”

Pepper is equipped with four microphones, two HD cameras and a touchscreen on its chest for displaying information when needed. It has been speaking at conferences around the world, so it has some experience addressing CEOs and industry leaders.


‘Five Eyes’ tests AI in battlefield scenario


The military use of AI is somewhat controversial, so this story in Artificial Intelligence News sparked my interest. According to the story, the British military has been trialling the use of AI to “scan for hidden attackers in a mock battlefield environment.” The testing ground is actually in Montreal, Canada, rather than the UK: Canada and the UK, along with the US, Australia and New Zealand, are members of the ‘Five Eyes’ security partnership.

The AI tool is called SAPIENT (Sensors for Asset Protection using Integrated Electronic Network Technology), and it has been developed with the aim of using sensors to detect battlefield hazards while freeing up human soldiers for other operational activities. To some it may sound like something you might find while playing “Call of Duty,” except this tool will be deployed in reality, not a virtual one.

Keeping pace with Russia

Not everyone welcomes the introduction of AI to warfare, but those who do point to Russia and China’s heavy investment in military AI and argue that we must keep pace with them. Russia’s bellicose president Vladimir Putin has already said, “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Google’s Project Maven

Perhaps that is a chilling enough statement to send most nations scurrying off to work on AI, and it may explain why the ‘Five Eyes’ continue to test new tools extensively. Yet others are wary. You may remember the furore that erupted at Google’s offices earlier in 2018 over Project Maven, a contract with the Pentagon to develop AI technology for drones. Google was forced to drop the contract following an internal backlash and staff resignations. Some 4,000 staff members demanded that Google never again “build warfare technology.” Google CEO Sundar Pichai wrote in a blog post that the company will not develop technologies or weapons that cause harm, or anything that can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights.”

SAPIENT development

But others are undeterred by such considerations, and it is an extremely interesting area of debate, especially for ‘civilian’ companies like Google. When companies traditionally associated with the military work on such technology, a different response from their employees can be expected, because this is simply what those companies do. That is the case with SAPIENT, which was developed by the Defence Science and Technology Laboratory (DSTL) and industry partners, and initially co-funded by Innovate UK. Since 2016 the programme has been funded solely by DSTL, which is part of the Ministry of Defence.


The UK’s Defence Minister Stuart Andrew said of SAPIENT:

“This British system can act as autonomous eyes in the urban battlefield. This technology can scan streets for enemy movements so troops can be ready for combat with quicker, more reliable information on attackers hiding around the corner.

Investing millions in advanced technology like this will give us the edge in future battles.”

Sapience, by the way, means “the ability to act with judgement.” The question is: can an AI tool that monitors people approaching a checkpoint, or changes in people’s behaviour, really replace human intelligence? There are many stories of human error in which innocent people have been mistakenly shot. Will AI deliver better responses? Sadly, we will only know after it has been deployed.