Somebody may have said to you at some time: “I’ve only got two hands,” meaning that whatever you asked them to do just isn’t possible. But in the field of prosthetics and so-called “enhanced humans,” it may now be possible to have additional limbs to carry out tasks.
We have devices that wearers can control with their thoughts — these help people with prosthetic limbs to feel more whole, but now researchers are setting out to see if such devices could make humans more than whole.
Researchers at the Advanced Telecommunications Research Institute in Japan wanted to know whether giving someone a supernumerary robotic limb (SRL), a mind-controlled robotic limb that works alongside the person’s two biological ones, could give that person multitasking abilities beyond those of the average human.
The study
The researchers asked 15 volunteers to sit in a chair with an SRL positioned as if it were a third arm extending from their own body. Each volunteer wore a cap that tracked the brain’s electrical activity and transmitted it to a computer, which then turned it into movement in the SRL. All the volunteers had to do to make the SRL work was think about what they wanted it to do.
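The article does not describe the decoding algorithm the cap-and-computer system used, but the general shape of such a pipeline (extract features from the brain’s electrical activity, then map them to a discrete limb command) can be illustrated. The feature names, thresholds, and command set below are invented for illustration only; this is a minimal sketch, not the study’s method:

```python
# Hypothetical sketch of the decoding step in an EEG-based brain-machine
# interface: band-power features extracted from the cap's signal are mapped
# to discrete commands for the robotic limb. All names and thresholds here
# are illustrative assumptions, not the study's published algorithm.

def decode_command(features):
    """Map a dict of EEG band-power features to an SRL command."""
    # Motor imagery typically suppresses the mu rhythm (8-12 Hz) over the
    # motor cortex, so treat strong suppression as "intent to move".
    if features["mu_power"] < 0.5:          # arbitrary illustrative threshold
        return "grasp" if features["beta_power"] > 0.7 else "release"
    return "idle"                            # no movement intent detected

print(decode_command({"mu_power": 0.3, "beta_power": 0.8}))  # grasp
print(decode_command({"mu_power": 0.9, "beta_power": 0.2}))  # idle
```

In a real system the thresholding would be replaced by a classifier trained per user, but the structure (features in, one command out per time window) is the same.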
The next stage was to ask the volunteers to complete two tasks. The first, balancing a ball on a board, they did using their natural arms and hands. The second, picking up and putting down a bottle, they did using the SRL. The researchers then asked the volunteers to do the two tasks both separately and simultaneously.
The results
The results of 20 trials showed that the volunteers completed both tasks successfully using the three limbs about 75 percent of the time. In other words, they carried out two simultaneous tasks that they could not have managed if limited to their two natural limbs.
The researchers also think that operating this brain-machine interface may train the brain itself, and their future research will try to establish whether we might enhance our minds by temporarily enhancing our bodies.
The 3D-printed artificial neural network can be used in medicine, robotics and security
The network, composed of a series of polymer layers, works using light that travels through it. Each layer is 8 centimeters square.
A team of UCLA electrical and computer engineers has created a physical artificial neural network — a device modeled on how the human brain works — that can analyze large volumes of data and identify objects at the actual speed of light. The device was created using a 3D printer at the UCLA Samueli School of Engineering.
Numerous devices in everyday life today use computerized cameras to identify objects — think of automated teller machines that can “read” handwritten dollar amounts when you deposit a check, or internet search engines that can quickly match photos to other similar images in their databases. But those systems rely on a piece of equipment to image the object, first by “seeing” it with a camera or optical sensor, then processing what it sees into data, and finally using computing programs to figure out what it is.
The UCLA-developed device gets a head start. Called a “diffractive deep neural network,” it uses the light bouncing from the object itself to identify that object in as little time as it would take for a computer to simply “see” the object. The UCLA device does not need advanced computing programs to process an image of the object and decide what the object is after its optical sensors pick it up. And no energy is consumed to run the device because it only uses diffraction of light.
New technologies based on the device could be used to speed up data-intensive tasks that involve sorting and identifying objects. For example, a driverless car using the technology could react instantaneously — even faster than it does using current technology — to a stop sign. With a device based on the UCLA system, the car would “read” the sign as soon as the light from the sign hits it, as opposed to having to “wait” for the car’s camera to image the object and then use its computers to figure out what the object is.
Technology based on the invention could also be used in microscopic imaging and medicine, for example, to sort through millions of cells for signs of disease.
“This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects,” said Aydogan Ozcan, the study’s principal investigator and the UCLA Chancellor’s Professor of Electrical and Computer Engineering. “This optical artificial neural network device is intuitively modeled on how the brain processes information. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential.”
The process of creating the artificial neural network began with a computer-simulated design. Then, the researchers used a 3D printer to create very thin, 8-centimeter-square polymer wafers. Each wafer has uneven surfaces, which help diffract light coming from the object in different directions. The layers look opaque to the eye, but the submillimeter-wavelength terahertz frequencies of light used in the experiments can travel through them. And each layer is composed of tens of thousands of artificial neurons — in this case, tiny pixels that the light travels through.
Together, a series of pixelated layers functions as an “optical network” that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel that is assigned to that type of object.
The researchers then trained the network using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device. The “training” used a branch of artificial intelligence called deep learning, in which machines “learn” through repetition and over time as patterns emerge.
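The classification principle described above (light passes through a stack of printed layers, and the brightest detector region on the far side names the object) can be caricatured numerically. The sketch below is a deliberately simplified stand-in, assuming phase-only layers and random mixing matrices in place of real free-space diffraction; it is not the UCLA team’s simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward pass through a "diffractive network": a complex light field
# passes through several phase-modulating layers, spreading between them,
# and the detector region receiving the most intensity names the class.
# The random matrices below stand in for free-space diffraction; a real
# model would use a physical propagation kernel and trained phase values.
n_pixels, n_layers, n_classes = 60, 5, 3
phases = rng.uniform(0, 2 * np.pi, size=(n_layers, n_pixels))   # "printed" wafers
mixing = rng.normal(size=(n_layers, n_pixels, n_pixels)) / np.sqrt(n_pixels)

def forward(field):
    for layer in range(n_layers):
        field = field * np.exp(1j * phases[layer])   # phase shift from the wafer surface
        field = mixing[layer] @ field                # light spreading to the next layer
    intensity = np.abs(field) ** 2                   # the detector measures intensity
    detectors = intensity.reshape(n_classes, -1).sum(axis=1)  # one region per class
    return int(np.argmax(detectors))                 # brightest region = predicted class

sample = rng.normal(size=n_pixels) + 0j              # stand-in for light from an object
print(forward(sample))
```

Training, in this picture, means adjusting the phase values so that light from each object class lands mostly on its assigned detector region — which is exactly what the deep-learning step computes before the layers are printed.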
“This is intuitively like a very complex maze of glass and mirrors,” Ozcan said. “The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting.”
In their experiments, the researchers demonstrated that the device could accurately identify handwritten numbers and items of clothing — both of which are commonly used tests in artificial intelligence studies. To do that, they placed images in front of a terahertz light source and let the device “see” those images through optical diffraction.
They also trained the device to act as a lens that projects the image of an object placed in front of the optical network to the other side of it — much like how a typical camera lens works, but using artificial intelligence instead of physics.
Because its components can be created by a 3D printer, the artificial neural network can be made with larger and additional layers, resulting in a device with hundreds of millions of artificial neurons. Those bigger devices could identify many more objects at the same time or perform more complex data analysis. And the components can be made inexpensively — the device created by the UCLA team could be reproduced for less than $50.
While the study used light in the terahertz frequencies, Ozcan said it would also be possible to create neural networks that use visible, infrared or other frequencies of light. A network could also be made using lithography or other printing techniques, he said.
The study’s other authors, all from UCLA Samueli, are postdoctoral scholars Xing Lin, Yair Rivenson, and Nezih Yardimci; graduate students Muhammed Veli and Yi Luo; and Mona Jarrahi, UCLA professor of electrical and computer engineering.
The research was supported by the National Science Foundation and the Howard Hughes Medical Institute. Ozcan also has UCLA faculty appointments in bioengineering and in surgery at the David Geffen School of Medicine at UCLA. He is the associate director of the UCLA California NanoSystems Institute and an HHMI professor.
Ever since the idea emerged of robots doing jobs that humans can do, there has been widespread fear of what this might mean for the working population in the more advanced economies, where robots are likely to appear in greater numbers first. However, a new report by PricewaterhouseCoopers (PwC) in the UK has brought hope, because it claims that AI will actually create more jobs than are lost to automation.
The PwC report puts a number on the new employment opportunities: it says AI will deliver 7.2 million new jobs in healthcare, science and education by 2037. Of course, one has to balance this against the 7 million jobs lost to automation, but as PwC points out, that still leaves a net gain, and AI will boost economic growth.
It also estimates that around 20% of jobs in the UK will be automated over the next 20 years and that every economic sector will be affected. PwC said: “AI and related technologies such as robotics, drones and driverless vehicles will replace human workers in some areas, but it will also create many additional jobs as productivity and real incomes rise and new and better products are developed.”
AI can boost number of healthcare jobs
Fears among employees have already been raised by the use of robots like Pepper, made by the Japanese firm SoftBank Robotics. Pepper is already in use in banks, shops and social care, the latter a major concern for Britain at the moment, as report after report indicates the system is failing. However, the good news for all those healthcare and social workers is that PwC claims AI could make these two sectors among the biggest winners, generating one million new jobs, 20% more than the existing number of jobs in the sector.
Manufacturing, transport and logistics may lose out
On the other hand, as more driverless vehicles arrive and factories and warehouses become more automated, this employment sector could see a reduction in job opportunities, perhaps as much as 22%, or 400,000. The report also says clerical tasks in the public sector are likely to be replaced by algorithms while in the defence industry humans will increasingly be replaced by drones and other technologies.
Does AI offer hope post-Brexit?
This report may lift some spirits at a moment in British politics when things have rarely looked more unstable for the UK economy, not least because the business of exiting the European Union has raised more question marks about the future of British trade and industry than it has answered. However, if AI can create new jobs for working people and at least match the losses to automation, there is hope that the fallout from whatever the negotiations bring over the next few months will not hurt as much as many in business fear.
The global supply chain moves $64 trillion annually. It’s an unbelievably complex network, and even small businesses rely on its logistics operating smoothly. In reality, the global trade network is a challenge for every business, regardless of size, and anything that makes it more efficient is to be welcomed. More efficiency would help not only businesses and their customers but also the environment.
You might think that the information-sharing power of the Internet would have helped simplify the global trade network, but it hasn’t. Add in the fact that the network relies on a vast number of intermediaries who establish trust between vendors but add no other value, and the fact that it is usually up to humans to identify weaknesses in the supply chain, and you can see that there is a lot of inefficiency. However, there is a solution to this – the blockchain!
Blockchain and AI
If you combine the blockchain with artificial intelligence (AI) it is possible to build an autonomous and decentralised supply chain that can ‘think’ for itself. It would be able to identify areas of inefficiency and correct them. Some think this could be one of the most important developments in blockchain usage and the Blockchain Research Institute has commissioned a research study into the intersection of blockchain and supply chain management.
Don Tapscott, a founder and executive director of the Blockchain Research Institute, has commented on the research study “Introducing Asset Chains” on LinkedIn Pulse and shared some of its high-level conclusions with readers. A key point is: “Assets all over the world are extracted, designed, combined, transported, and sold every day through the supply chains that underpin global commerce.” This system has not been overhauled for years, and blockchain has the potential to decentralise these traditional supply chains. Add in AI and the Internet of Things and we could have a completely new approach to scaling the network.
Logistics on the blockchain
This new network would have the ability to self-regulate and adjust to improve efficiency. It would also be able to establish “machine trust,” and asset chains are an essential element of that trust. As Tapscott says: “Asset chains provide a framework for machines to participate autonomously in supply chains and the markets they serve. They allow us to unlock the trading capability of machines without human intermediaries.”
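The tamper-evidence that lets machines trade “without human intermediaries” comes from hash-linking records: each event references the hash of the previous one, so altering history anywhere breaks every subsequent link. The sketch below is a minimal illustration with invented record fields, not Sweetbridge’s or the report’s actual design:

```python
import hashlib
import json

# Minimal hash-chain sketch: each supply-chain event links to the previous
# record's hash, so any later tampering is immediately detectable without
# trusting a human intermediary. Record fields are invented for illustration.

GENESIS = "0" * 64

def add_event(chain, event):
    """Append an event, linking it to the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any mismatch means the history was altered."""
    prev = GENESIS
    for rec in chain:
        payload = json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
add_event(chain, {"asset": "container-42", "step": "loaded", "port": "Rotterdam"})
add_event(chain, {"asset": "container-42", "step": "shipped"})
print(verify(chain))                    # True
chain[0]["event"]["step"] = "stolen"    # tamper with history
print(verify(chain))                    # False
```

A real asset chain would add signatures and distributed consensus on top of this linking, but the core trust property is the same: the chain vouches for its own history.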
That’s why some of the largest supply chain logistics firms are looking at blockchain to transform their operations. This will, of course, give rise to a new group of companies providing these blockchain solutions. Tapscott cites the example of Sweetbridge, an Arizona-based firm that is “leveraging the value of stranded, underutilized assets within those networks – like trucks or shipping containers – into more liquid tokens.” This means that a supply chain can essentially fund itself.
This is an exciting way forward for the global trade network, and it is one in which human intermediaries will have fewer functions, making the entire supply chain less expensive, faster and more trustworthy thanks to cryptography and clever coding.