How Data Science Is Helping in Robotics and Artificial Intelligence

Big data and data science are set to bring about a digital revolution through groundbreaking technologies like artificial intelligence (AI), machine learning (ML), and deep learning. The essence of data science is to dive into massive datasets and extract meaningful information from them. The insights that data scientists and data analysts draw from large volumes of data are the secret sauce that’s rapidly transforming everything around us. Institutions and organizations across various sectors of industry are now leveraging data science technologies to power innovation and technology-driven change. In fact, some 53 percent of companies had adopted big data analytics by 2017, an enormous jump from 17 percent in 2015.

As more and more companies turn to data science to transform their organizational infrastructure for the better, new and exciting roles are emerging: data scientist, data analyst, ML engineer, data architect, big data engineer, and so on. Thus, if you wish to start a career in data science, the time is now. There are plenty of resources available today to help you get started, and online platforms offering specialized data science courses are an excellent option. The advantage of online courses is that you can master data science concepts at your own pace and convenience.

Data Science and Artificial Intelligence

The fields of data science, AI, and ML are intrinsically linked to one another. Artificial intelligence is a broad umbrella that covers a wide range of applications, from text analysis to robotics. Machine learning is a subset of artificial intelligence that focuses on training machines to ‘learn’ via advanced algorithms and perform specific tasks, improving their performance through experience. Data science is a branch of computer science that deals with extracting valuable insights from vast datasets by combining disciplines such as mathematics, ML, statistics, and data engineering.

Today, AI and ML technologies are transforming the industrial landscape, and this is possible only because these technologies are backed by data science. While AI is about creating “intelligent and smart” machines, it cannot do without ML. As mentioned before, machine learning algorithms are required to train machines to learn from behavior patterns and cues. Then again, ML cannot function without analytics, which in turn cannot function without data infrastructure. Harvard Business Review maintains that “companies with strong basic analytics — such as sales data and market trends — make breakthroughs in complex and critical areas after layering in artificial intelligence.” However, for AI to create a true impact, you need the right data and a team of experienced, trained data science professionals who know where to look for the data and how to integrate it with AI and ML tools.

Let’s take the example of smart personal assistants like Siri, Alexa, and Cortana. These assistants represent ‘narrow AI’: they can interact with you and perform a limited number of tasks, such as playing songs, reporting the day’s weather, or even doing a little shopping for you. But, as we said, they can only perform ‘limited’ tasks, since they have been exclusively ‘trained’ to do so. As data science continues to evolve, data scientists might be able to tweak the algorithms of these assistants into more advanced ones (general AI), and then intelligent assistants may be able to perform more complex tasks with greater precision than humans.

Data Science and Robotics

With advances in data science, the field of robotics has improved to a great extent. In the early days of development, scientists faced two major challenges: one, predicting every action of a robot, and two, reducing the computational complexity of real-time vision tasks.

While robots could perform specific functions, it was impossible for scientists to predict their next move. A robot had to be reprogrammed for every new functionality, which made the task a tedious one. Another major obstacle is that, unlike humans, who use their unique sense of vision to make sense of the world around them, robots can only visualize the world as a series of zeros and ones. Thus, accomplishing real-time vision tasks meant processing a fresh set of zeros and ones every time something new appeared in the scene, which kept increasing the computational complexity.

Enter machine learning to solve these issues in robotics. With ML, robots can acquire new behavior patterns through labeled data. Handwriting recognition is an excellent example. In handwriting recognition, computers are fed labeled data — both positive and negative examples. Once the computer has successfully learned to differentiate between them, it is presented with new data, and based on its previous experience from the training phase, it can classify handwriting it has never seen before. Thanks to advanced ML algorithms trained on vast amounts of data, computers can now recognize handwriting far more accurately than they could ten years ago.
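To make this workflow concrete, here is a minimal sketch of supervised handwriting recognition using scikit-learn’s bundled digits dataset. The dataset, model choice, and train/test split are illustrative assumptions, not a description of any particular robotics system:

```python
# Minimal supervised handwriting-recognition sketch (illustrative only).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Labeled data: 8x8 grayscale images of handwritten digits with known labels.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# "Training phase": the classifier learns to separate the labeled examples.
clf = SVC(gamma=0.001)
clf.fit(X_train, y_train)

# Presented with new data, the trained model predicts digits it has never seen.
predictions = clf.predict(X_test)
print(f"Accuracy on unseen digits: {accuracy_score(y_test, predictions):.2%}")
```

The key point is the two-phase structure described above: learn from labeled examples first, then generalize to new inputs.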

Furthermore, reinforcement learning, the branch of ML that is “the closest that machine learning can get to the way humans learn,” teaches computers and robots to take actions in their environment that yield either rewards or penalties. Each time a robot incurs a penalty, it can learn from its mistake and adjust its course of action to earn rewards instead. The personalized recommendation lists of online portals such as Amazon and Netflix are often cited as examples of reinforcement learning at work. This wasn’t possible ten years ago!
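To make the reward-and-penalty loop concrete, here is a minimal Q-learning sketch on a toy corridor world. The environment, reward values, and hyperparameters are all invented for illustration:

```python
# Minimal Q-learning sketch on a toy 5-cell corridor (illustrative only).
import random

N_STATES = 5          # cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: the learned value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward at the goal, a small penalty per step: mistakes cost,
        # success pays, and the agent learns the difference.
        reward = 10.0 if next_state == N_STATES - 1 else -1.0
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should always step right, toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print("Learned action per cell (+1 = toward goal):", policy)
```

The agent is never told the route; it discovers it purely from the stream of rewards and penalties, which is exactly the learning loop the paragraph above describes.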

As data scientists continue to leverage AI and ML to develop smart machines, they are also gaining deeper insight into the world of data science itself. Using AI and ML, data scientists and analysts can process, analyze, and interpret vast datasets much faster than ever. For instance, the MIT Data Science Machine can process large volumes of data and produce better predictive models in anywhere from two to twelve hours, work that would take data scientists months to do manually. Another excellent case in point is California’s NuMedii Labs, where data scientists used network-based data mining algorithms to identify correlations between disease information and drug composition, allowing drug efficacy to be estimated accurately. In this way, NuMedii aims to reduce the time and risk associated with drug development, bringing effective drugs to market much faster than traditional methods allow.

Thus, data science, AI, and robotics have a largely symbiotic relationship. Each enhances the others, powering innovative machines and technologies that are making our lives more convenient than ever. The collaboration between data science, AI, and ML has given us self-driving cars, smart assistants, robo-surgeons and nurses, and so much more. And more is yet to come!

How artificial intelligence is shaping our future

The future we want is up to us and it’s time to take a side. Here, GQ explores the different ways artificial intelligence proves the best (and worst) is ours to control

Science-fiction guru William Gibson, who coined the term cyberspace, famously pointed out that the future is already here — it is just not very evenly distributed. We veer from anticipating a new dawn to prepping for the end of days. But the future we want is up to us and it’s time to take a side. Herewith, six digital advances that prove the best (and worst) is ours to control.

1. Beneficial artificial intelligence

Where we are: You can’t move for pundits telling us what artificial intelligence is going to do for us. It is transforming our world, as it has been for the past 50 years. Every smartphone is powered by AI research, giving us information retrieval via voice recognition or apps that spot our friends’ faces in the photos we take.

Where we’re headed: A world fuelled and enhanced by AI is one to look forward to. Autonomous cars will mean efficient and safe transport. Real-time translation buds that will enable you to speak one language and hear another will transform our travel experiences. Despite the cries of alarmists, there is little reason to believe that our AIs are going to “wake up” and decide to do away with us.

2. The datasphere

Where we are: Data changes everything: our personal lives, businesses and public services. One dramatic application is digitally powered precision medicine. Our bodies are constructed according to information encoded in our genes. Understanding how these instructions make proteins, build cells, repair damage and repel viruses is all driven by data.

Where we’re headed: New drugs, therapies and treatments will produce a revolution in the delivery of healthcare. What’s true for health is true for education, leisure, finance and travel. Every aspect of how individuals, corporations and governments function can be more effectively managed with the right application of the right data.

3. New companions

Where we are: They are already in our homes and in our lives; we know them as Alexa and Siri. These intelligent assistants will assume more and more of a role. As they learn from our interests and habits, they will become better at anticipating the information we need. They will not have any actual interest or awareness — but that won’t matter. We will increasingly treat them as our companions.

Where we’re headed: Humans will come to confide, trust and rely on our new companions. They will support us for better or worse, in our prime and our decline. Powered by AI and abundant data, they may assume the characteristics of those dear or near to you. Imagine your late grandmother or your favourite rock star chatting helpfully in your living room.

4. Unthinking artificial intelligence

Where we are: In the Terminator film series, Skynet, a defence network, suddenly becomes self-aware and launches a full-scale thermonuclear attack to get rid of humans. Nothing of that sort is about to happen. It is not AI we should fear but our own natural stupidity.

Where we’re headed: As we give systems control of our decision-making, we must not abdicate our responsibility. The danger is a digital system that, without proper restraints, launches an unthinking attack. We need to take great care whenever we take the human out of the decision loop in matters of life and death. AI systems lack moral sense and the broader contextual judgements of humans. A future of weaponised AI is one to fear.

5. Uncontrolled data analytics

Where we are: The widespread availability of high-quality data will be a boon. But data can be used to corrupt, misdirect and misinform. Recent events around Cambridge Analytica and its use of Facebook data to profile and target particular groups have caused a furore. Data used with the express aim of achieving a desired effect without the knowledge of the subjects themselves should make us very uneasy.

Where we’re headed: We have examples of data being used to make decisions on everything from prison sentences to credit ratings. The data can and often does encode bias. Courts in the US already use AI software to inform sentencing and the results are mixed. The AI notices the prejudices in its vast database of previous sentences and hands them down again as the usual “right” answer.

Algorithms can make decisions, but they can’t be accountable. Data collected from each of us every day and increasingly in the future can invade our privacy and reveal features of our lives that we ourselves are unaware of. Consumers and citizens should be empowered, not oppressed by data and its analysis.

6. Rampant cyber warfare

Where we are: Every second of every day, computer networks around the world are under threat. Hackers and nation states alike launch software to attack and subvert our digital systems. We are living in an age of increasingly frenzied but undeclared wars.

Where we’re headed: Because they are in virtual space, we don’t see the physical damage. But one day soon an airport will go dark, or the banking system will crash. Countries, businesses and individuals are engaged in a digital arms race, desperately building new defences in the face of cyber attack. Computing is a dual-use technology. The same innovations that enhance our world can be used for harm. History points the way to a better future. For physical weapons, we have engineered and enforced conventions, treaties and limitations. We urgently need equivalents for our digital world.

Artificial intelligence key stats:

2.3m new jobs created by the AI industry by 2020

£5.4bn spent on AI by 2022

1.8m jobs lost as a result of the AI industry by 2020

Banks and FinTechs collaborate via different engagement approaches

Each collaboration uniquely satisfies the specific needs of its participants and is built on models…

It wasn’t that long ago that traditional banks viewed FinTech firms, or FinTechs, as competitors. Those days are well behind us. Nowadays, incumbents and FinTechs have become best friends, or at least collaborators, leveraging each other’s complementary strengths and focusing on shared goals.

FinTechs continue to disrupt and unbundle many traditional banking services by simplifying and improving the customer experience (CX), and customers aren’t shy about showing their appreciation. From the start, FinTechs have focused on resolving financial services’ inefficiencies and high-friction approach to customer service. Bolstered by operational advantages, including a lower cost base, no burden of legacy systems, adeptness with emerging technology, and a culture of taking risks to best serve the end customer, FinTechs hit the ground running to provide an engaging, contemporary CX.

Traditional financial institutions have a vast customer base and deep pockets. Constrained by legacy systems, their route to innovation is to adopt an API culture, so they are now very open to collaborating with smaller players rather than building everything from scratch. They are forging long-term relationships and committing the necessary resources to FinTech collaboration. Traditional financial institutions have been actively trying to adopt the FinTech approach to customer experience, either by buying startups or by partnering with them. However, scalable, industrialized results have been limited, mainly because of the difficulty of finding the right partner and the right approach to jointly fuel growth.

Retail banks have felt the impact from this disruption and heightened competition, with stricter regulations and sporadic economic growth adding to their challenges. However, customers don’t seem to be looking back as they embrace digital platforms, particularly early-adopter millennials[1] who came of age with digital financial products and services.

So, it’s no surprise that traditional banks are looking to develop new revenue streams, reduce costs, and meet rising customer expectations. Even though many incumbent banks are now strategically focused on innovation and agility, they are plagued by archaic legacy systems, and in-house digital innovation efforts have not been very successful.

[Figure: Bank–FinTech collaboration. Source: Capgemini Financial Services Analysis, 2018; Capgemini Top-10 Technology Trends in Retail Banking: 2018]

Therefore, more and more traditional banks are collaborating with FinTechs, with 91% of bank executives saying they would like to work with FinTechs, and 86% voicing concerns that a lack of collaboration could hurt business within the fast-evolving digital ecosystem. Moreover, 42% of bank executives said FinTech collaboration would help them lower their cost base. Regulations that involve customer data sharing — such as Europe’s Revised Payment Service Directive (PSD2) and Open Banking Standards in the UK — are also encouraging bank/FinTech partnerships.

Each collaboration uniquely satisfies the specific needs of its participants and is built on models that include bank acquisition of, investment in, or partnership with a FinTech. Engagement approaches, tailored to business models and shared goals, can take the form of incubators/accelerators, hackathons, or the use of application programming interfaces (APIs) to open systems to third parties.

  • Already in 2013, Citibank launched an annual four-month accelerator program and currently offers “Citi Mobile Challenge,” a “virtual” accelerator program that combines a virtual hackathon with an incubator. Mentored participants learn through a virtual and on-site boot camp curriculum.[2]
  • Frankfurt, Germany-based “Main Incubator” was launched in partnership with Commerzbank in 2013. It supports FinTechs through dedicated VC funding, office space, and expert know-how.[3]
  • DCB in India is promoting financial technology through its Innovation Carnival Hackathon, in which FinTechs develop financial products and create new technologies for DCB Bank. FinTech experts and entrepreneurs mentor the participants.[4]
  • Santander, headquartered in Spain, and peer-to-peer lending marketplace, Funding Circle, have teamed up in the UK, wherein Santander refers small business customers seeking a loan to Funding Circle.[5]

It appears that collaboration is the way of the future for the financial services industry, as new technologies and standardization create a better, more integrated landscape and, with it, a better customer experience. Partnerships that encompass mutual business goals and leverage both entities’ strengths will ultimately deliver differentiated products and services and harness technology for deeper consumer insights.

The industry will likely converge towards open banking and an API-based ecosystem that enables a connected network of banks and FinTechs. Although collaboration can be fraught with challenges (cultural differences, IT incompatibility or sluggish agile implementation), a commitment to mutual understanding and an enhanced customer experience will foster success. Find out more in Top-10 Trends in Retail Banking 2018, a report from Capgemini Financial Services.

UCLA-developed artificial intelligence device identifies objects at the speed of light

The 3D-printed artificial neural network can be used in medicine, robotics and security

[Image: The network, composed of a series of polymer layers, works using light that travels through it. Each layer is 8 centimeters square.]

A team of UCLA electrical and computer engineers has created a physical artificial neural network — a device modeled on how the human brain works — that can analyze large volumes of data and identify objects at the actual speed of light. The device was created using a 3D printer at the UCLA Samueli School of Engineering.

Numerous devices in everyday life today use computerized cameras to identify objects — think of automated teller machines that can “read” handwritten dollar amounts when you deposit a check, or internet search engines that can quickly match photos to other similar images in their databases. But those systems rely on a piece of equipment to image the object, first by “seeing” it with a camera or optical sensor, then processing what it sees into data, and finally using computing programs to figure out what it is.

The UCLA-developed device gets a head start. Called a “diffractive deep neural network,” it uses the light bouncing from the object itself to identify that object in as little time as it would take for a computer to simply “see” the object. The UCLA device does not need advanced computing programs to process an image of the object and decide what the object is after its optical sensors pick it up. And no energy is consumed to run the device because it only uses diffraction of light.

New technologies based on the device could be used to speed up data-intensive tasks that involve sorting and identifying objects. For example, a driverless car using the technology could react instantaneously — even faster than it does using current technology — to a stop sign. With a device based on the UCLA system, the car would “read” the sign as soon as the light from the sign hits it, as opposed to having to “wait” for the car’s camera to image the object and then use its computers to figure out what the object is.

Technology based on the invention could also be used in microscopic imaging and medicine, for example, to sort through millions of cells for signs of disease.

The study was published online in Science on July 26.

“This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects,” said Aydogan Ozcan, the study’s principal investigator and the UCLA Chancellor’s Professor of Electrical and Computer Engineering. “This optical artificial neural network device is intuitively modeled on how the brain processes information. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential.”

The process of creating the artificial neural network began with a computer-simulated design. Then, the researchers used a 3D printer to create very thin, 8-centimeter-square polymer wafers. Each wafer has uneven surfaces, which help diffract light coming from the object in different directions. The layers look opaque to the eye, but the submillimeter-wavelength terahertz frequencies of light used in the experiments can travel through them. And each layer is composed of tens of thousands of artificial neurons — in this case, tiny pixels that the light travels through.

Together, a series of pixelated layers functions as an “optical network” that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel that is assigned to that type of object.

The researchers then trained the network using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device. The “training” used a branch of artificial intelligence called deep learning, in which machines “learn” through repetition and over time as patterns emerge.

“This is intuitively like a very complex maze of glass and mirrors,” Ozcan said. “The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting.”
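The team’s actual design and training procedure is far more involved, but a rough sketch of the forward physics described here, with thin phase masks separated by free-space propagation and the prediction read off the brightest detector band, might look like the following. Everything in it is an invented toy: the grid size, wavelength, spacing, and especially the phase masks, which are random rather than trained (a real device would optimize them so that each object class routes light to its own detector):

```python
# Toy forward pass of a diffractive optical network (illustrative sketch only;
# all parameters are invented, and the phase masks are random, not trained).
import numpy as np

N = 64                 # simulation grid (N x N pixels)
WAVELENGTH = 0.75e-3   # ~0.75 mm: sub-millimeter terahertz light
PIXEL = 0.4e-3         # physical pixel pitch in meters
GAP = 30e-3            # free-space gap between layers in meters
N_LAYERS = 5           # number of printed phase layers
N_CLASSES = 10         # one detector band per class

rng = np.random.default_rng(0)

def propagate(field, distance):
    """Angular-spectrum method: free-space propagation between layers."""
    fx = np.fft.fftfreq(N, d=PIXEL)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / WAVELENGTH**2 - FX**2 - FY**2
    traveling = arg > 0                           # drop evanescent components
    kz = 2 * np.pi * np.sqrt(np.where(traveling, arg, 0.0))
    H = np.where(traveling, np.exp(1j * kz * distance), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Each printed layer acts as a thin phase mask; training would shape these
# phases so light from each object class converges on its assigned detector.
layers = [np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (N, N))) for _ in range(N_LAYERS)]

# Input "object": a bright square aperture standing in for an image.
field = np.zeros((N, N), dtype=complex)
field[24:40, 24:40] = 1.0

for mask in layers:
    field = propagate(field, GAP) * mask          # propagate, then modulate
field = propagate(field, GAP)                     # final hop to the detector

# Detector plane: split into horizontal bands, one per class;
# the brightest band is the network's prediction.
intensity = np.abs(field) ** 2
bands = np.array_split(intensity, N_CLASSES, axis=0)
energy = [band.sum() for band in bands]
print("Predicted class (brightest band):", int(np.argmax(energy)))
```

With random masks the prediction is meaningless; the deep-learning step the article mentions consists of optimizing those phase values in simulation before the layers are 3D printed.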

In their experiments, the researchers demonstrated that the device could accurately identify handwritten numbers and items of clothing — both of which are commonly used tests in artificial intelligence studies. To do that, they placed images in front of a terahertz light source and let the device “see” those images through optical diffraction.

They also trained the device to act as a lens that projects the image of an object placed in front of the optical network to the other side of it — much like how a typical camera lens works, but using artificial intelligence instead of physics.

Because its components can be created by a 3D printer, the artificial neural network can be made with larger and additional layers, resulting in a device with hundreds of millions of artificial neurons. Those bigger devices could identify many more objects at the same time or perform more complex data analysis. And the components can be made inexpensively — the device created by the UCLA team could be reproduced for less than $50.

While the study used light in the terahertz frequencies, Ozcan said it would also be possible to create neural networks that use visible, infrared or other frequencies of light. A network could also be made using lithography or other printing techniques, he said.


The study’s other authors, all from UCLA Samueli, are postdoctoral scholars Xing Lin, Yair Rivenson, and Nezih Yardimci; graduate students Muhammed Veli and Yi Luo; and Mona Jarrahi, UCLA professor of electrical and computer engineering.

The research was supported by the National Science Foundation and the Howard Hughes Medical Institute. Ozcan also has UCLA faculty appointments in bioengineering and in surgery at the David Geffen School of Medicine at UCLA. He is the associate director of the UCLA California NanoSystems Institute and an HHMI professor.