The history of the search engine: from index cards to the AI chatbot

One of the greatest improvements brought about by the internet is the ability to find answers to almost anything almost immediately thanks to the search engine. How did we evolve from the days of index cards to AI-powered chatbots?

How did people find answers before the internet?

Even those of us old enough to remember a life pre-web struggle to recall how we did our homework, checked for correct spellings or even resisted the urge to ask those questions we wouldn’t dare ask aloud without the relative safety of a search engine like Google.

And yet humans somehow managed to exist before the World Wide Web was created in 1989. Since then, search has improved exponentially to the point where a personal chatbot to help with our most routine tasks is becoming a reality. But how did we reach this point?

Before the internet:

Searching was far more laborious, and in many cases it simply would not have taken place at all before the creation of the search engine.

Index cards were first popularized by Carl Linnaeus, who used them to classify more than 12,000 species of plants and animals. In the years that followed, libraries began to rely on them to index their collections.

Eventually, libraries settled on the Dewey Decimal System, which organized books by subject and made them findable through catalog cards listing author and title, a system still in place in libraries today.

The first search engine:

With the invention of the internet came the first example of what we are familiar with today. But it was not Google, Yahoo or even Ask Jeeves that first introduced this whole new concept to us.

Archie, written in 1990 by Alan Emtage, indexed the file listings of as many public FTP servers as possible so that users could find and download publicly available files.

While it was not on a par with the search technology available today, it was certainly better than the alternative: word of mouth.

The web directory:

For a while, Yahoo was regarded as the internet’s most important search engine, but that label did not really fit its early versions. In fact, it began as a web directory that relied on humans to summarize websites with short descriptions and to organize them into categories.

Created in 1994, Yahoo became so popular that publishers would delay posting their websites to ensure they would be included. Despite the advancements in search, the Yahoo Directory did manage to survive until 2014 when it was closed for good.

The first web crawler:

1994 also saw the release of the first web crawler, appropriately named WebCrawler. It was the first to index entire pages and became so popular that at one point it could not be used during the day.

Natural language search:

A search engine to which Google arguably owes a lot, AltaVista was a pioneer of many of the online search techniques that we still use today.

Notably, in 1995 AltaVista became the first search engine to incorporate natural language technology. Among other achievements, it also provided the first searchable full-text database of the web, allowed multi-language search and even translated pages.

AltaVista’s move away from streamlined search towards a more complex web portal ultimately led to its demise, as users flocked to the up-and-coming Google.

Google:

Which finally takes us to the granddaddy of them all. Google’s success can be attributed to many factors, but its most significant selling point is its famous algorithm, which could return more relevant search results than its competitors within fractions of a second.

In 1996, when Larry Page and Sergey Brin launched BackRub, Google’s precursor, they realized that their algorithm could judge which webpage was best for a topic based on the links it accumulated and, more importantly, on citations from the most authoritative websites. It was this focus on a website’s relevance that made Google so popular.
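The core idea, later formalized as PageRank, can be sketched in a few lines: a page’s score depends on the scores of the pages that link to it, redistributed through the link graph until the values stabilize. The snippet below is a minimal, textbook-style power-iteration sketch in Python; the toy link graph and the parameter choices are illustrative assumptions, not Google’s actual implementation.

```python
# Minimal PageRank-style power iteration (illustrative sketch, not Google's code).
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages            # dangling pages share rank evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Invented four-page "web" for the example.
toy_web = {
    "home": ["blog", "docs"],
    "blog": ["home"],
    "docs": ["home", "blog"],
    "orphan": ["home"],
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```

Pages that attract links from already well-ranked pages ("home" in the toy graph) end up with the highest scores, which is exactly the relevance signal described above.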

Semantic search engines:

While Google was able to provide the world with answers to searches instantly, companies were still struggling to do the same on their own websites.

The Inbenta Semantic Search Engine, first created in 2010, was able to understand searcher intent and the contextual meaning behind customers’ searches rather than relying on keywords. Much of this capability was due to Inbenta’s patented natural language processing, which significantly improved companies’ self-service rates.

Voice recognition:

The concept of computers that could understand our voice had been around for 50 years or so before Apple’s Siri and Google brought it into the mainstream.

Google added “personalized recognition” to Voice Search on its Android phones in 2010, and to its Chrome browser in 2011. Its English Voice Search now incorporates 230 billion words from actual queries.

A Stanford Research Institute spin-off was sold to Apple in 2010 and led to Siri and its cloud-based processing. Ironically, its first offering was far more potent than the version embedded on our iPhones today — it was more intuitive, connected to the web and could detect meaning from sentences more effectively.

The AI chatbot:

Chatbots have existed since Eliza was billed in 1966 as the world’s first ‘chatterbot’ capable of communicating with humans as a psychotherapist would.

Only now have virtual agents started to make their mark in the search world by providing customers with information across all forms of social media as well as on company websites.

Many of them are powered by artificial intelligence and natural language processing which has provided users with a more personal experience when searching — think of a shop assistant minus the need to step out of your house.

One chatbot to rule them all:

What is the next step in the search world? Chatbots are now starting to combine natural language processing with machine learning. This combination leads to agents that can deliver high self-service rates and improve as they gather more data.

Not only will bots become more accurate, but we will soon be able to carry out all our searches as well as any transactions within a single conversation. Whether it is ordering a pizza, comparing the best energy prices or keeping up to date with the latest in the NBA, it will all soon be handled within the same digital space.

The developers behind the search engine Ask Jeeves might have had a point when they decided to make a butler the face of their company. Search technology is doing all it can to adapt to us. It will continue to do so in ways that we cannot even comprehend.

Banks and FinTechs collaborate via different engagement approaches

Each collaboration uniquely satisfies the specific needs of its participants and is built on models…

It wasn’t that long ago that traditional banks viewed FinTech firms, or FinTechs, as competitors. Those days are well behind us. Nowadays, incumbents and FinTechs have become best friends, or at least collaborators, leveraging each other’s complementary strengths and focusing on shared goals.

FinTechs have been steadily unbundling many traditional banking services by simplifying and improving the customer experience (CX), and customers aren’t shy about showing their appreciation. From the start, FinTechs have focused on resolving the inefficiencies and high-friction customer service of traditional financial services. Bolstered by operational advantages, including a lower cost base, no burden of legacy systems, adeptness with emerging technology and a culture of taking risks to best serve the end customer, FinTechs hit the ground running to provide an engaging, contemporary CX.

Traditional financial institutions have a vast customer base and deep pockets. Hampered by legacy systems, they have made an API culture their route to innovation, and they are now very open to collaborating with smaller players rather than building everything themselves, forging long-term relationships and committing the necessary resources to FinTech collaboration. They have also been actively trying to adopt the FinTech approach to customer experience, either by buying startups or by partnering with them. However, scalable, industrialized results have been limited, mainly because of the difficulty of finding the right partner or the right approach to jointly fuel growth.

Retail banks have felt the impact from this disruption and heightened competition, with stricter regulations and sporadic economic growth adding to their challenges. However, customers don’t seem to be looking back as they embrace digital platforms, particularly early-adopter millennials[1] who came of age with digital financial products and services.

So, it’s no surprise that traditional banks are looking to develop new revenue streams, reduce costs, and meet rising customer expectations. Even though many incumbent banks are now strategically focused on innovation and agility, they are plagued by archaic legacy systems, and in-house digital innovation efforts have not been very successful.

Bank–FinTech collaboration

Source: Capgemini Financial Services Analysis, 2018, Capgemini Top-10 Technology Trends in Retail Banking: 2018

Therefore, more and more traditional banks are collaborating with FinTechs, with 91% of bank executives saying they would like to work with FinTechs, and 86% voicing concerns that a lack of collaboration could hurt business within the fast-evolving digital ecosystem. Moreover, 42% of bank executives said FinTech collaboration would help them lower their cost base. Regulations that involve customer data sharing — such as Europe’s Revised Payment Service Directive (PSD2) and Open Banking Standards in the UK — are also encouraging bank/FinTech partnerships.

Each collaboration uniquely satisfies the specific needs of its participants and is built on models that include bank acquisition of, investment in, or partnership with a FinTech. Engagement approaches, chosen to suit business models and combined goals, can take the form of incubators/accelerators, hackathons, or the use of application programming interfaces (APIs) to open systems to third parties.

  • Already in 2013, Citibank launched an annual four-month accelerator program, and it currently offers the “Citi Mobile Challenge,” a “virtual” accelerator program that combines a virtual hackathon with an incubator. Mentored participants learn through a virtual and on-site boot camp curriculum.[2]
  • Frankfurt, Germany-based “Main Incubator” was launched in partnership with Commerzbank in 2013. It supports FinTechs through dedicated VC funding, office space, and expert know-how.[3]
  • DCB in India is promoting financial technology through its Innovation Carnival Hackathon, in which FinTechs develop financial products and create new technologies for DCB Bank. FinTech experts and entrepreneurs mentor the participants.[4]
  • Santander, headquartered in Spain, and peer-to-peer lending marketplace, Funding Circle, have teamed up in the UK, wherein Santander refers small business customers seeking a loan to Funding Circle.[5]

It appears that collaboration is the way of the future for the financial services industry, as new technologies and standardization create a better, more integrated landscape and, with it, a better customer experience. A partnership that encompasses mutual business goals and leverages both entities’ strengths will ultimately deliver differentiated products and services and use technology for deeper consumer insights.

The industry will likely converge towards open banking and an API-based ecosystem that enables a connected network of banks and FinTechs. Although collaboration can be fraught with challenges (cultural differences, IT incompatibility or sluggish agile implementation), a commitment to mutual understanding and an enhanced customer experience will foster success. Find out more in Top-10 Trends in Retail Banking 2018, a report from Capgemini Financial Services.

UCLA-developed artificial intelligence device identifies objects at the speed of light

The 3D-printed artificial neural network can be used in medicine, robotics and security

The network, composed of a series of polymer layers, works using light that travels through it. Each layer is 8 centimeters square.

A team of UCLA electrical and computer engineers has created a physical artificial neural network — a device modeled on how the human brain works — that can analyze large volumes of data and identify objects at the actual speed of light. The device was created using a 3D printer at the UCLA Samueli School of Engineering.

Numerous devices in everyday life today use computerized cameras to identify objects — think of automated teller machines that can “read” handwritten dollar amounts when you deposit a check, or internet search engines that can quickly match photos to other similar images in their databases. But those systems rely on a piece of equipment to image the object, first by “seeing” it with a camera or optical sensor, then processing what it sees into data, and finally using computing programs to figure out what it is.

The UCLA-developed device gets a head start. Called a “diffractive deep neural network,” it uses the light bouncing from the object itself to identify that object in as little time as it would take for a computer to simply “see” the object. The UCLA device does not need advanced computing programs to process an image of the object and decide what the object is after its optical sensors pick it up. And no energy is consumed to run the device because it only uses diffraction of light.

New technologies based on the device could be used to speed up data-intensive tasks that involve sorting and identifying objects. For example, a driverless car using the technology could react instantaneously — even faster than it does using current technology — to a stop sign. With a device based on the UCLA system, the car would “read” the sign as soon as the light from the sign hits it, as opposed to having to “wait” for the car’s camera to image the object and then use its computers to figure out what the object is.

Technology based on the invention could also be used in microscopic imaging and medicine, for example, to sort through millions of cells for signs of disease.

The study was published online in Science on July 26.

“This work opens up fundamentally new opportunities to use an artificial intelligence-based passive device to instantaneously analyze data, images and classify objects,” said Aydogan Ozcan, the study’s principal investigator and the UCLA Chancellor’s Professor of Electrical and Computer Engineering. “This optical artificial neural network device is intuitively modeled on how the brain processes information. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security or any application where image and video data are essential.”

The process of creating the artificial neural network began with a computer-simulated design. Then, the researchers used a 3D printer to create very thin, 8-centimeter-square polymer wafers. Each wafer has an uneven surface, which helps diffract light coming from the object in different directions. The layers look opaque to the eye, but the submillimeter-wavelength terahertz frequencies of light used in the experiments can travel through them. And each layer is composed of tens of thousands of artificial neurons — in this case, tiny pixels that the light travels through.

Together, a series of pixelated layers functions as an “optical network” that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel that is assigned to that type of object.

The researchers then trained the network using a computer to identify the objects in front of it by learning the pattern of diffracted light each object produces as the light from that object passes through the device. The “training” used a branch of artificial intelligence called deep learning, in which machines “learn” through repetition and over time as patterns emerge.

“This is intuitively like a very complex maze of glass and mirrors,” Ozcan said. “The light enters a diffractive network and bounces around the maze until it exits. The system determines what the object is by where most of the light ends up exiting.”
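As a rough numerical illustration of that maze, the sketch below simulates a toy forward pass of a diffractive network in Python with NumPy: a light field passes through several phase-modulating layers with free-space propagation between them, and the predicted class is whichever detector strip collects the most light. The grid size, wavelength, layer spacing, detector layout and the random stand-in phase values are all illustrative assumptions, not the parameters or code from the UCLA study; the real device learns its phase values through training rather than using random ones.

```python
# Heavily simplified toy forward pass of a diffractive optical network.
# All physical parameters below are illustrative placeholders.
import numpy as np

N = 64                   # pixels per side of each layer
wavelength = 0.75e-3     # ~0.4 THz light, in meters (illustrative)
pixel = 0.4e-3           # pixel pitch in meters (illustrative)
spacing = 30e-3          # distance between layers in meters (illustrative)

# Angular-spectrum transfer function for free-space propagation between layers.
fx = np.fft.fftfreq(N, d=pixel)
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
H = np.where(arg > 0,
             np.exp(1j * 2 * np.pi / wavelength * spacing * np.sqrt(np.maximum(arg, 0))),
             0)           # drop evanescent components in this toy model

def propagate(field):
    """Propagate a complex field by one layer spacing through free space."""
    return np.fft.ifft2(np.fft.fft2(field) * H)

# In the real device these phase values are learned; here they are random stand-ins.
rng = np.random.default_rng(0)
layers = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(5)]

def classify(input_amplitude, n_classes=10):
    field = input_amplitude.astype(complex)
    for phase_mask in layers:
        field = propagate(field) * phase_mask   # diffract, then phase-modulate
    field = propagate(field)                    # propagate to the detector plane
    intensity = np.abs(field) ** 2
    # Toy detector layout: one vertical strip per class; brightest strip wins.
    strips = np.array_split(intensity, n_classes, axis=1)
    return int(np.argmax([s.sum() for s in strips]))

# Toy "object": a bright square in the middle of the input plane.
obj = np.zeros((N, N))
obj[24:40, 24:40] = 1.0
print("predicted class:", classify(obj))
```

With trained rather than random phase masks, light from each type of object would be steered predominantly into its assigned detector region, which is the behavior the study describes.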

In their experiments, the researchers demonstrated that the device could accurately identify handwritten numbers and items of clothing — both of which are commonly used tests in artificial intelligence studies. To do that, they placed images in front of a terahertz light source and let the device “see” those images through optical diffraction.

They also trained the device to act as a lens that projects the image of an object placed in front of the optical network to the other side of it — much like how a typical camera lens works, but using artificial intelligence instead of physics.

Because its components can be created by a 3D printer, the artificial neural network can be made with larger and additional layers, resulting in a device with hundreds of millions of artificial neurons. Those bigger devices could identify many more objects at the same time or perform more complex data analysis. And the components can be made inexpensively — the device created by the UCLA team could be reproduced for less than $50.

While the study used light in the terahertz frequencies, Ozcan said it would also be possible to create neural networks that use visible, infrared or other frequencies of light. A network could also be made using lithography or other printing techniques, he said.


The study’s other authors, all from UCLA Samueli, are postdoctoral scholars Xing Lin, Yair Rivenson, and Nezih Yardimci; graduate students Muhammed Veli and Yi Luo; and Mona Jarrahi, UCLA professor of electrical and computer engineering.

The research was supported by the National Science Foundation and the Howard Hughes Medical Institute. Ozcan also has UCLA faculty appointments in bioengineering and in surgery at the David Geffen School of Medicine at UCLA. He is the associate director of the UCLA California NanoSystems Institute and an HHMI professor.

Facebook asks banks for YOUR account details

Facebook has not had a good year. First there was the Cambridge Analytica scandal, and then its share price fell like a stone dropped from a skyscraper. And yet, just a few days ago The Wall Street Journal reported that the social media megalith is asking major US banks to share detailed financial information about how customers spend their money, apparently to increase user engagement. What could possibly go wrong? That is what we like to say when we can see all kinds of catastrophes lurking in something made to look simple and innocuous.

Hand over your data!

The banks it has approached are well known to all, even to those of us who don’t live in the United States: Wells Fargo, JP Morgan Chase and Citigroup. According to the WSJ’s interviews with banking insiders, Facebook has asked them to hand over data, including “everything from customers’ account balances to their credit card transactions.”

What’s in it for the banks?

Why might the banks agree to collaborate? The answer lies in mobile commerce, where apps like PayPal and Venmo dominate the scene. The banks would like a slice of this action, and Facebook could help them get it. The article reports that Facebook is offering the banks “a presence on its Messenger app”, which has around 1.3 billion users. Messenger users can already send and receive money via the app, but at the moment, if they want to connect Messenger with their bank account, they have to opt in. They can also use the app to get in direct contact with Facebook’s credit card partners. The suggestion is that Messenger could offer the same kind of direct contact with the banks.

Trapped by Facebook’s Messenger app

It’s easy to see the benefit to the banks and to Facebook, which is hoping that such a service will mean users conduct all their financial transactions through Messenger. Apparently, the fact that many users leave the app to check their account balances on their bank’s online service annoys Zuckerberg & Co, who would prefer that users never wander off. Facebook also promises that it would never use customers’ financial data to improve its ad targeting. There are probably a lot of people reading that and thinking, “If you believe that, you’ll believe anything.”

So far, the banks have declined Facebook’s offer, citing customer privacy as a concern. And they are right to be concerned, because Facebook’s track record on the use of customer data is covered in mud, and the mud has stuck.

Knowing what you know about Facebook, how would you feel about your bank handing over your data to such a company?