
How Deep Learning Will Change The Way We Interact with Technology

Even though heat and sound are both forms of energy, when you were a kid, you probably didn’t need to be told not to speak in thermal convection. And each time your children come across a stray animal, they likely don’t have to self-consciously rehearse a subroutine of zoological attributes to decide whether it’s a cat or a dog. Computers, in contrast, need step-by-step handholding—in the form of deterministic algorithms—to render even the most basic of judgments. Despite decades of unbroken gains in speed and processing capacity, machines can’t do what the average toddler does without even trying. That is—until now.

Over the last half-dozen years, deep learning, a branch of artificial intelligence inspired by the structure of the human brain, has made enormous strides in giving machines the ability to intuit the physical world. Facebook's AI lab has built a deep learning system capable of answering simple questions it had never previously been exposed to. Amazon's Echo smart speaker relies on deep learning to understand speech, and Microsoft uses the technology to improve voice search on Windows Mobile and Bing.

All the big tech companies have been quietly deploying deep learning to improve their products and services, and none has invested more than Google. It has "bet the company" on AI, says the New York Times, committing vast resources and scooping up many of the leading researchers in the field. And its efforts have borne fruit. A few years ago, a Google deep learning network was shown 10 million unlabeled images from YouTube and proved to be nearly twice as accurate at identifying the objects within them (cats, human faces, flowers, various species of fish, and thousands of others) as any previous method. When Google deployed deep learning on its Android voice search, errors dropped by 25% overnight. And in early 2016, Google's deep learning system AlphaGo defeated one of the world's best players of Go, among the most complex board games ever devised.

Entirely new business lines and markets will spring up, which will, in turn, give rise to still more innovation. Deep learning systems will become easier to use and more widely available. And I predict that deep learning will change the way people interact with technology as radically as operating systems transformed ordinary people’s access to computers.


Deep Learning

Historically, computers performed tasks by being programmed with deterministic algorithms, which detailed every step that had to be taken. This worked well in many situations, from performing elaborate calculations to defeating chess grandmasters. But it hasn’t worked as well in situations where providing an explicit algorithm wasn’t possible—such as recognizing faces or emotions, or answering novel questions.

Trying to approach those challenges by hand-coding the myriad attributes of a face or a phoneme was too labor-intensive, and it left machines unable to process data that didn't fit within the explicit parameters provided by the programmers. By contrast, deep learning-based systems make sense of data for themselves, without hand-crafted rules. Loosely inspired by the human brain, these machines learn, in a real sense, from their experience. And some are now about as good at object and speech recognition as people.

So how does Deep Learning work?

In the brain, a neuron is a cell that transmits electrical or chemical information. When connected with other neurons, it forms a neural network. In machines, the neurons are virtual: basic bits of code running statistical regressions. String these virtual neurons together and you get a virtual neural network. Think of every neuron in such a network as a simple statistical model: it takes in some inputs and passes along some output.
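To make that concrete, here is a minimal sketch in Python of one such virtual neuron: a weighted sum of its inputs passed through a squashing (sigmoid) function. The specific weights, bias, and choice of activation are illustrative assumptions, not any particular system's implementation.

```python
import math

def neuron(inputs, weights, bias):
    """One virtual neuron: a weighted sum of its inputs, squashed
    through a sigmoid so the output falls between 0 and 1."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A neuron with two inputs; the weights here are purely illustrative
print(neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1))
```

Each neuron on its own is trivial; the power comes from wiring many of them together and tuning their weights, which is what training does.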

For a neural network to be useful, though, it requires training. To train a neural network, a set of virtual neurons is mapped out and each is assigned a random numerical "weight," which determines how it responds to new data (digitized objects or sounds). As in any other form of statistical or machine learning, the machine is shown the correct answers during training. So if the network doesn't accurately identify the input (doesn't see a face in an image, for example), the system adjusts the weights, that is, how much attention each neuron paid to the data, to produce the right answer. Eventually, after sufficient training, the neural network will consistently recognize the correct patterns in speech or images.
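Here is a hedged sketch of that training loop: a single sigmoid neuron nudged by gradient descent toward the correct answers on a toy labeled dataset. The data (an AND gate), the learning rate, and the update rule are invented for illustration and stand in for the vastly larger networks described above.

```python
import math
import random

def predict(inputs, weights, bias):
    """Forward pass of a single sigmoid neuron."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy labeled data: two inputs and the "correct answer" (an AND gate)
data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 0), ([1.0, 1.0], 1)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # random starting weights
bias = 0.0
lr = 0.5  # learning rate: how far to nudge the weights on each mistake

for epoch in range(2000):
    for inputs, target in data:
        out = predict(inputs, weights, bias)
        error = target - out          # how wrong was the answer?
        slope = out * (1.0 - out)     # derivative of the sigmoid
        # Adjust each weight in the direction that shrinks the error
        for i, x in enumerate(inputs):
            weights[i] += lr * error * slope * x
        bias += lr * error * slope

# After training, the outputs should sit near the correct 0/0/0/1 labels
print([round(predict(x, weights, bias), 2) for x, _ in data])
```

Real systems adjust millions of weights at once using the same principle: compare the output to the right answer, then move every weight slightly in the direction that reduces the error.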

Three developments lie behind these recent advances. First, Geoffrey Hinton and other researchers at the University of Toronto developed a breakthrough method for software neurons to teach themselves by layering their training. (Hinton now splits his time between the University of Toronto and Google.) The first layer of neurons learns to distinguish basic features, say, an edge or a contour, by being blasted with millions of data points. Once that layer can recognize these features accurately, its output is fed to the next layer, which trains itself to identify more complex features, say, a nose or an ear. That layer's output is fed to another layer, which trains itself to recognize still greater levels of abstraction, and so on, layer after layer (hence the "deep" in deep learning), until the system can reliably recognize very complex phenomena, like a human face.
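The layering idea can be sketched in a few lines: each layer's output becomes the next layer's input. The layer sizes and tanh activation below are assumptions chosen for illustration, and the weights are random and untrained, so this shows only the structure, not a working recognizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer of virtual neurons: every neuron combines the
    previous layer's outputs through its own weights."""
    return np.tanh(x @ w + b)

# Illustrative layer sizes: raw pixels -> edges -> parts -> "face or not"
sizes = [64, 32, 16, 1]
weights = [rng.normal(0, 0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(1, 64))         # a stand-in for a tiny digitized image
for w, b in zip(weights, biases):    # layer after layer: the "deep" part
    x = layer(x, w, b)
print(x)                             # the final layer's (untrained) judgment
```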

The second development responsible for recent advances in AI is the sheer amount of data now available. Rapid digitization has produced data at enormous scale, and that data is oxygen for training deep learning systems. Children can pick something up after being shown how to do it just a few times. AI-powered machines, however, need to be exposed to countless examples. Deep learning is essentially a brute-force process for teaching machines how a thing is done or what a thing is. Show a deep learning neural network 19 million pictures of cats, and probabilities emerge, improbabilities are ruled out, and the software neurons eventually figure out which statistically significant factors add up to "cat." It learns how to spot one. That's why Big Data is so important: without it, deep learning just doesn't work.

Finally, a team at Stanford led by Andrew Ng (now at Baidu) made a breakthrough when they realized that graphics processing units, or GPUs, which were invented to handle the visual processing demands of video games, could be repurposed for deep learning. Until recently, typical computer chips processed instructions serially, one at a time, but GPUs were designed for parallel computation. Using these chips to run neural networks, with their millions of connections, in parallel sped up the training of deep learning systems by several orders of magnitude, making it possible for a machine to learn in a day what had previously taken many weeks.
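The kernel of that idea can be sketched with NumPy, which here runs on the CPU but stands in for the GPU: a neural network layer boils down to a matrix multiplication, and computing it as one batched operation rather than neuron by neuron is exactly the kind of work parallel hardware accelerates. The sizes below are arbitrary illustrations.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=(200, 256))   # 200 examples at once
weights = rng.normal(size=(256, 256))  # one layer's connections

# Serial style: one neuron's output for one example at a time
start = time.perf_counter()
out_serial = np.empty((200, 256))
for i in range(200):
    for j in range(256):
        out_serial[i, j] = inputs[i] @ weights[:, j]
serial = time.perf_counter() - start

# Parallel style: one batched matrix multiply covers every neuron
# and every example; on a GPU this spreads across thousands of cores
start = time.perf_counter()
out_batched = inputs @ weights
batched = time.perf_counter() - start

print(f"serial: {serial:.3f}s  batched: {batched:.5f}s")
print(np.allclose(out_serial, out_batched))  # same numbers, same math
```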

The most advanced deep learning networks today are made up of millions of simulated neurons, with billions of connections between them, and can be trained through unsupervised learning. It is arguably the most effective practical application of artificial intelligence yet devised. For some tasks, the best deep learning systems are pattern recognizers on par with people, and the technology is moving rapidly from the research lab into industry.

Contact Six Industries Inc today to get started.

