Geoffrey Hinton – The Godfather of Modern AI and Deep Learning

Recently, Artificial Intelligence has become the hottest topic in technology. Much of the attention stems from the breakthroughs in image, language, and speech recognition brought about by deep learning.

In recent years, deep learning has produced systems that surpass human-level performance on specific tasks such as image and speech recognition. According to Microsoft, deep learning is a sub-field of machine learning inspired by the brain's structure and how neurons function.

According to Geoffrey Hinton, people need to understand that deep learning and neural networks make many things possible and easier, mostly behind the scenes. Geoffrey Hinton is among the geniuses who helped transform traditional machine learning techniques into modern deep learning. If you follow recent trends in AI, you will find quotes from the "Godfathers of AI." This article looks at one of the top pioneers of deep learning, Geoffrey Hinton.

Deep Learning versus Traditional Machine Learning

Most of deep learning's core ideas were introduced in the 80s and 90s. However, the lack of digitally available data to feed the algorithms and the low computational power of the time made deep learning impractical. As the internet went mainstream and processing power increased, global data rapidly expanded and became more accessible. Deep learning algorithms could now learn from large data sets, perform better, and surpass traditional methods.

Traditional machine learning techniques require experts to introduce the variables and set the parameters for the algorithms, which is why they dominated from the 90s onward. Deep learning algorithms, by contrast, do not need an expert to engineer features: they learn high-level features from data incrementally and automatically. As the algorithms get access to more data, they improve and can approach human-level performance on specific tasks, as the sketch below illustrates.
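To make the contrast concrete, here is a minimal sketch, assuming scikit-learn is installed. It is not from the article: the hand-crafted "expert" features (total ink, ink per row) are hypothetical choices invented for illustration, while the small neural network is handed raw pixels and learns its own features.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_raw, y = digits.data, digits.target          # raw 8x8 pixel intensities

# Traditional route: a human expert hand-crafts summary features
# and a classic model learns from those variables.
X_expert = np.column_stack([
    X_raw.sum(axis=1),                         # total ink in the image
    X_raw.reshape(-1, 8, 8).sum(axis=2),       # ink per row
])

# Split once on indices so both feature sets use the same rows.
train, test = train_test_split(np.arange(len(y)), random_state=0)

expert_model = LogisticRegression(max_iter=1000)
expert_model.fit(X_expert[train], y[train])

# Deep learning route: a small neural network gets the raw pixels
# and learns its own internal features.
learned_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                              random_state=0)
learned_model.fit(X_raw[train], y[train])

print("hand-crafted features:", expert_model.score(X_expert[test], y[test]))
print("learned features:     ", learned_model.score(X_raw[test], y[test]))
```

On this toy data the exact scores matter less than the workflow: in the first pipeline a human decides what the variables are; in the second, the network discovers them.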

But how did all this start?


How Did It Start, and What Role Did Geoffrey Hinton Play?

There are several deep learning heroes to mention, but three names come up most often: Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. The three are commonly known as the Godfathers of AI. They were also recently recognized with the $1 million Turing Award for their achievements in AI, particularly deep learning.

The concepts and techniques the three developed in the 90s and 2000s enabled tremendous breakthroughs in computer vision and speech recognition. Their work now forms the basis of AI applications like autonomous vehicles and automated medical practices.

In this article, we will focus on Hinton and how he has contributed to deep learning.

Who Is Geoffrey Hinton?


Geoffrey Hinton is a British-born Canadian cognitive psychologist and computer scientist, best known for his work on artificial neural networks. Hinton received his BA in Experimental Psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. From 2004 to 2013, Hinton was the director of the Neural Computation and Adaptive Perception program. He has also co-authored influential work in several related fields, including backpropagation, Boltzmann machines, deep learning, and, more recently, capsule neural networks.

Highlights of Hinton’s Achievements

  • Hinton is a fellow of the Royal Society of Canada and the Association for the Advancement of AI.
  • He received honorary doctorate awards from the University of Edinburgh, the University of Sussex, and the University of Sherbrooke.
  • He was awarded the 2018 Turing Award, often called the Nobel Prize of computing.
  • He received the first David E. Rumelhart Prize in 2001.
  • He received the Killam Prize for Engineering in 2001.
  • He received the IEEE James Clerk Maxwell Gold Medal in 2016.
  • He received the IJCAI Award for Research Excellence in 2005.
  • He received the NEC C&C Prize in 2016, among others.

In 2013, Google bought his neural network startup, DNNresearch, which he developed while at the University of Toronto. As of 2015, Hinton divided his time between Google, where he served as a Vice President and Engineering Fellow working with the Google Brain team, and the University of Toronto.

Geoffrey’s Contribution to Neural Networks


Hinton’s Initial Research

When writing his early research papers, Hinton faced a lot of doubt from some of his colleagues. Although most of them agreed that the idea was clever, they also thought of it as an impractical way to design computers. However, Hinton stuck to his initial vision. He was convinced that if you want to make a device more capable, you could either program it or let it learn. Looking at the human brain, learning was clearly the right way to go: as humans, we do not program our brains to function. With this concept in mind, Hinton designed neural networks to mimic how networks of neurons in the brain learn.

According to Investopedia, a neural network is a series of algorithms that endeavor to recognize underlying relationships in a data set through a process that mimics how the human brain operates. In this sense, neural networks refer to systems of neurons, either organic or artificial.
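To ground that definition, here is a minimal sketch in NumPy of what such a system of artificial neurons computes: each layer forms weighted sums of its inputs and passes them through a nonlinearity, loosely analogous to neurons firing. The layer sizes and random weights are arbitrary illustrative choices, not anything from Hinton's work.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Nonlinearity: a "neuron" outputs nothing below threshold,
    # then responds proportionally above it.
    return np.maximum(0.0, x)

x = rng.random(8)                                   # an 8-feature input
W1, b1 = rng.standard_normal((8, 4)), np.zeros(4)   # layer-1 weights/biases
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)   # layer-2 weights/biases

hidden = relu(x @ W1 + b1)                          # 4 hidden "neurons"
output = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # squash to (0, 1)
print(output)                                       # one probability-like score
```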

Hinton's whole idea was to have a device that learns the way the brain does. Although the idea was not originally his, Hinton believed that the British mathematician Alan Turing, among others, had been right to pursue it.

During the 80s, Hinton and his research team went through a rough time. Their algorithms were not performing as intended because of the scale of the data: the data sets were small, and support vector machines worked better on them. Support vector machines used supervised learning, where all the data had to be labelled.

However, this wasn't what Hinton had in mind. He believed that as computers got faster, learning algorithms would be better off with unsupervised learning, where they could learn from unlabelled data automatically. With time, such algorithms could learn more quickly from fewer examples.

Hinton’s Breakthrough in 2005


In 2005, Hinton had a mathematical breakthrough that allowed unsupervised training of deep nets. This breakthrough was revolutionary because neural networks in the 80s with many hidden layers were hard to train on complex tasks, since training mostly depended on human-labelled data and assistance.

Hinton designed a way for a learning algorithm to take raw input and learn a set of feature detectors. For example, given an image, the algorithm learns why the pixels are the way they are and then treats those features as data. It then learns another group of feature detectors that capture correlations with the first set. The successive sets of detectors form the layers of the neural network.

Now, Hinton could prove mathematically that each added layer, while not guaranteed to make the model notably better, improved a bound on how well the model (algorithm) fit the data. It is similar to how the brain's neurons connect when learning or solving a complicated task. Hinton realized that every time he added a layer of feature detectors, he got a better bound. He aimed to add layers until the model could analyze input and return a correct output. Around the same time, chip companies started producing GPUs (Graphics Processing Units), which made working on neural networks much easier and faster.
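As a heavily simplified illustration of this layer-by-layer idea, here is a sketch in NumPy. Hinton's actual breakthrough stacked restricted Boltzmann machines trained with contrastive divergence; the version below implements a one-step contrastive-divergence update and stacks two layers on toy random binary data, with all sizes, rates, and epoch counts chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    """One layer of feature detectors, trained without any labels."""

    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible-unit biases
        self.b_h = np.zeros(n_hidden)    # hidden-unit (detector) biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0, lr=0.1):
        # Positive phase: activate the feature detectors on real data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: reconstruct the data, re-activate the detectors.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Nudge weights toward data statistics, away from reconstructions.
        n = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)

# Greedy layer-wise stacking: each layer learns feature detectors for
# the layer below, then its activities become the next layer's "data".
data = (rng.random((256, 64)) < 0.3).astype(float)   # toy binary data
x = data
for rbm in [RBM(64, 32), RBM(32, 16)]:
    for _ in range(20):
        rbm.cd1_update(x)
    x = rbm.hidden_probs(x)   # features of features, one layer deeper
```

Each trained layer's hidden activities become the "data" for the next layer, which is exactly the sense in which every added layer learns feature detectors of the features below it.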

In 2007, Hinton's students started using his concept. One of his students began using the newly released GPUs to find roads in aerial images, while another used them to recognize phonemes in speech. Their projects were successful and soon surpassed the benchmark for speech recognition.

Hinton’s Deep Learning Concept Goes Mainstream


As people realized that Hinton's models were beating standard models that had taken almost three decades to develop, companies started deploying them. Hinton's students went off to large tech companies like IBM, Microsoft, and Google. Google was the first to turn the model into a production speech recognizer, and by 2012 it was incorporated into Android, which got much better at speech recognition.

Large companies like Amazon, Apple, and Tesla are currently using deep learning to innovate new products while also improving their existing products.

Hinton is a pioneer in deep learning, and his research continues to drive strides in the field. His contributions are most visible in the following areas (a minimal backpropagation sketch follows the list):

  1. Backpropagation algorithm
  2. Boltzmann Machines
  3. Distributed Representations
  4. Time-Delay Neural Nets
  5. Mixture of Experts
  6. Variational Learning
  7. Products of Experts
  8. Deep Belief Nets
A Look into Some of Hinton’s Top Interviews

Coursera: Neural Networks and Deep Learning

In this video, Dr. Hinton summarizes what Artificial Intelligence and Deep Learning are, how he got interested in deep learning and the brain, and some of his insights on the field.

In this conversational interview, Dr. Hinton briefly explains how things have evolved since the first ideas of deep learning were introduced in the 80s and 90s.

Heroes of Deep Learning: Andrew Ng and Geoffrey Hinton

In another interview, with Dr. Andrew Ng, Dr. Hinton takes a more technical approach to some core concepts of deep learning. He also walks through some of the early published papers that revolutionized the field.

Currently, Dr. Hinton is working on capsules: a theory of how humans perform visual perception using reconstruction, and how the brain routes information to the right places. The concept behind capsules is to decide dynamically where information is sent, rather than routing it to a particular area automatically.

If deep learning is to continue on its upward trajectory, new methods will need to be developed that are as foundational and revolutionary as those Geoffrey Hinton pioneered.
