The Rise of AI
Ever since computers were first built, their creators have dreamt of pushing the technology to its limits by giving it a mind of its own. The rise of AI that started in the early 2000s is making this dream a reality.
This mind of a machine is what we call artificial intelligence, or AI. Artificial intelligence as a discipline was founded in 1956 with the purpose of getting machines to perform tasks in a way that emulates humans; at its core, AI is a computer that is able to mimic or simulate human thought or behaviour.
The field of AI began to experience rapid innovation and growth in the early 2000s, and out of this emerged an AI ecosystem with subsets known as machine learning and deep learning. Machine learning algorithms are engineered to learn and respond by ingesting large amounts of labelled data, whereas deep learning algorithms use many-layered neural networks to learn directly from data and are designed to solve problems of even greater complexity. Deep learning has in turn birthed intelligent automation, a process found in autonomous vehicles, games, speech recognition applications and other solutions.
Artificial intelligence and intelligent automation are now transforming the human experience and will continue to do so for decades to come.
In this article, we will explore the rise of AI, machine learning and intelligent automation. We’ll also consider the pros and cons and look at where innovation in AI is likely heading.
This is part one of a two-part series that explores the rise of AI in our world. In part two, I will attempt to highlight some understated or overlooked impacts that AI may have on our world.
Artificial intelligence, machine learning and deep learning: the engine under the hood
Machine learning allows computers to solve problems on their own. Advancements in machine learning have contributed to the groundbreaking progress in AI and automation.
Technology such as natural language processing (NLP) is one of the reasons computers are able to decipher human voices; it also makes the interaction between man and machine more natural.
Natural language processing relies on machine learning to derive meaning from human language. The goal is to make sense of human text or speech in a way that can usefully influence or trigger a response, decision or action.
NLP can be found in applications such as Grammarly, which checks text for grammatical errors and accuracy. Google Translate and Microsoft Word also make use of NLP, as do voice assistants such as Apple's Siri, Amazon's Alexa, Microsoft's Cortana and Google Assistant ("OK Google").
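To make the idea concrete, here is a minimal sketch of the kind of intent detection such assistants rely on, reduced to a toy scale. The training phrases, intent labels and example queries below are invented for illustration; real assistants use far larger models and datasets, but the principle of learning to map text to an actionable intent is the same.

```python
# A toy sketch of mapping human text to an intent an application could act on.
# Training phrases, labels and queries are illustrative assumptions only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny set of example utterances and the "intent" each one expresses.
texts = [
    "what is the weather like today",
    "will it rain tomorrow",
    "play some relaxing music",
    "put on my workout playlist",
    "set an alarm for seven am",
    "wake me up at six thirty",
]
intents = ["weather", "weather", "music", "music", "alarm", "alarm"]

# Bag-of-words features plus a naive Bayes classifier: a very small-scale
# stand-in for the statistical models behind assistants like Siri or Alexa.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, intents)

print(model.predict(["play a song for me"]))     # expected: 'music'
print(model.predict(["is it going to rain"]))    # expected: 'weather'
```

Even at this scale, the classifier learns to associate words like "rain" and "play" with the right intent, which is the seed of how a phone turns a spoken question into an action.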
There are other areas of daily life and business operations where AI adoption is increasing, to the extent that it may only be a matter of time before humanoid robots possessing human attributes become scientific fact. Just ask Elon Musk about the progress of his Neuralink project.
What is the rise of AI, and where did it come from?
Humans are a progressive, ever-evolving species. This fact begs the question: is the progressive rise of AI a good or a bad thing for humanity? Before answering this, let’s start by looking at the roots of AI and how it reached the state we know it to be in today.
Sci-fi novels and movies of past decades depicted a future that is partly already here and is coming closer to reality each day. We’re now witnessing innovation occurring across the board in a range of technologies.
Furthermore, if we take a forward-thinking approach and consider that an entire generation of children is being born into a world where getting answers to questions by talking to a phone is the norm, one can only imagine the impact this exposure to technology will have on the next generation’s ability to create and innovate.
Let’s assume that, as these children grow and begin to make their contributions to the world, those that decide to further develop AI will produce solutions to drive monumental change in the world. This monumental shift has started already, and AI and automation now play a big part in how we produce, consume, contribute, communicate and find entertainment.
Our world is experiencing the rise of AI across many domains. Moreover, the transformative technologies that have converged to give us today’s incarnation of AI will only continue to drive the creation of AI solutions that either augment human ability or replace humans altogether.
Opinions and predictions vary on how AI will impact our world. However, the jury is still out on whether AI innovations will have a purely positive impact.
Who is responsible for the rise of AI?
The contemporary incarnations of intelligent automation brought about by the rise of AI have largely been made possible by the persistent and relentless work of Professor Geoffrey Hinton. Hinton is an expert in neural networks, machine learning, artificial intelligence, cognitive science and object recognition, and he joined Google when his company, DNNresearch, was acquired in 2013.
Professor Hinton's life's work has been to enable machines to learn in the same way that humans do. With a background in psychology and a healthy obsession with how the human mind works, he has combined his knowledge of computer science with models of the human brain to establish himself as a pioneer and evangelist of the capabilities of artificial intelligence.
On the path to greatness, you will usually find the remnants or foundations of predecessors who led the way. And this is no different in the case of Hinton and his accomplishments in the domain of neural networks and artificial intelligence.
Frank Rosenblatt, a founding father of AI
Hinton’s early work was based on the work of Frank Rosenblatt, an American psychologist renowned in the field of artificial intelligence.
In the late 1950s, Frank Rosenblatt developed the Perceptron, a computing system that was designed to emulate the human brain.
The Perceptron was designed around a large collection of connected neurons, an arrangement we now call a neural network. Neurons can either be biological, as found in the brain, or artificial, simulated by a computer; in both cases they consist of a large number of individual units that receive and transmit signals to one another.
Artificial neurons are essentially small computing units modelled on the way the human brain processes information: they capture data and learn from it in order to make decisions that get better over time.
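As a rough illustration, here is a minimal sketch of a single artificial neuron trained with the classic perceptron learning rule. The task (learning a logical AND), the learning rate and the number of passes over the data are illustrative choices, not details of Rosenblatt's original hardware.

```python
# A single artificial neuron: it weighs its inputs, fires if the weighted sum
# crosses a threshold, and nudges its weights whenever it gets an answer wrong.

def step(x):
    return 1 if x >= 0 else 0

# Training data: the logical AND of two inputs (an illustrative toy task).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):                      # repeat over the data a few times
    for (x1, x2), target in samples:
        prediction = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - prediction      # +1, 0 or -1
        # Perceptron learning rule: adjust weights in the direction that
        # reduces the error on this example.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

for (x1, x2), target in samples:
    print((x1, x2), "->", step(weights[0] * x1 + weights[1] * x2 + bias))
```

A single unit like this can only separate data with a straight line, a limitation that critics would soon seize upon.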
Rosenblatt's Perceptron wasn't very successful, and his theories were well ahead of their time. One of the key issues with the Perceptron was that it was very limited in what it could do; a book was even written to highlight its shortcomings.
The book, Perceptrons: An Introduction to Computational Geometry, written by Marvin Minsky and Seymour Papert, seemed to be the nail in the coffin, pushing AI into an era of stagnation.
Despite the Perceptron being deemed a failure at the time, Rosenblatt's accomplishments are now recognised as paving the way for Hinton's work and AI as we know it today. Some commentators say that Rosenblatt's work was 60 years too early and jokingly claim that he was right; it just took sixty years to prove it.
Hinton persevered in building on Rosenblatt's model of neural networks, and his first big breakthrough came in the mid-80s, when he and his collaborators showed how to train more complicated, multi-layered neural networks using backpropagation. Through this groundbreaking work, neural networks were incorporated into a self-driving car in the late 80s, and the same breakthrough was used to recognise handwriting, a solution that was even adapted for commercial use.
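Here is a minimal sketch of that idea: a tiny two-layer network trained with backpropagation. The architecture, learning rate and the XOR task are illustrative assumptions rather than a reconstruction of any historical system; the point is simply that stacking layers lets a network solve a problem a single perceptron cannot.

```python
# A tiny two-layer network trained with backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(0)

# XOR: a problem a single perceptron cannot solve, but a two-layer network can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]] for most random initialisations
```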
Despite this advance, the breakthrough proved to be a false start; Hinton's efforts stalled and eventually hit a wall. A lack of data and computing power were the biggest challenges, and this led many in computer science circles to conclude that AI based on neural networks was wishful thinking.
It wasn't until 2006, when computing power became faster and cheaper, that Hinton's work began to gain traction again. Faster computer chips and the large amounts of data produced by the internet proved to be the essential catalyst to ignite Geoffrey Hinton's neural network algorithms.
Reinforcement learning’s contribution to the rise of AI
Rich Sutton, a professor at the University of Alberta in Canada, has worked in the field of artificial intelligence since the mid-80s. His approach to AI, while still built on the model of neural networks, centres on permitting machines to learn more naturally and organically rather than being fed terabytes of labelled data.
Reinforcement learning's approach to AI involves designing machines that are able to learn from experience, analogous to the way human beings learn. For example, if you put your hand in a fire, you get burnt and know not to do it again. If you run too fast on a wet floor, slip and hurt yourself, you either avoid doing it again or take more care next time, because you are now aware of the potential dangers.
Humans, at the most basic level, try something once. If it works, we do it again and improve. If it doesn’t work, we stop doing that thing. As the saying goes, practice makes perfect. This is reinforcement learning at its core.
Some engineers have been able to create reinforcement learning algorithms that can play video games and even outperform humans. This is achieved by introducing a reward-and-punishment model: the algorithm is rewarded when the outcome of an action is good and penalised when it is bad, and over many attempts it learns to favour the behaviour that earns the most reward.
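A minimal sketch of that reward-and-punishment loop follows, using tabular Q-learning on a made-up five-square corridor in which the agent earns a reward for reaching the goal and a small penalty for every step. The environment, reward values and hyperparameters are all illustrative assumptions.

```python
# Tabular Q-learning on a toy corridor: start at square 0, goal at square 4.
import random

N_STATES = 5            # squares 0..4, goal is square 4
ACTIONS = [-1, +1]      # step left or right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally, otherwise pick the action with the best Q-value.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 10.0 if next_state == N_STATES - 1 else -1.0  # goal vs step cost

        # Q-learning update: nudge the estimate toward reward + best future value.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (
            reward + gamma * best_next - q_table[(state, action)]
        )
        state = next_state

# After training, the learned policy should be to keep stepping right (+1).
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)])
```

Swap the corridor for the frames of a video game or the board positions of Go, and the same principle operates at a vastly larger scale.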
In the case of AlphaGo, DeepMind's AI famously defeated top professional player Lee Sedol in 2016 at Go, an ancient board game that is one of the most complicated and multifaceted games humans have been able to conjure up.
A reinforcement learning algorithm is designed to play the game continually and progressively learn how to do better over time. This can mean the algorithm must play the game tens of thousands of times before it is able to recognise and adapt to what makes it better at the game.
As we continue to watch AI take a more prominent role in our lives, we have already started seeing its implementation in products and interfaces that we regularly use.
Reinforcement learning also feeds into recommendation engines such as the one on Netflix, which learns a user's viewing preferences and then recommends programmes based on the categories or genres the viewer has previously watched.
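As a toy illustration of learning preferences from feedback, here is a sketch of an epsilon-greedy bandit choosing among a handful of genres. The genres, the simulated viewer and the probabilities are invented for this example, and this is not a description of how Netflix's actual system works.

```python
# A toy epsilon-greedy "recommender" that learns which genre a simulated
# viewer prefers from whether they finish what it recommends.
import random

genres = ["drama", "comedy", "documentary", "sci-fi"]
# Hidden preference of our simulated viewer: the chance they finish a show.
true_preference = {"drama": 0.2, "comedy": 0.5, "documentary": 0.1, "sci-fi": 0.8}

plays = {g: 0 for g in genres}
wins = {g: 0 for g in genres}

def recommend(epsilon=0.1):
    # Mostly exploit the genre with the best observed completion rate,
    # but occasionally explore another one.
    if random.random() < epsilon or all(plays[g] == 0 for g in genres):
        return random.choice(genres)
    return max(genres, key=lambda g: wins[g] / plays[g] if plays[g] else 0.0)

for _ in range(2000):
    genre = recommend()
    watched = random.random() < true_preference[genre]   # simulated feedback
    plays[genre] += 1
    wins[genre] += int(watched)

# Over time the recommender should favour the genre the simulated viewer likes.
print(max(genres, key=lambda g: wins[g] / plays[g] if plays[g] else 0.0))
```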
The next phase of reinforcement learning
Professor Sutton is on a quest to enable machines to display real human-like intelligence. This push for reinforcement learning to emulate the human brain is a model that may one day lead to the singularity: the point at which AI reaches the level of human intelligence and is then able to surpass it.
Professor Sutton and others have predicted that by 2030 engineers will be able to leverage hardware powerful enough to run such algorithms, and that the software driving the AI train toward the singularity could follow by the year 2040.
For now, though, reinforcement learning is being developed in medical and healthcare settings, where plans are being devised to use AI backed by reinforcement learning to predict the probability of illness or to diagnose symptoms.
As technology continues to push the boundaries of what AI can do, we will inevitably get to a stage where transformative technologies converge to produce machines with capabilities comparable to those of humans.
There are major developments occurring in the fields of robotics and sensors, and I question whether this convergence will spark the incarnation of robots that exist amongst us in a form that appears human-like. Whatever the case, as machines and robots increase in intelligence and become more human-like, it will only be a matter of time before debates and questions are posed about robot rights.
I predict this will be a controversial topic, and I hope I live long enough to witness it. Looking forward, my own opinion is that robots are not human but a byproduct of human creativity and ingenuity. However, I've grown to understand that the world in which we live plays out in a non-linear way, so I won't even attempt to predict what it will look like in 20-30 years' time.
Creative possibilities offered by the rise of AI
Today, the convergence of transformative technologies has detonated an explosion of products and services that have been birthed by the union of AI, big data, machine learning and cloud computing. Innovations in the application of these technologies will continue to shape human life for many years to come.
We now have computers that can perform actions that were once only believed to be science fiction: text-to-speech and speech-to-text conversion, language translation, image recognition, automated intelligent chatbots, and artificial intelligence for IT operations (AIOps) solutions and tools, to name just a few among a plethora of technology innovations.
Google Translate, for example, allows you to point your phone at a magazine written in one language and have it read out to you in another.
Tesla is pioneering the quest to reach the nirvana of driverless cars, and its semi-autonomous cars are edging the company closer to that destination. Tesla's vehicles are able to drive themselves in Autopilot mode at the push of a button.
Innovation in technologies like sensors, high-definition cameras, lidar and radar generates data that is consumed by deep neural networks of the kind Professor Hinton pioneered. These neural networks build a picture of the world that allows Tesla's semi-autonomous cars not just to drive in straight lines, but also to switch lanes and park themselves.
The continual rise of AI
As AI matures, the capabilities of self-driving cars and other innovations birthed by AI, machine learning, reinforcement learning and deep learning will continue to evolve in a nonlinear fashion. This evolution will be supported by the exponential increase and ubiquity of data and computing power. Moore's law gives us an idea of what this exponential growth may look like and the time frame in which it will materialise.
In 1965, Gordon Moore predicted that the number of transistors that could fit onto a single chip would double every year; in 1975 he revised this estimate, predicting that the doubling would occur every two years.
Moore was an early pioneer of integrated circuits, and his formula gave engineers insight into how much bigger and faster computers would become over time. Although some have highlighted limitations with Moore's law, it has for the most part enabled software and hardware engineers to start working on projects long before the required computing power is available.
Moore's law also implies that as this doubling occurs, the cost of equivalent computing power roughly halves, which allows bigger and more complex projects to be attempted and the cost of solving them to fall over time. In short, the speed and capability of computers increase every couple of years, we pay less for them, and exponential growth should be expected. The growth curves in the graphs below illustrate how this prediction has manifested over the last few decades.
Here's the floating-point and integer performance for two-socket servers since 2006, which is when the public cloud really started to take off:
Below that are the trends in power efficiency coupled with transistor density:
We can see that the rate of this exponential growth is alarming yet exciting, especially when you consider that we are still very early in the phase of AI growth and adoption.
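To get a feel for what "doubling every two years" means in practice, here is a back-of-the-envelope sketch. The 1975 starting point and the starting transistor count are illustrative round numbers, not precise historical figures, and real chips have not tracked the curve exactly.

```python
# A rough illustration of Moore's 1975 formulation: a doubling every two years.
start_year = 1975
start_transistors = 10_000        # illustrative order of magnitude for a mid-70s chip

for year in range(start_year, 2026, 10):
    doublings = (year - start_year) / 2
    transistors = start_transistors * 2 ** doublings
    print(f"{year}: roughly {transistors:,.0f} transistors per chip")
```

Twenty-five doublings between 1975 and 2025 multiply the starting figure by roughly 33 million; the exact numbers matter less than the shape of the curve.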
Final thoughts
It’s hard to predict where AI and automation are going. But one thing is for certain: AI is here and on the rise.
If the innate desire within humans to improve is anything to go by, it is easy to conclude that AI will continue to have transformative impacts on our human race and the world we inhabit.
We live in a world where visionaries, proactive individuals, for-profit businesses, entrepreneurs and passionate enthusiasts all possess different motivations and are supported by the capacity to drive change.
As the individuals and organisations in these categories come to fully grasp the power at their disposal, they will understand how to harness technology and leverage AI's capabilities to make their mark on our ever-changing world.