Machine Learning and AI in 2030

Machine Learning and Artificial Intelligence are rapidly moving up a growth S curve similar to those of previous major information technologies. Twenty years ago, the Internet/Mobile technology curve was in the same S curve position as ML/AI is today. The internet was growing in use, but nowhere near the penetration we see today. Mobile devices existed, but with small, black and white displays and no internet access ... primitive compared to today's large, touch screen, internet connected computers in your hand. ML/AI today is likely as primitive compared to what it will be in 2030 as 1998 Internet/Mobile technology is compared to its present state. By one estimate, AI will add 16 trillion dollars to the world's economy by 2030. And the benefits of ML/AI discussed below will have a networked, multiplicative impact as they reinforce one another.

Developments and Benefits

It's impossible to predict exactly what ML/AI will look like over a decade from now, but it is possible to form a rough idea of what shape key developments will take. Here are a few...

Information Access

ML/AI is already used extensively for information search. Steady progress is being made in understanding the contextual nuances of search queries. This trend will continue, and by 2030 we should be able to find just the right information for almost any search with ever-increasing precision.

Autonomous Vehicles

Autonomous vehicle progress has been steady and has achieved significant milestones. The technology has been demonstrated to work successfully at a fundamental level. There remain a number of barriers to widespread adoption. However, the benefits and incentives to overcome these barriers are significant. By 2030 we should see a significant number of autonomous vehicles on the road.

Healthcare

Many countries, including the United States, have aging populations. This will put significant stress on healthcare over the coming decade. ML/AI holds the possibility of filling the gap between available resources and healthcare demand in 2030. According to one set of experts, we'll trust AI more than doctors to diagnose disease. This will free doctors to spend more time on the things technology can't yet do.

Robotics and Automation

McKinsey predicts that by 2030, "60 percent of occupations have at least 30 percent of constituent work activities that could be automated." New jobs will also be created, but robotics and automation will continue to create significant shifts in how work is performed.

Education

ML/AI is disrupting the traditional model for successful education. A top futurist predicts the largest internet company of 2030 will be an online school and students will learn from robot teachers over the internet. ML/AI in education has the potential to create individually customized experiences that will enhance and speed the learning process.

Retail Shopping

Retail shopping is already being dramatically affected by ML/AI. The e-commerce share of total retail sales in the U.S. is rising rapidly. Customized online shopping experiences are growing in sophistication. Retail in 2030 may look very different than it does today and include interactive dressing room mirrors and a more on-demand, at-home shopping experience.

Professional Services

ML/AI can digest information at a speed and scope that already exceed human capabilities. Professional service providers in 2030 will use ML/AI to provide the analyzed and summarized information needed for decision making. This will dramatically speed service delivery and lower service costs.

Financial Services

Financial services rely on collecting, storing and analyzing vast amounts of data. ML/AI is already replacing workers who perform many of the tasks related to these activities. One estimate is that up to 230,000 employees in capital markets will be replaced by ML/AI as we approach 2030. This will lower costs and improve delivery of services, but will also require significant staffing shifts in the industry.

Agriculture

Advances in robotics and sensing technologies are radically modifying agricultural practices. New ML/AI approaches include automated harvesting, pest control, animal tracking, and soil conservation. By 2030, we should see significant increases in crop yields at lowered costs.

Challenges

ML/AI does pose challenges as it spreads further into our lives and businesses. Here are a couple of examples...

Changing Jobs and Learning New Skills

As mentioned above, McKinsey predicts that by 2030, "60 percent of occupations have at least 30 percent of constituent work activities that could be automated." Overall, this would mean that roughly 20% of all work could be automated (60% of occupations × 30% of their activities ≈ 18% of all work activities). New jobs will be created, with many of those requiring higher levels of education, training or skill.

New Legal Frameworks

As more work is performed by ML/AI, legal questions of responsibility and liability will arise. One example is autonomous car liability. Autonomous vehicles are expected to lower deaths caused by accidents. The Atlantic reports that automated cars could save up to 30,000 lives per year in the United States. But how responsibility will be assigned for the deaths that do occur remains an open question.

An Example Individual Scenario

It can be difficult to imagine a total picture of what ML/AI will mean for our lives in 2030. One way to grasp this is to picture use cases that demonstrate its impact. Here's one example...

John is sitting in his study at home when he receives a notification on his smartphone. It tells him that the biosensor embedded in his arm has detected a slight irregularity in his heartbeat. John hasn't noticed any physical pain or abnormality, but he clicks the notice and sees the display of a heart rhythm pattern with annotations showing him where there might be an issue. It assures him that it's nothing immediately life threatening, but that he should consult a doctor. He's shown the location of the nearest clinic and asked if he'd like an appointment to be made along with arrangements for a ride. He clicks yes and immediately sees that his ride is on the way and will arrive in 5 minutes.

John grabs his jacket and goes outside to wait for the car. In a few minutes, an autonomous vehicle pulls up, he gets in, and a friendly voice asks him if he's John and going to the AbleWay clinic. John says yes, sits back and turns on his phone to read more information he's been sent about his symptoms. He arrives shortly at the clinic, is welcomed by name and escorted immediately into an examination room. A couple of minutes later, Dr. Able enters carrying a tablet which he uses to show John a real-time display of his heart rhythm. Dr. Able explains what could be the cause and that the condition is something they should watch carefully to see if it continues. He prescribes a medication that should help correct the arrhythmia and tells John that it will be delivered to his home by the end of the day.

John shakes Dr. Able's hand and walks out to the reception desk, where he's told that the clinic has his insurance information and the car to take him home is waiting outside. John enters the car and starts his trip home feeling relieved that he knows more about his condition and is taking steps to deal with it.

Using Stochastic Processes to Help Humanize Artificial Intelligence

In developing examples for my book on HTML5 Canvas (HTML5 Canvas For Dummies), I experimented with using stochastic processes to improve the realism of animated displays. Early versions of the displays appeared rigid and artificial. Adding stochastic (random) variations brought the displays closer to real life and made them much more fun to watch.
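To illustrate the idea, here's a minimal sketch (in Python rather than the book's Canvas/JavaScript, with made-up parameter values) of how adding small random variations to an otherwise fixed motion path makes the motion look less mechanical:

```python
import math
import random

def rigid_path(steps):
    """A perfectly regular path: points along a sine wave."""
    return [(t, math.sin(t / 10.0)) for t in range(steps)]

def humanized_path(steps, jitter=0.05):
    """The same path with small Gaussian perturbations added,
    so the motion wanders slightly the way natural motion does."""
    return [(t, math.sin(t / 10.0) + random.gauss(0.0, jitter))
            for t in range(steps)]

if __name__ == "__main__":
    for (t, rigid), (_, human) in zip(rigid_path(5), humanized_path(5)):
        print(f"t={t}  rigid={rigid:+.3f}  humanized={human:+.3f}")
```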

Stochastic processes are already important to AI. Stochastic Gradient Descent combined with Backpropagation is used to iteratively adjust the weights applied to data passed between neural network nodes in order to minimize the error between the output of the neural network and the correct/true result.
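As a rough sketch of that training loop (a single sigmoid neuron with invented toy data, not production training code), each SGD step picks a random example, computes the error, backpropagates its gradient, and nudges the weight and bias downhill:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training pairs: input -> desired output (assumed data).
data = [(0.5, 1.0), (-1.0, 0.0), (2.0, 1.0)]
w, b, lr = 0.1, 0.0, 0.5  # weight, bias, learning rate

for step in range(1000):
    x, y = random.choice(data)        # "stochastic": one random sample
    out = sigmoid(w * x + b)          # forward pass
    # Backpropagation: gradient of the squared error through the sigmoid.
    grad = 2 * (out - y) * out * (1 - out)
    w -= lr * grad * x                # step the weight down the gradient
    b -= lr * grad                    # step the bias down the gradient

print(f"trained weight={w:.3f}, bias={b:.3f}")
```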

Recently, I've explored whether stochastic techniques can be used to help humanize AI. Humans are not robots. We don't mindlessly pursue objectives without variation from a given path. A more human-like AI would certainly make the man-machine interface more pleasant to deal with than a purely robotic one. There are, of course, AI applications where we don't want these kinds of random variations taking place ... in autonomous vehicles, for example.

You can experiment with one of my displays that employs stochastic processes ... click here to see it in action - once it's started, click control to see the variables you can modify using your keyboard.

Digital Computing + Machine Learning = A Perfect Match

Digital Computing has been with us for over 70 years. It's a deterministic technology using stored-program software designed to produce accurate, precise results. For example, software for calculating your paycheck will give you results that are correct down to the penny, just what you want!

Machine Learning technology is different ... it works in the domain of probabilities. For example, a machine learning based self-driving car makes many probability calculations every second ... such as the probability that a person approaching an intersection will stop and not cross in front of the car.

Digital Computing is able to perform some probabilistic calculations, but these are limited compared to those that can be performed by Machine Learning. Machine Learning is, conversely, limited in the deterministic calculations it can perform compared to the capabilities of Digital Computing.
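A toy contrast makes the distinction concrete (the numbers and function names here are hypothetical): the payroll calculation must come out exactly right every time, while the ML side can only report how likely its answer is and act on a threshold:

```python
# Deterministic: exact arithmetic, the same answer every run.
def net_pay(gross_cents: int, tax_rate_percent: int) -> int:
    """Paycheck math in integer cents -- correct down to the penny."""
    return gross_cents - (gross_cents * tax_rate_percent) // 100

# Probabilistic: a model score, interpreted as a likelihood.
def pedestrian_will_stop(model_score: float) -> bool:
    """A (made-up) ML output: the probability the pedestrian stops.
    The system acts on a confidence threshold, never on certainty."""
    return model_score >= 0.95

print(net_pay(250_000, 20))        # always exactly 200000 cents
print(pedestrian_will_stop(0.73))  # False: not confident enough to proceed
```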

So it's pretty obvious: the combination of Digital Computing and Machine Learning is a perfect match for interacting with the world we live in, as illustrated in the graphic below:

The combination of these two technologies will give us a huge boost in overall computing accuracy and cost effectiveness. Much of what we deal with on a daily basis is probabilistic in nature, and this dimension can now be effectively addressed with Machine Learning, as in virtual assistants that are able to hear us speak, understand our words and give us answers to our questions.

And these two technologies will remain wedded to each other, as Machine Learning runs on a Digital Computing platform and needs Digital Computing to perform most practical tasks. Take our self-driving car example. That system needs access to precise road maps and feedback from car systems such as the engine and brakes. It's the combination of deterministic and probabilistic computing that creates the complete self-driving system that we'll end up trusting to get us safely home. 

The Rise of the Machine Learning and Artificial Intelligence S Curve

One of the hot technology topics of discussion lately surrounds the question of when the Machine Learning/Artificial Intelligence (ML/AI) 'singularity' will occur ... that is, when machine intelligence will evolve to equal human intelligence. Opinions run over a long time frame ... from as soon as 2029 (Ray Kurzweil) to around 2040-2050 (average of experts) to many decades from now.

Answering this question is linked to how rapidly one believes the ML/AI technology lifecycle S curve will rise. We do seem to have ML/AI S curve liftoff, as recent fundamental breakthrough developments in artificial neural networks, graphics processing units and other technologies have moved ML/AI from the laboratory to the field.

One perspective on the growth of ML/AI can be had by comparing ML/AI to the growth curves of previous major transformational information technology developments:

  • Mainframe & Centralized Computing
  • Personal & Distributed Computing
  • Internet & Mobile Computing

The chart below shows how the Machine Learning & Artificial Intelligence curve would look if it grew at the same rate as these previous developments:

The result is an S curve that grows at a rate that would place the singularity at the earlier end of the estimates. These S curves seem to share some characteristics:

  • They're spread out by about 20 years.
  • The rapid rise of the S curve takes about 20 years.
  • At the early part of the curve, there's skepticism that the technology will achieve rapid growth. You can see a timeline of machine learning here.
  • At the top of the S curve, the technology is viewed as a must-have for corporate survival.
  • As one S curve peaks, another begins its entry into the rapid rise phase.
  • Companies that are late in recognizing the emergence of a new major transformational technology often pay a high price. Major transformational technologies outperform predecessor technologies by orders of magnitude, making it difficult, if not impossible, for companies that are late in adopting the new technology to compete with early adopters.

Is it possible there's a hidden law of major transformational technology lifecycle growth? That is, once liftoff is achieved, do market forces pour into the technology and push it rapidly up the S curve over a period of two decades? Personally, I think this is likely the case, and that the ML/AI curve will be no different from its predecessors. Time will tell the tale.

The Layers of Technology in Machine Learning and Artificial Intelligence

Machine Learning (ML) and Artificial Intelligence (AI) are changing the landscape of computing around the world, allowing us to do things with computers that were previously thought to be nearly impossible. ML/AI systems are now able to hear, speak, see, touch and interact with the environment and people around them.

In this article, I'll treat ML and AI together as a single topic. ML is the subset of AI in which computers learn from data, but for this discussion, the two can be thought of as one.

A high level understanding of the layers of technology in ML/AI can help sort through the many options for developing and implementing an ML/AI system. ML/AI technology can be viewed as a three level hierarchy:

Mathematics > Models > Applications

Each of these layers provides an essential set of elements needed for a successful ML/AI system. Let's start with the mathematics base and build from there...

Mathematics

Mathematics, which dates back to 3000 BC and basic arithmetic, is a field of study that uses formulas (sequences of symbols) to represent ideas and the real world. Sounds a bit abstract, right? It is, but think about it ... ML/AI by its nature is doing just that inside a computing device - representing ideas and the real world. So mathematics is naturally and ideally suited to the pursuits of ML/AI.

ML/AI derives its tremendous power from the use of mathematics to, among other things, analyze probabilistic situations and outcomes. For example, an object recognition model might return a probability of 0.73 that a given photo contains the image of a cat. When we humans see a cat, we're usually pretty sure it's a cat. However, the combination of our senses and brains has done the complex mathematical-like analysis that produces that conclusion. Mathematics represents the calculations, estimations and processes needed to develop successful ML/AI models.

It's not necessary to understand all the math involved in an ML/AI system if you're using commercially supplied APIs or building on existing open source code. However, having some level of understanding of the underlying math can often be very useful and sometimes essential. One example, Stochastic Gradient Descent (SGD), is a mathematical method used to find a minimum or maximum of a function by iteration. It's used in a number of ML/AI models to iteratively improve the accuracy of output functions such as identifying objects in an input image. At a high level, the function that SGD minimizes looks like this:

$$Q(w) = \frac{1}{n} \sum_{i=1}^{n} Q_i(w)$$

The components of this equation are:

  • Q( ): A function whose value is to be maximized or minimized
  • n: The number of times the function Q is recalculated
  • 1/n: Division by n, used to find the average of the recalculated values of the Q function
  • i: A number indicating the individual version of the Q function
  • w: A parameter that's adjusted to find the minimal or maximal values of the function Q
  • Σ (summation): Adds up all the values of the individual Q function calculations
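To make the formula concrete, here's a small sketch (with made-up data points) that treats each Q_i(w) as the squared error on one example and lets SGD pick a random i at each step rather than averaging over all n:

```python
import random

# Made-up (x, y) pairs; the underlying relationship is roughly y = 3x.
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2), (4.0, 11.8)]

def Q_i(w, i):
    """Error of the i-th example: Q_i(w) = (x_i * w - y_i)^2."""
    x, y = data[i]
    return (x * w - y) ** 2

def Q(w):
    """The full objective: the average of all the individual Q_i values."""
    return sum(Q_i(w, i) for i in range(len(data))) / len(data)

w, lr = 0.0, 0.01
for step in range(2000):
    i = random.randrange(len(data))   # the "stochastic" part: one random i
    x, y = data[i]
    grad = 2 * x * (x * w - y)        # derivative of (x*w - y)^2 w.r.t. w
    w -= lr * grad                    # step downhill on that one example

print(f"w = {w:.3f}, Q(w) = {Q(w):.4f}")   # w should land near 3
```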

Below is a list of some typical mathematical concepts and functions used in ML/AI. Wikipedia is a good source of articles to start delving into these topics:

Bayesian Probability and Statistics - Calculus - Classification - Cluster Analysis - Convolution - Deviation Analysis - Dimensional Analysis - Eigenvalues, Eigenvectors - Error Analysis, Accuracy, Precision, Sensitivity, Specificity - Functional Analysis, Activation Functions, Sigmoid Function, Rectified Linear Unit - Geometry, Geometric Transformations - Gradients, Stochastic Gradient Descent, Gradient Boosting - Graph Theory - Hyperparameter Optimization - Information Theory, Entropy, Cross Entropy - K-means Clustering - Linear Algebra - Logistic Regression - Loss/Cost Functions - Markov Chains - Mathematical Constants - Matrix Mathematics - Model Fitting, Underfitting, Overfitting, Regularization - Monte Carlo Algorithms - Pattern Recognition - Probability Theory - Regression Analysis, Linear, Non-Linear, Softmax - Sampling - Statistical Analysis, Bias, Correlation, Hypothesis Testing, Inference, Validation, Cross Validation - Time Series Analysis - Variation Analysis, Coefficient of Determination - Vector Spaces, Vector, Algebra, Scalars - Weights, Synaptic Weights

Models

Models are the embodiment in computer code of the mathematical representations used to perform ML/AI functions. In our example of a computer recognizing a cat in a photo, the ML/AI model represents the layers of processing needed to differentiate the image of a cat from all other possibilities.

Below is a conceptual model of an Artificial Neural Network (ANN), one of the types of ML/AI mathematical models. Data (shown as lines) is passed in a forward, left to right direction between processing nodes (shown as circles). Numerical weights (w) are applied to individual data flows and biases (b) are applied to nodes in order to shape the output, such as the identification of a cat in an input image. Mathematical methods such as Stochastic Gradient Descent (discussed above) are used to adjust the weights and biases as data is repeatedly passed through the ANN; a small code sketch after the legend below shows the forward pass.

In this diagram, the shapes and letters represent:

  • i: input layer node
  • h: hidden layer node
  • o: output layer node
  • w: weights applied to data going across layers
  • b: biases applied to node values
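Here's a minimal forward pass for a network with that i → h → o shape (2 inputs, 2 hidden nodes, 1 output; all the weights, biases, and inputs are made-up values):

```python
import math

def sigmoid(z):
    """Common activation function squashing a node's value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Assumed toy parameters: per-node weight lists (w) and biases (b).
w_ih = [[0.4, -0.6], [0.8, 0.2]]   # input layer -> hidden layer
b_h  = [0.1, -0.3]                 # biases on the hidden nodes
w_ho = [0.7, -0.5]                 # hidden layer -> output node
b_o  = 0.2                         # bias on the output node

def forward(inputs):
    # Each hidden node sums its weighted inputs, adds its bias, activates.
    hidden = [sigmoid(sum(wi * x for wi, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_ih, b_h)]
    # The output node does the same over the hidden node values.
    return sigmoid(sum(wo * h for wo, h in zip(w_ho, hidden)) + b_o)

print(f"output probability: {forward([0.9, 0.1]):.3f}")
```

Training would then adjust w_ih, w_ho, b_h and b_o using SGD, exactly as described above.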

Below is a list of some of the mathematical models used in ML/AI. Wikipedia is a good source of basic information about these models:

Artificial Neural Networks - Association Rule Learning - Bayesian Networks - Decision Tree Learning - Deep Learning - Ensemble Learning - Hierarchical Clustering - Learning Classifier Systems - Learning to Rank - Long Short-Term Memory Neural Networks - Nearest Neighbors Algorithms - Recurrent Neural Networks - Reinforcement Learning - Sequence-to-Sequence Neural Networks - Similarity Learning - Sparse Dictionary Learning - Stochastic Neural Networks - Support Vector Machines - Unsupervised Learning

Applications

ML/AI applications use mathematical models, such as those discussed above, to perform meaningful tasks and produce meaningful results. Raw ML/AI results from mathematical models can be very interesting, but by themselves offer little utility. ML/AI applications provide that utility.

As an example, let's say we wanted to use our cat detecting Artificial Neural Network to let users of our smartphone app take a photo of a cat and determine what breed it belongs to. Our development team would need to (a sketch of the serving side follows the list):

  • Collect cat images from the internet
  • Sort the images into known breeds
  • Use the grouped images to train the ANN to recognize different breeds of cats
  • Test the trained ANN on images of cats of unknown breeds
  • Test the trained ANN on recognizing cats in images of many different types
  • Decide on the minimum acceptable probability from the ANN for determining a cat is really a member of the indicated breed
  • Maintain a table of the minimum acceptable probability for each breed of cat
  • Develop the user interface functions that allow the app user to photograph their cat and get the determination of which breed the application thinks it belongs to
  • Develop the server code to host the ANN and application software
  • Develop the Application Programming Interfaces (APIs) to connect client applications with the server code
  • Track ongoing performance results
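On the serving side, the per-breed threshold table and the classification call might look roughly like this sketch (all names, breeds, and numbers are hypothetical, and `trained_ann` stands in for whatever model the team actually trained):

```python
# Hypothetical per-breed minimum acceptable probabilities,
# tuned during the testing steps above.
MIN_PROBABILITY = {"siamese": 0.80, "persian": 0.75, "maine coon": 0.85}

def classify_breed(trained_ann, photo_bytes):
    """Run the ANN on a photo and apply the breed threshold table."""
    scores = trained_ann.predict(photo_bytes)   # e.g. {"siamese": 0.91, ...}
    breed, prob = max(scores.items(), key=lambda kv: kv[1])
    if prob >= MIN_PROBABILITY.get(breed, 0.90):
        return {"breed": breed, "confidence": prob}
    return {"breed": "unknown", "confidence": prob}

# A stand-in "model" just to demonstrate the flow end to end.
class FakeANN:
    def predict(self, photo_bytes):
        return {"siamese": 0.91, "persian": 0.05, "maine coon": 0.04}

print(classify_breed(FakeANN(), b"photo bytes"))
```

The smartphone app would call this through the API layer, and its results would feed the ongoing performance tracking.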

ML/AI applications are appearing in every imaginable area of computing. If you do an internet search on almost any topic and include the term 'machine learning', you'll likely find results. Below are just a few examples of areas of active ML/AI applications:

Biometrics - DNA Classification - Computer Vision - Fraud Detection - Marketing - Medical Diagnosis - Economics - Natural Language Processing - Language Translation - Online Advertising - Search Engines - Handwriting Recognition - Speaker Recognition - Speech Recognition - Financial Market Analysis and Trading - Customer Service - Systems Monitoring - Recommender Systems - Self-driving Vehicles - Robotics - Cybersecurity - Legal Research - Criminal Investigations - Security Screening - Mapping - Healthcare - Face Detection and Recognition - Object Recognition - Weather Forecasting - Image Processing