What You Need to Know About AI, written by John Cowles, Senior Director of Engineering and Technology at Analog Devices, on July 25, 2022
This is a guest blog; NSTXL takes no ownership of the ideas presented. It is intended to stimulate conversation and discussion within the community.
The Basics of Artificial Intelligence & Machine Learning
As mentioned in the previous installment of this series, AI, Fact or Fiction, technologists revel in coming up with new terms to describe their worlds. Machine Learning researchers are no different. While the idea of artificial intelligence in computers dates back to Alan Turing in the 1940s, most of the terminology is quite recent and still growing. Anyone wishing to read about AI/ML or engage more deeply needs to get a basic vocabulary under their belt to be taken seriously. In the next several installments, we will review the main types of machine learning and some specific algorithms in each. Behind each is a heavy dose of statistics, linear algebra, and calculus. Fortunately, in most circumstances, we do not need to go that deep under the hood to properly apply ML techniques.
Recall that AI and ML are essentially about extracting patterns from data. The patterns are codified into a computer model, which can then predict future outcomes or identify correlations as new data arrives. The model consists of a suitable Algorithm driven by a set of coefficients, known as Weights, that are fine-tuned as the model learns. Like all living beings, we develop what we call intuition through experience: the more examples we encounter, and the more diverse they are, the better we are at finding patterns, anticipating events, and handling new situations. The same is true for ML.
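To make the "Algorithm driven by Weights" idea concrete, here is a minimal sketch (the function name, feature values, and weights are illustrative, not from the post): the same algorithm, a weighted sum, behaves completely differently depending on the Weight values it has learned.

```python
# A linear model: the "Algorithm" is a weighted sum of Features plus a bias;
# the "Weights" are the numbers the learning process would tune.

def predict(features, weights):
    """First weight is a bias term; the rest multiply the features."""
    bias, coeffs = weights[0], weights[1:]
    return bias + sum(w * x for w, x in zip(coeffs, features))

# Two different sets of Weights give two different "intuitions"
# from the exact same algorithm.
weights_a = [1.0, 2.0, 3.0]
weights_b = [0.0, 1.0, 1.0]
print(predict([1.0, 1.0], weights_a))  # 1 + 2 + 3 = 6.0
print(predict([1.0, 1.0], weights_b))  # 0 + 1 + 1 = 2.0
```

Training, covered next, is simply the process that moves the Weights from arbitrary starting values to values that fit the data.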
The data sets that feed the model are a critical part of constructing machine learning algorithms. They consist of large numbers of examples, each having one or more Features of interest. Having access to diverse and representative data sets can be one of the major challenges for those implementing ML models. Skewed training data will give poor predictions, or worse, lead to strongly biased conclusions; again, not too different from us.
You might ask how training or learning actually happens from a computational perspective. In simple terms, the ML engineer must define a mathematical function, known as a Cost Function, that represents how well the model extracts information from the data. Learning equates to systematically minimizing the Cost Function through iteration by adjusting the model Weights. The size of data sets and the amount of computation needed to solve real problems were the main factors that kept ML from being widely adopted until computer hardware, memory, and now cloud computing caught up with the math and theory.
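The iterative minimization described above can be sketched in a few lines. This is an assumed toy example, not code from the post: a single Weight w is adjusted by gradient descent so that predictions w * x match the labels, driving a mean-squared-error Cost Function toward zero.

```python
# Toy data where the true relationship is y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

def cost(w):
    """Cost Function: mean squared error between predictions w*x and labels y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = 0.0                # the model starts out "clueless"
learning_rate = 0.01
for _ in range(200):
    # Gradient of the cost with respect to w, derived analytically.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad   # step downhill on the cost surface

print(round(w, 3))  # converges toward 2.0
```

Each pass nudges w in the direction that lowers the cost; after enough iterations the Weight settles at the value that best explains the data.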
Types of Machine Learning
ML algorithms are classified into three broad categories, depending on the type of information they are expected to extract. The first class is called Supervised Learning, where a model is first trained with Labeled data sets. The Labels are the answers that the model must learn to predict. Imagine you want to predict the cost of a house based on Features such as the number of bedrooms, interior color, yard size, and proximity to shopping.
The simplest algorithm, known as Regression, would first be trained with many past examples of houses that include the Features AND a Label with the actual prices they sold for. Much like our brains, the model is clueless at first. The Cost Function representing the error between the actual prices and the model's predictions is minimized step by step by adjusting the model Weights. Once the model is trained and verified, any new house with its Features (but no Label anymore) can be fed to the model for a sale price prediction.
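The Regression workflow above can be sketched end to end. The bedroom counts and prices here are made-up numbers for illustration, and for brevity this sketch uses the closed-form least-squares fit for a single Feature rather than iterative training; the learned slope and intercept play the role of the Weights.

```python
# Labeled training data: Feature (bedrooms) and Label (sale price).
bedrooms = [2, 3, 3, 4, 5]
prices = [200_000, 250_000, 260_000, 320_000, 370_000]

n = len(bedrooms)
mean_x = sum(bedrooms) / n
mean_y = sum(prices) / n

# Least-squares slope and intercept -- these are the learned Weights.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(bedrooms, prices))
         / sum((x - mean_x) ** 2 for x in bedrooms))
intercept = mean_y - slope * mean_x

# A new house arrives with its Feature but no Label:
new_house_bedrooms = 4
predicted_price = intercept + slope * new_house_bedrooms
print(f"Predicted price: ${predicted_price:,.0f}")
```

With more Features (yard size, location, and so on), the same idea extends to one Weight per Feature, which is where the iterative minimization described earlier becomes necessary.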
Another common Supervised Learning algorithm is Classification. Instead of predicting values like price, the model predicts a category from a finite set of options: Is the house modern or not? Is this email spam? Is this photo of a dog, a cat, or an octopus? Again, training data is needed with Features and Class Labels so the model can learn the Weights and become capable of classifying new, unlabeled data.
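A hand-rolled sketch of the spam example follows. The Features (count of the word "free", number of exclamation marks) and the training emails are invented for illustration; it uses a simple nearest-neighbor rule in place of a trained model, since the point is the workflow: labeled examples in, Class Label out for new data.

```python
# Labeled training data: (Features, Class Label).
training = [
    ((5, 8), "spam"),
    ((4, 6), "spam"),
    ((0, 1), "ham"),
    ((1, 0), "ham"),
]

def classify(features):
    """Return the Class Label of the closest training example."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(training, key=lambda ex: sq_dist(ex[0], features))[1]

# New, unlabeled emails described only by their Features:
print(classify((6, 7)))  # near the spam examples -> "spam"
print(classify((0, 0)))  # near the ham examples  -> "ham"
```

Production classifiers use far richer Features and learned Weights, but the contract is the same: Features in, one category from a finite set out.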
In Supervised Learning, labeled training data teaches the algorithm how the Features are related to the prediction. Many classic algorithms that have been used for years are actually primitive forms of Supervised Learning. But what happens if there are no labels on the data? Can ML learn without training? This is the realm of Unsupervised Learning, which we will review in the next post. We are starting down the path where machines learn by themselves!