5 min read
Early iterations of the AI applications we interact with most today were built on traditional machine learning models. These models rely on learning algorithms that are developed and maintained by data scientists. In other words, traditional machine learning models need human intervention to process new information and perform any new task that falls outside their initial training.
For example, Apple made Siri a feature of its iOS in 2011. This early version of Siri was trained to understand a set of highly specific statements and requests. Human intervention was required to expand Siri’s knowledge base and functionality.
However, AI capabilities have been evolving rapidly since the breakthrough success of deep neural networks in 2012. Although artificial neural networks date back decades, that breakthrough showed that deep, multilayered networks can learn rich representations of data in a way loosely inspired by how the human brain processes information.
Unlike basic machine learning models, deep learning models allow AI applications to learn how to perform new tasks that typically require human intelligence, engage in new behaviors and make decisions without direct human intervention. As a result, deep learning has enabled task automation, content generation, predictive maintenance and other capabilities across industries.
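At its core, "learning" in a deep learning model means adjusting the numeric weights of stacked layers of artificial neurons. The sketch below shows a single forward pass through a tiny two-layer network; the weights are toy values chosen for illustration (real models have millions to billions of parameters, learned from data rather than hand-picked):

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum of inputs plus a bias per neuron."""
    return [sum(w * x for w, x in zip(neuron_w, inputs)) + b
            for neuron_w, b in zip(weights, biases)]

def relu(values):
    """ReLU activation: pass positive values through, clamp negatives to zero."""
    return [max(0.0, v) for v in values]

def sigmoid(v):
    """Squash a raw score into (0, 1) so it can be read as a probability."""
    return 1.0 / (1.0 + math.exp(-v))

# Forward pass: 3 input features -> 2 hidden neurons -> 1 output score.
x = [0.5, -1.2, 3.0]
hidden = relu(dense(x,
                    weights=[[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]],
                    biases=[0.0, 0.1]))
score = sigmoid(dense(hidden, weights=[[0.6, -0.9]], biases=[0.05])[0])
print(score)
```

Training such a network means nudging those weights, layer by layer, so the output score moves closer to the correct answer for each example — which is what lets the model improve without a human rewriting its rules.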
Due to deep learning and other advancements, the field of AI remains in a constant and fast-paced state of flux. Our collective understanding of realized AI and theoretical AI continues to shift, meaning AI categories and AI terminology may differ (and overlap) from one source to the next. However, the types of AI can be largely understood by examining two encompassing categories: AI capabilities and AI functionalities.
Artificial Narrow Intelligence, also known as Weak AI (what we refer to as Narrow AI), is the only type of AI that exists today. Any other form of AI is theoretical. Narrow AI can be trained to perform a single or narrow task, often far faster and better than a human mind can.
However, it can’t perform outside of its defined task. Instead, it targets a single subset of cognitive abilities and advances within that narrow range. Siri, Amazon’s Alexa and IBM Watson® are examples of Narrow AI. Even OpenAI’s ChatGPT is considered a form of Narrow AI because it’s limited to the single task of text-based chat.
Artificial General Intelligence (AGI), also known as Strong AI, is today nothing more than a theoretical concept. AGI can use previous learnings and skills to accomplish new tasks in a different context without the need for human beings to train the underlying models. This ability allows AGI to learn and perform any intellectual task that a human being can.
Super AI is commonly referred to as artificial superintelligence and, like AGI, is strictly theoretical. If ever realized, Super AI would think, reason, learn, make judgements and possess cognitive abilities that surpass those of human beings.
Applications possessing Super AI capabilities would have evolved beyond merely understanding human sentiments and experiences; they could feel emotions, have needs and possess beliefs and desires of their own.
Under Narrow AI, the only one of the three capability-based types realized today, there are two functional AI categories:
Reactive machines are AI systems with no memory and are designed to perform a very specific task. Since they can’t recollect previous outcomes or decisions, they only work with presently available data. Reactive AI stems from statistical math and can analyze vast amounts of data to produce a seemingly intelligent output.
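Because a reactive machine keeps no state between decisions, it can be sketched as a pure function of the current input alone. A toy illustration (the thermostat rules and names below are invented for this example, not taken from any real system):

```python
def reactive_decide(current_temp_c, target_c=21.0, band=0.5):
    """Choose an action from the current reading only -- no memory of past readings."""
    if current_temp_c < target_c - band:
        return "heat"
    if current_temp_c > target_c + band:
        return "cool"
    return "idle"

# Each call sees only presently available data; identical inputs always
# produce identical outputs, no matter what happened on earlier calls.
print(reactive_decide(18.0))  # heat
print(reactive_decide(23.0))  # cool
```

The defining property is that nothing from one call survives to the next: the system cannot recall a previous outcome, only react to what is in front of it right now.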
Unlike Reactive Machine AI, this form of AI can recall past events and outcomes and monitor specific objects or situations over time. Limited Memory AI can use past- and present-moment data to decide on a course of action most likely to help achieve a desired outcome.
However, while Limited Memory AI can use past data for a specific amount of time, it can’t retain that data in a library of past experiences to use over a long-term period. As it’s trained on more data over time, Limited Memory AI can improve in performance.
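The "limited" part can be pictured as a fixed-size window of recent observations: each new data point pushes out the oldest one, so the system uses past and present data but never accumulates a long-term library of experience. A minimal sketch (the agent, window size and threshold are invented for illustration):

```python
from collections import deque

class LimitedMemoryAgent:
    """Keeps only the last `window` observations and decides from their average."""

    def __init__(self, window=3, threshold=50.0):
        self.recent = deque(maxlen=window)  # oldest reading is dropped automatically
        self.threshold = threshold

    def observe(self, reading):
        self.recent.append(reading)

    def decide(self):
        # The decision draws on past *and* present data, but only within the window.
        avg = sum(self.recent) / len(self.recent)
        return "alert" if avg > self.threshold else "normal"

agent = LimitedMemoryAgent(window=3)
for reading in [40, 45, 80, 90, 95]:  # the earliest readings age out of memory
    agent.observe(reading)
print(agent.decide())  # decision reflects only the last 3 readings: 80, 90, 95
```

Self-driving systems work on the same principle at much larger scale: they track nearby vehicles and lane markings over a recent time horizon, not over the car's entire driving history.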
Theory of Mind AI is a functional class of AI that falls under General AI. Though an unrealized form of AI today, AI with Theory of Mind functionality would understand the thoughts and emotions of other entities. This understanding can affect how the AI interacts with those around it. In theory, this would allow the AI to simulate human-like relationships.
Because Theory of Mind AI could infer human motives and reasoning, it would personalize its interactions with individuals based on their unique emotional needs and intentions. Theory of Mind AI would also be able to understand and contextualize artwork and essays, which today’s generative AI tools are unable to do.
Emotion AI is a form of Theory of Mind AI currently in development. AI researchers hope it will have the ability to analyze voices, images and other kinds of data to recognize, simulate, monitor and respond appropriately to humans on an emotional level. To date, Emotion AI is unable to understand and respond to human feelings.
Self-Aware AI is the functional class for applications that would possess Super AI capabilities. Like Theory of Mind AI, Self-Aware AI is strictly theoretical. If ever achieved, it would have the ability to understand its own internal conditions and traits along with human emotions and thoughts. It would also have its own set of emotions, needs and beliefs.
Narrow AI applications with computer vision can be trained to interpret and analyze the visual world. This allows intelligent machines to identify and classify objects within images and video footage.
Computer vision has applications across many industries.
Computer vision is critical for use cases that involve AI machines interacting and traversing the physical world around them. Examples include self-driving cars and machines navigating warehouses and other environments.
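At its simplest, classifying an object in an image means mapping pixel values to a label. The dependency-free sketch below illustrates the idea with nearest-neighbor matching on tiny 3x3 "images"; the templates and labels are invented for this example, and real systems instead use deep convolutional networks trained on millions of labeled images:

```python
# 3x3 binary "images": 1 = bright pixel, 0 = dark pixel.
TEMPLATES = {
    "vertical_bar":   (0, 1, 0,
                       0, 1, 0,
                       0, 1, 0),
    "horizontal_bar": (0, 0, 0,
                       1, 1, 1,
                       0, 0, 0),
}

def classify(image):
    """Label an image with the template it differs from in the fewest pixels."""
    def distance(a, b):
        return sum(pa != pb for pa, pb in zip(a, b))
    return min(TEMPLATES, key=lambda label: distance(image, TEMPLATES[label]))

noisy = (0, 1, 0,
         0, 1, 0,
         1, 1, 0)  # a vertical bar with one corrupted pixel
print(classify(noisy))  # still recognized as a vertical bar
```

Even this toy version shows the essential behavior: the system tolerates some noise and still assigns the closest known label, which is what lets trained vision models identify objects in messy real-world footage.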
Robots in industrial settings can use Narrow AI to perform routine, repetitive tasks that involve materials handling, assembly and quality inspections. In healthcare, robots equipped with Narrow AI can assist surgeons in monitoring vitals and detecting potential issues during procedures.
Agricultural machines can engage in autonomous pruning, mowing, thinning, seeding and spraying. And smart home devices such as the iRobot Roomba can navigate a home’s interior using computer vision and use data stored in memory to understand its progress.
Expert systems equipped with Narrow AI capabilities can be trained on a corpus to emulate the human decision-making process and apply expertise to solve complex problems. These systems can evaluate vast amounts of data to uncover trends and patterns to make decisions. They can also help businesses predict future events and understand why past events occurred.
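An expert system encodes human expertise as explicit if-then rules, then chains those rules together over known facts until it reaches a conclusion. A minimal forward-chaining sketch (the diagnostic rules and fact names are invented for illustration):

```python
# Each rule: if all condition facts hold, assert the conclusion fact.
RULES = [
    ({"engine_cranks", "no_spark"}, "ignition_fault"),
    ({"ignition_fault", "coil_ok"}, "check_distributor"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"engine_cranks", "no_spark", "coil_ok"}, RULES)
print("check_distributor" in derived)
```

Note how the second rule fires only because the first rule derived "ignition_fault" — this chaining of intermediate conclusions is what lets expert systems emulate a human specialist's step-by-step reasoning.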
IBM has pioneered AI from the very beginning, contributing breakthrough after breakthrough to the field. IBM most recently released a major upgrade to its cloud-based, generative AI platform known as IBM watsonx™. IBM® watsonx.ai™ brings together new generative AI capabilities, powered by foundation models, and traditional machine learning in a powerful studio spanning the entire AI lifecycle. With watsonx.ai, data scientists can build, train and deploy machine learning models in a single collaborative studio environment.
Explore watsonx.ai today