March 20, 2018 | Written by: Watson Health
Categorized: Blog Post | Cognitive Computing
As the promise of AI continues to shape the way we think about delivering healthcare, it’s typical to come across terms like “machine learning” and “deep learning.” Too often, they are used as buzzwords that can be thrown around interchangeably.
As a leader in AI development for decades, and with the power of Watson now driving its array of AI solutions across Watson Health, IBM has a deep well of expertise and has helped define and put into practice these concepts — not just to explore what technology can do, but how it can help us lead better, healthier lives.
But you don’t have to be a researcher or developer to have a better understanding of AI technology. Let’s take a step back and consider what exactly the words mean and what the differences are between them:
Artificial Intelligence (AI)
Artificial intelligence is the general concept that machines can be “taught” to mimic human decision-making and learning behaviors. It’s an older term than you may think; per Merriam-Webster, its first known use was in 1955.
An algorithm is a linear set of instructions given to a system to follow in an exacting manner. Each action is spelled out specifically in the programming — the system is not making any decisions on its own. For example, one common algorithm for computing systems is called a bucket sort, which involves data being sorted into categorized buckets, the contents of which are then sorted individually.
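To make the contrast with learning systems concrete, here is a minimal sketch of a bucket sort in plain Python. Every step is spelled out by the programmer; the function names and the assumption that inputs fall in the range [0, 1) are illustrative choices, not part of any particular library.

```python
def bucket_sort(values, num_buckets=10):
    """Sort numbers in [0, 1) by distributing them into buckets,
    sorting each bucket individually, then concatenating the results."""
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        # Each value is assigned to a bucket purely by a fixed rule --
        # the system makes no decisions of its own.
        buckets[int(v * num_buckets)].append(v)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))  # sort each bucket's contents
    return result

print(bucket_sort([0.42, 0.07, 0.91, 0.33, 0.55]))
# [0.07, 0.33, 0.42, 0.55, 0.91]
```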
A neural network is a computing system based on the way neurons in the brain operate. The network is trained by being given correct answers, from which it builds its own patterns to process raw data. For example, a system may learn to identify images of a cat by being given a pool of images labeled as “cat” or “not a cat” (as opposed to being given instructions based on characteristics of a cat, such as having whiskers or a long tail).
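The "learn from labeled examples" idea can be sketched with the simplest possible network, a single artificial neuron (a perceptron). The training data below uses two made-up numeric features as stand-ins for image data; the labels, features, and learning rate are all hypothetical, chosen only to show a rule being learned from correct answers rather than programmed in.

```python
# Toy training set: ([feature1, feature2], label), where label 1
# means "cat" and 0 means "not a cat". The features are hypothetical
# numeric stand-ins for real image data.
data = [([1.0, 1.0], 1), ([0.9, 0.8], 1), ([0.1, 0.2], 0), ([0.0, 0.1], 0)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def predict(x):
    """Weighted sum of inputs, thresholded to a 0/1 decision."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Training: whenever a prediction is wrong, nudge the weights toward
# the correct answer. The rule is never written out explicitly -- it
# emerges from the labeled examples.
for _ in range(20):
    for x, label in data:
        error = label - predict(x)
        for i in range(len(weights)):
            weights[i] += lr * error * x[i]
        bias += lr * error

print([predict(x) for x, _ in data])  # matches the labels: [1, 1, 0, 0]
```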
Machine Learning
Machine learning is the application of AI through the creation of neural networks that can demonstrate learning behavior by performing tasks that aren’t explicitly programmed. The term was coined by Arthur Samuel, a computer pioneer at IBM. In 1959, he created a checkers-playing program that’s considered the world’s first self-learning computer program.
Whether or not we realize it, we witness machine learning in action all the time. Examples of machine learning include Netflix recommending a movie based on our viewing history, or Google promoting a targeted ad based on our browsing history.
Adding a feedback loop to the system — for example, registering whether we choose a recommended movie or click on a targeted ad — improves its accuracy, increasing the probability that it makes correct choices in the future.
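A feedback loop of this kind can be sketched in a few lines. The categories, scores, and step size below are invented for illustration: each click nudges a score up, each skip nudges it down, and later rankings reflect the accumulated feedback.

```python
# Each candidate category starts with an equal score; observed user
# feedback shifts the scores, so later recommendations are reranked.
scores = {"comedy": 1.0, "drama": 1.0, "sci-fi": 1.0}

def record_feedback(category, clicked, step=0.2):
    """Raise a category's score on a click, lower it on a skip."""
    scores[category] += step if clicked else -step

record_feedback("sci-fi", clicked=True)   # user watched a sci-fi pick
record_feedback("drama", clicked=False)   # user skipped a drama pick

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['sci-fi', 'comedy', 'drama']
```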
Natural Language Processing
Natural language processing describes computing systems that have been trained to understand and respond to natural language prompts. Examples of this include voice-control systems like Alexa or Siri.
Deep Learning
Deep learning is a type of machine learning in which systems can accomplish complex tasks by using multiple layers of choices based on the output of the previous layer, creating increasingly smarter and more abstract conclusions. Deep learning systems can prioritize the criteria most important to reaching a decision.
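The "layers of choices" idea can be shown with a minimal forward pass through a stack of fully connected layers. The weights below are fixed, illustrative numbers — in a real deep learning system they would be learned from data — and the layer sizes are arbitrary; the point is only that each layer transforms the previous layer's output.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each output is a weighted sum of all
    inputs passed through a nonlinearity (here, tanh)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -0.3]                                          # raw input features
h1 = layer(x,  [[0.4, -0.6], [0.7, 0.2]], [0.1, -0.1])   # first layer sees raw data
h2 = layer(h1, [[0.5, 0.5], [-0.3, 0.8]], [0.0, 0.2])    # second layer builds on h1
out = layer(h2, [[1.0, -1.0]], [0.0])                    # output layer makes the final call
print(out)
```

Each layer's output becomes the next layer's input, which is what lets deeper layers form the more abstract conclusions described above.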
Deep learning requires a greater up-front investment of effort: the system is given parameters and must figure out for itself the rules and patterns to follow. Training a deep learning machine can take days or even weeks — a process the IBM Research AI team is hoping to improve as they explore integration of a new compression algorithm they have developed.
But once a deep learning machine is up and running, it can process massive amounts of data very quickly. In fact, the key to deep learning is large amounts of data. The more data it’s given, the better it performs.
Hopefully, this guide has helped you gain a better appreciation of the remarkable capabilities and promise of AI technologies. Discover how Watson Health is applying AI to advanced medical imaging.