
The 5 machine learning paradigms, explained

Wondering whether to have your AI learn supervised, semi-supervised, unsupervised, self-supervised, or with reinforcement learning? Here are the use cases.

Jan 30, 2025 • 12 Minute Read


Meet Robby, a curious robot on a mission to learn about fruits. Robby isn’t born with this knowledge—he has to figure it out, just like we do. And just like humans, Robby can learn in different ways. 

Machines, like people, have different learning styles. So, let’s follow Robby on his journey and explore how he learns in five distinct ways: supervised, unsupervised, semi-supervised, self-supervised, and reinforcement learning.

Supervised learning: Learning with a coach

Some machines thrive with a structured, coach-oriented approach, much like how I prefer having a trainer when I’m learning to lift weights. Mostly so I don’t end up turning my back into a pretzel or accidentally bench-pressing my own ego. 

In supervised learning, Robby is given clear guidance, much like a personal trainer showing you the ropes. Robby is presented with labeled data, where each fruit—like an apple, banana, or strawberry—comes with a tag. His task is to learn the differences and similarities based on those labels.

This method is similar to working with a coach: You’re shown the correct form for each exercise, and over time, you learn what’s right. Robby does the same by associating visual cues with the correct labels, making it easier for him to classify new fruits in the future.


Real-world examples of supervised learning:

Ever notice how your phone can identify who’s in a photo or how Siri always seems to understand you? That’s supervised learning in action. This type of learning works best when data is labeled, like tagging pictures with what’s in them or marking emails as "spam" or "not spam." The system looks at thousands of examples and learns to recognize patterns. For instance, in image classification, a machine studies labeled photos, noticing key features like shape, color, and texture, so it can correctly categorize new images.

Spam filters work the same way. The system analyzes patterns from emails already marked as spam—looking for things like certain words, phrases, or senders—so it can spot and filter out unwanted messages before they hit your inbox.
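To make the spam example concrete, here is a minimal sketch of a supervised text classifier with scikit-learn. The tiny email dataset and its labels are invented for illustration; a real filter would train on thousands of labeled messages.

```python
# A minimal supervised spam-filter sketch using scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical labeled dataset: each email comes with a "spam"/"not spam" tag.
emails = [
    "win a free prize now", "claim your free money",
    "meeting at noon tomorrow", "project update attached",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn raw text into word-count features the model can learn from.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Naive Bayes learns which words are associated with each label.
model = MultinomialNB()
model.fit(X, labels)

# Classify a new, unseen message using the learned word associations.
new_email = vectorizer.transform(["free prize waiting for you"])
print(model.predict(new_email))  # likely ['spam']
```

The key supervised ingredient is the `labels` list: the model only learns because every training example carries an answer.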

Have you ever used voice assistants like Siri or Google? These systems have also learned from thousands of labeled audio clips. When you say, "Hey Siri," the system knows exactly what to do because it’s been trained to recognize different speech patterns, accents, and even background noise.

Supervised learning doesn’t stop at voice recognition. It helps power image recognition on social media, where it tags faces in photos. And it’s behind recommendation systems on Netflix or Amazon, where your past actions—like what you’ve watched or bought—help suggest things you’ll love next.

From filtering spam to suggesting your next movie, supervised learning is behind many of the smart features we use every day, making our lives easier and more personalized.

Unsupervised learning: Discovering patterns on his own

The "free spirits" of machine learning figure things out on their own—much like when I pick up a new hobby and start experimenting without a manual.

Here, Robby is faced with a new challenge: no labels. He’s given a bunch of fruits, but there’s no coach to tell him which is which. However, Robby is naturally curious and begins to spot patterns. He notices that certain fruits, like long, curved yellow ones, might belong to the same group, while round, red ones might be another group.

In this unsupervised learning scenario, Robby doesn't rely on explicit labels. Instead, he groups similar items together based on characteristics like color, shape, or size—no step-by-step guide, just the patterns he discovers as he goes.
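Robby's grouping of fruits by shared characteristics can be sketched as clustering. The feature values below (length, roundness, redness) are made-up numbers just to illustrate the idea; note that no labels appear anywhere.

```python
# Grouping unlabeled "fruits" by numeric features with k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features per fruit: [length_cm, roundness 0-1, redness 0-1]
fruits = np.array([
    [18.0, 0.20, 0.10],  # long, curved, yellow (banana-like)
    [17.5, 0.30, 0.10],
    [7.0,  0.90, 0.90],  # round and red (apple-like)
    [7.5,  0.95, 0.85],
])

# Ask k-means for two groups; it is never told which fruit is which.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(fruits)
print(groups)  # e.g. [0 0 1 1]: the two fruit types land in separate clusters
```

The cluster numbers themselves are arbitrary; what matters is that similar fruits end up together, purely from their features.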

Real-world examples of unsupervised learning:

Ever wonder how online stores seem to know exactly what you might want to buy next? That’s unsupervised learning at work. E-commerce platforms use this technique to look at your browsing and shopping habits, grouping you with other users who share similar interests. This helps create a personalized shopping experience, showing you products that match what you like, without anyone telling the system what categories you fall into.

You’ve probably noticed how services like Netflix, YouTube, and Spotify always suggest new content based on your past activity. That’s also unsupervised learning. By tracking what you watch or listen to, these platforms find patterns in what other users with similar tastes are enjoying. So, even if the system doesn’t know exactly what you're into yet, it can still recommend things you might love.

Ever had a strange charge on your credit card or received a weird security alert? Unsupervised learning is often behind these anomaly detection systems. By learning what "normal" activity looks like, these systems can spot anything unusual, like fraud or a potential security threat, and flag it before it becomes a problem.
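The anomaly-detection idea can be sketched with an isolation forest, one common unsupervised approach: it learns what "normal" transactions look like and flags points that are easy to isolate. The transaction amounts below are invented for illustration.

```python
# Flagging an unusual credit-card charge with an unsupervised detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction amounts: mostly ordinary, plus one outlier.
amounts = np.array([[12.5], [9.9], [14.2], [11.0], [13.3], [950.0]])

# The detector learns "normal" from the data itself; no labels needed.
detector = IsolationForest(contamination=0.2, random_state=0)
flags = detector.fit_predict(amounts)  # -1 = anomaly, 1 = normal
print(flags)
```

The extreme charge gets flagged as -1 because it sits far from the pattern of everything else—the same logic a fraud system applies at much larger scale.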

Another cool application is organizing large amounts of data. Have you ever searched through a huge library of articles or images, only to get lost in the shuffle? Unsupervised learning can group similar items together—like sorting news articles by topic or organizing images by visual features—making it easier to find what you're looking for.

From finding your next favorite show to protecting your finances, unsupervised learning helps machines make sense of data on their own. It’s what powers many of the smart, intuitive features we use every day, and it’s opening up new possibilities for discovery and insight.

Semi-supervised learning: A little help goes a long way

What if Robby had a bit of help? In this middle-ground approach, semi-supervised learning allows Robby to take the lead in his learning journey while receiving a bit of guidance—much like how I approach learning a new programming language. 

When I start learning a programming language, I usually begin with a few well-defined concepts, like syntax rules or basic commands (these are the labeled data). For example, I might follow a tutorial to learn how to define a variable or write a simple function. These initial lessons give me a solid foundation, much like how a semi-supervised model learns from a small set of labeled examples.

But just like in semi-supervised learning, I’m often left to figure out most of the language on my own—dealing with unlabeled data. I might try building a program or solving coding challenges without knowing the exact right answer at first. Through trial and error, I start recognizing patterns and deducing how different concepts fit together, much like how a model infers missing labels or structure from the partially labeled dataset.

Over time, the small amount of structured input (like the tutorials) helps me make sense of the larger, more complex tasks I encounter. Similarly, in semi-supervised learning, the few labeled data points help guide the model as it processes a larger pool of unlabeled data, improving its ability to generalize.

In semi-supervised learning, Robby is given some labeled data—a few apples and bananas come with tags—but the rest of the fruits are unlabeled. With these starting points, Robby can infer that the unlabeled fruits, like strawberries, probably belong to a separate group.

This technique is a middle ground between supervised and unsupervised learning, where the model uses both labeled and unlabeled data to make smarter decisions. It’s like a student working on a math problem set where only a few answers are provided as clues.
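This middle-ground idea can be sketched with scikit-learn's label spreading, where `-1` marks an unlabeled example and the few labeled points propagate their labels to similar neighbors. The fruit features here are made-up numbers.

```python
# Semi-supervised sketch: a few labeled fruits guide labels for the rest.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Hypothetical features: [length_cm, redness]. Label -1 means "unlabeled".
X = np.array([
    [18.0, 0.10], [17.0, 0.20],  # labeled banana-like fruits (class 0)
    [7.0,  0.90], [7.5,  0.85],  # labeled apple-like fruits (class 1)
    [17.5, 0.15], [7.2,  0.88],  # unlabeled fruits the model must infer
])
y = np.array([0, 0, 1, 1, -1, -1])  # only four examples carry labels

# Labels spread from labeled points to their nearest unlabeled neighbors.
model = LabelSpreading(kernel="knn", n_neighbors=2)
model.fit(X, y)
print(model.transduction_)  # inferred labels for every point
```

The two unlabeled fruits inherit the labels of the points they most resemble—the small labeled set guides the larger unlabeled pool, exactly the trade-off the paragraph describes.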

Real-world examples of semi-supervised learning:

Ever wondered how doctors use AI to spot things like tumors in medical images? It’s all thanks to semi-supervised learning. Labeling medical images can take hours—doctors and radiologists carefully mark each one to highlight things like fractures or growths. But what if only a few images needed to be labeled, and the system could learn from those to understand patterns across thousands of others? That’s where semi-supervised learning comes in.

With semi-supervised learning, you only need a small amount of labeled data—like a handful of images marked with abnormalities. The machine uses these labeled images to understand general patterns and then applies that knowledge to the rest of the unlabeled images. Over time, it learns to detect things like tumors or fractures without needing every image manually marked.

This saves both time and money. Instead of spending countless hours labeling images, doctors can rely on automated tools that help them make faster, more accurate diagnoses. It’s especially useful in fields like radiology, pathology, and dermatology, where massive amounts of image data are generated every day. Semi-supervised learning is truly changing the game, helping doctors provide better care without the heavy lifting.

Self-supervised learning: Solving puzzles to learn

Self-supervised learning sits between these approaches: Robby generates his own learning signals from the data itself, much like how I learn through trial and error. It’s a process of constant iteration—making mistakes and refining the approach until the solution emerges naturally, without external guidance.

In this form of learning, Robby doesn’t need human-provided labels or a teacher. Instead, he learns by solving puzzles. For example, Robby might be shown a picture of fruits with a part of it hidden—like a banana with a bite taken out. He has to infer the missing part.

By constantly solving these “puzzles,” Robby learns to understand the relationships between different fruits. It’s like learning how to solve a jigsaw puzzle, where the pieces connect logically, but no one tells you exactly where each piece goes. 

Real-world examples of self-supervised learning:

Have you ever used autocorrect or seen how search engines predict the next word you’ll type? That’s self-supervised learning at work. It’s a technique where the machine learns on its own by figuring out patterns in data, without needing labeled examples.

For instance, in natural language processing (NLP), a model can learn to predict the next word in a sentence just by analyzing huge amounts of text. Imagine reading thousands of books, articles, and tweets without any specific labels—just raw text. The system figures out what makes sense next, like predicting "lazy dog" when reading "The quick brown fox jumps over the...". Over time, the model gets better at understanding the structure of sentences, which helps it generate more accurate responses in chatbots or virtual assistants.
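The self-supervised trick is that the training pairs come from the text itself: every word is the "label" for the words before it, so no human annotation is needed. Here is a toy next-word predictor built from raw text with bigram counts (real language models use neural networks, but the supervision signal is the same).

```python
# Self-supervised sketch: derive (word, next word) training pairs from raw,
# unlabeled text. The "label" is simply the next word in the data itself.
from collections import Counter, defaultdict

corpus = "the quick brown fox jumps over the lazy dog the quick fox runs"
words = corpus.split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'quick' — the most frequent follower of 'the' here
```

No one labeled anything: the corpus supervised itself, which is what lets real models train on oceans of unannotated text.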

In computer vision, self-supervised learning works similarly. A model can be trained to "fill in the blanks" when part of an image is missing—like guessing what’s behind a blur. By doing this, the model learns to recognize visual patterns and details, such as shapes, textures, and objects, without needing any labels. For example, it can train itself to understand what a cat looks like by predicting missing parts of an image, like a tail or a face.

What’s powerful about this method is that it doesn’t require labeled data upfront, but it can still be used for highly specific tasks later. After the model has learned general patterns, it can be fine-tuned for more targeted applications, like language translation or object detection. Whether it’s improving translation between languages or identifying objects in photos, self-supervised learning is a game-changer, powering everything from virtual assistants to autonomous cars.

By teaching machines to learn from the data itself, self-supervised learning opens up new possibilities in artificial intelligence that are both cost-effective and scalable.

Reinforcement learning: Learning through trial and error

Lastly, there’s reinforcement learning, where Robby gets little rewards and feedback to stay motivated—kind of like how my son needs that extra nudge and praise to keep going when he’s learning something new.

In reinforcement learning, Robby learns by interacting with his environment, much like how a child learns to ride a bike. Instead of receiving direct guidance, Robby makes decisions and receives feedback based on his actions—rewards for successes and penalties for mistakes. 

For example, if Robby chooses the right fruit in a game, he gets a point; if he picks the wrong one, he loses a point. Over time, Robby adjusts his strategy, learning to make better decisions to maximize his rewards.
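Robby's fruit-picking game can be sketched as a simple reward-driven learner. The reward probabilities below are invented for the example, and epsilon-greedy value estimation is just one basic strategy; the point is the feedback loop of act, observe reward, adjust.

```python
# Tiny reinforcement-learning sketch: learn which fruit choice pays off
# from rewards alone, using epsilon-greedy action-value estimates.
import random

random.seed(0)
actions = ["apple", "banana", "strawberry"]
# Hidden reward probabilities, unknown to the learner (hypothetical values).
reward_prob = {"apple": 0.2, "banana": 0.8, "strawberry": 0.4}

values = {a: 0.0 for a in actions}  # estimated value of each action
counts = {a: 0 for a in actions}
epsilon = 0.1                       # chance of exploring a random action

for step in range(2000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    reward = 1 if random.random() < reward_prob[action] else 0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # with high probability: 'banana'
```

Nobody ever tells the learner which fruit is "correct"—it discovers the best choice purely from accumulated rewards, the same loop that drives game-playing and robotics agents at far larger scale.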

This approach is like learning a new skill without being explicitly told how to do it. I might practice riding my bike, sometimes wobbling or falling, but through repeated trials, I refine my balance and control. Similarly, in reinforcement learning, the model gradually improves by exploring different actions and learning from the outcomes, making it well-suited for tasks like robotics, game-playing, or autonomous driving, where actions need to be optimized over time based on real-time feedback.

Real-world examples of reinforcement learning:

Ever wonder how self-driving cars seem to "learn" how to drive like humans? That’s reinforcement learning in action. This approach is all about learning through trial and error, like when you're trying to figure out the best way to play a new video game or learn a new skill.

Take autonomous driving: a self-driving car navigates through the world by interacting with its environment. Every time it makes a safe turn or avoids an obstacle, it gets a "reward." If it makes a risky move, like speeding or turning too sharply, it gets a "penalty." 

The more the car drives, the better it gets at making decisions, thanks to real-time feedback. This feedback loop—rewarding good actions and penalizing bad ones—helps the car learn the safest and most efficient ways to drive without needing a pre-labeled dataset.

Reinforcement learning is also at the heart of things like drone navigation, where drones learn to fly smoothly and avoid obstacles by experimenting with different flight paths.

Moreover, in games like StarCraft or chess, AI players use reinforcement learning to figure out the best strategies by playing thousands of games against themselves and learning from each outcome.

And it's not just cars or games. Personalized recommendation systems—like the ones that suggest your next movie on Netflix or products on Amazon—also use a form of reinforcement learning. These systems tweak their suggestions based on your choices, rewarding good recommendations and adapting to your changing tastes over time.

Ready to dive into AI and machine learning? Start here! 

If you're just starting out with AI and machine learning, my AWS Machine Learning and Artificial Intelligence Fundamentals course on Pluralsight is the perfect place to begin. It's designed to break things down and give you a solid foundation without overwhelming you. You'll get a clear introduction to these exciting topics and be ready to take on more advanced concepts in no time.

Once you're comfortable and want to level up, check out the AWS Certified AI Practitioner (AIF-C01) certification learning path on Pluralsight. It's a step-by-step guide made up of five courses to fully prepare you for the exam.

Conclusion

In the world of machine learning, there’s no one-size-fits-all approach. Machines, like us, can learn in different ways depending on the task at hand. Whether they’re guided by labeled data, discovering patterns on their own, learning through trial and error, or adjusting based on feedback, each method plays a crucial role in their ability to understand and interact with the world. 

From supervised learning to reinforcement learning, these various approaches help machines tackle a wide range of challenges, constantly refining their models and improving over time. Just as humans adapt their learning styles to fit different situations, machines are becoming increasingly adept at learning from experience, making them more versatile and capable of handling complex tasks in dynamic environments. The future of machine learning is full of possibilities, with machines continuously evolving to learn smarter and more efficiently.

Noreen Hasan


Noreen Hasan has been a software engineer for over a decade in several industries ranging from startups to financial institutions. Her focus was on iOS mobile development, and in the last years her focus shifted to AWS and cloud computing. Her mantra has always been 'Building to Solve, Building to Empower' because solving problems and empowering others is the main reason she feels fulfilled in this field. When she is not developing technical solutions, she enjoys swimming, Zumba, reading and listening to podcasts.
