Introduction to Machine Learning, Part 2: Unsupervised Machine Learning

From the series: Introduction to Machine Learning

Unsupervised machine learning looks for patterns in datasets that don’t have labeled responses.

You’d use this technique when you want to explore your data but don’t yet have a specific goal, or you’re not sure what information the data contains.

It’s also a good way to reduce the dimensionality of your data.

As we’ve previously discussed, most unsupervised learning techniques are a form of cluster analysis, which separates data into groups based on shared characteristics.

Clustering algorithms fall into two broad groups:

  • Hard clustering, where each data point belongs to only one cluster
  • Soft clustering, where each data point can belong to more than one cluster

For context, here’s a hard clustering example: 

Say you’re an engineer building cell phone towers. You need to decide where to construct the towers and how many to build. To make sure you’re providing the best signal reception, you need to locate the towers within clusters of people.

To start, you need an initial guess at the number of clusters. To get one, compare scenarios with three towers and four towers and see how well each configuration is able to provide service.

Because a phone can only talk to one tower at a time, this is a hard clustering problem.

For this, you could use k-means clustering: the k-means algorithm treats each observation in the data as an object with a location in space, and it finds cluster centers, or means, that minimize the total distance from data points to their cluster centers.
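To make the idea concrete, here is a minimal sketch of that algorithm (Lloyd’s k-means) in Python rather than the tool shown in the series. The phone locations and the choice of three towers are invented for illustration:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Bare-bones Lloyd's k-means: alternate hard assignment and mean update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers at k random data points
    for _ in range(iters):
        # Hard assignment: each point belongs to exactly one (nearest) center.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            nearest = min(range(k),
                          key=lambda i: (x - centers[i][0]) ** 2 + (y - centers[i][1]) ** 2)
            clusters[nearest].append((x, y))
        # Update: move each center to the mean of its assigned points.
        new_centers = []
        for center, members in zip(centers, clusters):
            if members:
                new_centers.append((sum(p[0] for p in members) / len(members),
                                    sum(p[1] for p in members) / len(members)))
            else:
                new_centers.append(center)  # keep an empty cluster's center as-is
        if new_centers == centers:
            break  # converged: assignments no longer change
        centers = new_centers
    return centers

# Hypothetical phone locations grouped around three neighborhoods.
phones = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11), (20, 0), (21, 0), (20, 1)]
towers = kmeans(phones, k=3)  # candidate tower sites
```

Note that because the assignment step is all-or-nothing, every phone ends up served by exactly one tower, which is what makes this a hard clustering method.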

So, that was hard clustering. Let’s see how you might use a soft clustering algorithm in the real world.

Pretend you’re a biologist analyzing the genes involved in normal and abnormal cell division. You have data from two tissue samples, and you want to compare them to determine whether certain patterns of gene features correlate to cancer.

Because the same genes can be involved in several biological processes, no single gene is likely to belong to one cluster only.

Apply a fuzzy c-means algorithm to the data, and then visualize the clusters to see which groups of genes behave in similar ways.

You can then use this model to help see which features correlate with normal or abnormal cell division.
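As a sketch of how soft assignments differ, here is a minimal fuzzy c-means implementation in Python. This is an illustration, not the tool used in the series, and the “gene expression” values are made up; the key point is that each observation gets a membership weight in every cluster:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Bare-bones fuzzy c-means: every point gets a membership weight per cluster."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                               # "fuzzified" membership weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distance from every point to every cluster center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.clip(d, 1e-10, None)              # avoid division by zero
        U = 1.0 / d ** (2.0 / (m - 1.0))         # closer centers get larger weight
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Made-up expression levels for five genes under two conditions.
genes = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9], [0.5, 0.5]])
centers, U = fuzzy_cmeans(genes, c=2)
# U[4] holds the memberships of the in-between gene: its weight is split
# across both clusters, which a hard assignment could not express.
```

The last gene sits between the two groups, so rather than being forced into one cluster, it keeps a partial membership in both.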

This covers the two main techniques (hard and soft clustering) for exploring data with unlabeled responses.

Remember though, that you can also use unsupervised machine learning to reduce the number of features, or the dimensionality, of your data.

You’d do this to make your data less complex – especially if you’re working with data that has hundreds or thousands of variables. By reducing the complexity of your data, you’re able to focus on the important features and gain better insights.

Let's look at three common dimensionality reduction algorithms:

  • Principal Component Analysis (PCA) performs a linear transformation on the data so that most of the variance in your dataset is captured by the first few principal components. This could be useful for developing condition indicators for machine health monitoring.
  • Factor Analysis identifies underlying correlations between the variables in your dataset and represents them in terms of a smaller number of unobserved, or latent, common factors. Factor analysis is sometimes used to explain stock price variation.
  • Nonnegative matrix factorization is used when model terms must represent nonnegative quantities, such as physical quantities. If you need to compare large amounts of text across webpages or documents, this is a good method to start with, since word counts are never negative: a word is either absent or occurs a positive number of times.
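As a small illustration of the first of these, here is a PCA sketch in Python using the singular value decomposition. The correlated three-variable data is invented so that most of the variance lies along a single direction:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 samples of three correlated variables: most variance lies along one axis.
t = rng.normal(size=100)
X = np.column_stack([t,
                     2.0 * t + 0.05 * rng.normal(size=100),
                     -t + 0.05 * rng.normal(size=100)])

Xc = X - X.mean(axis=0)                  # PCA works on centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)      # fraction of variance per component

scores = Xc @ Vt.T                       # coordinates in the principal-component basis
# Here the first component captures nearly all the variance, so the
# three-variable data can be reduced to one dimension with little loss.
```

Checking `explained` is how you’d decide how many components to keep: once the first few components account for most of the variance, the remaining ones can usually be dropped.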

In this video, we took a closer look at hard and soft clustering algorithms, and we also showed why you’d want to use unsupervised machine learning to reduce the number of features in your dataset.

As for your next steps:

Unsupervised learning might be your end goal. If you’re just looking to segment data, a clustering algorithm is an appropriate choice.

On the other hand, you might want to use unsupervised learning as a dimensionality reduction step for supervised learning. In our next video we’ll take a closer look at supervised learning.

For now, that wraps up this video. Don’t forget to check out the description below for more resources and links.