Reducing Complexity in Data
This course covers several techniques for simplifying the data used in supervised machine learning applications, ranging from relatively simple feature selection to complex applications of clustering and autoencoding built on deep neural networks.
What you'll learn
Machine learning techniques have grown significantly more powerful in recent years, but excessive complexity in data remains a major problem. There are several reasons for this: distinguishing signal from noise becomes harder as data grows more complex, and the risk of overfitting rises with it. Finally, as cloud-based machine learning becomes more and more popular, reducing complexity in data is crucial to keeping training affordable, because cloud-based ML solutions can be very expensive indeed.
In this course, Reducing Complexity in Data, you will learn how to make the data fed into machine learning models more tractable and more manageable, without resorting to hacks or shortcuts, and without compromising on quality or correctness.
First, you will learn the importance of parsimony in data and understand the pitfalls of working with data of excessively high dimensionality, often referred to as the curse of dimensionality.
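To make the curse of dimensionality concrete, here is a minimal sketch (not from the course, and with an arbitrary sample size of 500 points) showing how pairwise distances between random points concentrate as the number of dimensions grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# As dimensionality grows, pairwise distances between random points
# concentrate: the gap between the nearest and farthest point shrinks
# relative to the mean distance.
for dim in (2, 10, 100, 1000):
    points = rng.random((500, dim))
    # Distances from the first point to all the others
    dists = np.linalg.norm(points[1:] - points[0], axis=1)
    spread = (dists.max() - dists.min()) / dists.mean()
    print(f"dim={dim:5d}  relative spread={spread:.3f}")
```

The relative spread shrinks steadily as the dimension rises, which is one reason distance-based methods degrade on high-dimensional data.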
Next, you will discover how and when to resort to feature selection, employing statistically sound techniques to find a subset of the input features based on their information content and their relationship to the output.
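As an illustration of this kind of statistically grounded selection, the sketch below is an assumption about the workflow, using scikit-learn's SelectKBest with the chi-squared score on the bundled digits dataset rather than any dataset from the course:

```python
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2

# Hypothetical example: keep the 20 pixel features whose chi-squared
# statistic with respect to the class label is highest. chi2 requires
# non-negative features, which pixel intensities satisfy.
X, y = load_digits(return_X_y=True)
selector = SelectKBest(score_func=chi2, k=20)
X_reduced = selector.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)  # (1797, 64) -> (1797, 20)
```

Other scoring functions such as f_classif (ANOVA) or mutual_info_classif can be swapped in for chi2, mirroring the demos listed in the table of contents below.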
Finally, you will explore how to use two advanced techniques, clustering and autoencoding. Both are applications of unsupervised learning used to simplify data as a precursor to a supervised learning algorithm, and each often relies on a sophisticated implementation such as a deep neural network.
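Two hedged sketches of these ideas follow, assuming scikit-learn for the clustering step and Keras for the autoencoder; the cluster count, bottleneck width, and stand-in data are illustrative choices, not the course's own:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

# Hypothetical sketch of K-means model stacking: replace each sample's
# original features with its distances to the learned cluster centers,
# yielding a much lower-dimensional representation for a downstream
# supervised model.
X, y = load_digits(return_X_y=True)
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
X_transformed = kmeans.fit_transform(X)  # distances to cluster centers
print(X.shape, "->", X_transformed.shape)  # (1797, 64) -> (1797, 10)
```

An autoencoder plays a similar role by learning a compressed representation through a bottleneck layer:

```python
import numpy as np
from tensorflow import keras

# Hypothetical autoencoder sketch: compress 64-dimensional inputs to an
# 8-dimensional bottleneck, then reconstruct them. The encoder output
# serves as a compact representation for a downstream supervised model.
inputs = keras.Input(shape=(64,))
encoded = keras.layers.Dense(8, activation="relu")(inputs)
decoded = keras.layers.Dense(64, activation="sigmoid")(encoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, 64).astype("float32")  # stand-in data
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)
X_compressed = encoder.predict(X)  # shape: (1000, 8)
```

In both cases the low-dimensional output (distances to cluster centers, or the bottleneck activations) becomes the input to the supervised model.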
When you’re finished with this course, you will have the skills and knowledge needed to apply conceptually sound techniques for reducing the complexity of data used in supervised machine learning applications.
Table of contents
- Version Check 0m
- Module Overview 1m
- Prerequisites and Course Outline 6m
- The Curse of Dimensionality 6m
- Overfitting and the Bias-variance Trade-off 7m
- Techniques to Reduce Complexity 4m
- Choosing the Right Technique 5m
- Drawbacks of Reducing Complexity 2m
- Demo: The Diabetes Dataset - Exploration 7m
- Demo: Establishing a Baseline Model 2m
- Demo: The Boston Housing Prices Dataset - Exploration 4m
- Demo: Kitchen Sink Regression to Establish a Baseline Model 6m
- Summary 2m
- Module Overview 1m
- Statistical Techniques for Feature Selection 4m
- Conceptual Overview of Different Feature Selection Techniques 5m
- Demo: Selecting Features Using a Variance Threshold 7m
- Demo: Selecting K Best Features Using Chi2 Analysis 4m
- Demo: Setting up Helper Functions for Feature Selection 5m
- Demo: Find the Right Value for K Using Chi2 Analysis 3m
- Demo: Find the Right Value for K Using ANOVA 3m
- Demo: Select Features Using Percentiles and Mutual Information Analysis 3m
- Demo: Dictionary Learning on Handwritten Digits 7m
- Summary 2m
- Module Overview 2m
- Understanding Principal Components Analysis 8m
- Demo: Performing PCA to Reduce Dimensionality 4m
- Demo: Building Linear Models Using Principal Components 4m
- Understanding Factor Analysis 3m
- Demo: Applying Factor Analysis to Reduce Dimensionality 6m
- Understanding Linear Discriminant Analysis 2m
- Demo: Performing Linear Discriminant Analysis to Reorient Data 6m
- Summary 1m
- Module Overview 3m
- Understanding Manifold Learning 5m
- Demo: Generate Manifold and Set up Helper Functions 6m
- Demo: Manifold Learning Using Multidimensional Scaling and Spectral Embedding 4m
- Demo: Manifold Learning Using t-SNE and Isomap 3m
- Demo: Manifold Learning Using Locally Linear Embedding 3m
- Demo: Performing Kernel PCA to Reduce Complexity in Nonlinear Data 7m
- Summary 1m
- Module Overview 1m
- K-means Model Stacking 3m
- Demo: Classifying Images with Original Features 4m
- Demo: Transforming Data Using K-means Cluster Centers 4m
- Autoencoding 8m
- Demo: Prepare Image Data to Feed an Autoencoder 5m
- Demo: Using Autoencoders to Learn Efficient Representations of Data 6m
- Summary and Further Study 2m