The Overlooked Limitations of Grid Search and Random Search

...and what to try instead.

Avi Chawla
Aug 11, 2023

Hyperparameter tuning is one of the most tedious parts of training ML models.

Typically, we use two common approaches for this:

  • Grid search

  • Random search

[Image: Grid search and random search]
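
For reference, here is a minimal sketch of both approaches in scikit-learn. The classifier, the synthetic data, and the parameter ranges are all illustrative choices, not from any particular benchmark:

```python
# Toy setup: a RandomForestClassifier on synthetic data
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=1000, random_state=42)
model = RandomForestClassifier(random_state=42)

# Grid search: exhaustively tries every combination (3 x 3 = 9 candidates here)
grid = GridSearchCV(
    model,
    param_grid={"n_estimators": [50, 100, 200], "max_depth": [5, 10, 20]},
    cv=3,
)
grid.fit(X, y)

# Random search: samples a fixed number of combinations from the same space
rand = RandomizedSearchCV(
    model,
    param_distributions={"n_estimators": randint(50, 200), "max_depth": randint(5, 20)},
    n_iter=10,
    cv=3,
    random_state=42,
)
rand.fit(X, y)

print(grid.best_params_, rand.best_params_)
```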

But they have many limitations.

For instance:

  • Grid search performs an exhaustive search over all combinations of hyperparameters, which quickly becomes computationally expensive as the search space grows.

  • Grid search and random search are restricted to the specified hyperparameter range. Yet, the ideal hyperparameter may exist outside that range.

[Image: Limited search space with grid search and random search]

  • They can ONLY perform discrete searches, even if the hyperparameter is continuous.

[Image: Grid search and random search can only try discrete values for continuous hyperparameters]

To this end, Bayesian Optimization is a highly underappreciated yet immensely powerful approach for tuning hyperparameters.

It uses Bayesian statistics to estimate the distribution of the best hyperparameters.

This allows it to take informed steps when selecting the next set of hyperparameters. As a result, it gradually converges to an optimal configuration much faster.
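
The core loop can be sketched in a few lines. The following is a toy, assumed 1-D version: fit a surrogate model (here, a Gaussian process) to the hyperparameter values evaluated so far, score candidate values with an acquisition function (expected improvement), and evaluate the objective at the most promising one. The `objective` below is a hypothetical stand-in for an expensive metric such as a cross-validated F1 score:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # Hypothetical stand-in for an expensive metric (e.g., cross-validated F1)
    return -(x - 0.3) ** 2 + 0.9

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
X_obs = np.array([[0.1], [0.9]])   # hyperparameter values tried so far
y_obs = objective(X_obs).ravel()

for _ in range(10):
    # 1) Fit a surrogate model to the observations so far
    gp = GaussianProcessRegressor().fit(X_obs, y_obs)
    mu, sigma = gp.predict(candidates, return_std=True)

    # 2) Expected improvement: favor candidates that are promising (high mu)
    #    or uncertain (high sigma) -- this is the "informed step"
    best = y_obs.max()
    z = (mu - best) / (sigma + 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    # 3) Evaluate the objective at the most promising candidate and repeat
    x_next = candidates[np.argmax(ei)]
    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, objective(x_next))

print("best hyperparameter found:", X_obs[np.argmax(y_obs)][0])
```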

The efficacy is evident from the comparison below.

Bayesian optimization leads the model to the same F1 score but:

  • it takes 7x fewer iterations

  • it executes 5x faster

  • it reaches the optimal configuration earlier
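
If you want to try it yourself without writing the loop by hand, one option is scikit-optimize’s `BayesSearchCV`, a drop-in replacement for `GridSearchCV`. The setup below mirrors the earlier toy example and is a sketch, not the configuration behind the numbers above (`pip install scikit-optimize`):

```python
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=42)

opt = BayesSearchCV(
    RandomForestClassifier(random_state=42),
    # Integer and continuous ranges, not a fixed discrete grid
    search_spaces={
        "n_estimators": Integer(50, 200),
        "max_features": Real(0.1, 1.0),
    },
    n_iter=20,   # number of informed evaluations, not an exhaustive sweep
    cv=3,
    random_state=42,
)
opt.fit(X, y)

print(opt.best_params_, opt.best_score_)
```

Note how the search space mixes integer and continuous (`Real`) ranges, which directly addresses the discrete-search limitation discussed above.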

But how exactly does it work, and why is it so effective?

What is the core intuition behind Bayesian optimization?

How does it optimally reduce the search space of the hyperparameters?

If you are curious, this is precisely what we cover in today’s extensive machine learning deep dive.

The idea behind Bayesian optimization struck me as extremely compelling when I first learned it a few years back.

Learning about this optimized approach to hyperparameter tuning and using it in practice has been extremely helpful to me in building large ML models quickly.

Thus, learning about Bayesian optimization will be immensely valuable if you envision doing the same.

To that end, today’s article covers:

  • Issues with traditional hyperparameter tuning approaches.

  • The motivation for Bayesian optimization.

  • How Bayesian optimization works.

  • The intuition behind Bayesian optimization.

  • Results from the research paper that proposed Bayesian optimization for hyperparameter tuning.

  • A hands-on Bayesian optimization experiment.

  • Comparing Bayesian optimization with grid search and random search.

  • Analyzing the results of Bayesian optimization.

  • Best practices for using Bayesian optimization.

👉 Interested folks can read it here: Bayesian Optimization for Hyperparameter Tuning.

Hope you will learn something new today :)

Thanks for reading Daily Dose of Data Science! Subscribe for free to learn something new and insightful about Python and Data Science every day. Also, get a Free Data Science PDF (350+ pages) with 250+ tips.

👉 If you liked this post, don’t forget to leave a like ❤️. It helps more people discover this newsletter on Substack and tells me that you appreciate reading these daily insights. The button is located towards the bottom of this email.

Thanks for reading!
