Welcome to the
Daily Dose of Data Science

We help you succeed in DS & ML careers

Every week, we publish practical, no-fluff deep dives on topics that truly matter for building the skills to succeed and stay relevant in ML & DS roles.


Plans & Pricing



$8.40 /month

  • 1-2 new articles every week.
  • Access all previous articles.
  • Personal chat support.
Subscribe now


$84 /year (cheaper than monthly)

  • 1-2 new articles every week.
  • Access all previous articles.
  • Personal chat support.
  • 10 extra articles over monthly.
  • 2 months extra over monthly.
  • 10-day refund.
Subscribe now



  • 1-2 new articles every week.
  • Access all previous articles.
  • Personal chat support.
  • Pay one-time.
  • No renewals.
  • Lifetime access.
  • 10-day refund.
Subscribe now


Joining will unlock all 40+ deep dives we have published so far. The following five deep dives will give you a gist of the practical topics we typically cover and how they will help you grow.

Model Compression: A Step Towards Efficient Machine Learning

Model accuracy alone (or an equivalent performance metric) rarely determines which model will be deployed.

Much of the engineering effort goes into making the model production-friendly.

This is because the model that gets shipped is rarely determined by performance alone, a common misconception.

Deployment considerations

Instead, we also consider several operational and feasibility metrics, such as:

  • Inference Latency: Time taken by the model to return a prediction.
  • Model size: The memory occupied by the model.
  • Ease of scalability, etc.

For instance, consider the image below. It compares the accuracy and size of a large neural network we developed in the article with its pruned (or reduced/compressed) versions:

Looking at these results, wouldn't you prefer deploying the model that is 72% smaller yet (almost) as accurate as the large one?

Of course, this depends on the task, but in most cases it makes little sense to deploy the large model when one of its heavily pruned versions performs equally well.

We discussed and implemented four model compression techniques in the article below, which ML teams regularly use to save thousands of dollars when running ML models in production.
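To make the idea concrete, here is a minimal sketch of one common compression technique, magnitude pruning (zeroing out the smallest-magnitude weights). It is a generic illustration on a toy weight matrix, not the exact implementation from the article, and the 72% sparsity is chosen only to mirror the size reduction mentioned above:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                 # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Prune 72% of a toy layer's weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = magnitude_prune(w, 0.72)
```

Stored in a sparse format, the pruned matrix occupies a fraction of the original memory, while the surviving large-magnitude weights tend to carry most of the layer's signal.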

Federated Learning: A Critical Step Towards Privacy-Preserving Machine Learning

There’s so much data on your mobile phone right now — images, text messages, etc.

And this is just about one user — you.

But applications can have millions of users. The amount of data we can train ML models on is unfathomable.

The problem?

This data is private.

So you cannot consolidate this data into a single place to train a model.

The solution?

Federated learning is a smart way to address this challenge.

The core idea is to ship models to devices, train the model on the device, and retrieve the updates:
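As an illustration of this ship-train-aggregate loop, here is a simplified single-process simulation of federated averaging (FedAvg), using a toy linear-regression task. The model, data sizes, and learning rate are all arbitrary choices for the sketch; real federated systems add secure aggregation, compression, and client sampling on top:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's on-device training: a few full-batch gradient steps
    on a linear-regression loss. The raw data never leaves this function."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg_round(global_w, clients):
    """One FedAvg round: ship the global weights to every client, collect
    the locally trained weights, and average them (weighted by data size)."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Simulate 3 clients whose private data follows the same true weights.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 40, 50):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(15):
    w = fed_avg_round(w, clients)
```

Note that the server only ever sees model weights, never the clients' `(X, y)` data, which is the privacy point of the whole setup.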

Lately, more and more users have started caring about their privacy.

Thus, more and more ML teams are resorting to federated learning to build ML models, while still preserving user privacy.

Of course, there are many challenges to federated learning:

  • How do we decide whether federated learning is actually suitable for us?
  • Since the model is trained on the client's device, how do we reduce its size?
  • How do we aggregate different models received from the client side?
  • [IMPORTANT] Privacy-sensitive datasets are inherently biased by personal preferences and habits. For instance, in an image-related task:
    • Some clients may only have pet images.
    • Some clients may only have car images.
    • Some clients may love to travel, so most images they have are travel-related.
    • How do we handle such skewness in client data distribution?
  • What are the considerations for federated learning?
  • Lastly, how do we implement federated learning models?

We cover everything in this deep dive on federated learning (entirely beginner-friendly):

A Beginner-friendly Guide to Multi-GPU Training

If you look at job descriptions for Applied ML or ML engineer roles on LinkedIn, most of them demand skills like the ability to train models on large datasets:

Of course, this is not something new or emerging.

But the reason they explicitly mention “large datasets” is quite simple to understand.

Businesses have more data than ever before.

Traditional single-node model training just doesn’t work because one cannot wait months to train a model.

Distributed (or multi-GPU) training is one of the most essential ways to address this.

In this deep dive, we cover some core technicalities behind multi-GPU training, how it works under the hood, and implementation details.

We also look at the key considerations for multi-GPU (or distributed) training, which, if not addressed appropriately, may lead to suboptimal performance or slow training.
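To build intuition for what happens under the hood, here is a single-process simulation of one data-parallel training step: the batch is sharded across hypothetical devices, each "device" computes a gradient on its shard, and an all-reduce (here just an average) synchronizes the replicas. This is a sketch of the arithmetic only; real multi-GPU training delegates it to frameworks such as PyTorch's DistributedDataParallel:

```python
import numpy as np

def data_parallel_step(w, X, y, n_devices=4, lr=0.1):
    """One simulated data-parallel step: shard the global batch across
    devices, compute a per-shard gradient, all-reduce (average) the
    gradients, and apply the same update on every replica."""
    X_shards = np.array_split(X, n_devices)
    y_shards = np.array_split(y, n_devices)
    grads = [2 * Xs.T @ (Xs @ w - ys) / len(ys)   # per-device gradient
             for Xs, ys in zip(X_shards, y_shards)]
    avg_grad = np.mean(grads, axis=0)             # the "all-reduce"
    return w - lr * avg_grad

def single_device_step(w, X, y, lr=0.1):
    """Reference: the same step on one device with the full batch."""
    return w - lr * 2 * X.T @ (X @ w - y) / len(y)
```

With equal-sized shards, the averaged gradient is mathematically identical to the full-batch gradient; the speedup comes from computing the shards in parallel, and real frameworks overlap the all-reduce communication with the backward pass.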

5 Must-Know Ways to Test ML Models in Production (Implementation Included)

Despite rigorously testing an ML model locally (on validation and test sets), it can be a terrible idea to instantly replace the previous model with the new one.

A more reliable strategy is to test the model in production (yes, on real-world incoming data).

While this might sound risky, ML teams do it all the time, and it isn’t that complicated.

There are many ways to do this.

In this deep dive, we discussed five must-know strategies: how they work, when to use them, their advantages and considerations, and their implementations:

The article is entirely beginner-friendly, so even if you have not deployed any model before, you should be good to go.
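For a flavor of what such a strategy can look like, here is a hypothetical minimal sketch of one widely used pattern, canary deployment, where only a small fraction of live traffic is routed to the new model while the rest stays on the proven one. The router below is an illustration, not one of the article's implementations:

```python
import random

def make_canary_router(stable_model, candidate_model,
                       canary_fraction=0.05, seed=42):
    """Return a predict() that sends a small random fraction of live
    requests to the candidate model and the rest to the stable model."""
    rng = random.Random(seed)

    def predict(x):
        model = candidate_model if rng.random() < canary_fraction else stable_model
        return model(x)

    return predict
```

In practice the split is usually sticky per user and paired with monitoring, so the candidate can be rolled back the moment its live metrics degrade.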

Bayesian Optimization for Hyperparameter Tuning

There are many issues with grid search and random search.

  • They are computationally expensive due to exhaustive search.
  • The search is restricted to the specified hyperparameter range. But what if the ideal hyperparameter exists outside that range?
  • They can ONLY perform discrete searches, even if the hyperparameter is continuous.

Bayesian optimization solves this.

It builds a probabilistic model of the objective to estimate where the best hyperparameters are likely to lie.

Both grid search and random search evaluate every hyperparameter configuration independently. Thus, they iteratively explore all configurations to find the best one.

Bayesian optimization, however, takes informed steps based on the results of previous configurations.

This lets it confidently discard non-optimal configurations, so the search converges to an optimal set of hyperparameters much faster.
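To make the loop concrete, here is a compact from-scratch sketch: a Gaussian-process surrogate is fit to all configurations tried so far, and an expected-improvement acquisition picks the next one to evaluate. The 1-D search space, RBF kernel, and all constants are illustrative assumptions, not the article's implementation:

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_cand, noise=1e-5):
    # Gaussian-process posterior mean/std at candidate points.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf_kernel(x_obs, x_cand)
    mu = K_s.T @ np.linalg.solve(K, y_obs)
    v = np.linalg.solve(K, K_s)
    var = 1.0 - np.sum(K_s * v, axis=0)   # diag of prior K_ss is 1 for RBF
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best_y, xi=0.01):
    # How much improvement over the best score so far do we expect?
    imp = mu - best_y - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, bounds, n_init=3, n_iter=12, seed=0):
    """Maximize f over a 1-D interval with GP + expected improvement."""
    rng = np.random.default_rng(seed)
    x_obs = rng.uniform(bounds[0], bounds[1], size=n_init)
    y_obs = np.array([f(x) for x in x_obs])
    cand = np.linspace(bounds[0], bounds[1], 200)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x_obs, y_obs, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y_obs.max()))]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next))
    i = np.argmax(y_obs)
    return x_obs[i], y_obs[i]
```

The key contrast with grid/random search sits in the loop body: each `x_next` is chosen using every previous result, so unpromising regions stop being sampled. Libraries such as scikit-optimize and Optuna wrap this idea for real multi-dimensional hyperparameter spaces.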

The efficacy of Bayesian Optimization is evident from the image below.

Bayesian optimization leads the model to the same F1 score but:

  • it takes 7x fewer iterations
  • it executes 5x faster
  • it reaches the optimal configuration earlier

The idea behind Bayesian optimization struck me as extremely compelling when I first learned it a few years back.

Learning this optimized approach to hyperparameter tuning and using it has been extremely helpful to me in building large ML models quickly, and it will be immensely valuable to you if you envision doing the same.

Assuming you have never had any experience with Bayesian optimization before, the article covers:

  • Issues with traditional hyperparameter tuning approaches.
  • What is the motivation for Bayesian optimization?
  • How does Bayesian optimization work?
  • The intuition behind Bayesian optimization.
  • Results from the research paper that proposed Bayesian optimization for hyperparameter tuning.
  • A hands-on Bayesian optimization experiment.
  • Comparing Bayesian optimization with grid search and random search.
  • Analyzing the results of Bayesian optimization.
  • Best practices for using Bayesian optimization.


Data scientists and ML engineers love the Daily Dose of Data Science

Join the Daily Dose of Data Science Today!

A daily column with insights, observations, tutorials, and best practices on data science.

Get Started!