
Time Complexity of 10 ML Algorithms

(training and inference)...in a single frame.

Avi Chawla

TODAY'S DAILY DOSE OF DATA SCIENCE

Time Complexity of 10 ML Algorithms

This visual depicts the run-time complexity of the 10 most popular ML algorithms.

Why care?

Everyone is a big fan of sklearn implementations.

It takes just two (max three) lines of code to run any ML algorithm with sklearn.
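For instance, here's a minimal sketch of what that looks like (the dataset and model below are just placeholders; any sklearn estimator follows the same fit/predict pattern):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Line 1: create the estimator; line 2: fit it (chained here for brevity).
model = RandomForestClassifier(n_estimators=100).fit(X, y)
preds = model.predict(X)  # inference is one more line
```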

However, this simplicity means many people skip over how an algorithm actually works and the data-specific conditions under which it is practical to use.

For instance, you cannot use SVM or t-SNE on a big dataset:

  • SVMโ€™s run-time grows cubically with the total number of samples.
  • t-SNEโ€™s run-time grows quadratically with the total number of samples.
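To make the scaling concrete, here's a rough back-of-the-envelope sketch (just the growth terms, ignoring constant factors, dimensionality, and implementation details):

```python
# Back-of-the-envelope scaling: kernel SVM training grows roughly as n^3,
# vanilla t-SNE as n^2 (constant factors ignored).
for n in [1_000, 2_000, 4_000]:
    svm_cost = n ** 3
    tsne_cost = n ** 2
    print(f"n={n:>5}: SVM ~ {svm_cost:.1e} ops, t-SNE ~ {tsne_cost:.1e} ops")

# Doubling n multiplies the SVM cost by ~8x and the t-SNE cost by ~4x.
```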

Another advantage of understanding the run-time is that it helps us understand how an algorithm works end-to-end.

That said, we made a few assumptions in the above table:

  • In a random forest, all decision trees may have different depths. We have assumed them to be equal.
  • During inference in kNN, we first find the distance to all data points. This gives a list of distances of size n (total samples).
    • Then, we find the k-smallest distances from this list.
    • The run-time to determine the k-smallest values may depend on the implementation (see the sketch after this list).
      • Sorting and selecting the k-smallest values will be O(n log n).
      • But if we use a priority queue, it will take O(n log k).
  • In t-SNE, there's a learning step. However, the major run-time comes from computing the pairwise similarities in the high-dimensional space. You can learn how t-SNE works here: tSNE article.
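On the kNN point above, here's a minimal sketch of the two ways to pick the k smallest distances (the data here is random and just a placeholder):

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 5))   # n training samples, d = 5 features
query = rng.normal(size=5)               # one query point
k = 5

# O(n * d): distance from the query to every training point.
dists = np.linalg.norm(X_train - query, axis=1)

# Option 1: sort everything, then take the first k -> O(n log n).
k_smallest_sorted = np.sort(dists)[:k]

# Option 2: heap-based selection of the k smallest -> O(n log k).
k_smallest_heap = heapq.nsmallest(k, dists)
```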

Today, as an exercise, I would encourage you to derive these run-time complexities yourself.

This activity will give you confidence in algorithmic understanding.

👉 Over to you: Can you tell the inference run-time of KMeans Clustering?
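As a hint, here's a minimal sketch of what KMeans inference (cluster assignment) looks like, assuming standard Lloyd-style assignment with k centroids in d dimensions; deriving the run-time from it is the exercise:

```python
import numpy as np

def assign_clusters(X, centroids):
    """Assign each of the n points in X (shape (n, d)) to its nearest
    of the k centroids (shape (k, d))."""
    # Distance from every point to every centroid: shape (n, k),
    # where each entry costs O(d) to compute.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)  # index of the nearest centroid per point
```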

Thanks for reading!

IN CASE YOU MISSED IT

Build a Reasoning Model Like DeepSeek-R1

If you have used DeepSeek-R1 (or any other reasoning model), you must have noticed that it autonomously allocates thinking time before producing a response.

Last week, we shared how to embed reasoning capabilities into any LLM.

We trained our own reasoning model like DeepSeek-R1 (with code).

To do this, we used:

  • UnslothAI for efficient fine-tuning.
  • Llama 3.1-8B as the LLM to add reasoning capabilities to.

Find the implementation and a detailed walkthrough in the newsletter here →

ROADMAP

From local ML to production ML

Once a model has been trained, we move to productionizing and deploying it.

If ideas related to production and deployment intimidate you, here's a quick roadmap for you to upskill (assuming you know how to train a model):

This roadmap should set you up pretty well, even if you have NEVER deployed a single model before, since everything is practical and implementation-driven.

Published on Mar 18, 2025