Classical ML and Deep Learning
A collection of 32 posts
Causality (Part 2)
A guide to building robust decision-making systems in businesses with causal inference.
Causality (Part 1)
A guide to building robust decision-making systems in businesses with causal inference.
Model Interpretability (Part 3)
A deep dive into interpretability methods, why they matter, along with their intuition, considerations, how to avoid being misled, and code.
Model Interpretability (Part 2)
A deep dive into interpretability methods, why they matter, along with their intuition, considerations, how to avoid being misled, and code.
Model Interpretability (Part 1)
A deep dive into PDPs and ICE plots, along with their intuition, considerations, how to avoid being misled, and code.
Model Calibration (Part 2)
How can we make ML models reflect true probabilities in their predictions?
Federated Learning: A Critical Step Towards Privacy-Preserving ML
A practical guide to real-world ML model development with a primary focus on data privacy.
Model Calibration (Part 1)
How can we make ML models reflect true probabilities in their predictions?
Conformal Predictions: Build Confidence in Your ML Model's Predictions
A critical step towards building and using ML models reliably.
Implementing KANs From Scratch Using PyTorch
A step-by-step demonstration of an emerging neural network architecture — KANs.
A Beginner-Friendly Introduction to Kolmogorov–Arnold Networks (KANs)
What are KANs, how are they trained, and what makes them so powerful?
15 Ways to Optimize Neural Network Training (With Implementation)
Techniques that help you go from a "machine learning model developer" to a "machine learning engineer."
A Detailed and Beginner-Friendly Introduction to PyTorch Lightning: The Supercharged PyTorch
Immensely simplify deep learning model building with PyTorch Lightning.
A Mathematical Deep Dive Into the Curse of Dimensionality
Mathematically understanding the surprising phenomena that arise when dealing with data in high dimensions.
Why Is Bagging So Ridiculously Effective at Variance Reduction?
Diving into the mathematical motivation for using bagging.
Gaussian Mixture Models (GMMs)
Gaussian Mixture Models: A more robust alternative to KMeans.
HDBSCAN: The Supercharged Version of DBSCAN (An Algorithmic Deep Dive)
A beginner-friendly introduction to HDBSCAN clustering and how it is superior to DBSCAN clustering.
DBSCAN++: The Faster and Scalable Alternative to DBSCAN Clustering
Addressing major limitations of the most popular density-based clustering algorithm — DBSCAN.
Formulating and Implementing the t-SNE Algorithm From Scratch
The most extensive visual guide to never forget how t-SNE works.
Bayesian Optimization for Hyperparameter Tuning
The caveats of grid search and random search and how Bayesian optimization addresses them.
Formulating and Implementing XGBoost From Scratch
An extensive visual guide to never forget how XGBoost works.
Formulating the Principal Component Analysis (PCA) Algorithm From Scratch
Approaching PCA as an optimization problem.
You Are Probably Building Inconsistent Classification Models Without Even Realizing
The limitations of always using cross-entropy loss in ordinal datasets.
Why Is R-squared a Flawed Regression Metric?
The lesser-known limitations of the R-squared metric.
Generalized Linear Models (GLMs): The Supercharged Linear Regression
The limitations of linear regression and how GLMs solve them.
Why Does Sklearn’s Logistic Regression Have No Learning Rate Hyperparameter?
What are we missing here?
Why Do We Use log-loss To Train Logistic Regression?
The origin of log-loss.
Why Do We Use Sigmoid in Logistic Regression?
The origin of the Sigmoid function and a guide on modeling classification datasets.
The Probabilistic Origin of Regularization
Where did the regularization term come from?