TODAY'S ISSUE
TOGETHER WITH ASSEMBLYAI
AssemblyAI Universal-2: Speech Recognition at Superhuman Accuracy
It's crazy hard to train reliable transcription models on speech/audio data since so many nuances go into the data, like accents, pauses, small utterances, murmurs, you name it.
AssemblyAI first trained Universal-1 on 12.5 million hours of audio, outperforming every other model in the industry (from Google, OpenAI, etc.) across 15+ languages.
Now, they have released Universal-2, their most advanced speech-to-text model yet.
Here's how Universal-2 compares with Universal-1:
- 24% improvement in proper noun recognition
- 21% improvement in alphanumeric accuracy
- 15% better text formatting
TODAY'S DAILY DOSE OF DATA SCIENCE
[Hands-on] Training Autoencoders
One can do so many things with Autoencoders:
- Dimensionality reduction.
- Anomaly detection: if the reconstruction error is high, something's fishy!
- Detect multivariate covariate shift, which we discussed here.
- Data denoising (train on noisy inputs to reconstruct the clean data).
They are simple yet so powerful!
At their core, Autoencoders have two main parts:
- Encoder: Compresses the input into a dense representation (latent space).
- Decoder: Reconstructs the input from this dense representation.
And the idea is to make the reconstructed output as close to the original input as possible:
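With encoder E and decoder D, training minimizes the reconstruction error, commonly the mean squared error (the same loss we'll use in the code below):

L(x) = ‖ x − D(E(x)) ‖²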
On a side note, here's how the denoising autoencoder works:
- During training, add random noise to the original input, and train an autoencoder to predict the original input.
- During inference, provide a noisy input and the network reconstructs a clean version of it.
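For intuition, here's a minimal sketch of one such training step (the Gaussian noise and the 0.3 noise level are illustrative choices, not prescribed):

```python
import torch
import torch.nn.functional as F

def denoising_step(encoder, decoder, x):
    """One training step of a denoising autoencoder."""
    noisy_x = x + 0.3 * torch.randn_like(x)  # corrupt the input with Gaussian noise
    x_hat = decoder(encoder(noisy_x))        # reconstruct from the noisy version
    return F.mse_loss(x_hat, x)              # target is the CLEAN input
```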
Coming back to the topic...
Below, let's quickly implement an autoencoder:
We'll use PyTorch Lightning for this.
First, we define our autoencoder!
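Here's a minimal sketch of what that LightningModule could look like (the layer sizes and the 3-dimensional latent space are illustrative choices):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl


class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Encoder: compress the 28x28 image into a small latent vector
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 64),
            nn.ReLU(),
            nn.Linear(64, 3),
        )
        # Decoder: reconstruct the image from the latent vector
        self.decoder = nn.Sequential(
            nn.Linear(3, 64),
            nn.ReLU(),
            nn.Linear(64, 28 * 28),
        )

    def training_step(self, batch, batch_idx):
        x, _ = batch                  # labels are unused
        x = x.view(x.size(0), -1)     # flatten the image
        z = self.encoder(x)           # dense latent representation
        x_hat = self.decoder(z)       # reconstruction
        loss = F.mse_loss(x_hat, x)   # reconstruction error
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```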
Next, we define our dataset (MNIST):
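For instance, using torchvision's MNIST (the transform and batch size below are our choices):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download MNIST and wrap it in a DataLoader
train_ds = datasets.MNIST(
    root="data", train=True, download=True,
    transform=transforms.ToTensor(),
)
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
```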
Finally, we train the model in two lines of code with PyTorch Lightning:
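Assuming the sketch above, those two lines would look like this:

```python
# Instantiate the model and let Lightning handle the training loop
autoencoder = LitAutoEncoder()

trainer = pl.Trainer(max_epochs=5)
trainer.fit(autoencoder, train_dataloaders=train_loader)
```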
The Lightning Trainer automates 40+ training optimization techniques, including:
- Epoch and batch iteration.
- optimizer.step(), loss.backward(), etc.
- Calling model.eval() and enabling/disabling grads during evaluation.
- Checkpoint saving and loading.
- Multi-GPU support.
- 16-bit precision.
Now that the model has been trained, we can visualize its performance.
Let's encode/decode an image using our trained model below:
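One way to do this, using the dataset from earlier (the matplotlib plotting is our addition):

```python
import matplotlib.pyplot as plt

autoencoder.eval()
x, _ = train_ds[0]  # grab one image, shape (1, 28, 28)
with torch.no_grad():
    z = autoencoder.encoder(x.view(1, -1))  # encode to latent space
    x_hat = autoencoder.decoder(z)          # decode back to pixel space

fig, axes = plt.subplots(1, 2)
axes[0].imshow(x.squeeze(), cmap="gray")
axes[0].set_title("Original")
axes[1].imshow(x_hat.view(28, 28), cmap="gray")
axes[1].set_title("Reconstruction")
plt.show()
```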
Done!
And that's how you train an autoencoder.
As mentioned earlier, autoencoders are incredibly helpful in detecting multivariate covariate shifts.
This is important to address since almost all real-world ML models gradually degrade in performance due to covariate shift.
It is a serious problem: we trained the model on one distribution, but in production it is predicting on another.
Autoencoders help in addressing this, and we discussed it here.
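As a rough sketch of the idea (the function and threshold below are hypothetical, not from that article):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def reconstruction_error(autoencoder, x):
    """Per-sample reconstruction error for a batch of flattened inputs."""
    x = x.view(x.size(0), -1)
    x_hat = autoencoder.decoder(autoencoder.encoder(x))
    return F.mse_loss(x_hat, x, reduction="none").mean(dim=1)

# Hypothetical usage: if production data reconstructs much worse than
# the training data did, the input distribution has likely shifted.
# drifted = reconstruction_error(autoencoder, prod_batch).mean() > threshold
```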
Over to you: What are some other use cases of Autoencoders?
IN CASE YOU MISSED IT
Build a Multi-agent Research Assistant With SwarmZero
After using OpenAI's Swarm, we ran into several limitations.
One major shortcoming is that it isn't suited for production use cases since the project is only meant for experimental purposes.
SwarmZero solves this.
We recently shared a practical and hands-on demo of this.
We'll build a PerplexityAI-like research assistant app that:
- Accepts a user query.
- Searches the web about it.
- And turns it into a well-crafted article, which can be saved as a PDF, a Google Doc, a Confluence page, and more.
Learn how to build multi-agent applications with SwarmZero
THAT'S A WRAP
No-Fluff Industry ML Resources to Succeed in DS/ML Roles
At the end of the day, all businesses care about impact. That's it!
- Can you reduce costs?
- Drive revenue?
- Can you scale ML models?
- Predict trends before they happen?
We have discussed several topics (with implementations) in the past that align with these goals.
Here are some of them:
- Learn sophisticated graph architectures and how to train them on graph data in this crash course.
- So many real-world NLP systems rely on pairwise context scoring. Learn scalable approaches here.
- Run large models on small devices using Quantization techniques.
- Learn how to generate prediction intervals or sets with strong statistical guarantees for increasing trust using Conformal Predictions.
- Learn how to identify causal relationships and answer business questions using causal inference in this crash course.
- Learn how to scale and implement ML model training in this practical guide.
- Learn 5 techniques with implementation to reliably test ML models in production.
- Learn how to build and implement privacy-first ML systems using Federated Learning.
- Learn 6 techniques with implementation to compress ML models.
All these resources will help you cultivate the key skills that businesses care about the most.
SPONSOR US
Advertise to 450k+ data professionals
Our newsletter puts your products and services directly in front of an audience that matters: thousands of leaders, senior data scientists, machine learning engineers, data analysts, and other data professionals around the world.