A crash course on RAG systems - Part 2
Beginner-friendly and with implementation.

TODAY'S ISSUE
Last week, we started a crash course on building RAG systems.
Part 2 is now available, where we are building on the foundations laid in Part 1.
Read here: "A Crash Course on Building RAG Systems - Part 2 (With Implementation)".
Over the last few weeks, we have spent plenty of time understanding the key components of real-world NLP systems (like the deep dives on bi-encoders and cross-encoders for context pair similarity scoring).
RAG is another key NLP system that gained massive attention because it addresses a core limitation of LLMs: keeping responses grounded in current, domain-specific knowledge the model was never trained on.
More specifically, if you know how to build a reliable RAG system, you can often bypass the complexity and cost of fine-tuning LLMs.
That's a considerable cost saving for enterprises.
And at the end of the day, all businesses care about impact. That's it!
Thus, the objective of this crash course is to help you implement reliable RAG systems, understand the underlying challenges, and develop expertise in building RAG apps on LLMs, which every industry cares about now.
Of course, if you have never worked with LLMs, that's okay. We cover everything in a practical and beginner-friendly way.
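To make the idea concrete, here is a minimal sketch of the retrieve-then-generate pattern that RAG is built on. The corpus, the word-overlap scorer, and the function names are toy assumptions for illustration; a real system would use embedding models (e.g. bi-encoders) and a vector store.

```python
import re

def tokens(text):
    """Lowercase word set for a toy relevance score."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (toy scorer,
    standing in for embedding similarity in a real RAG system)."""
    q = tokens(query)
    ranked = sorted(corpus, key=lambda d: -len(q & tokens(d)))
    return ranked[:k]

def build_prompt(query, docs):
    """Pack retrieved context into the prompt instead of fine-tuning
    the model on that knowledge."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "RAG retrieves relevant documents at query time.",
    "Fine-tuning updates model weights on new data.",
    "Bi-encoders embed query and document independently.",
]
query = "How does RAG retrieve documents?"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
print(prompt)
```

The prompt would then be sent to any off-the-shelf LLM, which is exactly why a reliable retrieval step can substitute for fine-tuning: the knowledge lives in the corpus, not in the model weights.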
We have been making great progress in extending the context window of LLMs.
But how?
We covered techniques that help us unlock larger context windows earlier this week.
Read the techniques to extend the context length of LLMs here.
Recently, OpenAI released Swarm.
Itβs an open-source framework designed to manage and coordinate multiple AI agents in a highly customizable way.
AI Agents are autonomous systems that can reason, think, plan, figure out the relevant sources and extract information from them when needed, take actions, and even correct themselves if something goes wrong.
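The reason-act-observe-correct cycle described above can be sketched as a toy loop. Everything here is a hypothetical stand-in (the `tools` dict, the name-matching "reasoning" step, the error check); real frameworks like Swarm coordinate multiple such agents with actual LLM calls.

```python
def run_agent(goal, tools, max_steps=5):
    """Toy agent loop: reason (pick a tool), act (call it),
    observe (record the result), self-correct (retry on error)."""
    history = []
    for _ in range(max_steps):
        # "Reason": pick the first tool whose name appears in the goal.
        tool = next((t for name, t in tools.items() if name in goal), None)
        if tool is None:
            return history, "no suitable tool"
        observation = tool(goal)        # "Act"
        history.append(observation)     # "Observe"
        if "error" not in observation:  # "Self-correct": retry otherwise
            return history, observation
    return history, "gave up"

# Hypothetical tool: a stubbed web search.
tools = {"search": lambda g: f"results for: {g}"}
history, answer = run_agent("search the web for RAG papers", tools)
print(answer)
```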
We published a practical and hands-on demo of this in the newsletter, where we built an internet research assistant app.
The demo is shown below: