RAG vs Graph RAG

...explained visually.

TODAY’S DAILY DOSE OF DATA SCIENCE

RAG vs Graph RAG visually explained

We created the following visual, which illustrates the difference between traditional RAG and Graph RAG.

Today let’s understand how it works.

On a side note, we've already covered Graph RAG in much more detail, with implementation, in Part 7 of our RAG crash course; read it here: Graph RAG deep dive


Imagine you have a lengthy document, such as a biography of an individual (X), where each chapter discusses one of his accomplishments, among other details.

For example:

  • Chapter 1: Discusses Accomplishment-1.
  • Chapter 2: Discusses Accomplishment-2.
  • ...
  • Chapter 10: Discusses Accomplishment-10.

Now, I want you to understand the next part carefully!

Let's say you've built a traditional RAG system over this document and use it to summarize all these accomplishments.

This might not be possible with traditional retrieval, since the task requires the entire context...

...but you might only be fetching the top-k most relevant chunks from the vector database.

Moreover, since traditional RAG systems retrieve each chunk independently, the LLM is often left to infer the connections between these chunks (provided the chunks are retrieved at all).
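To make this concrete, here is a minimal, self-contained sketch of top-k retrieval, not tied to any specific vector DB or embedding model. The embeddings are random stand-ins, but the mechanics are the same: only k chunks ever reach the LLM, so a "summarize all accomplishments" query is answered from partial context.

```python
import numpy as np

# Toy setup: one "chunk" per chapter. In a real pipeline these would be
# embedded with a sentence-embedding model and stored in a vector database.
chunks = [f"Chapter {i}: X achieved Accomplishment-{i} ..." for i in range(1, 11)]

rng = np.random.default_rng(0)
chunk_embs = rng.normal(size=(len(chunks), 384))           # stand-in embeddings
chunk_embs /= np.linalg.norm(chunk_embs, axis=1, keepdims=True)

def retrieve_top_k(query_emb: np.ndarray, k: int = 3) -> list[str]:
    """Return the k chunks with the highest cosine similarity to the query."""
    scores = chunk_embs @ query_emb
    top_idx = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top_idx]

query_emb = rng.normal(size=384)
query_emb /= np.linalg.norm(query_emb)

# Only k=3 of the 10 accomplishment chapters reach the LLM.
for chunk in retrieve_top_k(query_emb, k=3):
    print(chunk)
```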

Graph RAG solves this problem.

The idea is to first create a graph (entities and relationships) from the documents and then traverse that graph during the retrieval phase.

Let's see how Graph RAG solves the above problems.

First, a system (typically an LLM) will create the graph by understanding the biography (unstructured text).
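For illustration, here is a hedged sketch of that extraction step. The `call_llm` function and the prompt are placeholders for whatever chat-completion client and prompt template you actually use (they are not from any specific library); only the triple format matters.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for your chat-completion client of choice
    (OpenAI, Anthropic, a local model, ...). Returns a canned answer here
    so the sketch runs end to end."""
    return '[["X", "accomplished", "Accomplishment-1"], ["X", "born_in", "1970"]]'

EXTRACTION_PROMPT = """Extract (subject, relation, object) triples from the text below.
Return them as a JSON list of 3-element lists.

Text:
{text}
"""

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Ask the LLM for triples and parse its JSON response."""
    raw = call_llm(EXTRACTION_PROMPT.format(text=text))
    return [tuple(t) for t in json.loads(raw)]

# e.g. extract_triples(chapter_1_text) might yield:
# [("X", "accomplished", "Accomplishment-1"), ("X", "born_in", "1970"), ...]
```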

This will produce a full graph of entities (nodes) and relationships (edges), and the subgraph around accomplishments will look something like this (a construction sketch in code follows the list):

  • X → <accomplished> → Accomplishment-1.
  • X → <accomplished> → Accomplishment-2.
  • ...
  • X → <accomplished> → Accomplishment-10.
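Here is the construction sketch referenced above. We use `networkx` purely as one convenient, assumed choice for holding the entities and relationships; any graph store would work.

```python
import networkx as nx

# Triples as they might come out of the extraction step above.
triples = [("X", "accomplished", f"Accomplishment-{i}") for i in range(1, 11)]
triples += [("X", "born_in", "1970"), ("X", "worked_at", "Some Lab")]  # other facts

# Directed graph: nodes are entities, edges carry the relationship label.
G = nx.DiGraph()
for subj, rel, obj in triples:
    G.add_edge(subj, obj, relation=rel)

print(G.number_of_nodes(), "entities,", G.number_of_edges(), "relationships")
```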

When summarizing these accomplishments, the retrieval phase can do a graph traversal to fetch all the relevant context related to X's accomplishments.

This context, when passed to the LLM, produces a more coherent and complete answer than traditional RAG.
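Here is a minimal sketch of that traversal, continuing from the toy graph above (rebuilt here so the snippet runs standalone):

```python
import networkx as nx

# Rebuild the small graph from the previous sketch.
G = nx.DiGraph()
for i in range(1, 11):
    G.add_edge("X", f"Accomplishment-{i}", relation="accomplished")
G.add_edge("X", "Some Lab", relation="worked_at")

def retrieve_by_relation(graph: nx.DiGraph, entity: str, relation: str) -> list[str]:
    """Walk the entity's outgoing edges and keep those matching the relation."""
    return [
        nbr for nbr, attrs in graph[entity].items()
        if attrs.get("relation") == relation
    ]

accomplishments = retrieve_by_relation(G, "X", "accomplished")

# The context handed to the LLM now covers ALL accomplishments, not just top-k chunks.
context = "\n".join(f"X accomplished {a}" for a in accomplishments)
prompt = f"Using only the context below, summarize X's accomplishments.\n\n{context}"
```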

Another reason why Graph RAG systems are so effective is because LLMs are inherently adept at reasoning with structured data.

Graph RAG supplies that structure through its retrieval mechanism.

On a side note, even search engines now actively use Graph RAG systems due to their high utility.

We've already covered Graph RAG in much more detail, with implementation, in Part 7 of our RAG crash course; read it here: Graph RAG deep dive.

Moreover, our full RAG crash course discusses RAG from the basics to beyond.

Thanks for reading.

MODEL TRAINING OPTIMIZATION

Multi-GPU training (A Practical Guide)

If you look at job descriptions for Applied ML or ML engineer roles on LinkedIn, most of them demand skills like the ability to train models on large datasets.

Of course, this is not something new or emerging.

But the reason they explicitly mention “large datasets” is quite simple to understand.

Businesses have more data than ever before.

Traditional single-node model training just doesn’t work because one cannot wait months to train a model.

Distributed (or multi-GPU) training is one of the most essential ways to address this.

Here, we covered the core technicalities behind multi-GPU training, how it works under the hood, and implementation details.

We also look at the key considerations for multi-GPU (or distributed) training, which, if not addressed appropriately, may lead to suboptimal performance or slow training.
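If you haven't seen it before, here is a minimal, hedged sketch of what multi-GPU data-parallel training can look like with PyTorch's DistributedDataParallel. The model and dataset are placeholders, not the setup from the guide, which goes much deeper into the considerations above.

```python
# Minimal data-parallel sketch with PyTorch DDP. Launch with:
#   torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def main():
    dist.init_process_group(backend="nccl")            # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 1).cuda(local_rank)    # placeholder model
    model = DDP(model, device_ids=[local_rank])         # gradients sync automatically

    data = TensorDataset(torch.randn(10_000, 128), torch.randn(10_000, 1))
    sampler = DistributedSampler(data)                  # each rank sees a distinct shard
    loader = DataLoader(data, batch_size=256, sampler=sampler)

    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                        # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()             # all-reduce happens here
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```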

Learn how to train models with multi-GPU training →

THAT'S A WRAP

No-Fluff Industry ML Resources to Succeed in DS/ML Roles

At the end of the day, all businesses care about impact. That’s it!

  • Can you reduce costs?
  • Drive revenue?
  • Can you scale ML models?
  • Predict trends before they happen?

We have discussed several other topics (with implementations) in the past that align with these goals.

Here are some of them:

  • Learn sophisticated graph architectures and how to train them on graph data in this crash course.
  • So many real-world NLP systems rely on pairwise context scoring. Learn scalable approaches here.
  • Run large models on small devices using Quantization techniques.
  • Learn how to generate prediction intervals or sets with strong statistical guarantees for increasing trust using Conformal Predictions.
  • Learn how to identify causal relationships and answer business questions using causal inference in this crash course.
  • Learn how to scale and implement ML model training in this practical guide.
  • Learn 5 techniques with implementation to reliably test ML models in production.
  • Learn how to build and implement privacy-first ML systems using Federated Learning.
  • Learn 6 techniques with implementation to compress ML models.

All these resources will help you cultivate key skills that businesses and companies care about the most.

Our newsletter puts your products and services directly in front of an audience that matters — thousands of leaders, senior data scientists, machine learning engineers, data analysts, etc., around the world.

Get in touch today →


Join the Daily Dose of Data Science Today!

A daily column with insights, observations, tutorials, and best practices on data science.
