
5 Powerful MCP Servers

...to give superpowers to your AI Agents.

Avi Chawla

TODAY’S DAILY DOSE OF DATA SCIENCE

5 Powerful MCP Servers

Integrating a tool/API with Agents demands:

  • reading docs
  • writing code
  • updating the code, etc.

To simplify this, platforms now offer MCP servers. Developers can plug them into Agents and use their APIs instantly. We also covered this in a recent newsletter issue (read here).
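
For context, here's a minimal sketch of what happens under the hood, using the official `mcp` Python SDK. The `call_mcp_tool` helper is our own convenience wrapper (not part of the SDK); the examples below reuse it:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def call_mcp_tool(command, args, tool, arguments, env=None):
    """Launch an MCP server as a subprocess, call one tool, return the result."""
    params = StdioServerParameters(command=command, args=args, env=env)
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Every MCP server advertises its tools and their input schemas.
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])
            return await session.call_tool(tool, arguments=arguments)
```

A real Agent framework keeps the session open and lets the LLM decide which tool to call; this one-shot helper is just for illustration.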

Below, let's look at 5 incredibly powerful MCP servers.

#1) Firecrawl MCP server

This adds powerful web scraping capabilities to Cursor, Claude, and any other LLM client, using Firecrawl.


Tools include:

  • Scraping
  • Crawling
  • Deep research
  • Extracting structured data
  • and more

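For instance, scraping a page through the server with the `call_mcp_tool` helper sketched earlier (the `firecrawl-mcp` npm package is real; the exact tool name and argument schema are assumptions, so check the `list_tools()` output):

```python
result = asyncio.run(call_mcp_tool(
    command="npx", args=["-y", "firecrawl-mcp"],
    env={"FIRECRAWL_API_KEY": "fc-..."},   # placeholder key
    tool="firecrawl_scrape",               # assumed tool name
    arguments={"url": "https://example.com", "formats": ["markdown"]},
))
```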

#2) Browserbase MCP server

This allows Agents to initiate a browser session with Browserbase.


Tools include:

  • Create browser session
  • Navigate to a URL
  • Take screenshot
  • and more

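A rough sketch with the same helper (the package and tool names here are assumptions; the server's `list_tools()` output is the source of truth):

```python
result = asyncio.run(call_mcp_tool(
    command="npx", args=["-y", "@browserbasehq/mcp-server-browserbase"],  # assumed package
    env={"BROWSERBASE_API_KEY": "bb-...", "BROWSERBASE_PROJECT_ID": "..."},
    tool="browserbase_create_session",     # assumed tool name
    arguments={},
))
```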

#3) Opik MCP server


Built by Comet, this enables traceability in AI Agents and lets you monitor your LLM applications.

Tools include:

  • Creating projects
  • Enabling tracing
  • Getting tracing stats
  • and more

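A hypothetical call through the same helper (the launch command, tool name, and arguments are all assumptions here; consult the Opik MCP docs for the real schema):

```python
result = asyncio.run(call_mcp_tool(
    command="npx", args=["-y", "opik-mcp"],   # assumed package name
    env={"OPIK_API_KEY": "..."},
    tool="create-project",                    # assumed tool name
    arguments={"name": "agent-traces"},
))
```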

#4) Brave MCP server


This enables Agents to use the Brave Search API for both web and local search capabilities.

Tools include:

  • Brave web search
  • Brave local search
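
With the helper from earlier, a web search might look like this (the package is the reference Brave Search server from the MCP servers repo; argument names are best-effort assumptions):

```python
result = asyncio.run(call_mcp_tool(
    command="npx", args=["-y", "@modelcontextprotocol/server-brave-search"],
    env={"BRAVE_API_KEY": "..."},
    tool="brave_web_search",                  # assumed tool name
    arguments={"query": "best MCP servers", "count": 5},
))
```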

#5) Sequential Thinking MCP server


This enables dynamic and reflective problem-solving through a structured thinking process: the Agent works through numbered thoughts, can revise earlier ones, and decides when it has reasoned enough.
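
Using the same helper, a single "thought" step might look like this (modeled on the reference `server-sequential-thinking` from the MCP servers repo; tool and argument names are best-effort assumptions):

```python
result = asyncio.run(call_mcp_tool(
    command="npx", args=["-y", "@modelcontextprotocol/server-sequential-thinking"],
    tool="sequentialthinking",                # assumed tool name
    arguments={
        "thought": "First, decompose the task into sub-goals.",
        "thoughtNumber": 1,
        "totalThoughts": 3,
        "nextThoughtNeeded": True,            # the Agent keeps calling until False
    },
))
```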

Which ones are your favorite MCP servers? Let us know!

IN CASE YOU MISSED IT

KV caching in LLMs, explained visually

KV caching is a popular technique to speed up LLM inference.

To get some perspective, look at the inference speed difference from our demo:

  • with KV caching → 9 seconds
  • without KV caching → 40 seconds (~4.5x slower, and this gap grows as more tokens are produced).

In short: during autoregressive decoding, the keys and values computed for earlier tokens are cached and reused, so each new token only computes attention against the cached tensors instead of re-processing the entire prefix.
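
To make the mechanics concrete, here's a toy numpy sketch (not the demo's actual code): each decoding step appends one new key/value row to a cache and reuses all earlier rows, instead of recomputing K and V for the whole prefix.

```python
import numpy as np

d = 64  # head dimension (toy value)

def attend(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = (K @ q) / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

K_cache = np.zeros((0, d))
V_cache = np.zeros((0, d))

for step in range(5):
    x = np.random.randn(d)         # hidden state of the newest token
    q, k, v = x, x, x              # stand-ins for the Q/K/V projections
    # With KV caching: append one row per step, reuse all earlier rows.
    # Without it, K and V for the whole prefix would be recomputed here.
    K_cache = np.vstack([K_cache, k])
    V_cache = np.vstack([V_cache, v])
    out = attend(q, K_cache, V_cache)  # attention over the cached prefix
```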

We covered this in detail in a recent issue here →

ROADMAP

16 techniques to build real-world RAG systems

On paper, implementing a RAG system seems simple—connect a vector database, process documents, embed the data, embed the query, query the vector database, and prompt the LLM.
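
For reference, here's that naive pipeline as a minimal sketch; the `embed` function is a stand-in for a real embedding model, and the "vector database" is just an in-memory matrix:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding model: pseudo-random unit vectors for the demo.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

# "Process documents" + "embed the data" -> an in-memory vector index.
docs = [
    "KV caching reuses past keys/values to speed up decoding.",
    "MCP servers give Agents instant access to external tools.",
]
index = np.stack([embed(d) for d in docs])

# "Embed the query" + "query the vector database".
query = "How do Agents use external tools?"
scores = index @ embed(query)            # cosine similarity (unit vectors)
context = docs[int(scores.argmax())]     # retrieve the best-matching chunk

# "Prompt the LLM" with the retrieved context.
prompt = f"Context:\n{context}\n\nQuestion: {query}"
```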

But in practice, turning a prototype into a high-performance application is an entirely different challenge.

We published a two-part guide that covers 16 practical techniques to build real-world RAG systems:

Published on Apr 5, 2025