Function calling & MCP for LLMs

...explained visually and with code.

TODAY’S DAILY DOSE OF DATA SCIENCE

Function calling & MCP for LLMs

Before MCP became as mainstream as it is right now, most AI workflows relied on traditional Function Calling.

Now, MCP (Model Context Protocol) is introducing a shift in how developers structure tool access and orchestration for Agents.

Here’s a visual that explains Function calling & MCP:

Let’s dive in to learn more!


What is Function Calling?

Function Calling is a mechanism that allows an LLM to recognize what tool it needs based on the user's input and when to invoke it.

Here’s how it typically works:

  • The LLM receives a prompt from the user.
  • The LLM decides which tool it needs and with what arguments.
  • Your application code accepts the tool-call request from the LLM and prepares a function call.
  • The function call (with parameters) is passed to a backend service that handles the actual execution.

Let’s understand this in action real quick!

First, let’s define a tool function, get_stock_price. It uses the yfinance library to fetch the latest closing price for a specified stock ticker:
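A minimal sketch of such a tool, assuming the yfinance package is installed, could look like this:

```python
import yfinance as yf

def get_stock_price(ticker: str) -> float:
    """Return the latest closing price for the given stock ticker."""
    stock = yf.Ticker(ticker)
    # history() returns a DataFrame of recent OHLCV rows; take the last close
    daily = stock.history(period="1d")
    return float(daily["Close"].iloc[-1])
```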

Next, we prompt an LLM (served with Ollama) and pass the tools the model can access for external information (if needed):
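A sketch of this step, assuming a recent version of the ollama Python package and a locally pulled, tool-capable model (the model name, prompt, and tool schema below are illustrative):

```python
import ollama

# Describe the tool so the model knows when and how to request it
tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the latest closing price for a stock ticker",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string", "description": "Ticker symbol, e.g., AAPL"},
            },
            "required": ["ticker"],
        },
    },
}]

messages = [{"role": "user", "content": "What is Apple's latest stock price?"}]

# The model only *requests* a tool call here; nothing is executed yet
response = ollama.chat(model="llama3.1", messages=messages, tools=tools)
print(response)
```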

Printing the response, we can see the tool call the model wants to make.
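Roughly, the relevant part can be pulled out like this (the printed values are illustrative):

```python
# Inspect the tool call(s) the model requested
for tool in response.message.tool_calls or []:
    print(tool.function.name)       # e.g., "get_stock_price"
    print(tool.function.arguments)  # e.g., {"ticker": "AAPL"}
```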

Notice that the message key in the above response object has tool_calls, which includes relevant details, such as:

  • tool.function.name: The name of the tool to be called.
  • tool.function.arguments: The arguments required by the tool.

Thus, we can utilize this info to produce a response as follows:
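A sketch of that step, reusing the names defined above (again assuming the ollama package; the model name is illustrative):

```python
# Map tool names to the actual Python functions we host
available_tools = {"get_stock_price": get_stock_price}

# Add the assistant's tool-call request to the conversation history
messages.append(response.message)

# Execute each requested tool and feed its result back as a "tool" message
for tool in response.message.tool_calls or []:
    fn = available_tools[tool.function.name]
    result = fn(**tool.function.arguments)
    messages.append({"role": "tool", "content": str(result)})

# Ask the model for a final answer grounded in the tool output
final_response = ollama.chat(model="llama3.1", messages=messages)
print(final_response.message.content)
```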

Printing the final response, we get an answer grounded in the tool's output: the latest closing price for the requested ticker.

Notice that the entire process happened within our application context. We were responsible for:

  • Hosting and maintaining the tools/APIs.
  • Implementing the logic to determine which tool should be invoked and with what parameters.
  • Handling the tool execution and scaling it if needed.
  • Managing authentication and error handling.

In short, Function Calling is about enabling dynamic tool use within your own stack—but it still requires you to wire everything manually.


What is MCP?

MCP, or Model Context Protocol, attempts to standardize this process.

While Function Calling focuses on what the model wants to do, MCP focuses on how tools are made discoverable and consumable—especially across multiple agents, models, or platforms.

Instead of hard-wiring tools inside every app or agent, MCP:

  • Standardizes how tools are defined, hosted, and exposed to LLMs.
  • Makes it easy for an LLM to discover available tools, understand their schemas, and use them.
  • Provides approval and audit workflows before tools are invoked.
  • Separates the concern of tool implementation from consumption.

Let’s understand this in action real quick by integrating Firecrawl's MCP server to utilize scraping tools within Cursor IDE.

To do this, go to Settings → MCP → Add new global MCP server.

In the JSON file, add what's shown below👇
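The exact JSON comes from Firecrawl's documentation, but a typical entry has roughly this shape (the package name, command, and env var below are assumptions; replace the API key with your own):

```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "fc-YOUR_API_KEY"
      }
    }
  }
}
```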

Once done, you will find all the tools exposed by Firecrawl's MCP server that your Agents can use!

Notice that we didn't write a line of Python code to integrate Firecrawl's tools. Instead, we just integrated the MCP server.

Next, let's interact with this MCP server.

When asked to list the imports of the CrewAI tools covered in my blog:

  • It identified the MCP tool (scraper).
  • Prepared the input argument.
  • Invoked the scraping tool.
  • Used the tool’s output to generate a response.

So to put it another way—think of MCP as infrastructure.

It creates a shared ecosystem where tools are treated like standardized services—similar to how REST APIs or gRPC endpoints work in traditional software engineering.


Conclusion

Here’s the key point: MCP and Function Calling are not in conflict. They’re two sides of the same workflow.

  • Function Calling helps an LLM decide what it wants to do.
  • MCP ensures that tools are reliably available, discoverable, and executable—without you needing to custom-integrate everything.

For example, an agent might say, “I need to search the web,” using function calling.

That request can be routed through MCP to select from available web search tools, invoke the correct one, and return the result in a standard format.

If you don't know about MCP servers, we covered them recently in the newsletter here:

Over to you: What is your perspective on Function Calling and MCPs?

Thanks for reading!

Naive RAG to practical RAG

16 techniques to build real-world RAG systems

On paper, implementing a RAG system seems simple—connect a vector database, process documents, embed the data, embed the query, query the vector database, and prompt the LLM.
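As a rough sketch of that naive pipeline (assuming the sentence-transformers and ollama packages; the documents, model names, and in-memory "vector database" are all illustrative):

```python
import numpy as np
import ollama
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for processed documents
documents = [
    "MCP standardizes how tools are exposed to LLMs.",
    "Function calling lets an LLM request a tool invocation.",
]

# Embed the data
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(documents, normalize_embeddings=True)

# Embed the query and "query the vector database" (cosine similarity in memory)
query = "How do LLMs use external tools?"
query_embedding = embedder.encode([query], normalize_embeddings=True)[0]
best_doc = documents[int(np.argmax(doc_embeddings @ query_embedding))]

# Prompt the LLM with the retrieved context
answer = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": f"Context: {best_doc}\n\nQuestion: {query}"}],
)
print(answer.message.content)
```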

But in practice, turning a prototype into a high-performance application is an entirely different challenge.

We published a two-part guide that covers 16 practical techniques to build real-world RAG systems:

CRASH COURSE (56 MINS)

​Graph Neural Networks​

  • Google Maps uses graph ML for ETA prediction.
  • Pinterest uses graph ML (PinSage) for recommendations.
  • Netflix uses graph ML (SemanticGNN) for recommendations.
  • Spotify uses graph ML (HGNNs) for audiobook recommendations.
  • Uber Eats uses graph ML (a GraphSAGE variant) to suggest dishes, restaurants, etc.

The list could go on since almost every major tech company I know employs graph ML in some capacity.

Becoming proficient in graph ML now seems far more effective than traditional deep learning alone for differentiating your profile and targeting these positions.

A significant proportion of our real-world data often exists (or can be represented) as graphs:

  • Entities (nodes) are connected by relationships (edges).
  • Connections carry significant meaning which, if we know how to model it, can lead to much more robust models.

The field of graph neural networks (GNNs) aims to fill this gap by extending deep learning techniques to graph data.

Learn sophisticated graph architectures and how to train them on graph data in this crash course →

THAT'S A WRAP

No-Fluff Industry ML resources to succeed in DS/ML roles

At the end of the day, all businesses care about impact. That’s it!

  • Can you reduce costs?
  • Drive revenue?
  • Can you scale ML models?
  • Predict trends before they happen?

We have discussed several other topics (with implementations) in the past that align with these goals.

Here are some of them:

  • Learn sophisticated graph architectures and how to train them on graph data in this crash course.
  • So many real-world NLP systems rely on pairwise context scoring. Learn scalable approaches here.
  • Run large models on small devices using Quantization techniques.
  • Learn how to generate prediction intervals or sets with strong statistical guarantees for increasing trust using Conformal Predictions.
  • Learn how to identify causal relationships and answer business questions using causal inference in this crash course.
  • Learn how to scale and implement ML model training in this practical guide.
  • Learn 5 techniques with implementation to reliably test ML models in production.
  • Learn how to build and implement privacy-first ML systems using Federated Learning.
  • Learn 6 techniques with implementation to compress ML models.

All these resources will help you cultivate key skills that businesses and companies care about the most.

Our newsletter puts your products and services directly in front of an audience that matters — thousands of leaders, senior data scientists, machine learning engineers, data analysts, etc., around the world.

Get in touch today →

