AI Agents Crash Course—Part 13 (With Implementation)

A practical deep dive into 10 step-by-step improvements for Agentic systems.


Introduction

In the previous part of this final project, we laid the groundwork for building an agentic content writing system—starting with a single-agent setup, then gradually layering in real-time search tools, multi-agent collaboration, better prompting, and structured outputs with Pydantic.

Each step added more precision and capability, helping our system go from a simple text generator to a coordinated team of agents that can research, write, and format articles more reliably.

But we’re not done yet.

In this second part, we’ll take the system even further—introducing quality control, human feedback, task memory, and automation. You’ll see how to:

  • Add validation guardrails to catch issues before they go live
  • Bring in a human-in-the-loop step to guide or approve agent outputs
  • Let agents reference previous task results for better coordination
  • Attach callbacks to automate post-processing (like saving files or sending alerts)
  • Wrap it all up into a self-sufficient, end-to-end pipeline

This phase is all about tightening the loop—adding structure, safeguards, and smart defaults so your system doesn’t just work, but works well in real settings.

By the end, you’ll have a high-functioning content writing crew of agents that:

  • Plans its tasks intelligently.
  • Gathers external information in real-time.
  • Coordinates between roles like writer, editor, and researcher.
  • Structures its outputs with typed models.
  • Invokes guardrails to catch errors.
  • Requests human feedback when uncertain.
  • Can trigger automated actions through callbacks.
  • And much more.

Let’s begin building our end-to-end agentic content writing system from where we left off in the previous part!

💡
For simplicity, we have split these into two parts. We covered the first five steps of improvements in Part 12 (linked below), and the next five steps are covered in this article.
AI Agents Crash Course—Part 12 (With Implementation)

CrewAI

Throughout this article, we shall be using CrewAI, an open-source framework for building autonomous AI agents. It makes it seamless to orchestrate role-playing agents, set goals, integrate tools, bring in any of the popular LLMs, and more.

GitHub - crewAIInc/crewAI: Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.

It is also worth highlighting that CrewAI is a standalone, independent framework with no dependency on LangChain or other agent frameworks.

Let's dive in!

Setup

To get started, install CrewAI as follows:

Like the RAG crash course, we shall be using Ollama to serve LLMs locally. That said, CrewAI integrates with several LLM providers like:

  • OpenAI
  • Gemini
  • Groq
  • Azure
  • Fireworks AI
  • Cerebras
  • SambaNova
  • and many more.
💡
If you have an OpenAI API key, we recommend using it, since weak LLMs can produce outputs that don't always make sense. If you don't have an API key, you can get some free credits by creating a new account on OpenAI and use that instead. Otherwise, you can continue reading and use Ollama, but the outputs could be poorer in that case.

To set up OpenAI, create a .env file in the current directory and specify your OpenAI API key as follows:
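A minimal `.env` looks like the following (the key value is a placeholder; substitute your own):

```
OPENAI_API_KEY=sk-...
```

Once this is loaded into the environment, CrewAI picks up `OPENAI_API_KEY` automatically.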

Also, here's a step-by-step guide on using Ollama:

  • Go to Ollama.com, select your operating system, and follow the instructions.
    • If you are using Linux, you can run the following command:

  • Ollama supports a wide range of models, all listed in its model library.

Once you've found the model you're looking for, run this command in your terminal:

The above command will download the model locally, so give it some time to complete.

That said, for our demo, we'll be running the Llama 3.2 1B model instead since it's smaller and won't take much memory:

Done!

Everything is set up now, and we can move on to building our agents.

You can download the code below:
