Introduction
In the previous part of this final project, we laid the groundwork for building an agentic content writing system—starting with a single-agent setup, then gradually layering in real-time search tools, multi-agent collaboration, better prompting, and structured outputs with Pydantic.
Each step added more precision and capability, helping our system go from a simple text generator to a coordinated team of agents that can research, write, and format articles more reliably.

But we’re not done yet.
In this second part, we’ll take the system even further—introducing quality control, human feedback, task memory, and automation. You’ll see how to:
- Add validation guardrails to catch issues before they go live
- Bring in a human-in-the-loop step to guide or approve agent outputs
- Let agents reference previous task results for better coordination
- Attach callbacks to automate post-processing (like saving files or sending alerts)
- And finally, wrap it all up into a self-sufficient end-to-end pipeline.
This phase is all about tightening the loop—adding structure, safeguards, and smart defaults so your system doesn’t just work, but works well in real settings.

By the end, you’ll have a high-functioning content writing crew of agents that:
- Plans its tasks intelligently.
- Gathers external information in real-time.
- Coordinates between roles like writer, editor, and researcher.
- Structures its outputs with typed models.
- Invokes guardrails to catch errors.
- Requests human feedback when uncertain.
- Can trigger automated actions through callbacks.
- And much more.
Let’s begin building our end-to-end agentic content writing system from where we left off in the previous part!

Setup
Throughout this article, we'll use CrewAI, an open-source framework for building autonomous AI agents. It makes it seamless to orchestrate role-playing agents, set goals, integrate tools, and plug in any of the popular LLMs.
Notably, CrewAI is a standalone, independent framework with no dependency on LangChain or other agent frameworks.
Let's dive in!
To get started, install CrewAI as follows:
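CrewAI is distributed on PyPI, so a standard pip install is all that's needed (the optional `crewai[tools]` extra pulls in the built-in tool integrations used later in the series):

```shell
# Install the core framework
pip install crewai

# Optionally, install the extra tool integrations as well
pip install "crewai[tools]"
```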
As in the RAG crash course, we'll use Ollama to serve LLMs locally. That said, CrewAI also integrates with several hosted LLM providers, including:
- OpenAI
- Gemini
- Groq
- Azure
- Fireworks AI
- Cerebras
- SambaNova
- and many more.
To set up OpenAI, create a .env file in the current directory and specify your OpenAI API key as follows:
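A minimal `.env` file contains just the key (the value below is a placeholder — substitute your own API key):

```
OPENAI_API_KEY="sk-..."
```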
Also, here's a step-by-step guide on using Ollama:
- Go to Ollama.com, select your operating system, and follow the instructions.

- If you are using Linux, you can run the following command:
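On Linux, Ollama's site provides a one-line install script (it's good practice to review the script on Ollama.com before piping it to your shell):

```shell
# Download and run Ollama's official Linux install script
curl -fsSL https://ollama.com/install.sh | sh
```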
- Ollama supports a wide range of models, all listed in its model library.


Once you've found the model you're looking for, run this command in your terminal:
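The download follows the pattern below; `llama3.1` is just an illustrative model name — substitute whichever model you picked from the library:

```shell
# Download the chosen model to your local machine
ollama pull llama3.1
```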
The above command will download the model locally, so give it some time to complete.
That said, for our demo, we'll run the Llama 3.2 1B model instead, since it's smaller and won't take up much memory:
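Assuming the standard tag in Ollama's model library, the Llama 3.2 1B model can be pulled (and optionally test-driven) like this:

```shell
# Download the 1B-parameter Llama 3.2 model
ollama pull llama3.2:1b

# Optionally, chat with it in the terminal to confirm it works
ollama run llama3.2:1b
```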
Done!
Everything is set up now, and we can move on to building our agents.
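As a quick sanity check before building the agents, here's a minimal sketch of pointing CrewAI at the locally served model. The `ollama/` model prefix and the default Ollama port are assumptions based on CrewAI's provider-prefixed model strings — adjust them to match your setup:

```python
from crewai import LLM

# Configure CrewAI to use the locally served Ollama model.
# The model string and base_url below are assumptions; change them
# if your local Ollama instance uses a different model or port.
local_llm = LLM(
    model="ollama/llama3.2:1b",
    base_url="http://localhost:11434",
)
```

This `local_llm` object can then be passed to agents via their `llm` parameter, so the whole crew runs against the local model instead of a hosted API.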