
TODAY’S DAILY DOSE OF DATA SCIENCE
5 Levels of Agentic AI Systems
Agentic AI systems don't just generate text; they can make decisions, call functions, and even run autonomous workflows.
The visual explains 5 levels of AI agency—from simple responders to fully autonomous agents.

Note: If you want to learn how to build Agentic systems, we have published 6 parts so far in our Agents crash course (with implementation):
- Part 1 → Why Agents? Creating Agents, Tasks, Tools, etc.
- Part 2 → Modular agents, custom tools, structured outputs, etc.
- Part 3 → Agentic Flows.
- Part 4 → A hands-on deep dive into building two agentic systems with Flows.
- Part 5 and Part 6 → Advanced techniques to build robust Agents.
Let’s dive in to learn more about the 5 levels of Agentic AI systems.
1) Basic responder

- A human guides the entire flow.
- The LLM is just a generic responder: it receives an input and produces an output, with no control over the program flow.
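Here's a minimal sketch of this level, assuming the OpenAI Python SDK with an OPENAI_API_KEY in the environment (the model name is only an example; any chat model works):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def basic_responder(user_input: str) -> str:
    # The human-written program fixes the entire flow; the LLM only
    # maps one input string to one output string.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; swap in any chat model
        messages=[{"role": "user", "content": user_input}],
    )
    return response.choices[0].message.content

print(basic_responder("Summarize agentic AI in one sentence."))
```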
2) Router pattern

- A human defines the paths/functions that exist in the flow.
- The LLM makes basic decisions about which function or path to take.
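To make this concrete, here's a hedged sketch (same OpenAI SDK assumption as above) where the two handlers are hypothetical placeholders and the LLM's only job is to pick between them:

```python
from openai import OpenAI

client = OpenAI()

# Human-defined paths (hypothetical placeholders for real handlers).
def run_search(query: str) -> str:
    return f"search results for {query!r}"

def solve_math(query: str) -> str:
    return f"computed answer for {query!r}"

def router(query: str) -> str:
    # The LLM makes one basic decision: which human-defined path to take.
    choice = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Reply with exactly one word, 'search' or 'math', "
                       f"for the best way to handle this query: {query}",
        }],
    ).choices[0].message.content.strip().lower()
    return solve_math(query) if "math" in choice else run_search(query)

print(router("What is 17 * 23?"))
```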
3) Tool calling

- A human defines a set of tools the LLM can access to complete a task.
- The LLM decides when to use them and what arguments to pass.
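A minimal sketch using OpenAI's function-calling interface; the get_weather tool is a made-up example of a human-defined tool:

```python
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    # A human-defined tool; in practice this would call a real weather API.
    return f"22°C and sunny in {city}"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The LLM decided to use the tool and chose its arguments.
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    print(get_weather(**args))
else:
    print(message.content)  # the LLM chose to answer directly
```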
4) Multi-agent pattern

A manager agent coordinates multiple sub-agents and decides the next steps iteratively.
- A human lays out the hierarchy between agents, their roles, tools, etc.
- The LLM controls execution flow, deciding what to do next.
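As an illustrative sketch without any agent framework (the roles and stopping rule below are assumptions for demonstration; the crash course builds full-featured versions), a manager loop over plain LLM calls might look like this:

```python
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    ).choices[0].message.content

# Human-defined hierarchy: sub-agent roles and their instructions.
SUB_AGENTS = {
    "researcher": "You gather key facts about the given goal.",
    "writer": "You turn the notes into a crisp three-sentence summary.",
}

def manager(goal: str, max_steps: int = 4) -> str:
    notes = ""
    for _ in range(max_steps):
        # The manager LLM controls the flow: it picks the next
        # sub-agent to run, or decides the work is done.
        decision = ask(
            "You are a manager agent. Reply with exactly one word: "
            "researcher, writer, or done.",
            f"Goal: {goal}\nWork so far:\n{notes or '(none)'}",
        ).strip().lower()
        if decision not in SUB_AGENTS:
            break
        notes += f"\n[{decision}] " + ask(
            SUB_AGENTS[decision], f"Goal: {goal}\nNotes so far:\n{notes}"
        )
    return notes

print(manager("Summarize the state of agentic AI"))
```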
5) Autonomous pattern

The most advanced pattern, wherein the LLM generates and executes new code on its own, effectively acting as an independent AI developer.
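A deliberately simplified sketch of the idea; note that bare exec() on LLM-generated code is unsafe, so in practice you would run it in a sandboxed interpreter, container, or VM:

```python
from openai import OpenAI

client = OpenAI()

def autonomous_step(task: str) -> None:
    # The LLM writes new code for the task on its own...
    code = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Write plain Python, with no markdown "
                              f"fences, that {task}."}],
    ).choices[0].message.content
    # ...and the program executes it. This step is what makes the
    # pattern both powerful and risky: always sandbox it in practice.
    exec(code, {})

autonomous_step("prints the first 10 Fibonacci numbers")
```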
To recall:
- Basic responder only generates text.
- Router pattern decides which path to take.
- Tool calling picks & runs tools.
- Multi-Agent pattern manages several agents.
- Autonomous pattern works fully independently.
👉 Over to you: Which one do you use the most?
IN CASE YOU MISSED IT
Build a Reasoning Model Like DeepSeek-R1
If you have used DeepSeek-R1 (or any other reasoning model), you may have noticed that they autonomously allocate thinking time before producing a response.
Last week, we shared how to embed reasoning capabilities into any LLM.
We trained our own reasoning model like DeepSeek-R1 (with code).

To do this, we used:
- UnslothAI for efficient fine-tuning.
- Llama 3.1-8B as the LLM to add reasoning capabilities to.
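For a flavor of the setup, here's a minimal loading sketch; the exact model repo name and sequence length are illustrative assumptions, and the full training recipe is in the linked walkthrough:

```python
from unsloth import FastLanguageModel

# Load Llama 3.1-8B in 4-bit for memory-efficient fine-tuning.
# The repo name and sequence length here are illustrative placeholders.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)
```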
Find the implementation and a detailed walkthrough in the newsletter here →