Building a Browser Automation Agent

...explained visually and with code.

Avi Chawla

TODAY'S ISSUE

hands-on

Building a Browser Automation Agent

The browser is still the most universal interface, with 4.3 billion pages visited every day!

Today, let's walk through a demo that fully automates it with a local stack:

  • Stagehand for open-source AI browser automation.
  • CrewAI for orchestration.
  • Ollama to run gpt-oss.

System overview:

  • The user enters an automation query.
  • Planner Agent creates an automation plan.
  • The Browser Automation Agent executes it using the Stagehand tool.
  • The Response Agent generates a response.

Now, let's dive into the code!

Define LLMs

We use three LLMs:

  • Planner LLM: Creates a structured plan for an automation task.
  • Automation LLM: Executes the plan using the Stagehand tool.
  • Response LLM: Synthesizes the final response.
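
Here's a minimal sketch of what these definitions might look like with CrewAI's LLM class, assuming gpt-oss has been pulled into Ollama (the model tag and base URL below are assumptions; adjust them for your setup):

```python
from crewai import LLM

# All three LLMs run locally via Ollama; we assume `ollama pull gpt-oss:20b`
# has already been done and Ollama is serving on its default port.
planner_llm = LLM(model="ollama/gpt-oss:20b", base_url="http://localhost:11434")
automation_llm = LLM(model="ollama/gpt-oss:20b", base_url="http://localhost:11434")
response_llm = LLM(model="ollama/gpt-oss:20b", base_url="http://localhost:11434")
```

Using one local model for all three roles keeps the stack fully offline; you could just as easily assign a different model to each role.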

Define Automation Planner Agent

The Planner Agent receives an automation task from the user and creates a structured plan for the browser agent to execute.
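
A sketch of the agent definition, reusing the planner LLM from above (the role, goal, and backstory strings are illustrative, not the exact ones from the repo):

```python
from crewai import Agent

planner_agent = Agent(
    role="Browser Automation Planner",
    goal=(
        "Turn the user's automation query into a clear, ordered plan of "
        "browser steps: which URLs to visit, what actions to take, and "
        "what data to extract."
    ),
    backstory="An expert at decomposing web tasks into atomic browser actions.",
    llm=planner_llm,
    verbose=True,
)
```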

Define Stagehand Browser Tool

Next, we define a custom CrewAI tool that uses AI to interact with web pages.

It leverages Stagehand's computer-use agentic capabilities to autonomously navigate URLs, perform page actions, and extract data to answer questions.
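
A sketch of such a tool, assuming the Stagehand Python SDK's Stagehand/page interface (goto, act, extract); the config fields and extraction call should be checked against the SDK docs:

```python
import asyncio

from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from stagehand import Stagehand, StagehandConfig  # assumed Stagehand Python SDK


class StagehandArgs(BaseModel):
    url: str = Field(description="The URL to open.")
    instruction: str = Field(description="What to do or extract on the page.")


class StagehandBrowserTool(BaseTool):
    name: str = "stagehand_browser"
    description: str = (
        "Opens a URL in a local browser, performs page actions, and "
        "extracts data using natural-language instructions."
    )
    args_schema: type[BaseModel] = StagehandArgs

    def _run(self, url: str, instruction: str) -> str:
        # CrewAI tools are synchronous; bridge into Stagehand's async API.
        return asyncio.run(self._browse(url, instruction))

    async def _browse(self, url: str, instruction: str) -> str:
        # env="LOCAL" runs the browser on this machine instead of a cloud session.
        stagehand = Stagehand(StagehandConfig(env="LOCAL"))
        await stagehand.init()
        try:
            page = stagehand.page
            await page.goto(url)
            result = await page.extract(instruction)  # AI-driven extraction
            return str(result)
        finally:
            await stagehand.close()
```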

Define Browser Automation Agent

The Browser Automation Agent uses the Stagehand tool defined above for autonomous browser control and plan execution.
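
A sketch, reusing the tool and automation LLM defined above (descriptive strings are again illustrative):

```python
browser_agent = Agent(
    role="Browser Automation Agent",
    goal="Execute each step of the plan in a real browser and report findings.",
    backstory=(
        "A meticulous operator that follows plans step by step, using the "
        "Stagehand browser tool for every page interaction."
    ),
    tools=[StagehandBrowserTool()],
    llm=automation_llm,
    verbose=True,
)
```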

Define Response Synthesis Agent

The Response Synthesis Agent acts as the final quality check, refining the output from the Browser Automation Agent into a polished response.
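
One way to define it (illustrative strings, same pattern as before):

```python
response_agent = Agent(
    role="Response Synthesizer",
    goal=(
        "Turn the raw browsing results into a concise, polished answer "
        "that directly addresses the user's original query."
    ),
    backstory="A careful editor that checks completeness and clarity before responding.",
    llm=response_llm,
    verbose=True,
)
```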

Create CrewAI Agentic Flow

Finally, we connect our Agents within a workflow using CrewAI Flows.
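
A sketch of the flow, wiring the three agents together with @start/@listen; the task descriptions and the "query" state key are assumptions:

```python
from crewai import Crew, Task
from crewai.flow.flow import Flow, listen, start


class BrowserAutomationFlow(Flow):
    @start()
    def plan(self):
        # Step 1: the Planner Agent turns the query into an ordered plan.
        task = Task(
            description=f"Create a step-by-step browser plan for: {self.state['query']}",
            expected_output="A numbered list of browser actions.",
            agent=planner_agent,
        )
        return Crew(agents=[planner_agent], tasks=[task]).kickoff().raw

    @listen(plan)
    def browse(self, plan_text):
        # Step 2: the Browser Automation Agent executes the plan via Stagehand.
        task = Task(
            description=f"Execute this plan in the browser:\n{plan_text}",
            expected_output="Raw findings from the pages visited.",
            agent=browser_agent,
        )
        return Crew(agents=[browser_agent], tasks=[task]).kickoff().raw

    @listen(browse)
    def respond(self, findings):
        # Step 3: the Response Agent synthesizes the final answer.
        task = Task(
            description=f"Write the final answer from these findings:\n{findings}",
            expected_output="A concise, polished answer to the user's query.",
            agent=response_agent,
        )
        return Crew(agents=[response_agent], tasks=[task]).kickoff().raw


flow = BrowserAutomationFlow()
result = flow.kickoff(
    inputs={"query": "Who is the top contributor on the Stagehand GitHub repo?"}
)
print(result)
```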

Done!

Here’s our multi-agent browser automation workflow in action, where we asked it to find the top contributor on the Stagehand GitHub repo:

[Video demo, 0:25]

It initiated a local browser session, navigated the web page, and extracted the information.

You can find the Stagehand GitHub repo here →

You can find the code in the GitHub Repository →

Naive RAG to practical RAG

16 techniques to build real-world RAG systems

On paper, implementing a RAG system seems simple—connect a vector database, process documents, embed the data, embed the query, query the vector database, and prompt the LLM.
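
For reference, the on-paper version really is just a few lines; in this hypothetical sketch, embed_fn, vector_db, and llm stand in for your embedding model, vector store, and LLM client:

```python
def build_index(documents, embed_fn, vector_db):
    # Process documents: split into chunks, embed each chunk, store the vectors.
    for doc in documents:
        for chunk in doc.split("\n\n"):  # crude paragraph-level chunking
            vector_db.add(vector=embed_fn(chunk), payload=chunk)


def answer(query, embed_fn, vector_db, llm, k=5):
    # Embed the query, fetch the top-k chunks, and prompt the LLM with them.
    hits = vector_db.search(vector=embed_fn(query), limit=k)
    context = "\n\n".join(hit.payload for hit in hits)
    return llm(f"Answer using only this context:\n\n{context}\n\nQuestion: {query}")
```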

But in practice, turning a prototype into a high-performance application is an entirely different challenge.

We published a two-part guide that covers 16 practical techniques to build real-world RAG systems:

CRASH COURSE (56 MINS)

Graph Neural Networks

  • Google Maps uses graph ML for ETA prediction.
  • Pinterest uses graph ML (PingSage) for recommendations.
  • Netflix uses graph ML (SemanticGNN) for recommendations.
  • Spotify uses graph ML (HGNNs) for audiobook recommendations.
  • Uber Eats uses graph ML (a GraphSAGE variant) to suggest dishes, restaurants, etc.

The list could go on since almost every major tech company I know employs graph ML in some capacity.

Becoming proficient in graph ML now seems far more effective than traditional deep learning at differentiating your profile for these positions.

A significant proportion of our real-world data often exists (or can be represented) as graphs:

  • Entities (nodes) are connected by relationships (edges).
  • Connections carry significant meaning, which, if modeled well, can lead to much more robust models.

The field of graph neural networks (GNNs) aims to fill this gap by extending deep learning techniques to graph data.
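
To make that concrete, here's a minimal sketch of the core GNN operation, a single message-passing layer in which each node averages its neighbors' features before a learned transform (plain PyTorch, assuming a dense adjacency matrix):

```python
import torch
import torch.nn as nn


class SimpleGraphLayer(nn.Module):
    """One message-passing step: aggregate neighbor features, then transform."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) node features; adj: (num_nodes, num_nodes) adjacency.
        adj_hat = adj + torch.eye(adj.size(0))       # add self-loops
        deg = adj_hat.sum(dim=1, keepdim=True)       # node degrees
        messages = adj_hat @ x / deg                 # mean over neighbors (and self)
        return torch.relu(self.linear(messages))     # learned transform


x = torch.randn(5, 8)                    # 5 nodes, 8 features each
adj = (torch.rand(5, 5) > 0.5).float()   # random adjacency matrix
layer = SimpleGraphLayer(8, 16)
print(layer(x, adj).shape)               # torch.Size([5, 16])
```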

Learn sophisticated graph architectures and how to train them on graph data in this crash course →

Published on Aug 12, 2025