Build your AI Web Scraper in Minutes with Crawl4AI

Web scraping is one of the most in-demand skills in AI and tech today. Companies are actively seeking professionals who can extract valuable data efficiently.

In this guide, I’ll walk you through building an AI-powered web scraper using Crawl4AI, leveraging LLMs to intelligently extract and process structured data from any website. Whether you’re scraping leads, gathering research data, or building a custom dataset, this will save you time and effort ⏳.

By the end, you’ll have a fully functional web scraper that can extract leads from YellowPages, process them with AI, and save the results to a CSV file—all with minimal effort.

And the best part? It costs practically nothing to run!

Let’s dive in!

Access the project in my GitHub repository now!


🤖 What is Crawl4AI?

Web scraping has come a long way, and Crawl4AI is here to take it to the next level.

Crawl4AI is an open-source web crawling and scraping framework designed for speed, scalability, and seamless integration with LLMs (e.g., GPT-4o, Claude). It combines traditional scraping methods with AI-driven data extraction, making it ideal for data pipelines, automation workflows, and AI agents.

Key Features

  • LLM-Friendly Output – Generates clean Markdown-formatted data, perfect for retrieval-augmented generation (RAG) and direct ingestion into LLMs.
  • Smart Data Extraction – Combines AI-powered parsing with traditional methods (CSS, XPath) for maximum versatility.
  • Advanced Browser Control – Handles JavaScript-heavy websites with proxy support, session management, and stealth scraping.
  • High Performance – Supports parallel crawling and chunk-based extraction for efficient, scalable data collection.
  • Cost-Effective & Open-Source – Eliminates the need for costly subscriptions or expensive APIs, offering full customization and scalability without breaking the bank.

Crawl4AI empowers you to extract data intelligently, efficiently, and at scale—unlocking new possibilities for automation and AI-driven workflows.
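To see what “LLM-friendly output” means in practice, here’s a minimal crawl, adapted from Crawl4AI’s quickstart pattern (the URL is just a placeholder), that fetches a page and prints the clean Markdown Crawl4AI generates:

```python
import asyncio

from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        # Crawl a page and get back clean, LLM-ready Markdown
        result = await crawler.arun(url="https://example.com")
        print(result.markdown)

asyncio.run(main())
```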

What We Will Build

Businesses and agencies often need access to local business information to find potential clients, analyze competitors, or generate leads. Instead of manually searching directories, an AI scraper can automate this process—saving time and effort.

To showcase the power of AI web scraping with Crawl4AI, I chose to build a scraper that extracts local business information from YellowPages.

This scraper automatically navigates through listings, collecting key details like:

  • Business Name
  • Address
  • Phone Number
  • Website & Additional Info

Once extracted, the data is structured and saved into a CSV file, making it easy to use for lead generation, market research, or business analytics.

This project demonstrates how Crawl4AI + AI models can quickly extract and process web data with minimal effort. Let’s break down how it works!


How It Works

Before running our scraper, let’s break down how the code works and give you a quick overview of how LLM-powered scraping with Crawl4AI functions.

(Don’t worry—I’ll keep it short and focus on the essential parts!)

1️⃣ Browser Configuration

First, we need to configure the browser settings. Crawl4AI uses Playwright under the hood, so we get full control over how the browser behaves, including:

  • Headless mode (whether the browser runs in the background)

  • Proxy settings (to avoid getting blocked)

  • User agents & timeouts (to mimic real users)

Here’s how we define our browser configuration:

```python
from crawl4ai import BrowserConfig

def get_browser_config() -> BrowserConfig:
    return BrowserConfig(
        browser_type="chromium",  # Simulate a Chromium-based browser
        headless=True,  # Run in headless mode (no UI)
        verbose=True,  # Enable detailed logs for debugging
    )
```


2️⃣ Defining the LLM Extraction Strategy

Now comes the AI part! Crawl4AI allows us to use an LLM extraction strategy to tell the model exactly what to extract from each page.

Here’s how we define our strategy:

```python
import os

from crawl4ai import LLMExtractionStrategy

llm_strategy = LLMExtractionStrategy(
    provider="gemini/gemini-2.0-flash",  # LLM provider (Gemini, OpenAI, etc.)
    api_token=os.getenv("GEMINI_API_KEY"),  # API key for authentication
    schema=BusinessData.model_json_schema(),  # JSON schema of expected data
    extraction_type="schema",  # Use structured schema extraction
    instruction=(
        "Extract all business information: 'name', 'address', 'website', "
        "'phone number' and a one-sentence 'description' from the content."
    ),
    input_format="markdown",  # Feed the LLM the page's Markdown
    verbose=True,  # Enable logging for debugging
)
```


Structuring the Output

To ensure the extracted data follows a consistent structure, we use a Pydantic model:

```python
from pydantic import BaseModel, Field

class BusinessData(BaseModel):
    name: str = Field(..., description="The business name.")
    address: str = Field(..., description="The business address.")
    phone_number: str = Field(..., description="The business phone number.")
    website: str = Field(..., description="The business website URL.")
    description: str = Field(..., description="A short description of the business.")
```


Why use Pydantic?

It ensures the LLM returns structured data that we can easily validate and process.
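For example, each record returned by the LLM can be validated against the model before it’s saved. Here’s a minimal sketch using Pydantic v2’s model_validate (the sample record is made up for illustration):

```python
from pydantic import ValidationError

raw_record = {
    "name": "Acme Dental",  # sample data, not from a real crawl
    "address": "123 Main St, Toronto, ON",
    "phone_number": "(416) 555-0123",
    "website": "https://acmedental.example.com",
    "description": "A family-run dental clinic in downtown Toronto.",
}

try:
    business = BusinessData.model_validate(raw_record)  # raises if fields are missing or invalid
    print(business.name)
except ValidationError as e:
    print(f"Skipping malformed record: {e}")
```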

Multiple LLM Choices

You may have noticed that in the LLM strategy, I chose the Gemini 2.0 Flash model. However, since Crawl4AI is built on LiteLLM, you can swap it for OpenAI, Claude, DeepSeek, Groq, or any other supported LLM (see the full list here).
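For example, switching to OpenAI’s gpt-4o-mini should only require changing the provider string and the API key; the rest of the strategy stays the same (the model choice here is purely illustrative):

```python
llm_strategy = LLMExtractionStrategy(
    provider="openai/gpt-4o-mini",  # LiteLLM-style "provider/model" string
    api_token=os.getenv("OPENAI_API_KEY"),  # Matching API key for the provider
    schema=BusinessData.model_json_schema(),
    extraction_type="schema",
    instruction="Extract all business information from the content.",
    input_format="markdown",
)
```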

3️⃣ Scraping the Web Page

Now that we have the browser and LLM strategy set up, we need a function to scrape each page and extract business details:

```python
import json
from typing import List, Set, Tuple

from crawl4ai import AsyncWebCrawler, CacheMode, CrawlerRunConfig, LLMExtractionStrategy

async def fetch_and_process_page(
    crawler: AsyncWebCrawler,
    page_number: int,
    base_url: str,
    css_selector: str,
    llm_strategy: LLMExtractionStrategy,
    session_id: str,
    seen_names: Set[str],
) -> Tuple[List[dict], bool]:
    url = base_url.format(page_number=page_number)
    print(f"Loading page {page_number}...")

    # Fetch page content with the extraction strategy
    result = await crawler.arun(
        url=url,
        config=CrawlerRunConfig(
            cache_mode=CacheMode.BYPASS,  # No cached data
            extraction_strategy=llm_strategy,  # Define extraction method
            css_selector=css_selector,  # Target specific page elements
            session_id=session_id,  # Unique ID for the session
        ),
    )

    # Parse extracted content
    extracted_data = json.loads(result.extracted_content)

    # Process extracted businesses
    all_businesses = []
    for business in extracted_data:
        if is_duplicated(business["name"], seen_names):
            print(f"Duplicate business '{business['name']}' found. Skipping.")
            continue  # Avoid duplicates

        seen_names.add(business["name"])
        all_businesses.append(business)

    if not all_businesses:
        print(f"No valid businesses found on page {page_number}.")
        return [], False

    print(f"Extracted {len(all_businesses)} businesses from page {page_number}.")
    return all_businesses, False  # Continue crawling
```


This function:

  • Includes the necessary LLM strategy and CSS selector in the crawler config.

  • Loads the webpage by calling the arun method.

  • Extracts business details using the LLM strategy.

  • Filters duplicates to prevent redundant data.

  • Returns the list of collected businesses along with a flag that tells the main loop whether to keep crawling.

Pro Tip

The session_id helps maintain consistent browsing behavior across pagination – crucial for websites that track user sessions!
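By the way, the is_duplicated helper used above isn’t shown in this post; the real implementation lives in the repo’s utilities. A minimal sketch of what it might look like:

```python
def is_duplicated(name: str, seen_names: set) -> bool:
    # Hypothetical sketch: a business counts as a duplicate if its name was already seen
    return name in seen_names
```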

Targeting the Right Data with CSS Selectors

To help the LLM focus on the relevant sections of the page, we use CSS selectors. These selectors allow us to pinpoint specific HTML elements that contain the desired data, ensuring a smoother and cleaner extraction for the LLM.

The screenshot below shows the HTML structure of a Yellow Pages listing, where we can use CSS selectors to extract business details precisely.
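As a concrete (hypothetical) example, the selector passed to the crawler might target the listing container. Inspect the live page with your browser’s dev tools to find the actual class name:

```python
# Illustrative only: verify the real class name in YellowPages' current markup
CSS_SELECTOR = "div.listing__content"
```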


4️⃣ Putting Everything Together

Scraping just one page isn’t enough—we want to crawl all business listings. So let’s tie everything together into a full web-crawling workflow!

```python
import asyncio

# Project-level settings (defined in the repo's config.py)
from config import BASE_URL, CSS_SELECTOR, MAX_PAGES, SCRAPER_INSTRUCTIONS

async def crawl_yellowpages():
    """Main function to scrape business data."""
    # Initialize configurations
    browser_config = get_browser_config()
    llm_strategy = get_llm_strategy(
        llm_instructions=SCRAPER_INSTRUCTIONS,  # Extraction instructions
        output_format=BusinessData,  # Output schema
    )
    session_id = "crawler_session"

    # Initialize state variables
    page_number = 1
    all_records = []
    seen_names = set()

    # Start the web crawler session
    async with AsyncWebCrawler(config=browser_config) as crawler:
        while True:
            records, no_results_found = await fetch_and_process_page(
                crawler,
                page_number,
                BASE_URL,
                CSS_SELECTOR,
                llm_strategy,
                session_id,
                seen_names,
            )

            if no_results_found:
                print("No more records found. Stopping crawl.")
                break

            if not records:
                print(f"No records extracted from page {page_number}.")
                break

            all_records.extend(records)
            page_number += 1  # Move to the next page

            # Stop after a maximum number of pages
            if page_number > MAX_PAGES:
                break

            # Pause to prevent rate limits
            await asyncio.sleep(2)

    # Save extracted data
    if all_records:
        save_data_to_csv(records=all_records, data_struct=BusinessData, filename="businesses_data.csv")
    else:
        print("No records found.")

    # Show LLM usage stats
    llm_strategy.show_usage()
```


What this function does:

  • Sets up the browser & LLM strategy.

  • Invokes fetch_and_process_page to scrape the local business data for each page.

  • Runs a while loop and uses pagination to scrape multiple pages.

  • Saves all extracted business data to a CSV file (a sketch of save_data_to_csv follows below).

  • Displays LLM usage statistics to track input/output token counts for cost estimation.
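The save_data_to_csv helper ships with the repo; here’s a minimal sketch of what it could look like, assuming the signature used above and Pydantic v2’s model_fields for the column names:

```python
import csv
from typing import List, Type

from pydantic import BaseModel

def save_data_to_csv(records: List[dict], data_struct: Type[BaseModel], filename: str) -> None:
    # Use the Pydantic model's field names as the CSV header
    fieldnames = list(data_struct.model_fields.keys())
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(records)
    print(f"Saved {len(records)} records to '{filename}'.")
```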

With just a few steps, we’ve built a powerful AI scraper that can extract local business listings! Now, let’s put it to the test and see it in action.


Try It Out!

Step 1: Clone the Project

To get started, clone the GitHub repository and install the necessary dependencies (preferably in a virtual environment):

```bash
# Clone the repository from GitHub
git clone https://github.com/kaymen99/llm-web-scraper
cd llm-web-scraper

# Create a virtual environment to manage dependencies
python -m venv venv

# Activate the virtual environment
source venv/bin/activate  # On macOS/Linux
# On Windows: venv\Scripts\activate

# Install the required dependencies from requirements.txt
pip install -r requirements.txt

# Install the Playwright browsers
playwright install
```


Step 2: Set Up Your Environment Variables

Create a .env file in the root directory with the following content:

```
GEMINI_API_KEY=your_gemini_api_key_here
```


Tip: You can use any LLM supported by LiteLLM—just ensure you provide the correct API key!

Step 3: Customize the Scraper (Optional)

Inside the project directory, you’ll find a config.py file where you can modify key settings, such as:

  • The website URL to scrape.

  • The LLM provider being used.

  • The maximum number of pages to crawl.

  • Scraper instructions.

For example, to scrape different types of businesses, update the BASE_URL:

```python
# Examples:
# - Plumbers in Vancouver: "https://www.yellowpages.ca/search/si/{page_number}/Plumbers/Vancouver+BC"
# - Restaurants in Montreal: "https://www.yellowpages.ca/search/si/{page_number}/Restaurants/Montreal+QC"
BASE_URL = "https://www.yellowpages.ca/search/si/{page_number}/Dentists/Toronto+ON"
```


To switch to a different LLM provider, update these lines:

```python
LLM_MODEL = "gpt-4o-mini"
API_TOKEN = os.getenv("OPENAI_API_KEY")
```


Step 4: Run the Scraper

Start the crawler with:

```bash
python main.py
```


The program will:

  • Scrape local business listings page by page.

  • Save all extracted data to businesses_data.csv.

  • Display LLM token usage statistics after completion.


Go on, give it a spin and watch it in action!

Cost Breakdown

I chose the Gemini-2.0 Flash LLM from Google for my AI scraper. Let’s take a look at the token usage:


From the usage data, we can see that the scraper processes approximately 13,000 input tokens and 2,000 output tokens per page. Let’s calculate how much it costs to scrape a single Yellow Pages page with our AI scraper:

| Usage | Tokens Used | Pricing (per 1M tokens) | Cost |
| --- | --- | --- | --- |
| Input Tokens | 13,000 | $0.10 | $0.0013 |
| Output Tokens | 2,000 | $0.40 | $0.0008 |
| **Total** | 15,000 | | $0.0021 (≈ $0.002) |

So, the cost for scraping a single page is only ~$0.002—practically free!
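To budget a bigger crawl, you can extrapolate from these per-page numbers (using the Gemini 2.0 Flash prices from the table above):

```python
# Observed per-page token usage
input_tokens, output_tokens = 13_000, 2_000

# Gemini 2.0 Flash pricing (USD per 1M tokens)
input_price, output_price = 0.10, 0.40

cost_per_page = (input_tokens * input_price + output_tokens * output_price) / 1_000_000
print(f"Cost per page: ${cost_per_page:.4f}")                 # ~$0.0021
print(f"Cost for 1,000 pages: ${cost_per_page * 1_000:.2f}")  # ~$2.10
```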

Use Cases

Our AI web scraper isn’t just a tool—it’s a game-changer for automating web data collection. Here’s how it can be applied:

  • Lead Generation – Extract business details like emails, phone numbers, and addresses to build targeted outreach lists effortlessly.
  • Market Research – Analyze trends, customer behavior, and industry insights by gathering real-time data from various sources.
  • Competitor Analysis – Monitor pricing, services, and customer reviews to stay ahead in your industry.
  • 🤖 AI Data Enrichment – Leverage LLMs to clean, categorize, and enhance scraped data for deeper insights.
  • Research & Analysis – Extract structured data from directories, reports, and publications to fuel business or academic studies.

Whether you’re a marketer, researcher, or developer, this AI scraper streamlines data extraction—fast, efficient, and automated!

Final Thoughts

Congrats! You’ve successfully built your own AI-powered scraper using Crawl4AI, giving you the ability to collect as many potential leads as you need for your business or clients.

This scraper is highly adaptable—just plug in the website and specify the data you want to extract, then let it do the rest!

Got ideas to improve it? Drop them in the comments!

Want to learn more? Follow my blog and check out my GitHub for more AI projects & tutorials.

Happy scraping!
