Ultra-Detailed Tutorial: Crawling GitHub Repository Folders Without API
This ultra-detailed tutorial, authored by Shpetim Haxhiu, walks you through crawling GitHub repository folders programmatically without relying on the GitHub API. It covers everything from understanding the page structure to building a robust, recursive implementation with practical enhancements.
1. Setup and Installation
Before you start, ensure you have:
- Python: Version 3.7 or higher installed.
- Libraries: Install `requests` and `beautifulsoup4`.
```bash
pip install requests beautifulsoup4
```
- Editor: Any Python-supported IDE, such as VS Code or PyCharm.
2. Analyzing GitHub HTML Structure
To scrape GitHub folders, you need to understand the HTML structure of a repository page. On a GitHub repository page:
- Folders are linked with paths like `/tree/<branch>/<folder>`.
- Files are linked with paths like `/blob/<branch>/<file>`.
Each item (folder or file) is inside a `<div>` with the attribute `role="rowheader"` and contains an `<a>` tag. For example:
```html
<div role="rowheader">
  <a href="/owner/repo/tree/main/folder-name">folder-name</a>
</div>
```
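Before writing the crawler, it can help to verify the selector in isolation. Here's a minimal sketch that runs the CSS selector used throughout this tutorial against sample markup (the second row and the file name README.md are illustrative, not taken from a real page):

```python
from bs4 import BeautifulSoup

# Sample markup mirroring the structure shown above (illustrative only).
html = """
<div role="rowheader"><a href="/owner/repo/tree/main/folder-name">folder-name</a></div>
<div role="rowheader"><a href="/owner/repo/blob/main/README.md">README.md</a></div>
"""

soup = BeautifulSoup(html, 'html.parser')
for link in soup.select('div[role="rowheader"] a'):
    print(link['href'], '->', link.text.strip())
# /owner/repo/tree/main/folder-name -> folder-name
# /owner/repo/blob/main/README.md -> README.md
```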
3. Implementing the Scraper
3.1. Recursive Crawling Function
The script will recursively scrape folders and print their structure. To limit the recursion depth and avoid unnecessary load, we'll use a `depth` parameter.
```python
import requests
from bs4 import BeautifulSoup

def crawl_github_folder(url, depth=0, max_depth=3):
    """
    Recursively crawls a GitHub repository folder structure.

    Parameters:
    - url (str): URL of the GitHub folder to scrape.
    - depth (int): Current recursion depth.
    - max_depth (int): Maximum depth to recurse.
    """
    if depth > max_depth:
        return

    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, headers=headers)
    if response.status_code != 200:
        print(f"Failed to access {url} (Status code: {response.status_code})")
        return

    soup = BeautifulSoup(response.text, 'html.parser')

    # Extract folder and file links
    items = soup.select('div[role="rowheader"] a')

    for item in items:
        item_name = item.text.strip()
        item_url = f"https://github.com{item['href']}"

        if '/tree/' in item_url:
            print(f"{'  ' * depth}Folder: {item_name}")
            crawl_github_folder(item_url, depth + 1, max_depth)
        elif '/blob/' in item_url:
            print(f"{'  ' * depth}File: {item_name}")

# Example usage
if __name__ == "__main__":
    repo_url = "https://github.com/<owner>/<repo>/tree/<branch>/<folder>"
    crawl_github_folder(repo_url)
```
4. Features Explained
- Headers for Request: A `User-Agent` string mimics a browser and helps avoid blocking.
- Recursive Crawling: Detects folders (`/tree/`) and recursively enters them; lists files (`/blob/`) without descending further.
- Indentation: Reflects the folder hierarchy in the output, as in the sample below.
- Depth Limitation: Prevents excessive recursion by setting a maximum depth (`max_depth`).
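For a hypothetical repository, the printed output might look like the following (folder and file names are illustrative):

```text
Folder: src
  File: main.py
  Folder: utils
    File: helpers.py
File: README.md
```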
5. Enhancements
These enhancements are designed to improve the functionality and reliability of the crawler. They address common challenges like exporting results, handling errors, and avoiding rate limits, ensuring the tool is efficient and user-friendly.
5.1. Exporting Results
Save the output to a structured JSON file for easier usage.
```python
import json

import requests
from bs4 import BeautifulSoup

def crawl_to_json(url, depth=0, max_depth=3):
    """Crawls a folder and returns its structure as a nested dict."""
    result = {}
    if depth > max_depth:
        return result

    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, headers=headers)
    if response.status_code != 200:
        print(f"Failed to access {url}")
        return result

    soup = BeautifulSoup(response.text, 'html.parser')
    items = soup.select('div[role="rowheader"] a')

    for item in items:
        item_name = item.text.strip()
        item_url = f"https://github.com{item['href']}"

        if '/tree/' in item_url:
            result[item_name] = crawl_to_json(item_url, depth + 1, max_depth)
        elif '/blob/' in item_url:
            result[item_name] = "file"

    return result

if __name__ == "__main__":
    repo_url = "https://github.com/<owner>/<repo>/tree/<branch>/<folder>"
    structure = crawl_to_json(repo_url)
    with open("output.json", "w") as file:
        json.dump(structure, file, indent=2)
    print("Repository structure saved to output.json")
```
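For the same hypothetical repository used above, output.json would contain a nested structure along these lines (names are illustrative):

```json
{
  "src": {
    "main.py": "file",
    "utils": {
      "helpers.py": "file"
    }
  },
  "README.md": "file"
}
```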
5.2. Error Handling
Add robust error handling for network errors and unexpected HTML changes:
```python
try:
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    print(f"Error fetching {url}: {e}")
    return
```
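The snippet above handles network errors. For unexpected HTML changes, one option (an assumption on my part, not part of the original script) is to treat an empty selector result as a signal that GitHub's markup has changed:

```python
items = soup.select('div[role="rowheader"] a')
if not items:
    # If GitHub renames role="rowheader", the selector silently matches nothing;
    # surface that instead of returning an empty structure.
    print(f"No folder/file links found at {url}; the page layout may have changed.")
    return
```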
5.3. Rate Limiting
To avoid being rate-limited by GitHub, introduce delays:
```python
import time

def crawl_with_delay(url, depth=0):
    time.sleep(2)  # Delay between requests
    # Crawling logic here
```
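A fixed two-second delay works, but requests arriving at a perfectly regular cadence are easy to fingerprint. As a hedged variant (the base and jitter values are my assumption, not from the original), you could randomize the delay slightly:

```python
import random
import time

def polite_sleep(base=2.0, jitter=1.0):
    """Sleep for base seconds plus a random extra, so requests aren't evenly spaced."""
    time.sleep(base + random.uniform(0, jitter))
```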
6. Ethical Considerations
This section outlines best practices to follow while using the GitHub crawler.
- Compliance: Adhere to GitHub’s Terms of Service.
- Minimize Load: Respect GitHub’s servers by limiting requests and adding delays.
- Permission: Obtain permission for extensive crawling of private repositories.
7. Complete Code
Here’s the consolidated script with all features included:
```python
import json
import time

import requests
from bs4 import BeautifulSoup

def crawl_github_folder(url, depth=0, max_depth=3):
    result = {}
    if depth > max_depth:
        return result

    headers = {"User-Agent": "Mozilla/5.0"}
    try:
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url}: {e}")
        return result

    soup = BeautifulSoup(response.text, 'html.parser')
    items = soup.select('div[role="rowheader"] a')

    for item in items:
        item_name = item.text.strip()
        item_url = f"https://github.com{item['href']}"

        if '/tree/' in item_url:
            print(f"{'  ' * depth}Folder: {item_name}")
            result[item_name] = crawl_github_folder(item_url, depth + 1, max_depth)
        elif '/blob/' in item_url:
            print(f"{'  ' * depth}File: {item_name}")
            result[item_name] = "file"

        time.sleep(2)  # Avoid rate-limiting

    return result

if __name__ == "__main__":
    repo_url = "https://github.com/<owner>/<repo>/tree/<branch>/<folder>"
    structure = crawl_github_folder(repo_url)
    with open("output.json", "w") as file:
        json.dump(structure, file, indent=2)
    print("Repository structure saved to output.json")
```
By following this detailed guide, you can build a robust GitHub folder crawler. This tool can be adapted for various needs while ensuring ethical compliance.
Feel free to leave questions in the comments section! Also, don’t forget to connect with me:
- Email: shpetim.h@gmail.com
- LinkedIn: linkedin.com/in/shpetimhaxhiu
- GitHub: github.com/shpetimhaxhiu