How do you traverse web pages with a Python web crawler?
There are several ways to traverse web pages in Python; here are two common methods:
- Using the requests and BeautifulSoup libraries: first send an HTTP request with requests to retrieve the page's HTML content, then parse it with BeautifulSoup. BeautifulSoup's find_all() method lets you iterate over specific tags or elements on the page.
import requests
from bs4 import BeautifulSoup

# Send an HTTP request to fetch the page content
response = requests.get('http://example.com')
html_content = response.text

# Parse the HTML content
soup = BeautifulSoup(html_content, 'html.parser')

# Iterate over all links on the page
for link in soup.find_all('a'):
    print(link.get('href'))
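The snippet above only lists the links on a single page. To actually traverse multiple pages with these same two libraries, you can follow the extracted links yourself; the sketch below does a simple breadth-first crawl. The crawl() helper, the same-domain restriction, and the max_pages limit are illustrative assumptions, not part of the original answer.

from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=10):
    # Breadth-first traversal of pages on the starting domain (illustrative sketch)
    domain = urlparse(start_url).netloc
    queue = deque([start_url])
    seen = {start_url}
    fetched = 0

    while queue and fetched < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip pages that fail to load
        fetched += 1

        print(url)
        soup = BeautifulSoup(response.text, 'html.parser')

        for link in soup.find_all('a'):
            href = link.get('href')
            if not href:
                continue
            absolute = urljoin(url, href)  # resolve relative links against the current page
            # Stay on the starting domain and avoid revisiting pages
            if urlparse(absolute).netloc == domain and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

crawl('http://example.com')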
- Using the Scrapy library: Scrapy is a powerful Python web crawling framework that offers a complete set of tools and methods for fetching, processing, and storing web data. By writing custom Spiders, you can traverse the links and pages of an entire website.
import scrapy

class MySpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com']

    def parse(self, response):
        # Iterate over all links on the page
        for link in response.css('a::attr(href)').getall():
            yield {'link': link}
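As written, this Spider only emits the links found on the start page. To have Scrapy traverse the site, you can also yield follow-up requests: response.follow() schedules each linked page for the same parse callback, and Scrapy de-duplicates requests automatically. A minimal sketch follows; the spider name and allowed_domains value are illustrative assumptions.

import scrapy

class LinkFollowingSpider(scrapy.Spider):
    name = 'example_follow'            # illustrative spider name
    allowed_domains = ['example.com']  # keeps the crawl on one site
    start_urls = ['http://example.com']

    def parse(self, response):
        for href in response.css('a::attr(href)').getall():
            # Record the link found on the current page
            yield {'page': response.url, 'link': href}
            # Queue the linked page for crawling with the same callback
            yield response.follow(href, callback=self.parse)

Running it with scrapy runspider myspider.py -o links.json crawls the site and writes the collected links to a file.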
These are two common methods for webpage traversal; choose the one that fits your specific needs.