

Python 3 Learning (6): ID traversal crawler, minimizing the number of pages that need to be downloaded

Published: 2024/1/1

As seen in Python 3 Learning (5), all of the crawled site's URLs differ only at the end, so we can exploit this weakness to visit every URL by simply iterating over the IDs.

### 2. ID traversal crawler: exploit a weakness in the site's URL structure to reach every page.

# Downloading: http://example.webscraping.com/places/default/view/Afghanistan-1
# Downloading: http://example.webscraping.com/places/default/view/Aland-Islands-2
# Downloading: http://example.webscraping.com/places/default/view/Albania-3
# Downloading: http://example.webscraping.com/places/default/view/Algeria-4
# Downloading: http://example.webscraping.com/places/default/view/American-Samoa-5
# Downloading: http://example.webscraping.com/places/default/view/Andorra-6
# Downloading: http://example.webscraping.com/places/default/view/Angola-7
## As shown above, these URLs differ only at the end.

import urllib.request
import re
import itertools

## -- written by LiSongbo
def Rocky_dnload(url, user_agent='wswp', num_retries=2):
    ## Download a URL, retrying up to num_retries times on 5xx server errors.
    print('Downloading:', url)
    LiSongbo_he = {'User-agent': user_agent}
    request = urllib.request.Request(url, headers=LiSongbo_he)
    try:
        html = urllib.request.urlopen(request).read()
    except urllib.request.URLError as e:
        print('Download error:', e.reason)
        html = None
        if num_retries > 0:
            if hasattr(e, 'code') and 500 <= e.code < 600:
                return Rocky_dnload(url, user_agent, num_retries - 1)  ## retry 5xx HTTP errors
    return html

## -- written by LiSongbo
def Rocky_crawl_sitemap(url):
    ## Download the sitemap, then download every URL it lists.
    sitemap = Rocky_dnload(url)  ## download the sitemap file
    sitemap = sitemap.decode('utf-8')
    links = re.findall('<loc>(.*?)</loc>', sitemap)  ## extract the sitemap links from the <loc> tags
    for link in links:  ## download each link
        html = Rocky_dnload(link)
        ## scrape html here

## -- written by LiSongbo
## ID traversal: walk the numeric IDs until max_errors consecutive downloads fail.
max_errors = 5
n_errors = 0
for page in itertools.count(1):
    url = 'http://example.webscraping.com/view/-%d' % page
    html = Rocky_dnload(url)
    if html is None:
        n_errors += 1
        if n_errors == max_errors:
            break
    else:
        n_errors = 0
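The `## scrape html here` placeholder above is where per-page extraction would go. Here is a minimal, hypothetical sketch of that step: it assumes the country pages on example.webscraping.com render their field values inside `<td class="w2p_fw">` cells, and the helper name `Rocky_scrape_fields` is my own addition, not part of the original post.

```python
import re

def Rocky_scrape_fields(html):
    ## Pull the field values out of one downloaded country page.
    ## Assumption: values sit inside <td class="w2p_fw">...</td> cells.
    if html is None:
        return []
    text = html.decode('utf-8', errors='replace')  ## urlopen().read() returns bytes
    return re.findall('<td class="w2p_fw">(.*?)</td>', text)

## Example usage with the downloader defined above (hypothetical):
## fields = Rocky_scrape_fields(Rocky_dnload('http://example.webscraping.com/view/-1'))
## print(fields)
```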

Running it produces output like this:

Downloading: http://example.webscraping.com/view/-1
Downloading: http://example.webscraping.com/view/-2
Downloading: http://example.webscraping.com/view/-3
Downloading: http://example.webscraping.com/view/-4
Downloading: http://example.webscraping.com/view/-5
Downloading: http://example.webscraping.com/view/-6
Downloading: http://example.webscraping.com/view/-7
Downloading: http://example.webscraping.com/view/-8
Downloading: http://example.webscraping.com/view/-9

……
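The `max_errors` counter exists because some IDs may be missing (records can be deleted), so a single failed download should not stop the crawl; only several consecutive failures are taken to mean the IDs have run out. If the traversal is needed in more than one place, it could be wrapped in a generator. The following is only a sketch of a possible refactor, not code from the original post; it reuses `Rocky_dnload` defined above.

```python
import itertools

def Rocky_crawl_ids(base_url='http://example.webscraping.com/view/-%d', max_errors=5):
    ## Yield the HTML of each page while walking IDs 1, 2, 3, ...
    ## Stop after max_errors consecutive failures (assumed to mean no more records).
    n_errors = 0
    for page in itertools.count(1):
        html = Rocky_dnload(base_url % page)
        if html is None:
            n_errors += 1
            if n_errors == max_errors:
                break
        else:
            n_errors = 0
            yield html

## Example usage (hypothetical):
## for html in Rocky_crawl_ids():
##     pass  ## scrape each page here
```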


