Scrapy Learning Notes (1)
Scrapy

Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a whole range of programs for data mining, information processing, or archiving historical data. It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch the data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy is widely applicable: data mining, monitoring, and automated testing, among other things.

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows.
Scrapy's main components:

- Engine (Scrapy Engine): handles the data flow across the whole system and triggers events (the core of the framework).
- Scheduler: accepts requests from the engine, pushes them onto a queue, and returns them when the engine asks again. Think of it as a priority queue of URLs that decides which URL to crawl next and removes duplicate URLs.
- Downloader: downloads page content and hands it back to the spiders (the downloader is built on Twisted, an efficient asynchronous model).
- Spiders: do the main work; they extract the information they need from specific pages, the so-called items. They can also extract links so that Scrapy keeps crawling further pages.
- Item Pipeline: processes the items the spiders extract; its main jobs are persisting items, validating them, and stripping unwanted data. Once a page has been parsed by a spider, the items are sent to the pipeline and processed in a specific order.
- Downloader middlewares: sit between the engine and the downloader and process the requests and responses passing between them.
- Spider middlewares: sit between the engine and the spiders and process the spiders' response input and request output.
- Scheduler middlewares: sit between the engine and the scheduler and process the requests and responses passing between them.
The run flow is roughly as follows (a minimal spider illustrating the two possible outcomes appears after the list):
- The engine takes a URL from the scheduler for the next crawl.
- The engine wraps the URL in a Request and hands it to the downloader.
- The downloader fetches the resource and wraps it in a Response.
- The spider parses the Response.
- If it parses out items, they are handed to the item pipeline for further processing.
- If it parses out links (URLs), those URLs are handed back to the scheduler to await crawling.
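To make that flow concrete, here is a minimal sketch (not from the original notes): the site, quotes.toscrape.com, is the Scrapy tutorial sandbox, the selectors are illustrative, and a reasonably recent Scrapy release is assumed. Anything the spider yields as an item goes to the pipelines; anything it yields as a Request goes back to the scheduler.

```python
import scrapy


class FlowDemoSpider(scrapy.Spider):
    """Illustrates the run flow: yielded items go to the pipelines,
    yielded Requests go back to the scheduler."""
    name = "flow_demo"
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Items (plain dicts here) are handed to the item pipelines.
        for text in response.css("span.text::text").getall():
            yield {"text": text}

        # Requests are handed back to the scheduler and downloaded later.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```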
1. Installation
```
Linux:
    pip3 install scrapy

Windows:
    1. Try installing directly: pip3 install scrapy
    2. If that fails, Twisted usually needs to be installed first:
       2.1 Download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
       2.2 From the download directory, run: pip3 install Twisted-17.1.0-cp35-cp35m-win_amd64.whl
    3. Install pywin32: pip install pywin32
```
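A quick sanity check that the install worked is to import Scrapy from a Python shell (the version string shown is only an example of what you might see):

```python
import scrapy

print(scrapy.__version__)   # any version string (e.g. "1.5.0") means the import succeeded
print(scrapy.Spider)        # the base class every spider inherits from
```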
2. Basic usage

2.1 Basic commands
```
1. scrapy startproject <project_name>
   - creates a project in the current directory (similar to Django)

2. scrapy genspider [-t template] <name> <domain>
   - creates a spider, e.g.:
         scrapy genspider -t basic oldboy oldboy.com
         scrapy genspider -t xmlfeed autohome autohome.com.cn
   - list the available templates:  scrapy genspider -l
   - show a template's content:     scrapy genspider -d <template_name>

3. scrapy list
   - lists the spiders in the project

4. scrapy crawl <spider_name>
   - runs a single spider
```
2.2 Project structure and spider basics
```
project_name/
    scrapy.cfg
    project_name/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            spider1.py
            spider2.py
            spider3.py
```
File descriptions:

- scrapy.cfg: the project's top-level configuration (the settings that actually matter for crawling live in settings.py)
- items.py: data-model templates for structured data, similar to Django's Model
- pipelines.py: data-processing behaviour, e.g. persisting the structured data
- settings.py: configuration such as recursion depth, concurrency, download delay, and so on
- spiders/: the spider directory; create files here and write the crawling rules in them

Note: spider files are usually named after the target site's domain.
```python
import scrapy


class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                        # spider name (required)
    allowed_domains = ["xiaohuar.com"]       # allowed domains
    start_urls = [
        "http://www.xiaohuar.com/hua/",      # start URL
    ]

    def parse(self, response):
        # callback invoked once the start URL has been fetched
        pass
```
```python
import hashlib

import scrapy
from scrapy.http.request import Request
from scrapy.selector import HtmlXPathSelector


class DigSpider(scrapy.Spider):
    # spider name, used to start the crawl from the command line
    name = "dig"

    # allowed domains
    allowed_domains = ["chouti.com"]

    # start URLs
    start_urls = [
        'http://dig.chouti.com/',
    ]

    has_request_set = {}

    def parse(self, response):
        print(response.url)

        hxs = HtmlXPathSelector(response)
        page_list = hxs.select(
            r'//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href'
        ).extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key
```
To run this spider, open a terminal in the project directory and execute:

```
scrapy crawl dig --nolog    # --nolog suppresses the log output
```
The important points in the code above:

- Request is a class that wraps a user request; yielding one from a callback tells Scrapy to keep crawling that URL
- HtmlXPathSelector structures the HTML and provides selector functionality (a short selector sketch follows)
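As a small illustration of the selector API: HtmlXPathSelector comes from older Scrapy releases; in current versions the same thing is normally done with response.xpath()/response.css(). The spider below is a sketch, not part of the original notes; the site and field names are illustrative.

```python
import scrapy


class SelectorDemoSpider(scrapy.Spider):
    name = "selector_demo"          # hypothetical spider, for illustration only
    start_urls = ["http://quotes.toscrape.com/"]

    def parse(self, response):
        # Both CSS and XPath selectors return SelectorList objects;
        # .get() returns the first match, .getall() returns every match.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.xpath(".//small[@class='author']/text()").get(),
            }
```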
What `yield scrapy.Request(url, callback=self.parse)` does:

Because `yield` is used instead of `return`, parse() acts as a generator. Scrapy iterates over the results it produces and checks the type of each one: a Request is added to the crawl queue, an Item is handed to the pipelines, and any other type raises an error. See the source code for the details.

2.3 URL deduplication
Scrapy's Request.__init__ accepts a number of parameters.

One of them is dont_filter=False: by default Scrapy deduplicates requests with scrapy.dupefilter.RFPDupeFilter (see the RFPDupeFilter source for the details). The related settings are:

```python
DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "directory for the seen-requests log, e.g. /root/"   # the final path is /root/requests.seen
```
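If a particular request should bypass the duplicate filter, Request accepts dont_filter=True. A minimal sketch (the spider name and URL are illustrative, not from the original notes):

```python
import scrapy
from scrapy.http import Request


class RefreshDemoSpider(scrapy.Spider):
    name = "refresh_demo"                    # hypothetical spider
    start_urls = ["http://example.com/"]

    def parse(self, response):
        # dont_filter=True tells the scheduler to skip the dupefilter for this
        # request, so an already-seen URL will be downloaded again.
        yield Request(response.url, callback=self.parse_again, dont_filter=True)

    def parse_again(self, response):
        self.logger.info("fetched %s a second time", response.url)
```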
We can also implement deduplication ourselves, which gives more flexibility over the data — visited URLs can be stored in memory, a database, a file, a cache, and so on.

1) First, define a custom dedup class, RepeatFilter, and point DUPEFILTER_CLASS in settings.py at it instead of Scrapy's default:

```python
# settings.py
DUPEFILTER_CLASS = '<project_name>.dupefilter.RepeatFilter'   # the path must point at your own module, not at the scrapy package
```
2) Then create a dupefilter.py file, define the RepeatFilter class in it, and override the methods of RFPDupeFilter:

- from_settings(cls, settings): defined as a classmethod; it is the first method called and it instantiates RepeatFilter
- request_seen(self, request): checks whether the current request has already been seen; returns True if it has, False if it has not
- open(self): called when crawling starts
- close(self, reason): called when the crawl finishes
- log(self, request, spider): logging

Call order: 1. from_settings → 2. __init__ → 3. open → 4. request_seen → 5. close
```python
# dupefilter.py


class RepeatFilter(object):

    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """Called at start-up to instantiate the filter."""
        return cls()

    def request_seen(self, request):
        """Return True if the request has already been seen, False otherwise."""
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """Called when crawling starts."""
        print('open replication')

    def close(self, reason):
        """Called when the crawl finishes."""
        print('close replication')

    def log(self, request, spider):
        """Log a duplicate request."""
        print('repeat', request.url)
```
2.4 Pipelines
Items give the scraped data a defined structure; once yielded, they are passed on to the pipelines for processing.

Yielding an item (as below) hands it to the pipelines: the process_item() method of every class registered under ITEM_PIPELINES in settings.py runs on it, in order of the registered value — the lower the number, the earlier it runs.

```python
yield item_example   # as soon as this runs, the classes in pipelines.py take over:
                     # process_item() is called to persist the data, and open_spider()
                     # is called once when the spider starts crawling
```
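As a sketch of that hand-off (the item class, its fields, and the spider are hypothetical, not from the original notes):

```python
import scrapy


class ArticleItem(scrapy.Item):
    # structured fields the spider fills in
    title = scrapy.Field()
    url = scrapy.Field()


class ArticleDemoSpider(scrapy.Spider):
    name = "article_demo"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        item = ArticleItem()
        item["title"] = response.css("title::text").get()
        item["url"] = response.url
        yield item   # handed to every pipeline class registered in ITEM_PIPELINES
```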
Pipeline classes are written in pipelines.py. For them to run, the classes you want to use must be registered in settings.py. For example:
```python
# pipelines.py
from scrapy.pipelines.images import ImagesPipeline


class ArticleImagePipeline(ImagesPipeline):

    def item_completed(self, results, item, info):
        if "front_image_url" in item:
            for ok, value in results:
                image_file_path = value["path"]
                item["front_image_path"] = image_file_path
        # Returning the item passes it on to the next pipeline class. To stop any
        # further processing, raise DropItem() (scrapy.exceptions.DropItem) instead,
        # which discards the item.
        return item
```
Register it in settings.py:

```python
ITEM_PIPELINES = {
    'ArticleSpider.pipelines.ArticleImagePipeline': 2,   # custom ArticleImagePipeline; use the full module path
}
```
Pipeline classes expose a few methods, quite similar to the dedup filter class above (a minimal skeleton follows the list):
- process_item(self, item, spider): called to process and persist each item
- from_crawler(cls, crawler): called at initialisation; reads settings and instantiates the pipeline object
- open_spider(self, spider): called when the spider starts
- close_spider(self, spider): called when the spider finishes
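A minimal skeleton of those four hooks, added as a sketch: the JSON-lines storage and the JSONLINES_PATH setting are illustrative, not part of Scrapy or the original notes.

```python
# pipelines.py
import json

from scrapy.exceptions import DropItem


class JsonLinesPipeline(object):

    def __init__(self, file_path):
        self.file_path = file_path
        self.file = None

    @classmethod
    def from_crawler(cls, crawler):
        # called once at start-up: read settings and build the pipeline object
        path = crawler.settings.get("JSONLINES_PATH", "items.jl")   # hypothetical setting
        return cls(path)

    def open_spider(self, spider):
        # called when the spider starts
        self.file = open(self.file_path, "a", encoding="utf-8")

    def process_item(self, item, spider):
        # called for every item: persist it, or raise DropItem to discard it
        if not dict(item):
            raise DropItem("empty item")
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        # called when the spider finishes
        self.file.close()
```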
2.5 Cookies
```python
# -*- coding: utf-8 -*-
# Handling cookies with meta={'cookiejar': True}
import scrapy
from scrapy.http.response.html import HtmlResponse
from scrapy.http import Request
from scrapy.http.cookies import CookieJar


class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com"]
    start_urls = ('http://www.chouti.com/',)

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        # meta={'cookiejar': True} asks the cookies middleware to keep a cookie jar for us
        yield Request(url=url, callback=self.login, meta={'cookiejar': True})

    def login(self, response):
        print(response.headers.getlist('Set-Cookie'))
        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8613121758648&password=woshiniba&oneMonth=1',
            callback=self.check_login,
            meta={'cookiejar': True},
        )
        yield req

    def check_login(self, response):
        print(response.text)
```
```python
# -*- coding: utf-8 -*-
# Example: log in to Chouti (dig.chouti.com) automatically and upvote posts in bulk
import io
import sys

import scrapy
from scrapy.http import Request
from scrapy.http.cookies import CookieJar
from scrapy.selector import Selector, HtmlXPathSelector
from ..items import ChoutiItem

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')


class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com"]
    start_urls = ['http://dig.chouti.com/']
    cookie_dict = None

    def parse(self, response):
        # extract the cookies set by the first response
        cookie_obj = CookieJar()
        cookie_obj.extract_cookies(response, response.request)
        self.cookie_dict = cookie_obj._cookies

        # log in with username/password plus the cookies
        yield Request(
            url="http://dig.chouti.com/login",
            method='POST',
            body="phone=8615131255089&password=woshiniba&oneMonth=1",
            headers={'Content-Type': "application/x-www-form-urlencoded; charset=UTF-8"},
            cookies=cookie_obj._cookies,
            callback=self.check_login,
        )

    def check_login(self, response):
        print(response.text)
        yield Request(url="http://dig.chouti.com/", callback=self.good)

    def good(self, response):
        id_list = Selector(response=response).xpath('//div[@share-linkid]/@share-linkid').extract()
        for nid in id_list:
            print(nid)
            url = "http://dig.chouti.com/link/vote?linksId=%s" % nid
            yield Request(url=url, method="POST", cookies=self.cookie_dict, callback=self.show)

        # page_urls = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        # for page in page_urls:
        #     url = "http://dig.chouti.com%s" % page
        #     yield Request(url=url, callback=self.good)

    def show(self, response):
        print(response.text)
```
2.6 Extensions: using signals to register custom behaviour at specific points

Scrapy provides extension hooks for custom behaviour. In settings.py:
```python
EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
}
```
Looking at the TelnetConsole source shows the pattern: by implementing __init__ and from_crawler we can write our own hook.
scrapy.signals provides a range of signals we can hook into when writing a custom extension:
```python
engine_started = object()        # engine started
engine_stopped = object()        # engine stopped
spider_opened = object()         # spider opened
spider_idle = object()           # spider idle (no more pending requests)
spider_closed = object()         # spider closed
spider_error = object()          # spider raised an error
request_scheduled = object()     # request handed to the scheduler
request_dropped = object()       # request dropped
response_received = object()     # response received
response_downloaded = object()   # response downloaded
item_scraped = object()          # item scraped
item_dropped = object()          # item dropped
```
Defining a custom hook:

Create an extensions.py file with a MyExtension class:
```python
# extensions.py
from scrapy import signals


class MyExtension(object):

    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        # register the callbacks for the signals we care about
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')
```
Register it in settings.py:

```python
EXTENSIONS = {
    # 'scrapy.extensions.telnet.TelnetConsole': None,
    'articlespider.extensions.MyExtension': 300,   # custom extension; the path must match your project's module
}
```
2.7 settings.py reference:
```python
# -*- coding: utf-8 -*-

# Scrapy settings for the step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. Whether to obey robots.txt
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Concurrent requests per domain; the download delay is also applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
#    Concurrent requests per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored
#    and the download delay is applied per IP instead
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are enabled; cookies are handled through cookiejars
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console lets you inspect and control the running crawler:
#    connect with `telnet <ip> <port>` and issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Item pipelines that process scraped items
# ITEM_PIPELINES = {
#     'step8_king.pipelines.JsonPipeline': 700,
#     'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, invoked via signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }

# 13. Maximum crawl depth (the current depth can be read via request.meta); 0 means no limit
# DEPTH_LIMIT = 3

# 14. Crawl order: DEPTH_PRIORITY = 0 means depth-first (LIFO, the default);
#     1 means breadth-first (FIFO)

# LIFO, depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# FIFO, breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL deduplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
"""
17. AutoThrottle algorithm
    from scrapy.contrib.throttle import AutoThrottle
    1. read the minimum delay:  DOWNLOAD_DELAY
    2. read the maximum delay:  AUTOTHROTTLE_MAX_DELAY
    3. set the initial delay:   AUTOTHROTTLE_START_DELAY
    4. when a download finishes, take its latency, i.e. the time between sending
       the request and receiving the response headers
    5. compute the new delay from AUTOTHROTTLE_TARGET_CONCURRENCY:
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0   # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
"""
# Enable AutoThrottle
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
"""
18. HTTP cache
    Caches requests/responses that have already been downloaded so they can be reused later.
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# Enable the cache
# HTTPCACHE_ENABLED = True

# Cache policy: cache every request; later requests are served straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# Cache policy: honour HTTP caching headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# Cache expiration time in seconds
# HTTPCACHE_EXPIRATION_SECS = 0

# Cache directory
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes that are never cached
# HTTPCACHE_IGNORE_HTTP_CODES = []

# Cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

"""
19. Proxies (the default middleware reads them from environment variables)
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    Option 1: use os.environ (the default behaviour)
        os.environ = {
            'http_proxy': 'http://root:woshiniba@192.168.11.11:9999/',
            'https_proxy': 'http://192.168.11.11:9999/',
        }

    Option 2: use a custom downloader middleware

        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)

        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

        DOWNLOADER_MIDDLEWARES = {
            'step8_king.middlewares.ProxyMiddleware': 500,
        }
"""

"""
20. HTTPS
    There are two cases when crawling HTTPS sites:
    1. the site uses a trusted certificate (supported by default):
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
    2. the site uses a custom (e.g. self-signed) certificate:
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM,
                                            open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM,
                                             open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,      # pKey object
                    certificate=v2,     # X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )

    Related classes:
        scrapy.core.downloader.handlers.http.HttpDownloadHandler
        scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
        scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
    Related settings:
        DOWNLOADER_HTTPCLIENTFACTORY
        DOWNLOADER_CLIENTCONTEXTFACTORY
"""

"""
21. Spider middleware

    class SpiderMiddleware(object):

        def process_spider_input(self, response, spider):
            '''
            Called for each downloaded response before it is handed to parse().
            '''
            pass

        def process_spider_output(self, response, result, spider):
            '''
            Called with the results the spider returns.
            :return: must return an iterable of Request and/or Item objects
            '''
            return result

        def process_spider_exception(self, response, exception, spider):
            '''
            Called when the spider (or process_spider_input) raises an exception.
            :return: None to let the remaining middlewares handle the exception, or an
                     iterable of Response/Item objects to hand to the scheduler or pipelines
            '''
            return None

        def process_start_requests(self, start_requests, spider):
            '''
            Called with the spider's start requests when the crawl starts.
            :return: an iterable of Request objects
            '''
            return start_requests

    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    # 'step8_king.middlewares.SpiderMiddleware': 543,
}

"""
22. Downloader middleware

    class DownMiddleware1(object):

        def process_request(self, request, spider):
            '''
            Called for every request that needs to be downloaded, by each downloader
            middleware in turn.
            :return: None to continue with the remaining middlewares and the download;
                     a Response object to skip the download and jump to process_response;
                     a Request object to stop the chain and reschedule the request;
                     raise IgnoreRequest to stop and jump to process_exception
            '''
            pass

        def process_response(self, request, response, spider):
            '''
            Called with the downloaded response on its way back to the spider.
            :return: a Response object to pass to the next middleware's process_response;
                     a Request object to stop the chain and reschedule the request;
                     raise IgnoreRequest to call Request.errback
            '''
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            '''
            Called when the download handler or process_request() raises an exception.
            :return: None to let the remaining middlewares handle the exception;
                     a Response object to stop the process_exception chain;
                     a Request object to stop the chain and reschedule the request
            '''
            return None

    Default downloader middlewares:
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }
"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#     'step8_king.middlewares.DownMiddleware1': 100,
#     'step8_king.middlewares.DownMiddleware2': 500,
# }
```
2.8 Settings: AutoThrottle and HTTP caching
""" 17. 自動限速算法from scrapy.contrib.throttle import AutoThrottle自動限速設置1. 獲取最小延遲 DOWNLOAD_DELAY2. 獲取最大延遲 AUTOTHROTTLE_MAX_DELAY3. 設置初始下載延遲 AUTOTHROTTLE_START_DELAY4. 當請求下載完成后,獲取其"連接"時間 latency,即:請求連接到接受到響應頭之間的時間5. 用于計算的... AUTOTHROTTLE_TARGET_CONCURRENCYtarget_delay = latency / self.target_concurrencynew_delay = (slot.delay + target_delay) / 2.0 # 表示上一次的延遲時間new_delay = max(target_delay, new_delay)new_delay = min(max(self.mindelay, new_delay), self.maxdelay)slot.delay = new_delay """# 開始自動限速 # AUTOTHROTTLE_ENABLED = True # The initial download delay # 初始下載延遲 # AUTOTHROTTLE_START_DELAY = 5 # The maximum download delay to be set in case of high latencies # 最大下載延遲 # AUTOTHROTTLE_MAX_DELAY = 10 # The average number of requests Scrapy should be sending in parallel to each remote server # 平均每秒并發數 # AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0setting.py 動態限速
""" 18. 啟用緩存目的用于將已經發送的請求或相應緩存下來,以便以后使用from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddlewarefrom scrapy.extensions.httpcache import DummyPolicyfrom scrapy.extensions.httpcache import FilesystemCacheStorage """ # 是否啟用緩存策略 # HTTPCACHE_ENABLED = True# 緩存策略:所有請求均緩存,下次在請求直接訪問原來的緩存即可 # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy" # 緩存策略:根據Http響應頭:Cache-Control、Last-Modified 等進行緩存的策略 # HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"# 緩存超時時間 # HTTPCACHE_EXPIRATION_SECS = 0# 緩存保存路徑 # HTTPCACHE_DIR = 'httpcache'# 緩存忽略的Http狀態碼 # HTTPCACHE_IGNORE_HTTP_CODES = []# 緩存存儲的插件 # HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'緩存
2.9 Settings: default and custom proxies
""" 19. 代理,需要在環境變量中設置from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware方式一:使用默認os.environ{http_proxy:http://root:woshiniba@192.168.11.11:9999/https_proxy:http://192.168.11.11:9999/}方式二:使用自定義下載中間件def to_bytes(text, encoding=None, errors='strict'):if isinstance(text, bytes):return textif not isinstance(text, six.string_types):raise TypeError('to_bytes must receive a unicode, str or bytes ''object, got %s' % type(text).__name__)if encoding is None:encoding = 'utf-8'return text.encode(encoding, errors)class ProxyMiddleware(object):def process_request(self, request, spider):PROXIES = [{'ip_port': '111.11.228.75:80', 'user_pass': ''},{'ip_port': '120.198.243.22:80', 'user_pass': ''},{'ip_port': '111.8.60.9:8123', 'user_pass': ''},{'ip_port': '101.71.27.120:80', 'user_pass': ''},{'ip_port': '122.96.59.104:80', 'user_pass': ''},{'ip_port': '122.224.249.122:8088', 'user_pass': ''},]proxy = random.choice(PROXIES)if proxy['user_pass'] is not None:request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)print "**************ProxyMiddleware have pass************" + proxy['ip_port']else:print "**************ProxyMiddleware no pass************" + proxy['ip_port']request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])DOWNLOADER_MIDDLEWARES = {'step8_king.middlewares.ProxyMiddleware': 500,}"""View Code
2.10 Settings: custom HTTPS certificates
""" 20. Https訪問Https訪問時有兩種情況:1. 要爬取網站使用的可信任證書(默認支持)DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"2. 要爬取網站使用的自定義證書DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"# https.pyfrom scrapy.core.downloader.contextfactory import ScrapyClientContextFactoryfrom twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)class MySSLFactory(ScrapyClientContextFactory):def getCertificateOptions(self):from OpenSSL import cryptov1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())return CertificateOptions(privateKey=v1, # pKey對象certificate=v2, # X509對象verify=False,method=getattr(self, 'method', getattr(self, '_ssl_method', None)))其他:相關類scrapy.core.downloader.handlers.http.HttpDownloadHandlerscrapy.core.downloader.webclient.ScrapyHTTPClientFactoryscrapy.core.downloader.contextfactory.ScrapyClientContextFactory相關配置DOWNLOADER_HTTPCLIENTFACTORYDOWNLOADER_CLIENTCONTEXTFACTORY"""View Code
2.11 Middlewares
1) Downloader middleware
""" 22. 下載中間件class DownMiddleware1(object):def process_request(self, request, spider):'''請求需要被下載時,經過所有下載器中間件的process_request調用:param request::param spider::return:None,繼續后續中間件去下載;Response對象,停止process_request的執行,開始執行process_responseRequest對象,停止中間件的執行,將Request重新調度器raise IgnoreRequest異常,停止process_request的執行,開始執行process_exception'''passdef process_response(self, request, response, spider):'''spider處理完成,返回時調用:param response::param result::param spider::return:Response 對象:轉交給其他中間件process_responseRequest 對象:停止中間件,request會被重新調度下載raise IgnoreRequest 異常:調用Request.errback'''print('response1')return responsedef process_exception(self, request, exception, spider):'''當下載處理器(download handler)或 process_request() (下載中間件)拋出異常:param response::param exception::param spider::return:None:繼續交給后續中間件處理異常;Response對象:停止后續process_exception方法Request對象:停止中間件,request將會被重新調用下載'''return None默認下載中間件{'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,}"""View Code
2) Spider middleware
""" 21. 爬蟲中間件class SpiderMiddleware(object):def process_spider_input(self,response, spider):'''下載完成,執行,然后交給parse處理:param response: :param spider: :return: '''passdef process_spider_output(self,response, result, spider):'''spider處理完成,返回時調用:param response::param result::param spider::return: 必須返回包含 Request 或 Item 對象的可迭代對象(iterable)'''return resultdef process_spider_exception(self,response, exception, spider):'''異常調用:param response::param exception::param spider::return: None,繼續交給后續中間件處理異常;含 Response 或 Item 的可迭代對象(iterable),交給調度器或pipeline'''return Nonedef process_start_requests(self,start_requests, spider):'''爬蟲啟動時調用:param start_requests::param spider::return: 包含 Request 對象的可迭代對象'''return start_requests內置爬蟲中間件:'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,"""View Code
轉載于:https://www.cnblogs.com/Eric15/articles/9733824.html
總結
以上是生活随笔為你收集整理的Scrapy 学习笔记(-)的全部內容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: 北京一套房子多少钱一平方
- 下一篇: 4 课堂测试