Python Crawler Notes 11: Basic Usage of the Scrapy Framework
The Scrapy Framework Explained, with Basic Usage
-
How the Scrapy framework works
Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a whole range of programs for data mining, information processing, or archiving historical data.
It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy has broad applications in data mining, monitoring, and automated testing. It uses the Twisted asynchronous networking library to handle network communication. The overall architecture looks roughly like this:
[Scrapy architecture diagram]
Scrapy consists mainly of the following components:
- Engine (Scrapy Engine): handles the data flow for the whole system and triggers events (the framework core).
- Scheduler: accepts requests from the engine, pushes them into a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs (the addresses, or links, of the pages to crawl): it decides which URL gets crawled next and also removes duplicate URLs.
- Downloader: downloads page content and returns it to the spiders. (The downloader is built on Twisted, an efficient asynchronous model.)
- Spiders: the main workers. They extract the information you need, the so-called items, from specific pages. You can also extract links from pages so that Scrapy goes on to crawl the next one.
- Item Pipeline: processes the items the spiders extract from pages. Its main jobs are persisting items, validating them, and stripping out unwanted data. After a spider parses a page, the items are sent to the pipeline, whose components process them in a defined order.
- Downloader Middlewares: hooks sitting between the engine and the downloader that process the requests and responses passing between them.
- Spider Middlewares: hooks sitting between the engine and the spiders that process the spiders' input (responses) and output (requests).
- Scheduler Middlewares: middleware between the engine and the scheduler that processes the requests sent from the engine to the scheduler.
The Scrapy run flow is roughly as follows (a minimal code sketch of the loop follows the list):
1. The spider hands its initial URLs to the engine and asks it to pass them on to the scheduler.
2. The engine gives the initial URLs to the scheduler, which places them in its queue.
3. The scheduler reports back that they are queued and hands a URL to the engine, asking it to forward it to the downloader.
4. The engine gives the URL to the downloader, which downloads the page source.
5. The downloader reports that the download is done and hands the page source back to the engine as a response.
6. The engine passes the response to the spider, which parses it and extracts the data.
7. The spider returns the extracted data to the engine, asking it to queue any new URLs with the scheduler and to send the scraped items on to the Item Pipeline for storage.
8. The Item Pipeline saves the data, then tells the engine it is ready for the next URL.
9. Steps 3-8 repeat until the scheduler has no URLs left, at which point the crawl shuts down. (If a URL fails to download, it is sent back to be retried.)
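All of the message passing above is handled for you; in user code the loop reduces to a spider that yields items and new requests. A minimal sketch of that, where the URL and field names are placeholders rather than anything from the original tutorial:

```python
import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    # step 1: the initial URL the spider hands to the engine
    start_urls = ['http://example.com/']

    def parse(self, response):
        # step 6: the engine delivers the downloaded response here
        for href in response.css('a::attr(href)').extract():
            # step 7: new URLs travel back through the engine to the scheduler
            yield scrapy.Request(response.urljoin(href), callback=self.parse)
        # step 7: extracted data travels through the engine to the item pipelines
        yield {'title': response.css('title::text').extract_first()}
```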
-
Basic usage
Basic steps for creating a project
```
# Create a project: this creates a project skeleton in the current directory (similar to Django)
scrapy startproject sp1
```

The generated layout, with a note on each file:

```
sp1/
├── sp1/
│   ├── spiders/        # directory holding the spider apps you create
│   ├── middlewares.py  # middlewares
│   ├── items.py        # item definitions; used with pipelines.py for persistence
│   ├── pipelines.py    # persistence
│   └── settings.py     # configuration
└── scrapy.cfg          # project configuration
```

```
# Create spider apps
cd sp1
scrapy genspider xiaohuar xiaohuar.com   # creates xiaohuar.py
scrapy genspider baidu baidu.com         # creates baidu.py

# List the spider apps
scrapy list

# Run a spider (from inside the project)
scrapy crawl baidu
scrapy crawl baidu --nolog
```
Note: spider files are conventionally named after the target site's domain.
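For reference, `scrapy genspider baidu baidu.com` produces roughly the following skeleton (the exact template varies a little between Scrapy versions):

```python
# -*- coding: utf-8 -*-
import scrapy


class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['baidu.com']
    start_urls = ['http://baidu.com/']

    def parse(self, response):
        pass
```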
-
Hands-on project
Goal: crawl all quotes, authors, and tags from the target site and store them in MongoDB.
Target site: Quotes to Scrape (http://quotes.toscrape.com/)
Workflow: fetch the index page, parse out each quote, follow the "next" pagination link, and save the results to MongoDB.
Building the spider
Define the targets ---> items.py (decide what you want to scrape and define the fields to extract)
```python
# -*- coding: utf-8 -*-
import scrapy


class QuoteItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    text = scrapy.Field()    # quote text
    author = scrapy.Field()  # author
    tags = scrapy.Field()    # tags
```
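A scrapy.Item behaves like a dict whose keys are restricted to the declared fields, which is what lets the spider below assign item['text'] and friends. A quick illustration (not from the original post):

```python
item = QuoteItem()
item['text'] = 'Some quote'
item['author'] = 'Somebody'
print(dict(item))   # {'text': 'Some quote', 'author': 'Somebody'}
# item['foo'] = 1   # raises KeyError: 'foo' is not a declared field
```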
Write the spider ---> quotes.py (parse the response, extract the data and the next URL)
```python
# -*- coding: utf-8 -*-
import scrapy
from quotetutorial.items import QuoteItem


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        quotes = response.css('.quote')  # one selector per quote block
        for quote in quotes:
            item = QuoteItem()
            # ::text selects text content; extract_first() returns the first match
            text = quote.css('.text::text').extract_first()
            author = quote.css('.author::text').extract_first()
            # extract() without "first" returns every matching element
            tags = quote.css('.tags .tag::text').extract()
            item['text'] = text
            item['author'] = author
            item['tags'] = tags
            yield item

        # ::attr(href) selects an element attribute
        next = response.css('.pager .next a::attr(href)').extract_first()
        url = response.urljoin(next)  # join the relative link onto the base URL
        yield scrapy.Request(url=url, callback=self.parse)  # recurse with parse as the callback
```
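While writing these selectors, it helps to try them interactively in Scrapy's shell before committing them to parse(). A session looks something like this (output abbreviated and illustrative):

```
$ scrapy shell http://quotes.toscrape.com/
>>> response.css('.text::text').extract_first()
'"The world as we have created it is a process of our thinking. ..."'
>>> response.css('.pager .next a::attr(href)').extract_first()
'/page/2/'
```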
Store the content ---> pipelines.py (design the pipelines that store the content. Once a spider has collected its items, they are passed to the Item Pipeline, whose components process them in the order they are defined.)

```python
# -*- coding: utf-8 -*-
import pymongo
from scrapy.exceptions import DropItem


class TextPipeline(object):
    """Truncate quote text longer than 50 characters and append '...'."""

    def __init__(self):
        self.limit = 50

    def process_item(self, item, spider):
        if item['text']:
            if len(item['text']) > self.limit:
                item['text'] = item['text'][0:self.limit].rstrip() + '...'
            return item
        else:
            raise DropItem('Missing Text')  # discard items with no text


class MongoPipeline(object):
    """Write items into MongoDB."""

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # class method: pull the needed configuration out of settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB'),
        )

    def open_spider(self, spider):
        # set up the database connection when the spider starts
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        # insert the item into a collection named after its class
        name = item.__class__.__name__
        self.db[name].insert(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()
```
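One caveat on the code above: Collection.insert() has been deprecated since PyMongo 3.0 in favor of insert_one(); on a current PyMongo the insertion line would become:

```python
self.db[name].insert_one(dict(item))
```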
Configuration ---> settings.py (to enable the Item Pipeline components, their classes must be registered in ITEM_PIPELINES in settings.py. Here there are two pipeline classes, so find ITEM_PIPELINES and list them both.)

```python
# -*- coding: utf-8 -*-

# Scrapy settings for quotetutorial project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'quotetutorial'

SPIDER_MODULES = ['quotetutorial.spiders']
NEWSPIDER_MODULE = 'quotetutorial.spiders'

MONGO_URI = 'localhost'
MONGO_DB = 'quotestutorial'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'quotetutorial.pipelines.TextPipeline': 300,
    'quotetutorial.pipelines.MongoPipeline': 400,
}

# The rest of the autogenerated template (USER_AGENT, CONCURRENT_REQUESTS,
# DOWNLOAD_DELAY, COOKIES_ENABLED, the middleware sections, AutoThrottle, and
# the HTTP-cache settings) is left commented out at its defaults.
```

Note: if you have multiple item pipelines (i.e., multiple ways of saving data), every class must be registered in ITEM_PIPELINES. The integer assigned to each class can be chosen freely, but it determines the order in which the pipelines run: the lower the value, the higher the component's priority and the earlier it runs.
Run the project:

```
scrapy crawl quotes
```

To save the scraped items to a file:

```
scrapy crawl quotes -o quotes.{json | jl | csv | xml | pickle | marshal}
```
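For example, `-o quotes.json` collects every item into a single JSON array, while `-o quotes.jl` (JSON Lines) appends one JSON object per line as items arrive, which is friendlier for long crawls. Each line then looks roughly like this (illustrative, values shortened):

```
{"text": "...", "author": "...", "tags": ["..."]}
```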
Reposted from: https://www.cnblogs.com/darwinli/p/9485505.html