16 - Crawlers with the Scrapy Framework: Manual Request Sending for Full-Site Data Crawling (03)


Full-Site Data Crawling with Scrapy's Manual Request Sending

  • yield scrapy.Request(url, callback) issues a GET request
    • callback specifies the parse function used to parse the response data
  • yield scrapy.FormRequest(url, callback, formdata) issues a POST request
    • formdata: a dict of request parameters (a minimal sketch appears after the start_requests snippets below)
  • Why are the URLs in the start_urls list automatically sent as GET requests?
    • Because the GET requests for those URLs are actually issued by the parent-class method start_requests
# Parent-class method: this is the method's original implementation
def start_requests(self):
    for u in self.start_urls:
        yield scrapy.Request(url=u, callback=self.parse)
  • How can the URLs in start_urls be sent as POST requests by default?
# Override the parent-class method so that POST requests are sent by default
def start_requests(self):
    for u in self.start_urls:
        yield scrapy.FormRequest(url=u, callback=self.parse)
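
The override above issues POST requests but carries no parameters. The sketch below shows formdata in use; the spider name, the URL, and the kw parameter are placeholders for illustration, not part of the original article.

import scrapy


class PostDemoSpider(scrapy.Spider):
    # hypothetical spider and endpoint, used only to show a manual POST with formdata
    name = 'post_demo'
    start_urls = ['https://example.com/search']

    def start_requests(self):
        for u in self.start_urls:
            # formdata is a dict of request parameters; Scrapy form-encodes it as the POST body
            yield scrapy.FormRequest(url=u, callback=self.parse, formdata={'kw': 'scrapy'})

    def parse(self, response):
        # response.text holds the body returned for the POST request
        print(response.text)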

Getting Started

Create a scraping project: scrapy startproject proName (this article's project is named gpc)
Enter the project directory and create a spider source file: scrapy genspider spiderName www.xxx.com (here the spider is named una)
Run the project: scrapy crawl spiderName

Configure pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class GpcPipeline:
    def process_item(self, item, spider):
        print(item)
        return item
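
The pipeline above only prints each item. To persist items instead, a pipeline can open a file when the spider starts and close it when the spider finishes; the sketch below is an assumption (the class name and the duanzi.txt file are not part of the original project):

class GpcFilePipeline:
    """Hypothetical variant of the pipeline that writes each item to a local text file."""

    def open_spider(self, spider):
        # called once when the spider starts
        self.fp = open('duanzi.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write(f"{item['title']}: {item['content']}\n")
        return item

    def close_spider(self, spider):
        # called once when the spider finishes
        self.fp.close()

Like GpcPipeline, it would also need its own entry in ITEM_PIPELINES before Scrapy would run it.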

Configure items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class GpcItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    content = scrapy.Field()
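
Item fields are read and written with dictionary syntax rather than attribute access; a small illustration with placeholder values:

from gpc.items import GpcItem

item = GpcItem()
item['title'] = 'a title'        # dict-style assignment works
item['content'] = 'some text'
print(item['title'])             # dict-style reads work too
# item.title raises AttributeError, and assigning an undeclared key
# such as item['author'] = '...' raises KeyError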

Configure settings.py

# Scrapy settings for gpc project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'gpc'

SPIDER_MODULES = ['gpc.spiders']
NEWSPIDER_MODULE = 'gpc.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# spoof the User-Agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'
LOG_LEVEL = 'ERROR'  # only output error-level log messages

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'gpc.middlewares.GpcSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'gpc.middlewares.GpcDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'gpc.pipelines.GpcPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Spider source file: una.py

import scrapy
from gpc.items import GpcItem


class UnaSpider(scrapy.Spider):
    name = 'una'
    # allowed_domains = ['www.baidu.com']
    start_urls = ['https://duanziwang.com/category/經典段子/1/']

    # generic URL template for the remaining pages
    url = 'https://duanziwang.com/category/經典段子/%d/'
    page_num = 2

    # crawl the data on every page of the joke site
    def parse(self, response):
        # parse out the title and content
        article_list = response.xpath('/html/body/section/div/div/main/article')
        for article in article_list:
            # the parsed result is not a plain string, so usage differs from lxml's etree xpath:
            # the list returned by xpath() holds Selector objects, and the string we want is
            # stored in each object's data attribute
            # extract() pulls out the data value
            # extract_first() pulls the data value out of the first Selector in the list
            title = article.xpath("./div[1]/h1/a/text()").extract_first()
            content = article.xpath("./div[2]/p/text()").extract_first()

            # instantiate an item object and store the parsed data in it
            item = GpcItem()
            # fields must be accessed with item['field'], not item.field
            item['title'] = title
            item['content'] = content
            yield item

        if self.page_num < 5:  # recursion ends here: only pages 2, 3 and 4 are requested
            # full URL of the next page (the format() call is redundant, since % already returns a str)
            new_url = format(self.url % self.page_num)
            self.page_num += 1
            # manually issue a GET request for the new page and hand the response
            # back to parse (recursive callback)
            yield scrapy.Request(url=new_url, callback=self.parse)
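
The spider above drives pagination with a hard-coded counter (pages 1 through 4). A common alternative, sketched here under the assumption that the listing pages expose a "next page" link (the XPath and spider name are hypothetical), is to follow that link until it disappears:

import scrapy


class NextLinkSpider(scrapy.Spider):
    # hypothetical variant; item extraction would be identical to UnaSpider.parse above
    name = 'una_next'
    start_urls = ['https://duanziwang.com/category/經典段子/1/']

    def parse(self, response):
        # assumed selector for the site's "next page" link
        next_href = response.xpath('//a[contains(@class, "next")]/@href').extract_first()
        if next_href:
            # urljoin resolves a relative href against the current page's URL
            yield scrapy.Request(url=response.urljoin(next_href), callback=self.parse)

Either way, each yielded scrapy.Request is a manual GET whose response is handed back to parse.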

Results

Summary

Manual request sending with yield scrapy.Request(url, callback=...) is what lets a single spider walk every page of the target site, and overriding start_requests is how the URLs in start_urls can be switched from the default GET to POST.