

Python crawlers: an introduction to the important components of the Scrapy framework


1. The request-deduplication component (DupeFilter)

Requests are deduplicated through a set of fingerprints; in the custom filter below, the fingerprints are stored in Redis.

The default class: from scrapy.dupefilter import RFPDupeFilter

a. In the spider: yield Request(..., dont_filter=False) (False means the request is subject to the dedup check)

b. The custom filter class:

from scrapy.dupefilter import BaseDupeFilter
from scrapy.utils.request import request_fingerprint

import redis

class XzxDupefilter(BaseDupeFilter):

    def __init__(self, key):
        self.conn = None
        self.key = key

    @classmethod
    def from_settings(cls, settings):
        key = settings.get('DUP_REDIS_KEY')
        return cls(key)

    def open(self):
        self.conn = redis.Redis(host='127.0.0.1', port=6379)

    def request_seen(self, request):
        fp = request_fingerprint(request)
        # sadd returns 0 when the fingerprint is already in the set
        added = self.conn.sadd(self.key, fp)
        return added == 0

c. Configure it in settings:

# Default dupefilter:
# DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'

# Custom dupefilter:
DUPEFILTER_CLASS = 'xzx.dupfilter.XzxDupefilter'

Each request's URL is turned into a unique fingerprint by the request_fingerprint function:

from scrapy.utils.request import request_fingerprint
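
For example (a quick illustration, not from the original post), two URLs that differ only in query-parameter order canonicalize to the same fingerprint, which is what makes set-based dedup robust:

from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

r1 = Request('http://www.example.com/?a=1&b=2')
r2 = Request('http://www.example.com/?b=2&a=1')

# canonicalize_url sorts the query string, so both fingerprints match
print(request_fingerprint(r1) == request_fingerprint(r2))  # True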

Supplement: the scheduler applies the dedup rule in enqueue_request:

def enqueue_request(self, request):
    # dont_filter=False -> the request must pass the dedup check;
    # dont_filter=True  -> the dedup check is skipped entirely
    if not request.dont_filter and self.df.request_seen(request):
        return False

    # otherwise, push the request onto the scheduler queue
    dqok = self._dqpush(request)

2. The scheduler

1. Breadth-first (essentially a FIFO queue)

2. Depth-first (essentially a LIFO stack)

3. Priority queue (a Redis sorted set); see the sketch below
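
As a rough sketch of option 3 (an illustration only, assuming redis-py 3.x and a made-up key name, not an actual scheduler implementation):

import redis

conn = redis.Redis(host='127.0.0.1', port=6379)

# score = -priority, so the highest-priority request sorts first
conn.zadd('xzx:requests', {'serialized_request_a': -10,
                           'serialized_request_b': 0})

# pop the entry with the lowest score (i.e. the highest priority)
item = conn.zrange('xzx:requests', 0, 0)
conn.zremrangebyrank('xzx:requests', 0, 0)
print(item)  # [b'serialized_request_a']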

3. Downloader middleware

Downloader middlewares sit between the engine and the downloader.

a. What are downloader middlewares for in Scrapy?

They give you a single place to pre-process every Request before it is downloaded, and to post-process every Response on the way back.

b. User-Agent: the built-in UserAgentMiddleware runs by default and picks up the USER_AGENT value you configure in settings:

class UserAgentMiddleware(object):
    """This middleware allows spiders to override the user_agent"""

    def __init__(self, user_agent='Scrapy'):
        # e.g. USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
        self.user_agent = user_agent

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler.settings['USER_AGENT'])
        crawler.signals.connect(o.spider_opened, signal=signals.spider_opened)
        return o

    def spider_opened(self, spider):
        self.user_agent = getattr(spider, 'user_agent', self.user_agent)

    def process_request(self, request, spider):
        if self.user_agent:
            request.headers.setdefault(b'User-Agent', self.user_agent)
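
Since spider_opened above reads a user_agent attribute from the spider, a per-spider override is as simple as this sketch (the spider name and UA string are placeholders):

import scrapy

class XzxSpider(scrapy.Spider):
    name = 'xzx'
    # overrides the global USER_AGENT setting for this spider only
    user_agent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36'
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        pass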

c. Redirects: the built-in default middleware:

class BaseRedirectMiddleware(object):

    enabled_setting = 'REDIRECT_ENABLED'

    def __init__(self, settings):
        if not settings.getbool(self.enabled_setting):
            raise NotConfigured

        self.max_redirect_times = settings.getint('REDIRECT_MAX_TIMES')
        self.priority_adjust = settings.getint('REDIRECT_PRIORITY_ADJUST')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    def _redirect(self, redirected, request, spider, reason):
        ttl = request.meta.setdefault('redirect_ttl', self.max_redirect_times)
        redirects = request.meta.get('redirect_times', 0) + 1

        if ttl and redirects <= self.max_redirect_times:
            redirected.meta['redirect_times'] = redirects
            redirected.meta['redirect_ttl'] = ttl - 1
            redirected.meta['redirect_urls'] = request.meta.get('redirect_urls', []) + \
                [request.url]
            redirected.dont_filter = request.dont_filter
            redirected.priority = request.priority + self.priority_adjust
            logger.debug("Redirecting (%(reason)s) to %(redirected)s from %(request)s",
                         {'reason': reason, 'redirected': redirected, 'request': request},
                         extra={'spider': spider})
            return redirected
        else:
            logger.debug("Discarding %(request)s: max redirections reached",
                         {'request': request}, extra={'spider': spider})
            raise IgnoreRequest("max redirections reached")

    def _redirect_request_using_get(self, request, redirect_url):
        redirected = request.replace(url=redirect_url, method='GET', body='')
        redirected.headers.pop('Content-Type', None)
        redirected.headers.pop('Content-Length', None)
        return redirected

class RedirectMiddleware(BaseRedirectMiddleware):
    """
    Handle redirection of requests based on response status
    and meta-refresh html tag.
    """

    def process_response(self, request, response, spider):
        if (request.meta.get('dont_redirect', False) or
                response.status in getattr(spider, 'handle_httpstatus_list', []) or
                response.status in request.meta.get('handle_httpstatus_list', []) or
                request.meta.get('handle_httpstatus_all', False)):
            return response

        allowed_status = (301, 302, 303, 307, 308)
        if 'Location' not in response.headers or response.status not in allowed_status:
            return response

        location = safe_url_string(response.headers['location'])
        redirected_url = urljoin(request.url, location)

        if response.status in (301, 307, 308) or request.method == 'HEAD':
            redirected = request.replace(url=redirected_url)
            return self._redirect(redirected, request, spider, response.status)

        redirected = self._redirect_request_using_get(request, redirected_url)
        return self._redirect(redirected, request, spider, response.status)
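
Given the guards in process_response above, redirect handling can be tuned globally or short-circuited per request; a small sketch (the URL and callback are placeholders):

# settings.py
REDIRECT_ENABLED = True    # set to False to disable the middleware entirely
REDIRECT_MAX_TIMES = 20    # the default cap enforced by _redirect()

# in a spider: keep the 3xx response itself instead of following it
yield Request(url, callback=self.parse,
              meta={'dont_redirect': True,
                    'handle_httpstatus_list': [301, 302]})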

d. Cookies: the built-in CookiesMiddleware is enabled by default.

Usage: in your own spider logic, yield requests with meta={"cookiejar": 1} (note the lowercase key — that is what the middleware looks up):

def start_requests(self):
    for url in self.start_urls:
        yield Request(url=url, callback=self.parse, meta={"cookiejar": 1})
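
To keep several sessions isolated, each logical session can get its own jar id; note that meta is not inherited automatically, so follow-up requests must carry the key forward (a sketch; URLs and callbacks are placeholders):

def start_requests(self):
    # one independent cookie session per start URL
    for i, url in enumerate(self.start_urls):
        yield Request(url, callback=self.parse, meta={'cookiejar': i})

def parse(self, response):
    # pass the jar id along explicitly on every follow-up request
    yield Request('http://www.example.com/next',
                  callback=self.parse_next,
                  meta={'cookiejar': response.meta['cookiejar']})

The middleware's source: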

class CookiesMiddleware(object):
    """This middleware enables working with sites that need cookies"""

    def __init__(self, debug=False):
        self.jars = defaultdict(CookieJar)
        self.debug = debug

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('COOKIES_ENABLED'):
            raise NotConfigured
        return cls(crawler.settings.getbool('COOKIES_DEBUG'))

    def process_request(self, request, spider):
        if request.meta.get('dont_merge_cookies', False):
            return

        # e.g. cookiejarkey = 1
        cookiejarkey = request.meta.get("cookiejar")
        jar = self.jars[cookiejarkey]  # CookieJar object -> an empty container at first
        cookies = self._get_request_cookies(jar, request)
        for cookie in cookies:
            jar.set_cookie_if_ok(cookie, request)

        # set Cookie header
        request.headers.pop('Cookie', None)
        jar.add_cookie_header(request)
        self._debug_cookie(request, spider)

    def process_response(self, request, response, spider):
        if request.meta.get('dont_merge_cookies', False):
            return response

        # extract cookies from Set-Cookie and drop invalid/expired cookies
        cookiejarkey = request.meta.get("cookiejar")
        jar = self.jars[cookiejarkey]
        jar.extract_cookies(response, request)
        self._debug_set_cookie(response, spider)

        return response

    def _debug_cookie(self, request, spider):
        if self.debug:
            cl = [to_native_str(c, errors='replace')
                  for c in request.headers.getlist('Cookie')]
            if cl:
                cookies = "\n".join("Cookie: {}\n".format(c) for c in cl)
                msg = "Sending cookies to: {}\n{}".format(request, cookies)
                logger.debug(msg, extra={'spider': spider})

    def _debug_set_cookie(self, response, spider):
        if self.debug:
            cl = [to_native_str(c, errors='replace')
                  for c in response.headers.getlist('Set-Cookie')]
            if cl:
                cookies = "\n".join("Set-Cookie: {}\n".format(c) for c in cl)
                msg = "Received cookies from: {}\n{}".format(response, cookies)
                logger.debug(msg, extra={'spider': spider})

    def _format_cookie(self, cookie):
        # build cookie string
        cookie_str = '%s=%s' % (cookie['name'], cookie['value'])

        if cookie.get('path', None):
            cookie_str += '; Path=%s' % cookie['path']

        if cookie.get('domain', None):
            cookie_str += '; Domain=%s' % cookie['domain']

        return cookie_str

    def _get_request_cookies(self, jar, request):
        if isinstance(request.cookies, dict):
            cookie_list = [{'name': k, 'value': v} for k, v in \
                           six.iteritems(request.cookies)]
        else:
            cookie_list = request.cookies

        cookies = [self._format_cookie(x) for x in cookie_list]
        headers = {'Set-Cookie': cookies}
        response = Response(request.url, headers=headers)

        return jar.make_cookies(response, request)

The default downloader middlewares:

DOWNLOADER_MIDDLEWARES_BASE = {
    # Engine side
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
    'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
    # Downloader side
}

Notes:

process_request normally returns None, and the request keeps moving down the chain.

1. If it returns a Response, the remaining downloader middlewares are skipped and the response enters the process_response chain, starting from the last middleware.

2. If it returns a Request, the current download is dropped and the returned request goes back to the scheduler.

process_response: must return a value (a Response, a Request, or raise IgnoreRequest).
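
Putting these rules together, here is a minimal sketch of a custom downloader middleware that assigns a random proxy per request; PROXY_LIST is an assumed custom setting, not a built-in Scrapy option:

import random

class RandomProxyMiddleware(object):

    def __init__(self, proxies):
        self.proxies = proxies

    @classmethod
    def from_crawler(cls, crawler):
        # e.g. PROXY_LIST = ['http://127.0.0.1:8888', ...] in settings.py (assumed)
        return cls(crawler.settings.getlist('PROXY_LIST'))

    def process_request(self, request, spider):
        # returning None lets the request continue down the chain;
        # Scrapy routes the download through request.meta['proxy']
        if self.proxies:
            request.meta['proxy'] = random.choice(self.proxies)
        return None

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'xzx.middlewares.RandomProxyMiddleware': 740,  # just before HttpProxyMiddleware (750)
}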

4. Spider middleware

Spider middlewares sit between the engine and the spider component, handling responses on the way in and the spider's results on the way out. The built-in defaults include the priority and depth middlewares.

Writing a spider middleware:

class XzxSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

Settings:

SPIDER_MIDDLEWARES = {
    'xzx.middlewares.XzxSpiderMiddleware': 543,
}

Built-in spider middleware configuration in settings:

Depth:

DEPTH_LIMIT = 8

Priority:

DEPTH_PRIORITY = 1   # request priority falls with depth: 0, -1, -2, -3 ... (tends toward breadth-first)

DEPTH_PRIORITY = -1  # request priority rises with depth: 0, 1, 2, 3 ... (tends toward depth-first)
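
For instance, the Scrapy FAQ's recipe for an (approximately) breadth-first crawl combines DEPTH_PRIORITY with FIFO scheduler queues; the queue class paths below are as shipped in scrapy.squeues:

# settings.py - crawl in breadth-first order
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'

For reference, the default spider middlewares: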

SPIDER_MIDDLEWARES_BASE = {
    # Engine side
    'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,
    'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,
    'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,
    'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,
    # Spider side
}

Summary:

1. DupeFilter

- By default, fingerprints are kept in a set.

- Each URL is converted into a unique fingerprint.

- Why put the dedup state into Redis? So it survives restarts and can be shared by multiple crawler processes.

- Dedup works together with dont_filter.

2. The scheduler

- What do depth-first and breadth-first mean in a crawler?

- What can implement them?

- A stack (depth-first)

- A queue (breadth-first)

- A priority set (a sorted set, e.g. in Redis)

3. The open/closed principle:

Closed to modification of the source code, open to configuration: you get the behavior you want by editing the settings file rather than the framework itself.
