Scrapy Architecture overview (official documentation)


Original article: https://doc.scrapy.org/en/latest/topics/architecture.html

This document describes the architecture of Scrapy and how its components interact.

Overview

The following diagram shows an overview of the Scrapy architecture with its components and an outline of the data flow that takes place inside the system (shown by the red arrows). A brief description of the components is included below with links for more detailed information about them. The data flow is also described below.

[Figure: Scrapy architecture and data flow diagram; not reproduced here, see the original documentation.]

Data flow

The data flow in Scrapy is controlled by the execution engine, and goes like this:

1. The Engine gets the initial Requests to crawl from the Spider.
2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
3. The Scheduler returns the next Requests to the Engine.
4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
5. Once the page finishes downloading, the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
8. The Engine sends processed items to Item Pipelines, then sends processed Requests to the Scheduler and asks for possible next Requests to crawl.
9. The process repeats (from step 1) until there are no more requests from the Scheduler.
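To make the loop concrete, here is a minimal, self-contained run. This is a sketch only: the spider name, start URL (quotes.toscrape.com, a public practice site) and CSS selectors are illustrative assumptions, not part of the official document.

    import scrapy
    from scrapy.crawler import CrawlerProcess

    class QuotesSpider(scrapy.Spider):
        name = "quotes"  # illustrative spider name
        start_urls = ["http://quotes.toscrape.com/"]  # step 1: the initial Requests

        def parse(self, response):
            # Step 7: the callback returns scraped items and new Requests to follow.
            for quote in response.css("div.quote"):
                yield {"text": quote.css("span.text::text").get()}
            next_page = response.css("li.next a::attr(href)").get()
            if next_page is not None:
                yield response.follow(next_page, callback=self.parse)

    if __name__ == "__main__":
        process = CrawlerProcess(settings={"LOG_LEVEL": "INFO"})
        process.crawl(QuotesSpider)
        process.start()  # runs the engine until the Scheduler has no more requests (step 9)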
Components

Scrapy Engine

The engine is responsible for controlling the data flow between all components of the system, and triggering events when certain actions occur. See the Data Flow section above for more details.

Scheduler

The Scheduler receives requests from the engine and enqueues them, so it can feed them back to the engine later when the engine asks for them.
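Although the Scheduler itself is internal, user code can influence its queueing order through the priority argument of Request: requests with a higher priority value are dequeued earlier. A minimal sketch (spider name and URLs are illustrative):

    import scrapy

    class PrioritySpider(scrapy.Spider):
        name = "priority_demo"  # illustrative name

        def start_requests(self):
            # The Scheduler feeds higher-priority requests back to the engine first.
            yield scrapy.Request("http://example.com/important", priority=10)
            yield scrapy.Request("http://example.com/whenever", priority=0)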

Downloader

The Downloader is responsible for fetching web pages and feeding them to the engine which, in turn, feeds them to the spiders.
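In practice the Downloader is tuned through settings rather than code; the snippet below sketches common knobs (the values are illustrative, not recommendations):

    # settings.py
    CONCURRENT_REQUESTS = 16            # total requests the Downloader handles in parallel
    CONCURRENT_REQUESTS_PER_DOMAIN = 8  # per-domain concurrency cap
    DOWNLOAD_DELAY = 0.5                # seconds between requests to the same site
    DOWNLOAD_TIMEOUT = 30               # give up waiting for a response after this long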

Spiders

Spiders are custom classes written by Scrapy users to parse responses and extract items (aka scraped items) from them or additional requests to follow. For more information see Spiders.
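A spider typically does both things at once: extract items from a page and emit follow-up requests handled by another callback. A minimal sketch (books.toscrape.com is a public practice site; the selectors are illustrative assumptions):

    import scrapy

    class BookSpider(scrapy.Spider):
        name = "books"  # illustrative name
        start_urls = ["http://books.toscrape.com/"]

        def parse(self, response):
            # Emit additional requests to follow, each handled by a second callback.
            for href in response.css("h3 a::attr(href)").getall():
                yield response.follow(href, callback=self.parse_book)

        def parse_book(self, response):
            # Extract a scraped item from the detail page.
            yield {
                "title": response.css("h1::text").get(),
                "price": response.css("p.price_color::text").get(),
            }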

Item Pipeline

The Item Pipeline is responsible for processing the items once they have been extracted (or scraped) by the spiders. Typical tasks include cleansing, validation and persistence (like storing the item in a database). For more information see Item Pipeline.
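A minimal pipeline sketch performing cleansing and validation; the field name "price" and the module path in the settings comment are illustrative assumptions:

    from scrapy.exceptions import DropItem

    class PricePipeline:
        def process_item(self, item, spider):
            price = item.get("price")
            if price is None:
                raise DropItem("missing price")  # validation: discard incomplete items
            item["price"] = float(str(price).lstrip("£$"))  # cleansing: strip currency sign
            return item

    # settings.py (illustrative module path; the number controls pipeline order):
    # ITEM_PIPELINES = {"myproject.pipelines.PricePipeline": 300}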

Downloader middlewares

Downloader middlewares are specific hooks that sit between the Engine and the Downloader and process requests when they pass from the Engine to the Downloader, and responses that pass from the Downloader to the Engine.

Use a Downloader middleware if you need to do one of the following:

• process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
• change a received response before passing it to a spider;
• send a new Request instead of passing a received response to a spider;
• pass a response to a spider without fetching a web page;
• silently drop some requests.

For more information see Downloader Middleware.
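A minimal downloader middleware sketch covering the two hooks named above; the header name is an illustrative assumption, and a real retry policy would cap the number of attempts:

    class CustomHeaderMiddleware:
        # Enabled via DOWNLOADER_MIDDLEWARES in settings.py (illustrative path):
        # DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.CustomHeaderMiddleware": 543}

        def process_request(self, request, spider):
            # Called for each request on its way from the Engine to the Downloader.
            request.headers.setdefault("X-Demo-Header", "1")  # hypothetical header
            return None  # None lets the request continue down the chain

        def process_response(self, request, response, spider):
            # Called for each response on its way from the Downloader to the Engine.
            if response.status == 503:
                # Returning a Request re-schedules it instead of passing the
                # response to the spider (unbounded retry; cap it in real code).
                return request.replace(dont_filter=True)
            return response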

Spider middlewares

Spider middlewares are specific hooks that sit between the Engine and the Spiders and are able to process spider input (responses) and output (items and requests).

Use a Spider middleware if you need to:

• post-process the output of spider callbacks - change/add/remove requests or items;
• post-process start_requests;
• handle spider exceptions;
• call an errback instead of a callback for some of the requests based on response content.

For more information see Spider Middleware.
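A minimal spider middleware sketch that post-processes callback output, dropping items that lack a field; the field name is an illustrative assumption:

    class DropIncompleteMiddleware:
        # Enabled via SPIDER_MIDDLEWARES in settings.py (illustrative path):
        # SPIDER_MIDDLEWARES = {"myproject.middlewares.DropIncompleteMiddleware": 543}

        def process_spider_output(self, response, result, spider):
            # 'result' is whatever the spider callback returned: items and requests.
            for element in result:
                if isinstance(element, dict) and not element.get("title"):
                    spider.logger.debug("dropping item without a title")
                    continue  # filter the incomplete item out
                yield element  # pass requests and complete items through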

Event-driven networking

Scrapy is written with Twisted, a popular event-driven networking framework for Python. Thus, it is implemented using non-blocking (aka asynchronous) code for concurrency.
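"Non-blocking" means work is scheduled on an event loop rather than waited for inline. A minimal Twisted sketch, independent of Scrapy:

    from twisted.internet import reactor

    def done():
        print("fired from the event loop")
        reactor.stop()

    # Instead of blocking with time.sleep(1), schedule a callback:
    reactor.callLater(1.0, done)
    reactor.run()  # the reactor stays free to serve other events meanwhile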

Reposted from: https://www.cnblogs.com/davidwang456/p/7576227.html
