
Python 3 + Scrapy Crawler Tutorial


Create the project

scrapy startproject douban


The command's output ends with a hint (boxed in red in the original screenshot) telling you how to create your first spider, which we do next.

Create the spider

cd douban
scrapy genspider girls https://www.douban.com/group/641424/


That completes the basic project setup. Here "girls" is the spider's name, and the URL supplies the spider's domain and start page (scrapy genspider formally expects a bare domain such as www.douban.com here). To make the project easier to launch, create an entrypoint.py file in the project root with the following content:

from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'girls'])

Project structure
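The original post showed a screenshot of the project tree here. As a rough sketch (exact files vary slightly by Scrapy version), the generated project, with our entrypoint.py added, looks like this:

douban/
├── scrapy.cfg              # deploy/config file
├── entrypoint.py           # our launcher script (added by hand)
└── douban/                 # the project's Python package
    ├── __init__.py
    ├── items.py            # Item definitions (edited below)
    ├── middlewares.py      # spider/downloader middlewares
    ├── pipelines.py        # item pipelines
    ├── settings.py         # project settings (edited below)
    └── spiders/
        ├── __init__.py
        └── girls.py        # the spider generated by genspider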

Create the Item

We create a new Item to hold the scraped data. The definition below goes in items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DoubanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass


class GirlItem(scrapy.Item):
    title = scrapy.Field()          # post title
    author = scrapy.Field()         # author
    url = scrapy.Field()            # url of the post
    lastTime = scrapy.Field()       # time of the most recent reply
    detail_time = scrapy.Field()    # time the post was created
    detail_report = scrapy.Field()  # post body

    def __str__(self):
        return '{"title": "%s", "author": "%s", "url": "%s", "lastTime": "%s", "detail_time": "%s", "detail_report": "%s"}\n' % \
            (self['title'], self['author'], self['url'], self['lastTime'], self['detail_time'], self['detail_report'])

We override the __str__ method so that each item is displayed in exactly the format we want to store.

The DoubanItem class above was generated by Scrapy automatically; we leave it alone for now. You could use that system-generated Item directly if you prefer, but defining a separate one keeps things easier to manage.
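As a quick illustration (not from the original post), a GirlItem behaves like a dict, and the overridden __str__ renders it as a one-line JSON-style string; this assumes items.py is importable from where you run it:

from items import GirlItem  # or: from douban.items import GirlItem, depending on how you launch

item = GirlItem()
item['title'] = 'hello'
item['author'] = 'alice'
item['url'] = 'https://www.douban.com/group/topic/1/'
item['lastTime'] = '2019-05-01'
item['detail_time'] = '2019-04-30 12:00:00'
item['detail_report'] = 'post body text'
print(item)  # prints the JSON-style line defined in __str__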

Crawl the pages

First modify settings.py: add a USER_AGENT and set ROBOTSTXT_OBEY to False:

USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'
ROBOTSTXT_OBEY = False

The fields title, author, url, and lastTime can all be scraped from the listing page itself, while detail_time and detail_report require drilling down into each post's url. So parse schedules a request per post url with detail_parse as the callback, passing the half-filled item along in the request's meta dict; detail_parse fills in the remaining fields and saves the item to a file. The sketch below shows that hand-off pattern in isolation; the full spider follows.
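A minimal sketch of the two-level pattern, with generic names and selectors (not from the original post):

import scrapy

class TwoLevelSpider(scrapy.Spider):
    name = 'two_level'
    start_urls = ['https://example.com/listing']  # hypothetical listing page

    def parse(self, response):
        # level 1: grab what the listing shows, then follow each post
        for href in response.css('a.post::attr(href)').getall():
            item = {'url': response.urljoin(href)}
            # stash the partial item in meta so the next callback can finish it
            yield scrapy.Request(item['url'], meta={'item': item},
                                 callback=self.parse_detail)

    def parse_detail(self, response):
        # level 2: recover the partial item and add the detail fields
        item = response.meta['item']
        item['body'] = response.css('#content').get()
        yield item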

The full spider code (girls.py):

# -*- coding: utf-8 -*-
import scrapy
from bs4 import BeautifulSoup
# depending on how you launch the project, this may need to be:
# from douban.items import GirlItem
from items import GirlItem


class GirlsSpider(scrapy.Spider):
    name = 'girls'
    allowed_domains = ['www.douban.com']
    start_urls = ['https://www.douban.com/group/641424/discussion?start=25']

    # Overriding start_requests is an alternative way to set the user agent:
    # def start_requests(self):
    #     headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'}
    #     return [scrapy.Request(url=self.start_urls[0], callback=self.parse, headers=headers)]

    def parse(self, response):
        html = response.text
        soup = BeautifulSoup(html, "lxml")
        # print(soup)  # uncomment to inspect the raw page
        table = soup.table
        tr_arr = table.find_all("tr")
        for tr in tr_arr:
            tds = tr.find_all('td')
            if len(tds) < 4:  # skip the header row and any malformed rows
                continue
            item = GirlItem()
            item['title'] = tds[0].get_text().replace('\n', '').replace(' ', '')
            item['author'] = tds[1].get_text().replace('\n', '').replace(' ', '')
            item['lastTime'] = tds[3].get_text().replace('\n', '')
            try:
                item['url'] = tds[0].find('a', href=True)['href']
                # drill down into the post's detail page
                yield scrapy.Request(item['url'], meta={'item': item}, callback=self.detail_parse)
            except TypeError:  # no link in this cell
                item['url'] = ""
        # find the next-page link; on the last page the "next" span has no link
        paginator = soup.find(name='div', attrs={"class": "paginator"})
        next_span = paginator.find(name='span', attrs={"class": "next"}) if paginator else None
        next_link = next_span.find(name='link') if next_span else None
        if next_link and next_link.get('href'):
            print("following next page")
            yield scrapy.Request(next_link['href'], callback=self.parse)

    def detail_parse(self, response):
        # pick up the item partially filled by parse
        item = response.meta['item']
        try:
            item['detail_time'] = response.xpath('//*[@id="topic-content"]/div[2]/h3/span[2]/text()').extract()[0]
        except BaseException as e:
            print(e)
            item['detail_time'] = ""
        try:
            item['detail_report'] = response.xpath('//*[@id="link-report"]').extract()[0].replace('\n', '')
        except BaseException as e:
            print(e)
            item['detail_report'] = ""
        write_to_file('E:/douban-detail.txt', item)
        # return item


def write_to_file(file_name, txt):
    # append mode: the file is created if it does not exist
    # ('r' read, 'w' write/truncate, 'a' append; the '+' variants add read+write)
    f = open(file_name, 'a', encoding='utf-8')
    f.write(str(txt))
    f.close()
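As an aside (not from the original post): writing items to a hard-coded path works, but Scrapy's built-in feed exports do the same job with no file handling. If detail_parse ends with yield item instead of the write_to_file call, the crawl can be saved directly:

scrapy crawl girls -o girls.json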

Run the project

python entrypoint.py
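If everything is wired up correctly, this prints the crawl log and appends one JSON-style line per post (the format defined in GirlItem.__str__) to E:/douban-detail.txt, the path hard-coded in the spider. Equivalently, you can run scrapy crawl girls from the project root without entrypoint.py.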
