Scraping Novels, Part 1: High Concurrency


Scraping the novel 《放开那个女巫》 (Release That Witch).

As before, we use high-concurrency gevent coroutines to drive the downloads.
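If the spawn/joinall idiom is unfamiliar, here is a minimal, self-contained refresher of the gevent pattern the full script below builds on. This is my sketch, not part of the original script, and the example.com URLs are placeholders:

from gevent import monkey
monkey.patch_all()   # patch the stdlib *before* importing requests so blocking I/O yields

import gevent
import requests

def fetch(url):
    # each greenlet blocks on network I/O; gevent switches to another while it waits
    return requests.get(url).status_code

# placeholder URLs -- substitute real chapter URLs
jobs = [gevent.spawn(fetch, u) for u in ['http://example.com'] * 3]
gevent.joinall(jobs)                  # block until every greenlet has finished
print([job.value for job in jobs])    # .value holds each greenlet's return value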

In fact, once you have the concurrency techniques down, what separates one crawler from another is no longer which library you use.
It is the crawling strategy that deserves study.

For instance, my strategy here is that every chapter must be fetched successfully; a failed request is simply retried until it succeeds. Chapters are crawled in batches of 200, and a batch is merged into the book file only after every chapter in it has finished, after which the next 200 chapters begin. This guarantees robustness, but the speed leaves something to be desired. In the next part we will improve on this strategy (a sketch of one possible direction follows the full script below).

import os
import random
import re
import time

# Patch the standard library *before* importing requests so that all
# blocking socket I/O cooperates with gevent's greenlets.
from gevent import monkey
monkey.patch_all(select=False)

import gevent
import requests
from lxml import etree
from urllib import parse

# Sample HTTPS proxies (almost certainly stale by now). Note that requests
# matches proxy keys against the lowercase URL scheme, and the target site
# below is plain http, so these https-only entries are effectively unused.
IPs = [{'https': 'https://182.114.221.180:61202'},
       {'https': 'https://60.162.73.45:61202'},
       {'https': 'https://113.13.36.227:61202'},
       {'https': 'https://1.197.88.101:61202'}]

HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cookie': '__cfduid=d820fcba1e8cf74caa407d320e0af6b5d1518500755; UM_distinctid=1618db2bfbb140-060057ff473277-4323461-e1000-1618db2bfbc1e4; CNZZDATA1272873873=2070014299-1518497311-https%253A%252F%252Fwww.baidu.com%252F%7C1520689081; yjs_id=5a4200a91c8aa5629ae0651227ea7fa2; ctrl_time=1; jieqiVisitTime=jieqiArticlesearchTime%3D1520693103'
}


def setDir():
    # Create the working directory for the per-chapter files.
    if 'Noval' not in os.listdir('./'):
        os.mkdir('./Noval')


def getNoval(url, id):
    # Fetch one chapter and write it to ./Noval/<id>.txt.
    # Retry forever on any failure: the strategy is that every chapter
    # MUST be fetched before its batch can be merged.
    while True:
        try:
            IP = random.choice(IPs)
            res = requests.get(url, headers=HEADERS, proxies=IP)
            res.encoding = 'GB18030'
            html = res.text.replace('&nbsp;', ' ')  # swap the HTML entity for a plain space
            page = etree.HTML(html)
            content = page.xpath('//div[@id="content"]')
            ps = page.xpath('//div[@class="bookname"]/h1')
            if len(ps) != 0:
                s = ps[0].text + '\n'
                s = s + content[0].xpath('string(.)')
                with open('./Noval/%d.txt' % id, 'w', encoding='gb18030', errors='ignore') as f:
                    f.write(s)
        except Exception:
            continue
        else:
            break


def getContentFile(url):
    # Fetch the table of contents: return (list of chapter URLs, book name).
    IP = random.choice(IPs)
    res = requests.get(url, headers=HEADERS, proxies=IP)
    res.encoding = 'GB18030'
    page = etree.HTML(res.text)
    bookname = page.xpath('//div[@id="info"]/h1')[0].xpath('string(.)')
    dl = page.xpath('//div[@id="list"]/dl/dd/a')
    splitHTTP = parse.urlsplit(url)
    base = splitHTTP.scheme + '://' + splitHTTP.netloc
    return list(map(lambda x: base + x.get('href'), dl)), bookname


def BuildGevent(baseurl):
    content, bookname = getContentFile(baseurl)
    steps = 200                    # batch size: 200 chapters per round
    length = len(content)
    count = 0
    name = '%s.txt' % bookname
    while count * steps < length:  # the original (count - 1) * steps < length ran one extra empty round
        # Spawn one greenlet per chapter in this batch, then wait for ALL of
        # them -- the batch barrier described above.
        waitingList = [gevent.spawn(getNoval, content[i + count * steps], i + count * steps)
                       for i in range(steps) if i + count * steps < length]
        gevent.joinall(waitingList)
        # Merge this batch's numbered chapter files into the book file, in order.
        NovalFile = list(filter(lambda x: x[:x.index('.')].isdigit(), os.listdir('./Noval')))
        NovalFile.sort(key=lambda x: int(re.match(r'\d+', x).group()))
        String = ''
        for dirFile in NovalFile:
            with open('./Noval/' + dirFile, 'r', encoding='gb18030', errors='ignore') as f:
                String = String + '\n' + f.read()
            os.remove('./Noval/%s' % dirFile)
        mode = 'w' if count == 0 else 'a'  # first batch creates the book file, later ones append
        with open('./Noval/' + name, mode, encoding='gb18030', errors='ignore') as ff:
            ff.write(String)
        count += 1


if __name__ == '__main__':
    starttime = time.time()
    setDir()
    url = 'http://www.biquge.com.tw/16_16588/'
    BuildGevent(url)
    endtime = time.time()
    print('Total use time: %.6f' % (endtime - starttime))
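As a preview of the promised improvement, here is one possible direction (my sketch, not the author's code): replace the hard 200-chapter barrier with a bounded gevent.pool.Pool, so a single slow chapter only delays itself instead of stalling the other 199. Chapters are still written to numbered files by getNoval() from the script above, and the merge happens once at the very end; crawlAllChapters is a hypothetical helper name:

from gevent.pool import Pool

def crawlAllChapters(chapter_urls, concurrency=200):
    pool = Pool(concurrency)            # at most `concurrency` greenlets in flight
    for idx, url in enumerate(chapter_urls):
        pool.spawn(getNoval, url, idx)  # a finished slot is refilled immediately
    pool.join()                         # single barrier at the very end, then merge once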
