

A Python Multithreaded Crawler for the Douban Movie Review API

發(fā)布時(shí)間:2023/12/10 python 31 豆豆
生活随笔 收集整理的這篇文章主要介紹了 Python多线程豆瓣影评API接口爬虫 小編覺得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

Crawler library

Uses the simple requests library; it is a blocking (synchronous) client, so fetching is relatively slow.
Parsing is done with XPath expressions.
The crawler is organized as a class.
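The requests-plus-XPath pattern can be sketched offline. The HTML fragment below is hard-coded stand-in data with hypothetical values; the real script obtains the HTML from the API response and uses the same kind of XPath expressions:

```python
from lxml import etree

# Stand-in for the HTML that the API returns under the 'html' key.
html = '''
<div class="comment-item" data-cid="101">
  <div class="avatar"><a title="alice" href="https://example.com/u/alice"></a></div>
  <span class="votes">12</span>
  <span class="short">A fine film.</span>
</div>
'''

dom = etree.HTML(html)
base = '//div[@class="comment-item"]'
# Attribute and text extraction, mirroring the crawler's _parse step.
ids = dom.xpath(base + '/@data-cid')
names = dom.xpath(base + '/div[@class="avatar"]/a/@title')
votes = dom.xpath(base + '//span[@class="votes"]/text()')
texts = dom.xpath(base + '//span[@class="short"]/text()')
print(ids, names, votes, texts)
```

Each `xpath()` call returns a list, one entry per matched comment node, which is why the crawler later iterates the lists in parallel by index.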

Multithreading

Uses the concurrent.futures module to build a thread pool; each job submitted to the pool returns a Future object, and running the jobs through the pool yields concurrent crawling.
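A minimal sketch of that submit-and-collect pattern, with a hypothetical `fetch_page` stub standing in for the real network call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_page(category, start):
    # Placeholder for the real fetch; just echoes its arguments.
    return (category, start)

# One job per (category, page) pair, exactly as the crawler's main block does.
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(fetch_page, c, p * 20)
               for c in ['', 'h', 'm', 'l'] for p in range(3)]
    # as_completed yields each Future as soon as its job finishes.
    results = [f.result() for f in as_completed(futures)]

print(len(results))
```

Because `as_completed` yields futures in completion order, results arrive out of submission order; the real script tolerates this since each job writes its own rows independently.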

Data storage

Uses the Python ORM SQLAlchemy to save to a database; alternatively, the standard-library csv module can write the rows to a CSV file.

API endpoint

Because the API protects its data, each category of a single movie only serves the first 25 pages. Across the four categories (all, positive, neutral, negative) that is 100 pages in total, at 20 comments per page, for a maximum of 2,000 records.
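The arithmetic above, spelled out as the list of (category, offset) request parameters the crawler will enumerate:

```python
# 4 categories x 25 pages x 20 comments per page = 2000 records at most.
categories = ['', 'h', 'm', 'l']  # all / positive / neutral / negative
pages_per_category = 25
per_page = 20

offsets = [(c, p * per_page)
           for c in categories
           for p in range(pages_per_category)]
print(len(offsets), len(offsets) * per_page)
```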

Because such endpoints change over time, there is no guarantee this code can still fetch data; it is offered as a reference approach. On to the code:

from datetime import datetime
import random
import csv
from concurrent.futures import ThreadPoolExecutor, as_completed

from lxml import etree
import pymysql
import requests

from models import create_session, Comments

# Pool of User-Agent strings to rotate through
USERAGENT = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.163 Safari/535.1',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.9.168 Version/11.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; ) AppleWebKit/534.12 (KHTML, like Gecko) Maxthon/3.0 Safari/534.12'
]


class CommentFetcher:
    headers = {'User-Agent': ''}
    cookie = ''  # paste your logged-in Douban cookie here
    cookies = {'cookie': cookie}
    base_node = '//div[@class="comment-item"]'

    def __init__(self, movie_id, start, type=''):
        '''
        :param movie_id: Douban ID of the film
        :param start: offset of the first record, 0-480
        :param type: '' for all comments, 'h' positive, 'm' neutral, 'l' negative
        '''
        self.movie_id = movie_id
        self.start = start
        self.type = type
        self.url = ('https://movie.douban.com/subject/{id}/comments?start={start}'
                    '&limit=20&sort=new_score&status=P&percent_type={type}'
                    '&comments_only=1').format(
            id=str(self.movie_id),
            start=str(self.start),
            type=self.type)
        # open a database session
        self.session = create_session()

    def _random_UA(self):
        # pick a random User-Agent for each request
        self.headers['User-Agent'] = random.choice(USERAGENT)

    def _get(self):
        # GET the API endpoint; the JSON response carries the page HTML
        # under its 'html' key
        self._random_UA()
        res = ''
        try:
            res = requests.get(self.url, cookies=self.cookies, headers=self.headers)
            res = res.json()['html']
        except Exception:
            print('IP has been blocked, please use a proxy IP')
        print('Fetching records starting at {}'.format(self.start))
        return res

    def _parse(self):
        res = self._get()
        dom = etree.HTML(res)
        # comment id
        self.id = dom.xpath(self.base_node + '/@data-cid')
        # username
        self.username = dom.xpath(self.base_node + '/div[@class="avatar"]/a/@title')
        # link to the user's home page
        self.user_center = dom.xpath(self.base_node + '/div[@class="avatar"]/a/@href')
        # upvote count
        self.vote = dom.xpath(self.base_node + '//span[@class="votes"]/text()')
        # star rating
        self.star = dom.xpath(self.base_node + '//span[contains(@class,"rating")]/@title')
        # publication time (note the trailing space in Douban's class name)
        self.time = dom.xpath(self.base_node + '//span[@class="comment-time "]/@title')
        # comment text: every span whose class is "short"
        self.content = dom.xpath(self.base_node + '//span[@class="short"]/text()')

    # save to the database
    def save_to_database(self):
        self._parse()
        try:
            for i in range(len(self.id)):
                try:
                    comment = Comments(
                        id=int(self.id[i]),
                        username=self.username[i],
                        user_center=self.user_center[i],
                        vote=int(self.vote[i]),
                        star=self.star[i],
                        time=datetime.strptime(self.time[i], '%Y-%m-%d %H:%M:%S'),
                        content=self.content[i])
                    self.session.add(comment)
                    self.session.commit()
                except pymysql.err.IntegrityError:
                    # duplicate primary key: the row is already stored, skip it
                    print('Duplicate record, skipping')
                except Exception:
                    # any other insert error: roll back this row
                    self.session.rollback()
            # report success only after every row has been processed
            # (the original returned inside the loop, stopping after one row)
            return 'finish'
        finally:
            # close the session once, after the whole batch
            self.session.close()

    # save to CSV
    def save_to_csv(self):
        self._parse()
        # append mode, so concurrent jobs do not truncate each other's rows
        # (the original opened in 'w' mode, wiping the file on every job)
        with open('comment.csv', 'a', encoding='utf-8', newline='') as f:
            csv_in = csv.writer(f, dialect='excel')
            for i in range(len(self.id)):
                csv_in.writerow([
                    int(self.id[i]),
                    self.username[i],
                    self.user_center[i],
                    int(self.vote[i]),
                    self.time[i],
                    self.content[i]])
        return 'finish'


if __name__ == '__main__':
    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = []
        for i in ['', 'h', 'm', 'l']:
            for j in range(25):
                fetcher = CommentFetcher(movie_id=26266893, start=j * 20, type=i)
                futures.append(executor.submit(fetcher.save_to_csv))
        for f in as_completed(futures):
            try:
                if f.result() == 'finish':
                    print('{} saved its data'.format(str(f)))
            except Exception:
                f.cancel()

Reposted from: https://www.cnblogs.com/PyKK2019/p/10828632.html

Summary

The above is the full content of this article on a Python multithreaded crawler for the Douban movie review API, collected and compiled by 生活随笔; we hope it helps you solve the problem you ran into.
