

Scraping JD.com reviews with Python: how to fetch JD's review information

Published: 2025/3/11 · python · by 豆豆

How to scrape JD.com review information with Python

Modules: requests, BeautifulSoup (from bs4; the script parses with 'lxml', so the lxml package must also be installed)

import re
import time
import csv

import requests
from bs4 import BeautifulSoup

def write_a_row_in_csv(data, csv_doc):
    """Save one row of product information into a CSV document."""
    with open(csv_doc, 'a', newline='', encoding='utf8') as f:
        writer = csv.writer(f)
        writer.writerow(data)
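The helper above just appends one tuple as a CSV row. A quick standalone usage sketch (the function is restated so the snippet runs on its own; demo.csv and the sample values are hypothetical):

```python
import csv

def write_a_row_in_csv(data, csv_doc):
    """Append one row of product information to a CSV document."""
    with open(csv_doc, 'a', newline='', encoding='utf8') as f:
        csv.writer(f).writerow(data)

# Hypothetical rows in the same (id, name, price, comment_count, good_rate) shape
write_a_row_in_csv(('id100001', 'HUAWEI P20', '3499.00', 120000, 0.98), 'demo.csv')
write_a_row_in_csv(('id100002', 'HUAWEI P20 Pro', '4999.00', 95000, 0.97), 'demo.csv')
```

Because the file is opened in 'a' mode, repeated runs keep appending; that is why the main script first creates phone.csv in 'w' mode with a header row before the loop starts.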

# Add headers, download the search page, and check the status code
url = 'https://search.jd.com/Search?keyword=%E5%8D%8E%E4%B8%BAp20&enc=utf-8&suggest=1.def.0.V13&wq=%E5%8D%8E%E4%B8%BA&pvid=f47b5d05bba84d9dbfabf983575a6875'
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0"
}
response = requests.get(url, headers=headers)
print(response.status_code)
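Printing the status code only reports a failure after the fact. A hedged variant of this fetch step (a sketch, not part of the original script) adds a timeout and raises on HTTP errors instead:

```python
import requests

def fetch(url, headers=None, timeout=10):
    """GET a URL, raising requests.HTTPError on a 4xx/5xx response."""
    response = requests.get(url, headers=headers, timeout=timeout)
    response.raise_for_status()  # replaces the bare print(response.status_code)
    return response
```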

# Save the search page as an HTML document
with open('html.html', 'w', encoding='utf8') as f:
    f.write(response.text)

# Create the CSV document and write its header row
with open('phone.csv', 'w', newline='', encoding='utf8') as f:
    writer = csv.writer(f)
    fields = ('id', 'name', 'price', 'comment_count', 'good_rate')
    writer.writerow(fields)

# Find the elements we need: name, item link, price, comment link
soup_all = BeautifulSoup(response.content, 'lxml')
sp_all_items = soup_all.find_all('li', attrs={'class': 'gl-item'})
for soup in sp_all_items[:3]:
    print('-' * 50)
    name = soup.find('div', attrs={'class': 'p-name p-name-type-2'}).find('em').text
    print('name: ', name)
    item = soup.find('div', attrs={'class': 'p-name p-name-type-2'}).find('a')
    print('item: ', item['href'], re.search(r'(\d+)', item['href']).group())
    price = soup.find_all('div', attrs={'class': 'p-price'})
    print('price:', price[0].i.string)
    comment = soup.find_all('div', attrs={'class': 'p-commit'})
    print('comment url:', comment[0].find('a').attrs['href'])
    time.sleep(0.2)

    # The comment API needs a Referer header pointing at the item page
    item_id = re.search(r'(\d+)', item['href']).group()
    url = f'https://sclub.jd.com/comment/productPageComments.action?productId={item_id}&score=0&sortType=5&page=0&pageSize=10&isShadowSku=0&fold=1'
    headers = {
        "referer": f"https://item.jd.com/{item_id}.html",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0"
    }
    response = requests.get(url, headers=headers)
    with open('html.json', 'w', encoding='utf8') as f:
        f.write(response.text)
    data = response.json()
    comment_count = data['productCommentSummary']['commentCount']
    print('comment count:', comment_count)
    good_rate = data['productCommentSummary']['goodRate']
    print('good rate:', good_rate)
    # Record the data as one row in the CSV sheet
    write_a_row_in_csv(('id' + item_id, name, price[0].i.string, comment_count, good_rate), 'phone.csv')
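The comment request above reads only the first page (page=0, pageSize=10) and only the summary fields. To walk several pages and pull the review texts themselves, you can loop over the page parameter. The comments/content field names below match the JSON this unofficial endpoint returned at the time of writing; treat the whole sketch as an assumption, not a documented API:

```python
def comment_url(item_id, page, page_size=10):
    """Build the productPageComments URL for one item and page index."""
    return ('https://sclub.jd.com/comment/productPageComments.action'
            f'?productId={item_id}&score=0&sortType=5'
            f'&page={page}&pageSize={page_size}&isShadowSku=0&fold=1')

def extract_contents(data):
    """Pull the review texts out of one parsed page of comment JSON."""
    return [c.get('content', '') for c in data.get('comments', [])]

def fetch_all_contents(item_id, get_json, max_pages=5):
    """Collect review texts across pages. get_json maps a URL to parsed JSON,
    e.g. lambda u: requests.get(u, headers=headers, timeout=10).json()."""
    texts = []
    for page in range(max_pages):
        page_texts = extract_contents(get_json(comment_url(item_id, page)))
        if not page_texts:  # empty page: assume we ran past the last one
            break
        texts.extend(page_texts)
    return texts
```

Passing the downloader in as get_json keeps the URL building and JSON parsing testable without touching the network; a time.sleep between pages, as in the loop above, is still advisable.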
