Scraping JD.com (京东爬虫)


When I first looked at product listings on JD.com, a lot of the information was right there in the page source, so I figured it would be simpler than scraping Taobao at scale. JD then fooled me over and over, and the spider took a good six hours to write. What a pain. GitHub repo first: https://github.com/xiaobeibei26/jingdong


A few words about the site. Type any product category into the home-page search box and the result set is usually capped at 100 pages, except for genuinely scarce products, as shown below:

[Screenshot: a search results page showing the 100-page cap]

Now the analysis. Searching for a product renders only 30 items in the initial page source; dragging the scrollbar on the right down to the bottom fires an ajax request that renders the remaining 30. Of the 60 items on a page, about three are ads, so a page typically yields around 57 real products. That ajax request is the hard part of the crawl:

[Screenshot: the ajax request captured in the browser's network panel]

Look at the request: my first instinct was that many of the parameters could be dropped, leaving a nice short URL.

[Screenshot: the ajax request's headers and query parameters]

Without thinking I deleted a bunch of parameters and fired the requests. After a long debugging session, deduplicating on insert into the database kept leaving me with only half of each page's products. Looking closer, the URLs differed but the products they returned were identical. I pored over the ajax request for a long time and finally realized that each number at the end of the URL is a product ID, and those IDs are hidden inside the products that load when the page first opens, as shown below:

[Screenshot: data-pid attributes on the initially loaded products]

Compare those IDs with the ajax request's parameters:

[Screenshot: the show_items parameter carrying those same product IDs]
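To make the relationship concrete, here is a standalone sketch of the two-step fetch outside Scrapy, using requests and lxml. The URL templates and the data-pid XPath are the same ones the spider below uses; the headers are my own addition (ajax endpoints like this commonly check Referer, which is an assumption here, not something the post confirms):

import requests
from lxml import etree

HEADERS = {
    'User-Agent': 'Mozilla/5.0',
    'Referer': 'https://search.jd.com/',  # assumption: s_new.php may reject requests without it
}

# Step 1: fetch the first half of results page 1 and harvest the data-pid IDs.
first_half = requests.get(
    'https://search.jd.com/Search?keyword=褲子&enc=utf-8&page=1',
    headers=HEADERS).text
pids = etree.HTML(first_half).xpath('//*[@id="J_goodsList"]/ul/li/@data-pid')
show_items = ','.join(pids)

# Step 2: pass those IDs back through show_items to get the other 30 items (page=2).
second_half = requests.get(
    'https://search.jd.com/s_new.php?keyword=褲子&enc=utf-8&page=2'
    '&s=26&scrolling=y&pos=30&tpl=3_L&show_items=' + show_items,
    headers=HEADERS).text
print(len(pids), 'ids ->', second_half.count('<li'), 'ajax <li> fragments')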

So I reworked the crawl logic and rewrote the code, which cost another two hours. Brutal.

After that the spider could finally extract a complete page of products in one pass. One last tip: on JD the first results page carries page values 1 and 2, the second page 3 and 4, which is a bit unusual. Here is the main spider:

# -*- coding: utf-8 -*-
import scrapy

from jingdong.items import JingdongItem


class JdSpider(scrapy.Spider):
    name = "jd"
    allowed_domains = ["www.jd.com"]
    start_urls = ['http://www.jd.com/']
    search_url1 = 'https://search.jd.com/Search?keyword={key}&enc=utf-8&page={page}'
    # search_url2 = 'https://search.jd.com/s_new.php?keyword={key}&enc=utf-8&page={page}&scrolling=y&pos=30&show_items={goods_items}'
    search_url2 = 'https://search.jd.com/s_new.php?keyword={key}&enc=utf-8&page={page}&s=26&scrolling=y&pos=30&tpl=3_L&show_items={goods_items}'
    shop_url = 'http://mall.jd.com/index-{shop_id}.html'

    def start_requests(self):
        key = '褲子'  # search keyword: "trousers"
        for num in range(1, 100):
            page1 = str(2 * num - 1)  # odd page number: the 30 items in the page source
            page2 = str(2 * num)      # even page number: the 30 items loaded via ajax
            yield scrapy.Request(url=self.search_url1.format(key=key, page=page1),
                                 callback=self.parse, dont_filter=True)
            # The same first-half URL is fetched a second time so that get_next_half
            # can harvest the data-pid values needed for the ajax request.
            # dont_filter=True is essential here, otherwise Scrapy silently drops
            # the visit to this duplicate URL.
            yield scrapy.Request(url=self.search_url1.format(key=key, page=page1),
                                 callback=self.get_next_half,
                                 meta={'page2': page2, 'key': key},
                                 dont_filter=True)

    def get_next_half(self, response):
        try:
            items = response.xpath('//*[@id="J_goodsList"]/ul/li/@data-pid').extract()
            key = response.meta['key']
            page2 = response.meta['page2']
            goods_items = ','.join(items)
            # Without dont_filter Scrapy drops this request. The likely reason:
            # allowed_domains only lists www.jd.com, so the offsite middleware
            # would filter a request to search.jd.com; dont_filter=True bypasses
            # that check as well as the duplicate filter.
            yield scrapy.Request(url=self.search_url2.format(key=key, page=page2,
                                                             goods_items=goods_items),
                                 callback=self.next_parse, dont_filter=True)
        except Exception as e:
            print('no data')

    def parse(self, response):
        all_goods = response.xpath('//div[@id="J_goodsList"]/ul/li')
        for one_good in all_goods:
            item = JingdongItem()
            try:
                data = one_good.xpath('div/div/a/em')
                item['title'] = data.xpath('string(.)').extract()[0]  # all text inside the tag
                item['comment_count'] = one_good.xpath('div/div[@class="p-commit"]/strong/a/text()').extract()[0]  # comment count
                item['goods_url'] = 'http:' + one_good.xpath('div/div[4]/a/@href').extract()[0]  # product link
                item['shops_id'] = one_good.xpath('div/div[@class="p-shop"]/@data-shopid').extract()[0]  # shop ID
                item['shop_url'] = self.shop_url.format(shop_id=item['shops_id'])
                goods_id = one_good.xpath('div/div[2]/div/ul/li[1]/a/img/@data-sku').extract()[0]
                if goods_id:
                    item['goods_id'] = goods_id
                price = one_good.xpath('div/div[3]/strong/i/text()').extract()  # price
                if price:
                    # Some items show 0 comments and carry no price in the source --
                    # presumably short-lived promoted items, three or four per page; skip them.
                    item['price'] = price[0]
                    yield item
            except Exception as e:
                pass

    def next_parse(self, response):
        # The ajax endpoint returns bare <li> fragments, so the parsed tree
        # places them directly under /html/body.
        all_goods = response.xpath('/html/body/li')
        for one_good in all_goods:
            item = JingdongItem()
            try:
                data = one_good.xpath('div/div/a/em')
                item['title'] = data.xpath('string(.)').extract()[0]  # all text inside the tag
                item['comment_count'] = one_good.xpath('div/div[@class="p-commit"]/strong/a/text()').extract()[0]  # comment count
                item['goods_url'] = 'http:' + one_good.xpath('div/div[4]/a/@href').extract()[0]  # product link
                item['shops_id'] = one_good.xpath('div/div[@class="p-shop"]/@data-shopid').extract()[0]  # shop ID
                item['shop_url'] = self.shop_url.format(shop_id=item['shops_id'])
                goods_id = one_good.xpath('div/div[2]/div/ul/li[1]/a/img/@data-sku').extract()[0]
                if goods_id:
                    item['goods_id'] = goods_id
                price = one_good.xpath('div/div[3]/strong/i/text()').extract()  # price
                if price:  # same promoted-item filter as in parse
                    item['price'] = price[0]
                    yield item
            except Exception as e:
                pass
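One thing the post never shows is jingdong/items.py, which the spider imports. A minimal reconstruction with exactly the fields the spider assigns (the field names are grounded in the code above; the file itself is my sketch, not the author's original):

# items.py -- reconstructed from the fields used in the spider
import scrapy

class JingdongItem(scrapy.Item):
    title = scrapy.Field()          # full product title text
    comment_count = scrapy.Field()  # review count from the listing
    goods_url = scrapy.Field()      # product detail page link
    goods_id = scrapy.Field()       # SKU id taken from data-sku
    shops_id = scrapy.Field()       # shop id taken from data-shopid
    shop_url = scrapy.Field()       # shop homepage built from shops_id
    price = scrapy.Field()          # listing price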

The pipeline code:

import pymysql  # missing from the original snippet but required below


class JingdongPipeline(object):
    # The earlier MongoDB version, kept commented out:
    # def __init__(self):
    #     self.client = MongoClient()
    #     self.database = self.client['jingdong']
    #     self.db = self.database['jingdong_infomation']
    #
    # def process_item(self, item, spider):
    #     # upsert keyed on goods_id: update if present, insert otherwise
    #     self.db.update({'goods_id': item['goods_id']}, dict(item), True)
    #     return item
    #
    # def close_spider(self, spider):
    #     self.client.close()

    def __init__(self):
        self.conn = pymysql.connect(host='127.0.0.1', port=3306, user='root',
                                    passwd='root', db='jingdong', charset='utf8')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        try:  # some titles repeat, hence the outer exception guard
            title = item['title']
            comment_count = item['comment_count']  # comment count
            shop_url = item['shop_url']            # shop link
            price = item['price']
            goods_url = item['goods_url']
            shops_id = item['shops_id']
            goods_id = int(item['goods_id'])
            # sql = 'insert into jingdong_goods(title,comment_count,shop_url,price,goods_url,shops_id) VALUES (%(title)s,%(comment_count)s,%(shop_url)s,%(price)s,%(goods_url)s,%(shops_id)s,)'
            try:
                self.cursor.execute(
                    "insert into jingdong_goods(title,comment_count,shop_url,price,"
                    "goods_url,shops_id,goods_id) values(%s,%s,%s,%s,%s,%s,%s)",
                    (title, comment_count, shop_url, price, goods_url, shops_id, goods_id))
                self.conn.commit()
            except Exception as e:
                pass
        except Exception as e:
            pass
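The INSERT above assumes a jingdong database that already contains a jingdong_goods table; the post never shows its schema. One plausible way to create it over the same pymysql connection (the column names match the INSERT, but the types are my guesses):

import pymysql

conn = pymysql.connect(host='127.0.0.1', port=3306, user='root',
                       passwd='root', db='jingdong', charset='utf8')
with conn.cursor() as cursor:
    # Column types are assumptions inferred from the values the pipeline binds.
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS jingdong_goods (
            title         VARCHAR(255),
            comment_count VARCHAR(32),
            shop_url      VARCHAR(255),
            price         VARCHAR(32),
            goods_url     VARCHAR(511),
            shops_id      VARCHAR(32),
            goods_id      BIGINT
        ) DEFAULT CHARSET = utf8
    """)
conn.commit()
conn.close()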

Run results (the spider is launched with scrapy crawl jd, the name defined above):

[Screenshot: scraped rows landing in MySQL]

It ran for a few minutes, a thousand entries a page, and crawled tens of thousands of trousers in all. JD really does sell a lot of trousers. The screenshot above shows the MySQL insert operations.
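One detail worth noting: the pipeline only runs if it is registered in the project's settings.py, which the post does not show. The standard Scrapy registration would look like this (the module path follows the project layout implied by the imports above):

# settings.py -- registration sketch; 300 is an arbitrary mid-range priority
ITEM_PIPELINES = {
    'jingdong.pipelines.JingdongPipeline': 300,
}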



Author: 蝸牛仔
Link: https://www.jianshu.com/p/e938a78b2f75
Source: Jianshu (簡(jiǎn)書)
