

A Crawler Test for 某宝 (Taobao)

Published: 2024/3/26 · 编程问答 · by 豆豆
This article, collected and organized by 生活随笔, introduces a crawler test for 某宝 and is shared here for reference.

Contents

Building on code found online, this is a basic demo that takes a search keyword and crawls the matching product listings. The cookies.txt file has to be captured at login time, as follows:
When logging in, open DevTools (F12), switch to the Network tab, and check Preserve log.

After logging in, the document below appears in the Network panel.


Save the cookie information into cookies.txt, and the code below can then be run.
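Before running the full demo, it helps to sanity-check the cookie string itself. The string copied from DevTools is a single `name=value; name2=value2` line, and each pair must be split on the first `=` only, because cookie values can themselves contain `=`. A minimal sketch (the cookie names and values below are made up for illustration):

```python
# Parse a browser-copied cookie string into a dict.
# Split each pair on the first '=' only: values may contain '=' themselves.
cookie_txt = "thw=cn; t=abc123; _tb_token_=e5=ee"  # hypothetical values

cookies = {}
for pair in cookie_txt.strip().strip(";").split(";"):
    name, value = pair.strip().split("=", 1)
    cookies[name] = value

print(cookies)  # {'thw': 'cn', 't': 'abc123', '_tb_token_': 'e5=ee'}
```

If any pair in cookies.txt lacks an `=`, the split raises a `ValueError`, which is a quick way to catch a badly copied cookie string.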

import re
import random
import requests
import pandas as pd
from urllib.parse import quote


def set_user_agent():
    """Return a random User-Agent string."""
    USER_AGENTS = [
        "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
        "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
        "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
        "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
        "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
        "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    ]
    return random.choice(USER_AGENTS)


class TaoBao:
    def __init__(self, url):
        # test_url is a page only visible after login, so requesting it
        # verifies that the cookie-based session actually works.
        self.test_url = url
        self.headers = {
            "Origin": "https://login.taobao.com",
            "Upgrade-Insecure-Requests": "1",
            "Content-Type": "application/x-www-form-urlencoded",
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
            "Referer": "https://login.taobao.com/member/login.jhtml?redirectURL=https%3A%2F%2Fwww.taobao.com%2F",
            "Accept-Encoding": "gzip, deflate, br",
            "Accept-Language": "zh-CN,zh;q=0.9",
            "User-Agent": set_user_agent(),
        }
        self.cookies = {}          # dict holding the manually copied cookies
        self.res_cookies_txt = ""  # cookies returned by the server, empty at first

    def read_cookies(self):
        """Read the manually copied cookies from cookies.txt and return a CookieJar."""
        with open("cookies.txt", "r", encoding="utf-8") as f:
            cookies_txt = f.read().strip(";")
        # requests only keeps CookieJar-type cookies, but the copied cookie is a
        # plain string, so convert it to a dict first, then to a CookieJar via
        # requests.utils.cookiejar_from_dict.
        for cookie in cookies_txt.split(";"):
            name, value = cookie.strip().split("=", 1)  # split on the first '=' only
            self.cookies[name] = value
        return requests.utils.cookiejar_from_dict(self.cookies, cookiejar=None, overwrite=True)

    def set_cookies(self, cookies):
        """Merge cookies returned by the server (they differ from the originals)
        into the manual cookie dict, then write the result back to cookies.txt."""
        res_cookies_dic = requests.utils.dict_from_cookiejar(cookies)
        for key in res_cookies_dic:
            self.cookies[key] = res_cookies_dic[key]
        for k in self.cookies:
            self.res_cookies_txt += k + "=" + self.cookies[k] + ";"
        with open("cookies.txt", "w", encoding="utf-8") as f:
            f.write(self.res_cookies_txt)

    def login(self):
        """Open a session, attach the saved cookies, and request the test URL."""
        session = requests.session()
        session.headers = self.headers
        session.cookies = self.read_cookies()
        response = session.get(self.test_url)
        self.set_cookies(response.cookies)
        return response

    def deal(self, wen):
        """Extract product fields from the search-result page into a DataFrame."""
        tir = wen.text
        # The page embeds Chinese text as \uXXXX escapes; decode each distinct
        # escape found and substitute it back into the text.
        reps = list(set(re.findall(r"u\d[a-z\d]{3}", tir)))
        for rep in reps:
            try:
                bian = ("\\" + rep).encode("gb18030").decode("unicode-escape")
                tir = tir.replace("\\\\" + rep, bian)
            except Exception:
                continue
        detail = re.findall('"detail_url":"(.*?)",', tir)
        title = re.findall('"raw_title":"(.*?)",', tir)
        nid = re.findall('"nid":"(.*?)",', tir)
        pic_url = re.findall('"pic_url":"(.*?)",', tir)
        item_loc = re.findall('"item_loc":"(.*?)",', tir)
        price = re.findall('"view_price":"(.*?)",', tir)
        biao = pd.DataFrame()
        biao["id"] = nid
        biao["title"] = title
        biao["price"] = price
        biao["detail"] = ["https:" + de for de in detail]
        biao["pic_url"] = ["https:" + pp for pp in pic_url]
        biao["location"] = item_loc
        return biao


if __name__ == "__main__":
    keyword = input("Search keyword: ")
    # quote() percent-encodes the UTF-8 bytes of the keyword for the query string
    # (the original manual bytes-to-%XX trick left a stray trailing quote).
    url = "https://s.taobao.com/search?q=" + quote(keyword)
    taobao = TaoBao(url)
    wen = taobao.login()
    biao = taobao.deal(wen)
    print(biao)
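The deal method does two things: it decodes the \uXXXX escapes the page uses for Chinese text, then pulls fields out of the embedded JSON with non-greedy regexes. A minimal sketch of both steps on a toy fragment (the sample string is invented, and the whole-string unicode-escape decode here is a simplification of the per-escape loop in the demo):

```python
import re

# Invented fragment resembling the JSON embedded in the search-results page.
sample = '"nid":"12345","raw_title":"demo \\u624b\\u673a","view_price":"9.90",'

# Simplified decode: interpret every \uXXXX escape in one pass.
decoded = sample.encode("latin-1").decode("unicode-escape")

nid = re.findall('"nid":"(.*?)",', decoded)
title = re.findall('"raw_title":"(.*?)",', decoded)
price = re.findall('"view_price":"(.*?)",', decoded)
print(nid, title, price)  # ['12345'] ['demo 手机'] ['9.90']
```

Note that regex scraping like this is fragile: it silently returns shorter lists if any field is missing from some items, which misaligns the DataFrame columns.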
