
Python: scraping JD and Tencent (QQ) job postings with Scrapy


1. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for the jd project.
# The generated file also contains the usual commented-out template defaults
# (concurrency, cookies, middlewares, pipelines, AutoThrottle, HTTP cache, ...);
# only the lines actually changed from the template are shown here.

BOT_NAME = 'jd'

SPIDER_MODULES = ['jd.spiders']
NEWSPIDER_MODULE = 'jd.spiders'

# Log only warnings and above, and write them to a file
LOG_LEVEL = "WARNING"
LOG_FILE = "./jingdong1.log"

# Obey robots.txt rules
ROBOTSTXT_OBEY = True
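Both projects in this post end up tweaking the same handful of options. As an aside, Scrapy also supports per-spider overrides through the custom_settings class attribute, which keeps settings like these next to the spider that needs them. A minimal sketch (the spider name here is made up for illustration):

import scrapy

class JobsSpider(scrapy.Spider):
    # Hypothetical spider, shown only to illustrate per-spider settings
    name = 'jobs'
    custom_settings = {
        'LOG_LEVEL': 'WARNING',    # same effect as LOG_LEVEL in settings.py,
        'LOG_FILE': './jobs.log',  # but scoped to this spider only
    }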

2. jingdong.py

# -*- coding: utf-8 -*-
import scrapy
import logging
import json

logger = logging.getLogger(__name__)


class JingdongSpider(scrapy.Spider):
    name = 'jingdong'
    allowed_domains = ['zhaopin.jd.com']
    start_urls = ['http://zhaopin.jd.com/web/job/job_list?page=1']
    pageNum = 1

    def parse(self, response):
        content = response.body.decode()
        content = json.loads(content)
        # Remove empty values from every dict in the list;
        # list(content[i].keys()) snapshots the keys so deletion is safe while iterating
        for i in range(len(content)):
            for key in list(content[i].keys()):
                if not content[i].get(key):
                    del content[i][key]  # drop keys whose value is empty
        # Record every job posting (WARNING level so it reaches the log file)
        for i in range(len(content)):
            logger.warning(content[i])
        # Pagination: request the next page until the last one
        self.pageNum += 1
        if self.pageNum <= 355:
            next_url = 'http://zhaopin.jd.com/web/job/job_list?page=' + str(self.pageNum)
            yield scrapy.Request(next_url, callback=self.parse)
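The key-deletion loop above mutates each dict while iterating over a snapshot of its keys, hence the list(...) call. An equivalent and arguably clearer way to do the same cleanup is to rebuild the dicts with a comprehension; a small standalone sketch of just that step:

def drop_empty_values(records):
    # Return new dicts with falsy values ('', None, [], 0, ...) removed
    return [{k: v for k, v in record.items() if v} for record in records]

# drop_empty_values([{'title': 'engineer', 'city': ''}])
# -> [{'title': 'engineer'}]

Either way, run the spider with scrapy crawl jingdong; given the settings above, the logged postings end up in ./jingdong1.log.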

3. A note on pagination: JD's job listings are paged via JavaScript, so a CrawlSpider cannot follow the pages automatically through link extraction. Instead, watch the browser dev tools' Network panel to see which request the page actually makes to fetch its data.

For example: http://zhaopin.jd.com/web/job/job_list?page=2
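Before writing the spider, it is worth confirming from Python that this endpoint really returns plain JSON. A quick sanity check with the requests library (assuming the endpoint is still live and needs no special headers; the response shape is inferred from the spider above):

import requests

resp = requests.get('http://zhaopin.jd.com/web/job/job_list', params={'page': 1})
data = resp.json()  # raises a ValueError if the body is not JSON
print(type(data), len(data))  # expected: a list of job dicts, per the spider above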


JD works. Now let's try the same approach on Tencent's job postings.

Give it a test run, check the results, and get to work!

1. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for the tencent project.
# As above, only the lines changed from the template defaults are shown.

BOT_NAME = 'tencent'

SPIDER_MODULES = ['tencent.spiders']
NEWSPIDER_MODULE = 'tencent.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./qq.log"

# Identify ourselves with a regular browser user agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

# Note: ROBOTSTXT_OBEY is left at the template's commented-out default here,
# i.e. robots.txt is not checked for this project.
#ROBOTSTXT_OBEY = True

2. mahuateng.py

# -*- coding: utf-8 -*-
import scrapy
import json
import logging

logger = logging.getLogger(__name__)


class MahuatengSpider(scrapy.Spider):
    name = 'mahuateng'
    allowed_domains = ['careers.tencent.com']
    start_urls = ['https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=1&pageSize=10&language=zh-cn&area=cn']
    pageNum = 1

    def parse(self, response):
        content = response.body.decode()
        content = json.loads(content)
        content = content['Data']['Posts']
        # Remove empty values from every posting dict
        for con in content:
            for key in list(con.keys()):
                if not con.get(key):
                    del con[key]
        # Record every job posting
        for con in content:
            logger.warning(con)
        # Pagination: request the next page until the last one
        self.pageNum += 1
        if self.pageNum <= 118:
            next_url = ('https://careers.tencent.com/tencentcareer/api/post/Query'
                        '?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId='
                        '&categoryId=&parentCategoryId=40003&attrId=&keyword='
                        '&pageIndex=' + str(self.pageNum) +
                        '&pageSize=10&language=zh-cn&area=cn')
            yield scrapy.Request(next_url, callback=self.parse)
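The next_url concatenation works, but with this many query parameters it is brittle. A hedged sketch of building the same URL with urllib.parse.urlencode (parameter names copied from the URL above; the empty parameters are omitted on the assumption that the API tolerates their absence):

from urllib.parse import urlencode

BASE = 'https://careers.tencent.com/tencentcareer/api/post/Query'

def page_url(page_index, page_size=10):
    # Build the Query API URL for one page of postings
    params = {
        'timestamp': '1561688387174',
        'parentCategoryId': '40003',
        'pageIndex': page_index,
        'pageSize': page_size,
        'language': 'zh-cn',
        'area': 'cn',
    }
    return BASE + '?' + urlencode(params)

# page_url(2) ends with '&pageIndex=2&pageSize=10&language=zh-cn&area=cn'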

It worked when I tested it; your mileage may vary, haha!

This is all just personal tinkering, and the code is admittedly a bit rough.
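On that note, one obvious cleanup: instead of recording postings through the log file, the spiders could yield the cleaned dicts from parse() (Scrapy accepts plain dicts as items) and let a small pipeline persist them. A minimal sketch, with a hypothetical class name, to be enabled through ITEM_PIPELINES in settings.py:

# pipelines.py
import json

class JobPostingPipeline:
    # Hypothetical pipeline: writes each posting as one JSON line
    def open_spider(self, spider):
        self.file = open('postings.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.file.close()

For a quick alternative without any pipeline, Scrapy's built-in feed export does much the same: scrapy crawl mahuateng -o postings.jl.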

Reposted from: https://www.cnblogs.com/ywjfx/p/11101091.html

