Scraping movie torrents from a movie site with a Python crawler
Preface
The text and images in this article come from the internet and are for learning and exchange only; they are not for any commercial use. Copyright belongs to the original author. If there is any problem, please contact us promptly so we can handle it.
Author: imBobby
It's the weekend, so let's write something simple and fun. After getting back from the gym this afternoon I wanted to watch a movie, so I went to a familiar site: btbtt.me. I find its Chinese-language resources fairly complete, while The Pirate Bay is stronger on English-language content. So today let's build a movie-torrent crawler. Open the btbtt.me home page:
[screenshot: btbtt.me home page]
這濃烈的的山寨風(fēng)格,有一絲絲上頭,先觀察一下,點(diǎn)進(jìn)高清電影區(qū),我的思路是進(jìn)入高清電影區(qū),逐個(gè)訪問頁面內(nèi)的電影標(biāo)簽,并將電影詳情頁面的種子下載到本地,所以先觀察一下:
[screenshots: HD-movie listing page and its link markup]
發(fā)現(xiàn)電影詳情頁的URL都在class為subject_link thread-new和subject_link thread-old的標(biāo)簽下存儲(chǔ),接下來點(diǎn)進(jìn)電影詳情頁看看:
[screenshot: movie detail page markup]
發(fā)現(xiàn)下載鏈接存儲(chǔ)在屬性rel為nofollow的標(biāo)簽a中,點(diǎn)擊一下下載鏈接試試看:
[screenshot: download confirmation popup]
There is yet another layer, which is annoying: filtering down to this final download link by tag alone would be awkward. But one thing stands out:
the real download URL is just the popup link's URL with one token swapped (the script below replaces "dialog" with "download" in the href), which saves a lot of work.
With the plan roughly in place, let's write the code:
```python
import requests
import bs4
import os
import time

# Proxy settings; this site also needs a proxy to reach
proxies = {
    "http": "http://127.0.0.1:41091",
    "https": "http://127.0.0.1:41091",
}

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36 Edg/84.0.522.50"
}


def init_movie_list(nums):
    """Build the list of listing-page URLs to crawl; each page holds a few dozen movies.

    :param nums: number of pages to crawl
    :return: list of listing-page URLs
    """
    movie_list = []
    if nums < 1:
        return movie_list
    for num in range(1, nums + 1):
        url = "http://btbtt.me/forum-index-fid-1183-page-" + str(num) + ".htm"
        movie_list.append(url)
    return movie_list


def get_movie_detail_url(url):
    """Collect the movie detail-page links found on one listing page.

    :param url: listing-page URL
    :return: list of (movie title, detail-page URL) tuples
    """
    context = requests.get(url=url, headers=headers, proxies=proxies).content
    time.sleep(1)  # throttle requests a little
    bs4_result = bs4.BeautifulSoup(context, "html.parser")
    new_read_details = bs4_result.find_all("a", class_="subject_link thread-new")
    all_details = bs4_result.find_all("a", class_="subject_link thread-old") + new_read_details
    if not all_details:
        return []
    url_list = []
    for item in all_details:
        url_list.append((item.get("title"), "http://btbtt.me/" + item.get("href")))
    return url_list


def get_movie_download_url(url_tuple):
    """Turn a (title, detail-page URL) tuple into (folder name, file name, download URL).

    :param url_tuple: (movie title, detail-page URL)
    :return: (folder name, file name, download URL)
    """
    folder_name = replace_folder_name(url_tuple[0])
    url = url_tuple[1]
    resp = requests.get(url=url, headers=headers, proxies=proxies)
    time.sleep(1)
    bs4_result = bs4.BeautifulSoup(resp.content, "html.parser")
    result = bs4_result.find_all("a", rel="nofollow", target="_blank", ajaxdialog=False)
    if not result:
        return ("", "", "")
    file_name = replace_folder_name(result[-1].text)
    # The direct download URL is the popup href with "dialog" swapped for "download"
    download_url = "http://btbtt.me/" + result[-1].get("href").replace("dialog", "download")
    return (folder_name, file_name, download_url)


def replace_folder_name(folder_name):
    """Strip characters that are illegal in Windows file and folder names."""
    illegal_str = ["?", ",", "/", "\\", "*", "<", ">", "|", " ", "\n", ":"]
    for item in illegal_str:
        folder_name = folder_name.replace(item, "")
    return folder_name


def download_file(input_tuple):
    """Download one torrent into its own folder.

    :param input_tuple: (folder name, file name, download URL)
    """
    folder_name = input_tuple[0]
    if not folder_name:
        folder_name = str(int(time.time()))
    file_name = input_tuple[1]
    if not file_name:
        file_name = str(int(time.time())) + ".zip"
    download_url = input_tuple[2]
    if not download_url:
        return
    resp = requests.get(url=download_url, headers=headers, proxies=proxies)
    time.sleep(1)
    # D:/torrent is my download directory; change it as needed
    if not os.path.exists("D:/torrent/" + folder_name):
        os.mkdir("D:/torrent/" + folder_name)
    with open("D:/torrent/" + folder_name + "/" + file_name, "wb") as f:
        f.write(resp.content)


if __name__ == "__main__":
    urls = init_movie_list(5)
    url_list = []
    for item in urls:
        url_list = get_movie_detail_url(item) + url_list
    for i in url_list:
        download_tuple = get_movie_download_url(i)
        download_file(download_tuple)
```
PS:如有需要Python學(xué)習(xí)資料的小伙伴可以加下方的群去找免費(fèi)管理員領(lǐng)取
?
可以免費(fèi)領(lǐng)取源碼、項(xiàng)目實(shí)戰(zhàn)視頻、PDF文件等
Summary
That's the whole of this movie-torrent crawler write-up; I hope it helps you solve the problem you came with.