

Nice! All these free image download sites (with some scraper code included)

Published: 2023/12/8 · Programming Q&A · 豆豆

You surely didn't come in here to listen to me ramble, right? OK, coming right up. Let the pictures and the scrapers do the talking; these sites really are great!
emm… For anyone who wants a detailed breakdown, there's a full worked example here: a detailed guide to scraping beautiful images (link)

Contents

        • 1. hippopx
        • 2. colorhub
        • 3. pikrepo
        • 4. wallhaven
        • 5. More good sites
          • 5.1 pixabay
          • 5.2 ssyer
          • 5.3 Nice illustrations
          • 5.4 visualhunt
          • 5.5 pexels
          • 5.6 unsplash
          • 5.7 極簡壁紙 (minimalist wallpapers)

1. hippopx

https://www.hippopx.com/

Shh~ quietly attaching the scraper code:

Standard resolution:

# The example below scrapes cat images; to scrape something else, just change the query parameter
from bs4 import BeautifulSoup
import requests

gHeads = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36"
}

for i in range(1, 65):
    # q is the search term (cat here); page selects which results page to fetch
    url = "https://www.hippopx.com/zh/query?q=cat&page=%s" % (i)
    print(url)
    html = requests.get(url, headers=gHeads)
    html = html.content
    soup = BeautifulSoup(html, 'lxml')
    img_all = soup.find_all('link', {"itemprop": "thumbnail"})
    for img4 in img_all:
        urlimg = img4['href']
        print(urlimg)
        r = requests.get(urlimg, stream=True)
        image_name = urlimg.split('/')[-1]
        with open('F:/Cat/%s' % image_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=128):
                f.write(chunk)
        print('Saved %s' % image_name)
print("end.....................")

High-resolution download:

# Cat download, high-definition
from bs4 import BeautifulSoup
import requests

gHeads = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36"
}

for i in range(2, 100):
    url = "https://www.hippopx.com/zh/query?q=cat&page=%s" % (i)
    print(url)
    html = requests.get(url, headers=gHeads)
    html = html.content
    soup = BeautifulSoup(html, 'lxml')
    # the full-size images carry itemprop="contentUrl"
    img_all = soup.find_all('img', {"itemprop": "contentUrl"})
    for img in img_all:
        urlimg = img['src']
        r = requests.get(urlimg, stream=True)
        image_name = urlimg.split('/')[-1]
        with open('F:/Cat_HighDefinition/%s' % image_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=128):
                f.write(chunk)
        print('Saved %s' % image_name)
print("end.....................")
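A side note: both scripts splice the query term straight into the URL. That works for `cat`, but a term with spaces or non-ASCII characters should be percent-encoded first. A minimal sketch, assuming you build the URL yourself (the helper name `search_url` is mine, not part of any site's API):

```python
from urllib.parse import quote

def search_url(base, query, page):
    # Build a search URL with a percent-encoded query term.
    return "%s?q=%s&page=%d" % (base, quote(query), page)

print(search_url("https://www.hippopx.com/zh/query", "black cat", 3))
# https://www.hippopx.com/zh/query?q=black%20cat&page=3
```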

2. colorhub

https://www.colorhub.me/

Standard-resolution download:

from bs4 import BeautifulSoup
import requests

gHeads = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36"
}

for i in range(1, 10):
    url = "https://www.colorhub.me/search?tag=dog&page=%s" % (i)
    print(url)
    html = requests.get(url, headers=gHeads)
    html = html.content
    soup = BeautifulSoup(html, 'lxml')
    img_all = soup.find_all('img', {"class": "card-img-top"})
    for img4 in img_all:
        # thumbnail src is protocol-relative, so prepend the scheme
        urlimg = "http:" + img4['src']
        r = requests.get(urlimg, stream=True)
        image_name = urlimg.split('/')[-1]
        with open('F:/Image_experiment/DOG/%s' % image_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=128):
                f.write(chunk)
        print('Saved %s' % image_name)
print("end.....................")

High-resolution download:

from bs4 import BeautifulSoup
import requests

gHeads = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36"
}

for i in range(1, 2):
    url = "https://www.colorhub.me/search?tag=dog&page=%s" % (i)
    print(url)
    html = requests.get(url, headers=gHeads)
    html = html.content
    soup = BeautifulSoup(html, 'lxml')
    # each result card links to a detail page that holds the full-size image
    href_all = soup.find_all('div', {"class": "card"})
    for href in href_all:
        href_url = href.a['href']
        html4 = requests.get(href_url, headers=gHeads).content
        soup4 = BeautifulSoup(html4, 'lxml')
        img4 = soup4.find('a', {"data-magnify": "gallery"})
        urlimg = "http:" + img4['href']
        r = requests.get(urlimg, stream=True)
        image_name = urlimg.split('/')[-1]
        with open('F:/Image/DOG/%s' % image_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=128):
                f.write(chunk)
        print('Saved %s' % image_name)
print("end.....................")

3. pikrepo

https://www.pikrepo.com/

Standard-resolution download:

from bs4 import BeautifulSoup
import requests

gHeads = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36"
}

for i in range(1, 100):
    url = "https://www.pikrepo.com/search?q=mountain&page=%s" % (i)
    print(url)
    html = requests.get(url, headers=gHeads)
    html = html.content
    soup = BeautifulSoup(html, 'lxml')
    img_all = soup.find_all('img', {"itemprop": "thumbnail"})
    for img4 in img_all:
        # thumbnails are lazy-loaded; the real URL lives in data-src
        urlimg = img4['data-src']
        r = requests.get(urlimg, stream=True)
        image_name = urlimg.split('/')[-1]
        with open('F:/Image_experiment/mountain/%s' % image_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=128):
                f.write(chunk)
        print('Saved %s' % image_name)
print("end.....................")

High-resolution download:

from bs4 import BeautifulSoup
import requests

gHeads = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36"
}

for i in range(3, 10):
    url = "https://www.pikrepo.com/search?q=mountain&page=%s" % (i)
    print(url)
    html = requests.get(url, headers=gHeads)
    html = html.content
    soup = BeautifulSoup(html, 'lxml')
    img_all = soup.find_all('link', {"itemprop": "contentUrl"})
    for img in img_all:
        urlimg = img['href']
        r = requests.get(urlimg, stream=True)
        image_name = urlimg.split('/')[-1]
        with open('F:/Image/Mountain/%s' % image_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=128):
                f.write(chunk)
        print('Saved %s' % image_name)
print("end.....................")
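These loops fire requests back-to-back, which is hard on the site and fragile when a request occasionally fails. Pausing between downloads and backing off on failure is friendlier. A minimal sketch of an exponential-backoff schedule (the helper is my own, not from any library here):

```python
import time

def backoff_delays(retries, base=1.0):
    # Exponential backoff schedule in seconds: base, 2*base, 4*base, ...
    return [base * (2 ** i) for i in range(retries)]

# Usage inside the download loops above (sketch):
#   for delay in backoff_delays(3):
#       attempt the download; on failure, time.sleep(delay) and retry

print(backoff_delays(3))  # [1.0, 2.0, 4.0]
```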

4. wallhaven

https://wallhaven.cc/

Standard-resolution download:

from bs4 import BeautifulSoup
import requests

gHeads = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36"
}

for i in range(1, 200):
    url = "https://wallhaven.cc/search?q=FOG&page=%s" % (i)
    print(url)
    html = requests.get(url, headers=gHeads).content
    soup = BeautifulSoup(html, 'lxml')
    # thumbnails are lazy-loaded; the real URL lives in data-src
    img_ul = soup.find_all("img", {"alt": "loading"})
    for img in img_ul:
        urlimg = img['data-src']
        r = requests.get(urlimg, stream=True)
        image_name = urlimg.split('/')[-1]
        with open('F:/Image_experiment/FOG/%s' % image_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=128):
                f.write(chunk)
        print('Saved %s' % image_name)
print('end...........')

High-resolution download:

from bs4 import BeautifulSoup
import requests

gHeads = {
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Mobile Safari/537.36"
}

for i in range(1, 20):
    url = "https://wallhaven.cc/search?q=DOG&page=%s" % (i)
    print(url)
    html = requests.get(url, headers=gHeads)
    html = html.content
    soup = BeautifulSoup(html, 'lxml')
    # each thumbnail's "preview" link leads to the wallpaper's own page
    href_all = soup.find_all('a', {"class": "preview"})
    for href in href_all:
        href_url = href['href']
        html4 = requests.get(href_url, headers=gHeads).content
        soup4 = BeautifulSoup(html4, 'lxml')
        img4 = soup4.find('img', {"id": "wallpaper"})
        # Cloudflare rewrites src; the original URL is kept in data-cfsrc
        urlimg = img4['data-cfsrc']
        r = requests.get(urlimg, stream=True)
        image_name = urlimg.split('/')[-1]
        with open('F:/Image/DOG/%s' % image_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=128):
                f.write(chunk)
        print('Saved %s' % image_name)
print("end.....................")
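One caveat about the `urlimg.split('/')[-1]` pattern used throughout: if a CDN appends a query string (e.g. `...jpg?w=500`), it ends up inside the saved filename. Parsing out the URL path first avoids that. A small sketch (the helper name and example URL are mine):

```python
import os
from urllib.parse import urlparse

def filename_from_url(url):
    # Keep only the path component, so any ?query string is dropped.
    name = os.path.basename(urlparse(url).path)
    return name or "image"  # fall back if the URL has no file part

print(filename_from_url("https://cdn.example.com/full/wallhaven-abc123.jpg?w=500"))
# wallhaven-abc123.jpg
```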

5. More good sites

5.1 pixabay

https://pixabay.com/images/search/sea/?pagi=2
High-quality images with no copyright concerns. Ah, sorry, my skills fell short here and I couldn't get a scraper working for it (oops…), so we'll have to admire this one from a distance.

5.2 ssyer

https://www.ssyer.com/

5.3 Nice illustrations

https://mixkit.co/free-stock-art/discover/dog/

5.4 visualhunt

https://visualhunt.com/

5.5 pexels

https://www.pexels.com/zh-cn/search/DOG/

5.6 unsplash

https://unsplash.com/

5.7 極簡壁紙 (minimalist wallpapers)

https://bz.zzzmh.cn/index

Summary

That covers this roundup of free image-download sites (with some scraper code included). Hope it helps solve whatever problem brought you here.