This Zhihu thread collects plenty of links contributed by helpful users: 有哪些无版权、免费、高清图片素材网站? - 知乎 ("Which copyright-free, free, HD stock photo sites are there?"). When you need material you can just open any of those sites and browse, but today let's grab the images with a crawler instead.

Pick a target site: Fireworks Celebration Free Stock CC0 Photo - StockSnap.io

Open the page and you'll see a Download button; one click and you're done. But plenty of sites offer no download button, and then we have to find another way:

Approach 1: raw requests

Right-click → Inspect reveals a URL in the element, but it is not the URL of the full-resolution image.

Click the Download button with a capture tool running and a download entry appears; the Network panel shows the request URL is https://stocksnap.io/photo/download.

First, hit that URL directly and see what comes back:

import requests

def download_pic(url):
    resp = requests.post(url)
    print("Status code:", resp)

if __name__ == '__main__':
    url = r'https://stocksnap.io/photo/download'
    download_pic(url)

Output:

Status code: <Response [403]>

What the 403 status code means:

Status code 403 Forbidden is a client error: the server is perfectly capable of handling the request but refuses to authorize it. It resembles 401, except that re-authenticating won't help; the access is forbidden long-term and tied to application logic (for example, an incorrect password).
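As a quick aside, if you want the numeric code rather than the Response repr, requests exposes status_code and reason directly:

import requests

resp = requests.post(r'https://stocksnap.io/photo/download')
print(resp.status_code)  # 403
print(resp.reason)       # Forbidden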

So the bare request clearly doesn't work.

Let's try adding a request header:
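The original didn't show this intermediate attempt, but it would look roughly like this (a sketch, reusing the same browser User-Agent string as the working code below):

import requests

def download_pic(url):
    # same User-Agent as the browser, but nothing else yet
    headers = {
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36",
    }
    resp = requests.post(url, headers=headers)
    print("Status code:", resp)

if __name__ == '__main__':
    url = r'https://stocksnap.io/photo/download'
    download_pic(url)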

It still returns 403.

Let's also send the request payload:
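A sketch of that step: send the _csrf and photoId form fields captured in the network panel, still without a cookie:

import requests

headers = {
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36",
}
# the two form fields visible in the captured download request
data = {
    "_csrf": "STITz8bf-5-SrqMqkg9jyJen_UkbW-zFznmg",
    "photoId": "LTYITHITMX"
}
resp = requests.post(r'https://stocksnap.io/photo/download', headers=headers, data=data)
print("Status code:", resp)  # still <Response [403]> without the Cookie header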

Still 403.

Now add the cookie as well:

import requests

def download_pic(url):
    headers = {
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36",
        "Cookie": "_csrf=pfQRFAJQyujrBcvmEYV-m918; _ga=GA1.2.1764522094.1637402941; _gid=GA1.2.1840794345.1637402941; _hjSessionUser_2571802=eyJpZCI6IjNlNzM2NDUyLTVkMGMtNTZjNC1iMDM4LWJmZjRjMDg3MmQyMSIsImNyZWF0ZWQiOjE2Mzc0MDI5NjAxOTgsImV4aXN0aW5nIjp0cnVlfQ==; photoViews=KB3VPMZBOX,LTYITHITMX; photoDownloads=KB3VPMZBOX,LTYITHITMX"
    }
    data = {
        "_csrf": "STITz8bf-5-SrqMqkg9jyJen_UkbW-zFznmg",
        "photoId": "LTYITHITMX"
    }
    resp = requests.post(url, headers=headers, data=data)
    print("Status code:", resp)

if __name__ == '__main__':
    url = r'https://stocksnap.io/photo/download'
    download_pic(url)

Output: this time it works.

Status code: <Response [200]>

Next comes saving the image content:

import requests

def download_pic(url):
    headers = {
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36",
        "Cookie": "_csrf=pfQRFAJQyujrBcvmEYV-m918; _ga=GA1.2.1764522094.1637402941; _gid=GA1.2.1840794345.1637402941; _hjSessionUser_2571802=eyJpZCI6IjNlNzM2NDUyLTVkMGMtNTZjNC1iMDM4LWJmZjRjMDg3MmQyMSIsImNyZWF0ZWQiOjE2Mzc0MDI5NjAxOTgsImV4aXN0aW5nIjp0cnVlfQ==; photoViews=KB3VPMZBOX,LTYITHITMX; photoDownloads=KB3VPMZBOX,LTYITHITMX"
    }
    data = {
        "_csrf": "STITz8bf-5-SrqMqkg9jyJen_UkbW-zFznmg",
        "photoId": "LTYITHITMX"
    }
    resp = requests.post(url, headers=headers, data=data)
    print("Status code:", resp)
    # write the binary response body to disk
    with open('1.jpg', mode='wb') as f:
        f.write(resp.content)

if __name__ == '__main__':
    url = r'https://stocksnap.io/photo/download'
    download_pic(url)

Run it and open the picture: it really is the full-resolution image.

But configuring the cookie for every single image request is a hassle. Below, we let a session manage it instead.

Approach 2: requests, upgraded

Refresh the home page with the Network tool open: the request method is GET, so send a GET request:

import requests

def download_pic(url):
    headers = {
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36",
    }
    # data = {
    #     "_csrf": "STITz8bf-5-SrqMqkg9jyJen_UkbW-zFznmg",
    #     "photoId": "LTYITHITMX"
    # }
    session = requests.session()
    resp = session.get(url, headers=headers)
    print(resp)
    resp.encoding = "utf-8"
    print(resp.text)

if __name__ == '__main__':
    url = r'https://stocksnap.io/'
    download_pic(url)

The output is too long to paste here.

With the page source in hand, parse it to extract each image's URL.

After following each child page, look inside it for the two request parameters.
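For reference, the download form inside each child page looks roughly like the fragment below (illustrative HTML, reconstructed from the XPath expressions used in the code, not copied from the site); both values sit in hidden inputs and can be pulled out with lxml:

from lxml import etree

# illustrative child-page fragment: a form posting to /photo/download
# with two hidden inputs (values here are placeholders)
child_page = '''
<form action="/photo/download" method="post">
    <input type="hidden" name="_csrf" value="EXAMPLE_CSRF_TOKEN">
    <input type="hidden" name="photoId" value="LTYITHITMX">
</form>
'''
tree = etree.HTML(child_page)
_csrf = tree.xpath('//form[@action="/photo/download"]/input[1]/@value')[0]
photoId = tree.xpath('//form[@action="/photo/download"]/input[2]/@value')[0]
print(_csrf, photoId)  # EXAMPLE_CSRF_TOKEN LTYITHITMX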

Once we have those two parameters we can fire the download request:

import requests
from lxml import etree
from urllib.parse import urljoin

def download_pic(url):
    headers = {
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36",
    }
    session = requests.session()
    resp = session.get(url, headers=headers)  # fetch the main page
    resp.encoding = "utf-8"
    source_page = resp.text
    tree = etree.HTML(source_page)
    hrefs = tree.xpath('//div[@id="main"]/div/a/@href')[1:]
    for href in hrefs[0:len(hrefs) - 1]:
        # print(href)
        child_url = urljoin(url, href)
        child_resp = session.get(child_url)  # request the child page
        child_resp.encoding = 'utf-8'
        child_page = child_resp.text  # child page source
        child_tree = etree.HTML(child_page)
        # extract the two request parameters
        _csrf = child_tree.xpath('//form[@action="/photo/download"]/input[1]/@value')[0]
        photoId = child_tree.xpath('//form[@action="/photo/download"]/input[2]/@value')[0]
        # the image name is used when saving the file
        img_name = child_tree.xpath('//img[@itemprop="url"]/@src')[0].split('/')[-1]
        print(img_name)
        data = {
            "_csrf": _csrf,
            "photoId": photoId
        }
        download_url = r'https://stocksnap.io/photo/download'
        down_resp = session.post(download_url, data=data)  # request the image itself
        with open(img_name, mode='wb') as f:
            f.write(down_resp.content)

if __name__ == '__main__':
    url = r'https://stocksnap.io/'
    download_pic(url)

Running this nets a whole batch of high-resolution images.

That gets everything downloaded, but the download speed is a bit slow. Let's improve it with async coroutines:

import aiohttp
import asyncio
import aiofiles
import requests
from lxml import etree
from urllib.parse import urljoin
import os

# write one image to local disk
# note: the 'save' directory must exist before running
async def save_one(name, data, href, session):
    # async with aiohttp.ClientSession() as session:
    async with session.post(href, data=data) as resp:
        img = await resp.read()
        async with aiofiles.open(os.path.join('save', name), mode='wb') as f:
            await f.write(img)

# request a single child page and pull the image's request data from it
async def get_one_data(url):
    headers = {
        "user-agent": "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    }
    async with aiohttp.ClientSession() as session:
        async with session.get(url, headers=headers) as child_resp:
            child_page = await child_resp.text(encoding='utf-8', errors='ignore')
            child_tree = etree.HTML(child_page)
            # extract the two request parameters
            _csrf = child_tree.xpath('//form[@action="/photo/download"]/input[1]/@value')[0]
            photoId = child_tree.xpath('//form[@action="/photo/download"]/input[2]/@value')[0]
            # the image name is used when saving the file
            img_name = child_tree.xpath('//img[@itemprop="url"]/@src')[0].split('/')[-1]
            print(img_name)
            data = {
                "_csrf": _csrf,
                "photoId": photoId
            }
            href = r"https://stocksnap.io/photo/download"
            await asyncio.create_task(save_one(img_name, data, href, session))

# collect all child-page urls from the main page
def get_urls(main_url):
    headers = {
        "user-agent": "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    }
    session = requests.session()
    resp = session.get(main_url, headers=headers)  # fetch the main page
    resp.encoding = "utf-8"
    source_page = resp.text
    tree = etree.HTML(source_page)
    hrefs = tree.xpath('//div[@id="main"]/div/a/@href')[1:]
    child_urls = []
    for href in hrefs[0:len(hrefs) - 1]:
        print(href)
        child_url = urljoin(main_url, href)
        child_urls.append(child_url)
    return child_urls

async def get_all_content_url(all_urls):
    tasks = []
    for url in all_urls:
        task = asyncio.create_task(get_one_data(url))
        tasks.append(task)
    await asyncio.wait(tasks)

if __name__ == '__main__':
    main_url = r'https://stocksnap.io/'
    all_urls = get_urls(main_url)
    asyncio.run(get_all_content_url(all_urls))

Approach 3: Scrapy

Run in the terminal:

scrapy startproject pic
cd pic
scrapy genspider imgs stocksnap.io

Adding **kwargs to the parameters of parse in imgs.py keeps PyCharm from raising a yellow warning.
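That is, the callback signature becomes:

def parse(self, response, **kwargs):
    ...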

In settings.py, set the log level with LOG_LEVEL = "WARNING" and turn off robots.txt compliance.
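The two relevant lines in settings.py:

LOG_LEVEL = "WARNING"
ROBOTSTXT_OBEY = False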

Modify imgs.py, then run scrapy crawl imgs in the terminal to confirm the request succeeds.
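One minimal version of imgs.py for this smoke test might look like the sketch below (the full spider comes later):

import scrapy

class ImgsSpider(scrapy.Spider):
    name = 'imgs'
    allowed_domains = ['stocksnap.io']
    start_urls = ['http://stocksnap.io']

    def parse(self, response, **kwargs):
        print(response)  # expect <200 https://stocksnap.io/>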

It returns:

<200 https://stocksnap.io/>

which means we can fetch the page source. Next, parse it.

The parsing lives in imgs.py: parse handles the main page and yields a Request for each child page; once a child page arrives, parse_child parses it to pull out the download address for the full-resolution image, the image name, and the request parameters.

imgs.py

import scrapy
from scrapy.http import HtmlResponse
from pic.items import PicItem

class ImgsSpider(scrapy.Spider):
    name = 'imgs'
    allowed_domains = ['stocksnap.io']
    start_urls = ['http://stocksnap.io']

    def parse(self, response: HtmlResponse, **kwargs):
        hrefs = response.xpath('//div[@id="main"]/div/a/@href').extract()
        for href in hrefs[0:len(hrefs) - 1]:
            child_url = response.urljoin(href)
            print(child_url)
            yield scrapy.Request(
                url=child_url,
                method='get',
                callback=self.parse_child,
            )
            # break

    def parse_child(self, resp, **kwargs):
        # print('child page:', resp)
        _csrf = resp.xpath('//form[@action="/photo/download"]/input[1]/@value').extract_first()
        photoId = resp.xpath('//form[@action="/photo/download"]/input[2]/@value').extract_first()
        # print("_csrf", _csrf)
        # print("photoId", photoId)
        # the image name is used when saving the file
        img_name = resp.xpath('//img[@itemprop="url"]/@src').extract_first().split('/')[-1]
        # print('img_name:', img_name)
        download_url = r'https://stocksnap.io/photo/download'
        item = PicItem()
        item['_csrf'] = _csrf
        item["photoId"] = photoId
        item["img_name"] = img_name
        item['src'] = download_url
        yield item

Because the spider yields items with a fixed set of keys, items.py needs a matching Field for each one.

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class PicItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    _csrf = scrapy.Field()
    photoId = scrapy.Field()
    img_name = scrapy.Field()
    src = scrapy.Field()

The two files mirror each other.

With imgs.py configured, move on to pipelines.py and set it up as follows: get_media_requests yields the request object, and file_path decides where each image is saved.

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
from scrapy.pipelines.images import ImagesPipeline
from pic import settings
import scrapy
import json
import os

class PicPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        src = item['src']
        data = {
            "_csrf": item["_csrf"],
            "photoId": item["photoId"]
        }
        yield scrapy.FormRequest(url=src, formdata=data, meta={"item": item})

    def file_path(self, request, response=None, info=None, *, item=None):
        item = request.meta['item']
        img_name = item['img_name']
        return os.path.join('save1', img_name)

Next up is settings.py, where ITEM_PIPELINES specifies the order in which pipelines run:

# Scrapy settings for pic project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'pic'

SPIDER_MODULES = ['pic.spiders']
NEWSPIDER_MODULE = 'pic.spiders'

LOG_LEVEL = "WARNING"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'pic (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'pic.middlewares.PicSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'pic.middlewares.PicDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'pic.pipelines.PicPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

IMAGES_STORE = './down_imgs'

With every file configured, start the download from the terminal:

scrapy crawl imgs

The images finish downloading in no time.
