Scrapy Crawler Example: xiaohuar.com
I have been learning web scraping for a while; today I used the Scrapy framework to download images from xiaohuar.com to the local disk. Compared with crawling pages directly with the requests library, the Scrapy framework offers much better performance.
Scrapy's official definition: Scrapy is an application framework for crawling websites and extracting structured data, which can be used for a wide range of useful applications, like data mining, information processing, or historical archival.
Creating a Scrapy project
With Scrapy installed, create the project directly from the command line:
```
E:\ScrapyDemo>scrapy startproject xiaohuar
New Scrapy project 'xiaohuar', using template directory 'c:\\users\\lei\\appdata\\local\\programs\\python\\python35\\lib\\site-packages\\scrapy\\templates\\project', created in:
    E:\ScrapyDemo\xiaohuar

You can start your first spider with:
    cd xiaohuar
    scrapy genspider example example.com
```
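For reference, `startproject` generates a skeleton roughly like the following (the exact set of files can vary slightly between Scrapy versions):

```
xiaohuar/
    scrapy.cfg            # deploy configuration
    xiaohuar/             # the project's Python module
        __init__.py
        items.py          # item definitions
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spider code goes here
            __init__.py
```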
Creating a Scrapy spider
Creating the project also creates a directory with the same name as the project. Enter that directory and run the following command:
```
E:\ScrapyDemo\xiaohuar>scrapy genspider -t basic xiaohua xiaohuar.com
Created spider 'xiaohua' using template 'basic' in module:
  xiaohuar.spiders.xiaohua
```
In the command, "xiaohua" is the file name of the generated spider's *.py file, and "xiaohuar.com" is the domain of the site to crawl; both can be changed later in the code.
Writing the spider code
Edit the file xiaohua.py under E:\ScrapyDemo\xiaohuar\xiaohuar\spiders. The main tasks are configuring the start URLs and defining how each fetched page is parsed.
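For context, the `basic` template produces a minimal skeleton along these lines, with `name`, `allowed_domains`, and `start_urls` filled in from the command arguments (the exact output may differ slightly by Scrapy version):

```python
# -*- coding: utf-8 -*-
import scrapy


class XiaohuaSpider(scrapy.Spider):
    name = 'xiaohua'
    allowed_domains = ['xiaohuar.com']
    start_urls = ['http://xiaohuar.com/']

    def parse(self, response):
        pass
```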
```python
# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.http import Request


class XiaohuaSpider(scrapy.Spider):
    name = 'xiaohua'
    allowed_domains = ['xiaohuar.com']
    # List pages 0..42 of the image index
    start_urls = ["http://www.xiaohuar.com/list-1-%s.html" % i
                  for i in range(43)]

    def parse(self, response):
        if "www.xiaohuar.com/list-1" in response.url:
            # Downloaded HTML source of a list page
            html = response.text
            # Image addresses in the page look like:
            #   src="/d/file/20160126/905e563421921adf9b6fb4408ec4e72f.jpg"
            # Match every image with a regex; the result is a list of
            # relative paths.
            img_urls = re.findall(r'/d/file/\d+/\w+\.jpg', html)
            # Request each image in turn
            for img_url in img_urls:
                # Complete the image URL if it is relative
                if "http://" not in img_url:
                    img_url = "http://www.xiaohuar.com%s" % img_url
                # Yield the request; the response comes back to parse()
                yield Request(img_url)
        else:
            # This response is an image: save it to disk
            url = response.url
            # File name to save under
            title = re.findall(r'\w+\.jpg', url)[0]
            # Save the image
            with open('E:\\xiaohua_img\\%s' % title, 'wb') as f:
                f.write(response.body)
```
A regular expression is used here to match the image addresses. Other sites work much the same way; you just need to analyze the specific page source and adjust the pattern.
Running the spider
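To make the matching concrete, here is a small standalone check of the same regex and URL-completion logic; the sample `src` path is the one quoted in the spider's comment above:

```python
import re

# A fragment of page source containing a relative image path
html = '<img src="/d/file/20160126/905e563421921adf9b6fb4408ec4e72f.jpg" />'

# Same pattern as in the spider: numeric directory, word-character file name
img_urls = re.findall(r'/d/file/\d+/\w+\.jpg', html)

# Prefix relative paths with the site root, as parse() does
full_urls = ["http://www.xiaohuar.com%s" % u if "http://" not in u else u
             for u in img_urls]
print(full_urls[0])
# → http://www.xiaohuar.com/d/file/20160126/905e563421921adf9b6fb4408ec4e72f.jpg
```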
```
E:\ScrapyDemo\xiaohuar>scrapy crawl xiaohua
2017-10-22 22:30:11 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: xiaohuar)
2017-10-22 22:30:11 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'xiaohuar', 'SPIDER_MODULES': ['xiaohuar.spiders'], 'ROBOTSTXT_OBEY': True, 'NEWSPIDER_MODULE': 'xiaohuar.spiders'}
2017-10-22 22:30:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats']
2017-10-22 22:30:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-10-22 22:30:12 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-10-22 22:30:12 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-10-22 22:30:12 [scrapy.core.engine] INFO: Spider opened
2017-10-22 22:30:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-22 22:30:12 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-10-22 22:30:12 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/robots.txt> (referer: None)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/list-1-0.html> (referer: None)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170721/cb96f1b106b3db4a6bfcf3d2e880dea0.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170824/dcc166b0eba6a37e05424cfc29023121.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170916/7f78145b1ca162eb814fbc03ad24fbc1.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170919/2f728d0f110a21fea95ce13e0b010d06.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170819/9c3dfeef7e08cc0303ce233e4ddafa7f.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170917/715515e7fe1f1cb9fd388bbbb00467c2.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170628/f3d06ef49965aedbe18286a2f221fd9f.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170513/6121e3e90ff3ba4c9398121bda1dd582.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170516/6e295fe48c33245be858c40d37fb5ee6.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170707/f7ca636f73937e33836e765b7261f036.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170528/b352258c83776b9a2462277dec375d0c.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170527/4a7a7f1e6b69f126292b981c90110d0a.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170715/61110ba027f004fb503ff09cdee44d0c.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170520/dd21a21751e24a8f161792b66011688c.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170529/8140c4ad797ca01f5e99d09c82dd8a42.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170603/e55f77fb3aa3c7f118a46eeef5c0fbbf.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170529/e5902d4d3e40829f9a0d30f7488eab84.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170604/ec3794d0d42b538bf4461a84dac32509.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170603/c34b29f68e8f96d44c63fe29bf4a66b8.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170701/fb18711a6af87f30942d6a19f6da6b3e.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170619/e0456729d4dcbea569a1acbc6a47ab69.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.xiaohuar.com/d/file/20170626/0ab1d89f54c90df477a90aa533ceea36.jpg> (referer: http://www.xiaohuar.com/list-1-0.html)
2017-10-22 22:30:15 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-22 22:30:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 8785,
 'downloader/request_count': 24,
 'downloader/request_method_count/GET': 24,
 'downloader/response_bytes': 2278896,
 'downloader/response_count': 24,
 'downloader/response_status_count/200': 24,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 10, 22, 14, 30, 15, 892287),
 'log_count/DEBUG': 25,
 'log_count/INFO': 7,
 'request_depth_max': 1,
 'response_received_count': 24,
 'scheduler/dequeued': 23,
 'scheduler/dequeued/memory': 23,
 'scheduler/enqueued': 23,
 'scheduler/enqueued/memory': 23,
 'start_time': datetime.datetime(2017, 10, 22, 14, 30, 12, 698874)}
2017-10-22 22:30:15 [scrapy.core.engine] INFO: Spider closed (finished)
```
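Note in the log that robots.txt is fetched before any list page: the generated settings.py enables `ROBOTSTXT_OBEY` by default (the overridden-settings line above confirms it is `True`). If the site's robots.txt disallowed these paths, the crawl would download nothing, so this switch is worth knowing about:

```python
# settings.py (project settings) -- generated default
ROBOTSTXT_OBEY = True  # set to False to skip robots.txt checks
```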
Saving images
When saving images on Windows, the backslash ("\") in the path must be escaped.
```
>>> import requests
>>> r = requests.get("https://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1508693697147&di=23eb655d8e450f84cf39453bc1029bc0&imgtype=0&src=http%3A%2F%2Fb.hiphotos.baidu.com%2Fimage%2Fpic%2Fitem%2Fc9fcc3cec3fdfc038b027f7bde3f8794a5c226fe.jpg")
>>> open("E:\xiaohua_img\01.jpg",'wb').write(r.content)
  File "<stdin>", line 1
SyntaxError: (unicode error) 'unicodeescape' codec can't decode by
>>> open("E:\\xiaohua_img\1.jpg",'wb').write(r.content)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 22] Invalid argument: 'E:\\xiaohua_img\x01.jpg'
>>> open("E:\\xiaohua_img\\1.jpg",'wb').write(r.content)
34342
```
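The errors in the session above come from Python's string escape rules: `\0` and `\1` inside a normal string literal are octal escapes, not a backslash followed by a digit. A minimal illustration:

```python
# Escaped backslashes and a raw string produce the same path
p_escaped = 'E:\\xiaohua_img\\1.jpg'
p_raw = r'E:\xiaohua_img\1.jpg'
print(p_escaped == p_raw)  # True

# '\1' inside a normal string is the octal escape for chr(1),
# which is why open() saw the invalid path 'E:\\xiaohua_img\x01.jpg'
broken = 'E:\\xiaohua_img\1.jpg'
print('\x01' in broken)  # True
```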
Reposted from: https://www.cnblogs.com/yan-lei/p/7712521.html