CrawlSpider

CrawlSpider is a subclass of Spider; Spider is the base spider class used in ordinary spider files.
- A subclass always offers at least as much functionality as its parent class, and usually more.

  • Purpose: designed specifically for full-site data crawling

    • i.e., crawling the data behind every page-number link under a given page
  • Basic usage
    • Create a project: scrapy startproject proName
    • Enter the project and create a CrawlSpider-based spider file (a sketch of the generated skeleton follows this list)
      • scrapy genspider -t crawl spiderName www.xxx.com
    • Run the project: scrapy crawl spiderName
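
For reference, the -t crawl template generates a spider skeleton roughly like the sketch below; the class name is derived from spiderName, and the exact boilerplate varies slightly between Scrapy versions:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class SpidernameSpider(CrawlSpider):
    name = 'spiderName'
    allowed_domains = ['www.xxx.com']
    start_urls = ['http://www.xxx.com/']

    # the template ships with one LinkExtractor/Rule pair to fill in
    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # fill the item with whatever fields you need
        item = {}
        return item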

Notes

  • One link extractor pairs with one rule parser; a spider may declare several such pairs (see the sketch after this list)
  • To implement deep (detail-page) crawling, CrawlSpider needs to be combined with scrapy.Request()
  • link = LinkExtractor(allow=r'')  # with an empty allow and follow=True, every link on the site gets extracted
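
As a sketch of the "one link extractor per rule parser" point, a spider can declare several extractor/rule pairs side by side; the domain, regexes, and callback names below are hypothetical placeholders:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class MultiRuleSpider(CrawlSpider):
    name = 'multiRule'
    start_urls = ['http://www.xxx.com/']

    # each LinkExtractor is paired with exactly one Rule
    link_pages = LinkExtractor(allow=r'page=\d+')     # page-number links (hypothetical regex)
    link_detail = LinkExtractor(allow=r'detail/\d+')  # detail-page links (hypothetical regex)

    rules = (
        Rule(link_pages, callback='parse_page', follow=True),
        Rule(link_detail, callback='parse_detail', follow=False),
    )

    def parse_page(self, response):
        print('page:', response.url)

    def parse_detail(self, response):
        print('detail:', response.url)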

Basic crawling

Spider source code

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class TestSpider(CrawlSpider):
    name = 'test'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    # Link extractor: extracts links (urls) from the page according to the rule in allow
    # allow = "regex": the rule used to extract links
    link = LinkExtractor(allow=r'list8\d+\.html')  # instantiate a LinkExtractor object
    # link = LinkExtractor(allow=r'')  # with an empty allow and follow=True, every link on the site gets extracted

    rules = (
        # Each Rule is a rule parser: it receives the links from the link extractor,
        # sends requests to them, then parses the responses with the specified callback
        Rule(link, callback='parse_item', follow=True),
    )
    # follow=True: keep applying the link extractor to the page-number pages
    # reached through the links it has already extracted

    def parse_item(self, response):
        print(response)
        # parse data from the response here

Deep crawling

Implementing deep crawling with CrawlSpider

  • Common approach: CrawlSpider + manual Spider-style requests for deep crawling (the core hand-off is sketched after this list)

  • Create a project: scrapy startproject proName

  • Enter the project and create a CrawlSpider-based spider file

    • scrapy genspider -t crawl spiderName www.xxx.com
  • Run the project: scrapy crawl spiderName
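
The core of this hybrid approach is that the Rule still handles the page-number links, while the list-page callback builds the detail-page requests itself and hands the partially filled item to the detail callback through Request.meta. A minimal sketch of just that hand-off; URLs and field values are placeholders, and the complete spider is shown further below:

import scrapy
from scrapy.spiders import CrawlSpider


class HandOffSketchSpider(CrawlSpider):
    name = 'handoff_sketch'

    def parse_item(self, response):
        # fields available on the list page (placeholder dict; the real code uses an Item class)
        item = {'title': 'placeholder title'}
        detail_url = 'http://www.xxx.com/detail/1'  # placeholder detail-page url
        # manual request: carries the partially filled item to the detail callback via meta
        yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        item = response.meta['item']  # recover the item passed from parse_item
        item['content'] = 'placeholder content parsed from the detail page'
        yield item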

settings.py

# Scrapy settings for sunPro project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sunPro'

SPIDER_MODULES = ['sunPro.spiders']
NEWSPIDER_MODULE = 'sunPro.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"

# Obey robots.txt rules
ROBOTSTXT_OBEY = False
LOG_LEVEL = "ERROR"

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'sunPro.middlewares.SunproSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'sunPro.middlewares.SunproDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'sunPro.pipelines.SunproPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class SunproItem(scrapy.Item):
    title = scrapy.Field()
    status = scrapy.Field()


class SunProItemDetail(scrapy.Item):
    content = scrapy.Field()

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class SunproPipeline:
    def process_item(self, item, spider):
        if item.__class__.__name__ == 'SunproItem':
            title = item['title']
            status = item['status']
            print(title + ":" + status)
        else:
            content = item['content']
            print(content)
        return item
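
Dispatching on item.__class__.__name__ works, but an equivalent check can be written with isinstance; a minimal sketch, assuming the SunproItem and SunProItemDetail classes from items.py above:

from sunPro.items import SunproItem, SunProItemDetail


class SunproPipeline:
    def process_item(self, item, spider):
        # dispatch on the concrete item class instead of its name string
        if isinstance(item, SunproItem):
            print(item['title'] + ":" + item['status'])
        elif isinstance(item, SunProItemDetail):
            print(item['content'])
        return item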

sun.py (spider source file)

This approach does crawl the data, but at persistence time the title and content cannot be matched one-to-one, so we need to issue the detail-page requests manually instead (see the next section).

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from sunPro.items import SunproItem, SunProItemDetail


class TestSpider(CrawlSpider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/political/index/politicsNewest?id=1&page=1']

    # Link extractor: extracts links (urls) from the page according to the rule in allow
    # allow = "regex": the rule used to extract links
    link = LinkExtractor(allow=r'id=1&page=\d+')  # instantiate a LinkExtractor object
    link_detail = LinkExtractor(allow=r'dindex\?id=\d+')  # detail-page urls
    # link = LinkExtractor(allow=r'')  # with an empty allow and follow=True, every link on the site gets extracted

    rules = (
        # Each Rule is a rule parser: it receives the links from its link extractor,
        # sends requests to them, then parses the responses with the specified callback
        Rule(link, callback='parse_item', follow=True),
        Rule(link_detail, callback='parse_detail'),
    )
    # follow=True: keep applying the link extractor to the page-number pages
    # reached through the links it has already extracted

    # title & status
    def parse_item(self, response):
        li_list = response.xpath('/html/body/div[2]/div[3]/ul[2]/li')
        for li in li_list:
            title = li.xpath('./span[3]/a/text()').extract_first()
            status = li.xpath('./span[2]/text()').extract_first()
            item = SunproItem()
            item['title'] = title
            item['status'] = status
            yield item

    # Deep crawl: scrape the data on the detail pages
    # 1. capture the detail-page urls
    # 2. send requests to the detail-page urls and extract their data
    def parse_detail(self, response):
        content = response.xpath('/html/body/div[3]/div[2]/div[2]/div[2]/pre/text()').extract_first()
        item = SunProItemDetail()
        item['content'] = content
        yield item

# The spider submits two different kinds of item to the pipeline, so the pipeline
# has to check which kind it received.
# This approach crawls the data, but title and content cannot be matched one-to-one
# at persistence time; we need to issue the detail-page requests manually instead.

CrawlSpider + Spider: full-site deep crawling

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class SunproItem(scrapy.Item):
    title = scrapy.Field()
    status = scrapy.Field()
    content = scrapy.Field()

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class SunproPipeline:
    def process_item(self, item, spider):
        print(item)
        return item
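
Here the pipeline only prints the item. For actual persistence, a minimal sketch that writes each item as a JSON line to a hypothetical sun.json file (assuming the fields defined in items.py above) could look like this:

import json

from itemadapter import ItemAdapter


class SunproPipeline:
    def open_spider(self, spider):
        # hypothetical output file, opened once when the spider starts
        self.fp = open('sun.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # ItemAdapter exposes the item as a dict regardless of its concrete type
        self.fp.write(json.dumps(ItemAdapter(item).asdict(), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.fp.close()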

sun.py (spider source file)

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from sunPro.items import SunproItem


class TestSpider(CrawlSpider):
    name = 'sun'
    # allowed_domains = ['www.xxx.com']
    start_urls = ['http://wz.sun0769.com/political/index/politicsNewest?id=1&page=1']

    # Link extractor: extracts links (urls) from the page according to the rule in allow
    # allow = "regex": the rule used to extract links
    link = LinkExtractor(allow=r'id=1&page=\d+')  # instantiate a LinkExtractor object
    # link_detail = LinkExtractor(allow=r'dindex\?id=\d+')  # detail-page urls

    rules = (
        # Rule parser: receives the links from the link extractor, sends requests to them,
        # then parses the responses with the specified callback
        Rule(link, callback='parse_item', follow=True),
        # Rule(link_detail, callback='parse_detail'),
    )
    # follow=True: keep applying the link extractor to the page-number pages
    # reached through the links it has already extracted

    # title & status
    def parse_item(self, response):
        li_list = response.xpath('/html/body/div[2]/div[3]/ul[2]/li')
        for li in li_list:
            title = li.xpath('./span[3]/a/text()').extract_first()
            status = li.xpath('./span[2]/text()').extract_first()
            # detail-page url, built manually instead of relying on a second Rule
            detail_url = "http://wz.sun0769.com" + li.xpath('./span[3]/a/@href').extract_first()
            item = SunproItem()
            item['title'] = title
            item['status'] = status
            # manual request: pass the item along to the detail callback via meta
            yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        content = response.xpath('/html/body/div[3]/div[2]/div[2]/div[2]/pre/text()').extract_first()
        item = response.meta['item']
        item['content'] = content
        yield item
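
As a side note, in Scrapy 1.7+ the same item hand-off can be done with cb_kwargs instead of meta; a minimal sketch of only the two callbacks involved (everything else in the spider above stays unchanged, and the URL/field values are placeholders):

import scrapy
from scrapy.spiders import CrawlSpider


class CbKwargsSketchSpider(CrawlSpider):
    name = 'cbkwargs_sketch'

    def parse_item(self, response):
        item = {'title': 'placeholder title'}       # placeholder for SunproItem()
        detail_url = 'http://www.xxx.com/detail/1'  # placeholder detail-page url
        # cb_kwargs delivers the item as a keyword argument of the callback
        yield scrapy.Request(url=detail_url, callback=self.parse_detail, cb_kwargs={'item': item})

    def parse_detail(self, response, item):
        item['content'] = 'placeholder content'     # placeholder for real parsing
        yield item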
