Contents

  • I. CrawlSpider
  • II. A CrawlSpider example
    • 1. Directory structure
    • 2. wxapp_spider.py
    • 3. items.py
    • 4. pipelines.py
    • 5. settings.py
    • 6. start.py
  • III. Key takeaways

I. CrawlSpider

In practice we often only want to crawl the URLs that satisfy certain conditions, and that is exactly what CrawlSpider is for.

CrawlSpider inherits from Spider and adds one capability on top of it: you define rules for which URLs to crawl, and Scrapy automatically requests every URL that matches a rule, so there is no need to yield each Request by hand.
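
The two building blocks are LinkExtractor, which matches URLs on a page (typically through an allow regular expression), and Rule, which says what to do with the matched pages: callback names the method that parses them, and follow decides whether links found on them are run through the rules again. Below is a minimal sketch, separate from the project in the next section; the example.com site, the spider name demo, and the parse_item callback are made up purely for illustration.

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class DemoSpider(CrawlSpider):
    name = 'demo'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    rules = (
        # Every URL matching `allow` is requested automatically; `callback`
        # parses the response, `follow` controls whether links found on that
        # page are themselves matched against the rules again.
        Rule(LinkExtractor(allow=r'/article/\d+\.html'),
             callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        yield {'url': response.url, 'title': response.xpath('//title/text()').get()}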


II. A CrawlSpider example

1. Directory structure
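
Assuming the standard layout produced by scrapy startproject wxapp, with the start.py launcher from section 6 added next to scrapy.cfg, the project tree is roughly:

wxapp/
├── scrapy.cfg
├── start.py
└── wxapp/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── wxapp_spider.py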

2. wxapp_spider.py

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from wxapp.items import WxappItem


class WxappSpiderSpider(CrawlSpider):
    name = 'wxapp_spider'
    allowed_domains = ['wxapp-union.com']
    start_urls = ['http://www.wxapp-union.com/portal.php?mod=list&catid=1&page=1']

    rules = (
        # List pages: follow them so pagination keeps being crawled; no callback needed.
        Rule(LinkExtractor(allow=r'.+mod=list&catid=1&page=\d'), follow=True),
        # Article detail pages: parse them with parse_detail, do not follow further.
        Rule(LinkExtractor(allow=r'.+article-.+\.html'), callback='parse_detail', follow=False),
    )

    def parse_detail(self, response):
        title = response.xpath("//h1[@class='ph']/text()").get()
        author_p = response.xpath(".//p[@class='authors']")
        author = author_p.xpath("./a/text()").get()
        pub_time = author_p.xpath("./span/text()").get()
        article_content = response.xpath(".//td[@id='article_content']//text()").getall()
        content = "".join(article_content).strip()
        item = WxappItem(title=title, author=author, pub_time=pub_time, content=content)
        yield item
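
A note on the two rules: the first matches the paginated list pages (mod=list&catid=1&page=N) and only follows them, so the spider keeps walking the pagination without parsing those pages; the second matches the article detail pages (article-….html), hands them to parse_detail, and does not follow links from them. The callback is deliberately not named parse, because CrawlSpider uses the built-in parse method internally to apply the rules and it must not be overridden.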

3. items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class WxappItem(scrapy.Item):
    # define the fields for your item here like:
    title = scrapy.Field()
    author = scrapy.Field()
    pub_time = scrapy.Field()
    content = scrapy.Field()

4. pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.exporters import JsonLinesItemExporter


class WxappPipeline(object):
    def __init__(self):
        self.fp = open('wxjc.json', 'wb')
        self.exporter = JsonLinesItemExporter(self.fp, ensure_ascii=False, encoding='utf-8')

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        self.fp.close()
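
A design note: JsonLinesItemExporter writes each item to wxjc.json as soon as export_item is called, one JSON object per line, so nothing is buffered in memory and the file only needs to be closed in close_spider. The file is opened in binary mode ('wb') because the exporter writes encoded bytes, and ensure_ascii=False keeps the Chinese text readable in the output.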

5. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for wxapp project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'wxapp'

SPIDER_MODULES = ['wxapp.spiders']
NEWSPIDER_MODULE = 'wxapp.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'wxapp (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36',
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'wxapp.middlewares.WxappSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'wxapp.middlewares.WxappDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'wxapp.pipelines.WxappPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
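
Relative to the generated defaults, four things change here: ROBOTSTXT_OBEY is set to False, DOWNLOAD_DELAY = 1 throttles requests to the site, DEFAULT_REQUEST_HEADERS adds a browser User-Agent, and ITEM_PIPELINES enables WxappPipeline so scraped items reach the JSON exporter.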

6. start.py

from scrapy import cmdline

cmdline.execute("scrapy crawl wxapp_spider".split())
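
Running this script from the project root (the directory containing scrapy.cfg) is equivalent to typing scrapy crawl wxapp_spider on the command line; it only exists so the crawl can be started directly from an IDE.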

III. Key takeaways

  • Generate the spider from the CrawlSpider template: scrapy genspider -t crawl wxapp_spider wxapp-union.com.
  • LinkExtractor: use allow to write a regular expression that matches exactly the URLs you want and nothing else.
  • Rule: callback names the method that parses the matched pages; follow decides whether links found on those pages are fed back through the rules.
  • Only give a callback to pages whose data you actually extract; pure navigation pages (such as the paginated list) only need follow=True.
  • Do not override the parse method in a CrawlSpider, because it is used internally to apply the rules.
