Sunshine Hotline Inquiry Platform (阳光热线问政平台)

URL: http://wz.sun0769.com/index.php/question/questionType?type=4&page=

Fields to scrape: post ID, complaint type, post title, post URL, department, status, username, and post time.
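The files below assume the standard Scrapy project skeleton. The import path sunwzSpider.items implies a project named sunwzSpider, presumably created with Scrapy's scaffolding commands (scrapy startproject sunwzSpider, then scrapy genspider sunwz wz.sun0769.com); the exact commands are an assumption, as the original write-up does not show them.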

1. items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class SunwzspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # Fields scraped from each complaint post: post ID, complaint type,
    # post title, post URL, department, status, username, post time.

    # Post ID
    post_id = scrapy.Field()
    # Complaint type
    post_type = scrapy.Field()
    # Post title
    post_title = scrapy.Field()
    # Post URL
    post_url = scrapy.Field()
    # Department
    sector = scrapy.Field()
    # Status
    post_state = scrapy.Field()
    # Username ("网友")
    net_friend = scrapy.Field()
    # Post time
    post_time = scrapy.Field()
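Scrapy items behave like dicts, which is what the pipeline later relies on via dict(item). A minimal sketch of that behavior (the field value here is made up for illustration):

from sunwzSpider.items import SunwzspiderItem

item = SunwzspiderItem()
item["post_id"] = "191966"   # hypothetical value, for illustration only
print(dict(item))            # {'post_id': '191966'}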

2. spiders/sunwz.py

# -*- coding: utf-8 -*-
import scrapy
from sunwzSpider.items import SunwzspiderItem


class SunwzSpider(scrapy.Spider):
    name = 'sunwz'
    allowed_domains = ['wz.sun0769.com']
    url = "http://wz.sun0769.com/index.php/question/questionType?type=4&page="
    offset = 0
    start_urls = [url + str(offset)]

    def parse(self, response):
        table = response.xpath("//table[@width='98%']")[0]
        trs = table.xpath("./tr")
        # Flag: crawl the next page only if this page yielded any rows
        next_flag = False
        for tr in trs:
            next_flag = True
            try:
                item = SunwzspiderItem()
                # Post ID
                post_id = tr.xpath("./td/text()").extract()[0]
                td2 = tr.xpath("./td")[1]
                # Complaint type
                post_type = td2.xpath("./a/text()").extract()[0]
                # Post title
                post_title = td2.xpath("./a/text()").extract()[1]
                # Post URL
                post_url = td2.xpath("./a/@href").extract()[1]
                # Department
                sector = td2.xpath("./a/text()").extract()[2]
                td3 = tr.xpath("./td")[2]
                # Status
                post_state = td3.xpath("./span/text()").extract()[0]
                # Username
                net_friend = tr.xpath("./td/text()").extract()[3]
                # Post time
                post_time = tr.xpath("./td/text()").extract()[4]

                item["post_id"] = post_id
                item["post_type"] = post_type
                item["post_title"] = post_title
                item["post_url"] = post_url
                item["sector"] = sector
                item["post_state"] = post_state
                item["net_friend"] = net_friend
                item["post_time"] = post_time

                yield item
            except IndexError:
                # Header and separator rows lack these cells; skip them
                pass

        # Each listing page holds 30 posts; request the next page if this one had rows
        if next_flag:
            self.offset += 30
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
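The XPath indexing above assumes each data row's second <td> holds three links in order: complaint type, title, and department. A minimal, self-contained check with scrapy.Selector; the HTML here is hand-written to mirror that structure, not captured from the live site:

from scrapy.selector import Selector

# Illustrative row matching the structure parse() expects
html = """
<table width="98%">
<tr>
<td>191966</td>
<td><a href="?type=4">投诉</a><a href="/html/question/201708/191966.shtml">标题内容</a><a>某某局</a></td>
<td><span>处理中</span></td>
<td>网友A</td>
<td>2017-08-01</td>
</tr>
</table>
"""
tr = Selector(text=html).xpath("//table[@width='98%']/tr")[0]
td2 = tr.xpath("./td")[1]
print(td2.xpath("./a/text()").extract())  # ['投诉', '标题内容', '某某局']
print(td2.xpath("./a/@href").extract())   # index 1 is the title's link, as the spider assumes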

3. pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json


class SunwzspiderPipeline(object):
    def __init__(self):
        self.file = open("阳光问政平台.json", "w", encoding="utf-8")
        self.first_flag = True

    def process_item(self, item, spider):
        # Write "[" before the first item and "," before every later one,
        # so the output file ends up as a single valid JSON array
        if self.first_flag:
            self.first_flag = False
            content = "[\n" + json.dumps(dict(item), ensure_ascii=False)
        else:
            content = ",\n" + json.dumps(dict(item), ensure_ascii=False)
        self.file.write(content)
        return item

    def close_spider(self, spider):
        self.file.write("\n]")
        self.file.close()
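The pipeline builds the JSON array by hand, using first_flag to decide between the opening bracket and a comma separator. As an alternative sketch, Scrapy's built-in JsonItemExporter manages the brackets and separators itself (note it writes bytes, so the file must be opened in binary mode):

from scrapy.exporters import JsonItemExporter

class JsonExportPipeline(object):
    def open_spider(self, spider):
        self.file = open("阳光问政平台.json", "wb")
        self.exporter = JsonItemExporter(self.file, ensure_ascii=False)
        self.exporter.start_exporting()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

    def close_spider(self, spider):
        self.exporter.finish_exporting()
        self.file.close()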

4. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for sunwzSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'sunwzSpider'

SPIDER_MODULES = ['sunwzSpider.spiders']
NEWSPIDER_MODULE = 'sunwzSpider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'sunwzSpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 2
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36'
#   'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'sunwzSpider.middlewares.SunwzspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'sunwzSpider.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'sunwzSpider.pipelines.SunwzspiderPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
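With all four files in place, the crawl is started from the project root with the standard scrapy crawl sunwz command (the spider name follows from name = 'sunwz' above); the pipeline then writes every scraped post into 阳光问政平台.json as one JSON array.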

Reposted from: https://www.cnblogs.com/mayi0312/p/7270286.html
