Note: the pipeline-based file saving in this article may be slightly problematic; you can disable the pipeline and export the data directly from the terminal instead.
Save as JSON:
scrapy crawl db -o ./douban.json
Save as CSV:
scrapy crawl db -o ./douban.csv
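For reference, the same exports can be configured once in settings.py instead of being passed on the command line, via Scrapy's FEEDS setting (available in Scrapy 2.1 and later; older versions use FEED_FORMAT/FEED_URI). A minimal sketch:

# settings.py -- equivalent of `scrapy crawl db -o ./douban.json`
FEEDS = {
    './douban.json': {
        'format': 'json',
        'encoding': 'utf-8',
    },
}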
spider.py

import scrapy
from douban.items import DoubanItem
class DbSpider(scrapy.Spider):
    name = 'db'
    allowed_domains = ['movie.douban.com']
    start_urls = ['https://movie.douban.com/top250']

    def parse(self, response):
        list_li = response.xpath('//*[@id="content"]/div/div[1]/ol/li')
        for item in list_li:
            douban_item = DoubanItem()
            # serial number
            douban_item["serial_number"] = item.xpath('./div/div[1]/em/text()').extract_first()
            # movie title
            douban_item["movie_name"] = item.xpath('./div/div[2]/div[1]/a/span[1]/text()').extract_first()
            # movie introduction (strip whitespace and non-breaking spaces)
            douban_item["introduce"] = item.xpath('./div/div[2]/div[2]/p/text()').extract_first().strip().replace("\xa0", "")
            # star rating
            douban_item["star"] = item.xpath('./div/div[2]/div[2]/div/span[2]/text()').extract_first()
            # number of ratings
            douban_item["evaluate"] = item.xpath('./div/div[2]/div[2]/div/span[4]/text()').extract_first()
            # movie description (the one-line quote)
            douban_item["describe"] = item.xpath('./div/div[2]/div[2]/p[2]/span/text()').extract_first()
            # yield a plain dict so the JSON pipeline can serialize it directly
            items = {
                "序号": douban_item["serial_number"],
                "电影名称": douban_item["movie_name"],
                "电影介绍": douban_item["introduce"],
                "星级": douban_item["star"],
                "评论数": douban_item["evaluate"],
                "电影描述": douban_item["describe"],
            }
            print(items)
            yield items
        # pagination: queue a request for every link in the paginator
        next_url = response.xpath('//*[@id="content"]/div/div[1]/div[2]/a/@href').extract()
        for i in next_url:
            yield scrapy.Request("https://movie.douban.com/top250" + i, callback=self.parse)
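Before wiring XPaths like these into parse(), it helps to verify them interactively in scrapy shell. A quick check (the USER_AGENT override mirrors the one in settings.py, since Douban tends to reject Scrapy's default user agent):

scrapy shell -s USER_AGENT="Mozilla/5.0" 'https://movie.douban.com/top250'
>>> li = response.xpath('//*[@id="content"]/div/div[1]/ol/li')
>>> len(li)                      # expect 25 entries per page
>>> li[0].xpath('./div/div[2]/div[1]/a/span[1]/text()').extract_first()

Also note that parse() follows every link in the paginator, including "previous page" links on later pages; this is safe because Scrapy's built-in duplicate filter drops requests for URLs it has already seen.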

items.py

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class DoubanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # serial number
    serial_number = scrapy.Field()
    # movie title
    movie_name = scrapy.Field()
    # movie introduction
    introduce = scrapy.Field()
    # star rating
    star = scrapy.Field()
    # number of ratings
    evaluate = scrapy.Field()
    # movie description
    describe = scrapy.Field()
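For reference, a scrapy.Item behaves like a dict whose keys are restricted to the declared fields; assigning an undeclared key raises KeyError, which catches typos early. A quick illustration:

>>> from douban.items import DoubanItem
>>> item = DoubanItem()
>>> item["movie_name"] = "肖申克的救赎"
>>> item["star"] = "9.7"
>>> dict(item)
{'movie_name': '肖申克的救赎', 'star': '9.7'}
>>> item["rating"] = "9.7"     # not a declared field
KeyError: 'DoubanItem does not support field: rating'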

pipelines.py

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
import json


class DoubanPipeline:
    fp = None

    def open_spider(self, spider):
        # open the output file once, when the spider starts
        self.fp = open('./douban.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # write each scraped record as one JSON line
        json_item = json.dumps(item, ensure_ascii=False)
        self.fp.write(json_item + "\n")
        print(item)
        return item

    def close_spider(self, spider):
        self.fp.close()
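As written, process_item() only works because the spider yields plain dicts; if it yielded DoubanItem objects instead, json.dumps() would raise TypeError (scrapy.Item is not JSON-serializable), which is likely the save problem mentioned at the top. A minimal sketch of a more robust version, using the ItemAdapter the template already imports:

    def process_item(self, item, spider):
        # ItemAdapter wraps plain dicts and scrapy.Item objects alike
        data = ItemAdapter(item).asdict()
        self.fp.write(json.dumps(data, ensure_ascii=False) + "\n")
        return item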

settings.py

# Scrapy settings for douban project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'douban'

SPIDER_MODULES = ['douban.spiders']
NEWSPIDER_MODULE = 'douban.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

LOG_LEVEL = "ERROR"

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 0.5
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'douban.middlewares.DoubanSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'douban.middlewares.DoubanDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'douban.pipelines.DoubanPipeline': 300,
}
FEED_EXPORT_ENCODING = "utf-8"
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
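Instead of the fixed DOWNLOAD_DELAY = 0.5 above, the commented-out AutoThrottle extension can adapt the delay to the server's response times. A minimal sketch of enabling it (the values are illustrative, not tuned):

# adaptive politeness instead of a fixed DOWNLOAD_DELAY
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 10
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0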
