scrapy-redis distributed spider: deduplication and asynchronous writes to a MySQL database (example code)
First, create a spider file, dgrds.py:
# -*- coding: utf-8 -*-
import scrapy
from scrapy_redis.spiders import RedisSpider


class DgrdsSpider(RedisSpider):
    name = 'dgrds'
    # The spider idles until start URLs are pushed onto this Redis list
    redis_key = 'dgrds:start_urls'

    def parse(self, response):
        # Enumerate a range of recipe IDs and request each detail page
        for i in range(2499930, 2499940):
            yield scrapy.Request('https://www.douguo.com/cookbook/' + str(i) + '.html',
                                 callback=self.parse2)

    def parse2(self, response):
        if response.status == 200:
            title = response.css('.rinfo h1.title::text').get('')
            view_nums = response.css('.vcnum span:first-of-type::text').get('')
            collection_nums = response.css('.vcnum .collectnum::text').get('')
            user_name = response.css('.author-info .nickname::text').get('')
            user_image = response.css('.author-img img::attr(src)').get('')
            tags = ''
            # .css() returns a (possibly empty) SelectorList, never None,
            # so iterate directly instead of testing "is not None"
            for tg in response.css('.fenlei span'):
                tags += ';' + tg.css('a::text').get('')
            isvideo = response.css('#banner + a')
            if isvideo:
                next_url = response.css('#banner + a::attr(href)').get('')
                recipe_id = next_url.replace('/recipevideo/', '')
                basic_url = 'https://www.douguo.com/cookbook/' + recipe_id + '.html'
                item = {
                    'cate': '',
                    'title': title,
                    'view_nums': view_nums,
                    'collection_nums': collection_nums,
                    'user_name': user_name,
                    'user_image': user_image,
                    'tags': tags,
                    'basic_url': basic_url
                }
                # Pass the partial item along via meta to the video page
                yield scrapy.Request(response.urljoin(next_url), meta=item,
                                     callback=self.parse4)

    def parse4(self, response):
        # Grab the video URL from the embedded player, then emit the full item
        url = response.css('embed::attr(src)').get('')
        item = {
            'cate': response.meta['cate'],
            'title': response.meta['title'],
            'view_nums': response.meta['view_nums'],
            'collection_nums': response.meta['collection_nums'],
            'user_name': response.meta['user_name'],
            'user_image': response.meta['user_image'],
            'tags': response.meta['tags'],
            'basic_url': response.meta['basic_url'],
            'video_url': url
        }
        yield item
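The spider yields plain dicts, which Scrapy accepts as items. If you prefer declared fields, a minimal typed equivalent would be the following sketch (field names taken from the dicts above; the class name RecipeItem is illustrative, not from the original post):

# Optional: a typed Item equivalent of the dicts yielded above (a sketch;
# the original code uses plain dicts, which also works with the pipeline).
import scrapy

class RecipeItem(scrapy.Item):
    cate = scrapy.Field()
    title = scrapy.Field()
    view_nums = scrapy.Field()
    collection_nums = scrapy.Field()
    user_name = scrapy.Field()
    user_image = scrapy.Field()
    tags = scrapy.Field()
    basic_url = scrapy.Field()
    video_url = scrapy.Field()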
Next, modify the settings.py configuration file:
# -*- coding: utf-8 -*-
# Scrapy settings for dgredis project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'dgredis'
SPIDER_MODULES = ['dgredis.spiders']
NEWSPIDER_MODULE = 'dgredis.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'dgredis (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'dgredis.middlewares.DgredisSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'dgredis.middlewares.DgredisDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'dgredis.pipelines.DgredisPipeline': 300,
'scrapy_redis.pipelines.RedisPipeline': 400,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
MYSQL_HOST = '127.0.0.1'
MYSQL_PORT = 3306  # pymysql expects an integer port
MYSQL_USER = 'root'
MYSQL_PASS = ''
MYSQL_DB = 'test'
HTTPERROR_ALLOWED_CODES = [404, 301]
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
#SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.SpiderPriorityQueue"
#SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.SpiderQueue"
#SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.SpiderStack"
LOG_LEVEL = 'DEBUG'
# Introduce an artificial delay to make use of parallelism and to avoid
# hammering the target site.
DOWNLOAD_DELAY = 1
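Note that nothing above configures the Redis connection itself: scrapy-redis falls back to its defaults (localhost:6379) when no Redis settings are given. To point every worker in the cluster at a shared Redis instance, you can add a connection setting such as the following (the address is an illustrative assumption):

# Assumed addition: shared Redis instance for all workers
# (scrapy-redis defaults to localhost:6379 when this is omitted)
REDIS_URL = 'redis://127.0.0.1:6379'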
Then modify the pipeline file, pipelines.py:
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql
from twisted.enterprise import adbapi


class DgredisPipeline:
    def __init__(self, dbpool):
        self.dbpool = dbpool

    @classmethod
    def from_settings(cls, settings):
        adbparams = dict(
            host=settings['MYSQL_HOST'],
            port=settings['MYSQL_PORT'],
            db=settings['MYSQL_DB'],
            user=settings['MYSQL_USER'],
            password=settings['MYSQL_PASS'],
            charset='utf8',
            cursorclass=pymysql.cursors.DictCursor
        )
        # Twisted's adbapi runs pymysql in a thread pool for async access
        dbpool = adbapi.ConnectionPool('pymysql', **adbparams)
        return cls(dbpool)

    def process_item(self, item, spider):
        # Hand the insert off to the connection pool (runs in a worker thread)
        query = self.dbpool.runInteraction(self.do_insert, item)
        # Use addErrback (not addCallback) so failures reach the handler
        query.addErrback(self.handle_error)
        return item

    def do_insert(self, cursor, item):
        # No explicit commit is needed: runInteraction commits automatically
        insert_sql = """
            INSERT INTO douguoaa(title, user_name, user_image, view_nums, collection_nums,
                                 basic_url, video_url, tags, cate_name)
            VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)
        """
        cursor.execute(insert_sql, (
            item['title'], item['user_name'], item['user_image'], item['view_nums'],
            item['collection_nums'], item['basic_url'], item['video_url'],
            item['tags'], item['cate']))

    def handle_error(self, failure):
        # Log failed inserts instead of silently dropping them
        print(failure)
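The INSERT statement assumes a douguoaa table already exists. The original post does not include the schema, so the following CREATE TABLE is only a sketch with assumed column types, matching the column names used in the pipeline:

-- Assumed schema sketch; column types are illustrative, not from the post
CREATE TABLE IF NOT EXISTS douguoaa (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255),
    user_name VARCHAR(255),
    user_image VARCHAR(512),
    view_nums VARCHAR(64),
    collection_nums VARCHAR(64),
    basic_url VARCHAR(512),
    video_url VARCHAR(512),
    tags VARCHAR(512),
    cate_name VARCHAR(255)
) DEFAULT CHARSET=utf8;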
After modifying these three files, the project supports distributed crawling, request deduplication, and asynchronous writes to the MySQL database.
cd into the project's spiders folder and run scrapy runspider dgrds.py. The spider starts up and idles, waiting for an instruction.
Then send the crawl instruction from redis-cli, i.e. supply the start_urls:

lpush dgrds:start_urls http://www.douguo.com

After pressing Enter, the spider above receives the instruction and begins crawling automatically.
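Equivalently, you can seed the queue from Python with the redis-py client, a minimal sketch assuming a local Redis server:

# Seed the spider's start queue programmatically instead of via redis-cli
# (assumes the redis-py package and Redis running on localhost)
import redis

r = redis.Redis(host='127.0.0.1', port=6379)
# Push a start URL onto the list the idle spider is blocked on
r.lpush('dgrds:start_urls', 'http://www.douguo.com')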
This is the basic spider file; a rule-based CrawlSpider works much the same way, you just need to define Rule objects, as in the sketch below.
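For reference, here is a minimal sketch of that rule-based variant (the link pattern and callback body are illustrative assumptions, not from the original post):

# A sketch of the rule-based variant using scrapy-redis's RedisCrawlSpider
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule
from scrapy_redis.spiders import RedisCrawlSpider

class DgrdsCrawlSpider(RedisCrawlSpider):
    name = 'dgrds_crawl'
    redis_key = 'dgrds_crawl:start_urls'
    rules = (
        # Follow recipe detail pages and hand each one to parse_item
        Rule(LinkExtractor(allow=r'/cookbook/\d+\.html'),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Illustrative only: extract the recipe title
        yield {'title': response.css('.rinfo h1.title::text').get('')}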
That's All
Thanks!
Original post: https://blog.csdn.net/muziduoxi/article/details/106422873