Table of Contents

Preface

Environment Setup

Implementation

Creating the Project

Defining the Item Entity

Keyword Extraction Tool

Building the Spider

Building the Middleware

Creating a Custom Pipeline

settings Configuration

Running the Main Program

Results

Summary


Preface

Following up on my previous post: How to crawl the CSDN site-wide hot-list titles and count keyword frequencies | Crawler case study_阿良的博客-CSDN博客

This time I re-implemented the same thing with the Scrapy framework. The underlying way of fetching the page source is identical; Scrapy just makes the whole project more systematic. Below I will also point out the issues you need to watch out for.

GitHub repository address: GitHub link for this project

Environment Setup

Install scrapy

pip install scrapy -i https://pypi.douban.com/simple

Install selenium

pip install selenium -i https://pypi.douban.com/simple

Install jieba

pip install jieba -i https://pypi.douban.com/simple

IDE: PyCharm

Download the matching version of the Google Chrome driver: google chrome driver download page

Check your browser version first, then download the corresponding driver version.
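If you want to confirm the driver and browser versions match before writing any spider code, a minimal standalone check like the following can be used. This is only a sketch using the same Selenium 3 style arguments as the spider later in this article; the chromedriver path is just an example and must point at your own download:

# quick sanity check that chromedriver can drive your installed Chrome
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument('--headless')
# example path only; replace with where you unpacked chromedriver
browser = webdriver.Chrome(chrome_options=chrome_options,
                           executable_path="E:\\chromedriver_win32\\chromedriver.exe")
browser.get('https://blog.csdn.net/rank/list')
print(browser.title)  # a version mismatch usually raises SessionNotCreatedException instead
browser.quit()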

Implementation

Now let's get started.

Creating the Project

Use the scrapy command to create the project.

scrapy startproject csdn_hot_words

The generated project structure is the same as the official layout.
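For reference, the generated layout plus the two pieces added by hand later in this article looks roughly like this (main.py only needs to be runnable from the project root; I show it next to scrapy.cfg here):

csdn_hot_words/
├── scrapy.cfg
├── main.py                      # added by hand, see "Running the Main Program"
└── csdn_hot_words/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    ├── tools/
    │   ├── __init__.py
    │   └── analyse_sentence.py  # added by hand, see "Keyword Extraction Tool"
    └── spiders/
        ├── __init__.py
        └── csdn.py              # the spider built below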

Defining the Item Entity

Following the logic from the previous post, the item's main attribute is a dictionary mapping a title's keywords to their occurrence counts. The code is as follows:

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class CsdnHotWordsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    words = scrapy.Field()
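For intuition, a single populated item corresponds to one hot-list title; it would hold something along these lines (the keywords here are made up):

# purely illustrative contents of one item built from a single title
item = CsdnHotWordsItem()
item['words'] = {'scrapy': 1, 'selenium': 1, 'chrome': 1}  # keyword -> count within that one title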

Keyword Extraction Tool

Use jieba word segmentation to extract the keywords.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 23:47
# @Author  : 至尊宝
# @Site    :
# @File    : analyse_sentence.py

import jieba.analyse


def get_key_word(sentence):
    result_dic = {}
    words_lis = jieba.analyse.extract_tags(
        sentence, topK=3, withWeight=True, allowPOS=())
    for word, flag in words_lis:
        if word in result_dic:
            result_dic[word] += 1
        else:
            result_dic[word] = 1
    return result_dic
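A quick standalone sanity check of the helper could look like this; run it from the project root so the package import resolves. The sample title is made up, and the exact keywords jieba picks are not guaranteed:

# assumes csdn_hot_words.tools is importable (i.e. it has an __init__.py)
from csdn_hot_words.tools.analyse_sentence import get_key_word

print(get_key_word('Scrapy框架爬取CSDN热榜标题并统计关键词'))
# prints a dict of at most 3 keywords mapped to counts, e.g. {'Scrapy': 1, 'CSDN': 1, '关键词': 1}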

Building the Spider

The spider is initialized with a browser instance, which is used to render the dynamically loaded page.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 23:47
# @Author  : 至尊宝
# @Site    :
# @File    : csdn.py

import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

from csdn_hot_words.items import CsdnHotWordsItem
from csdn_hot_words.tools.analyse_sentence import get_key_word


class CsdnSpider(scrapy.Spider):
    name = 'csdn'
    # allowed_domains = ['blog.csdn.net']
    start_urls = ['https://blog.csdn.net/rank/list']

    def __init__(self):
        chrome_options = Options()
        chrome_options.add_argument('--headless')  # headless Chrome mode
        chrome_options.add_argument('--disable-gpu')
        chrome_options.add_argument('--no-sandbox')
        self.browser = webdriver.Chrome(chrome_options=chrome_options,
                                        executable_path="E:\\chromedriver_win32\\chromedriver.exe")
        self.browser.set_page_load_timeout(30)

    def parse(self, response, **kwargs):
        titles = response.xpath("//div[@class='hosetitem-title']/a/text()")
        for x in titles:
            item = CsdnHotWordsItem()
            item['words'] = get_key_word(x.get())
            yield item

Code notes

1. Headless Chrome is used here, so no browser window needs to be opened; everything runs in the background.

2. The path to the chromedriver executable must be filled in (a Selenium 4 variant is sketched after this list).

3. In the parse method, the xpath from my previous article is reused: grab the titles, run keyword extraction on each one, and build the item objects.
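The chrome_options= / executable_path= keyword arguments used in __init__ follow the Selenium 3 API. If your environment has Selenium 4 or newer, a rough equivalent of that __init__ body (same example path, not part of the original project) would be:

# Selenium 4 style construction inside __init__ (sketch)
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service

chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--no-sandbox')
service = Service(executable_path="E:\\chromedriver_win32\\chromedriver.exe")
self.browser = webdriver.Chrome(service=service, options=chrome_options)
self.browser.set_page_load_timeout(30)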

Building the Middleware

This is where the JavaScript scrolling code gets executed. The complete middleware code:

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

import time

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.chrome.options import Options

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter


class CsdnHotWordsSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class CsdnHotWordsDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        js = '''
            let height = 0
            let interval = setInterval(() => {
                window.scrollTo({
                    top: height,
                    behavior: "smooth"
                });
                height += 500
            }, 500);
            setTimeout(() => {
                clearInterval(interval)
            }, 20000);
        '''
        try:
            spider.browser.get(request.url)
            spider.browser.execute_script(js)
            time.sleep(20)
            return HtmlResponse(url=spider.browser.current_url, body=spider.browser.page_source,
                                encoding="utf-8", request=request)
        except TimeoutException as e:
            print('Timeout exception: {}'.format(e))
            spider.browser.execute_script('window.stop()')
        finally:
            spider.browser.close()

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
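A note on the design: because process_request returns an HtmlResponse built from the rendered page_source, Scrapy's own downloader is bypassed for these requests and parse receives the fully scrolled page. The fixed time.sleep(20) simply waits out the scrolling script; if you would rather wait only until the titles appear, an explicit wait is a common substitute. This is just a sketch to drop in place of time.sleep(20), not part of the original code:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait (up to 20s) until at least the first batch of hot-list titles is present
WebDriverWait(spider.browser, 20).until(
    EC.presence_of_element_located((By.XPATH, "//div[@class='hosetitem-title']/a"))
)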

Creating a Custom Pipeline

The pipeline aggregates the word frequencies and writes the final result to a file. The code is as follows:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class CsdnHotWordsPipeline:
    def __init__(self):
        self.file = open('result.txt', 'w', encoding='utf-8')
        self.all_words = []

    def process_item(self, item, spider):
        self.all_words.append(item)
        return item

    def close_spider(self, spider):
        key_word_dic = {}
        for y in self.all_words:
            print(y)
            for k, v in y['words'].items():
                if k.lower() in key_word_dic:
                    key_word_dic[k.lower()] += v
                else:
                    key_word_dic[k.lower()] = v
        word_count_sort = sorted(key_word_dic.items(),
                                 key=lambda x: x[1], reverse=True)
        for word in word_count_sort:
            self.file.write('{},{}\n'.format(word[0], word[1]))
        self.file.close()
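Each line of result.txt therefore has the form keyword,total_count, sorted from most to least frequent. A small helper to peek at the top entries afterwards (purely illustrative, not part of the project):

# print the ten most frequent keywords from the pipeline's output file
with open('result.txt', encoding='utf-8') as f:
    for line in list(f)[:10]:
        word, count = line.rstrip('\n').rsplit(',', 1)
        print(word, count)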

settings Configuration

A few adjustments need to be made to the configuration, as follows:

# Scrapy settings for csdn_hot_words project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'csdn_hot_words'

SPIDER_MODULES = ['csdn_hot_words.spiders']
NEWSPIDER_MODULE = 'csdn_hot_words.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'csdn_hot_words (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 30
# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'csdn_hot_words.middlewares.CsdnHotWordsSpiderMiddleware': 543,
}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'csdn_hot_words.middlewares.CsdnHotWordsDownloaderMiddleware': 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'csdn_hot_words.pipelines.CsdnHotWordsPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Running the Main Program

You could run the crawl through the scrapy command directly, but to make the logs easier to inspect I added a small main program.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/5 22:41
# @Author  : 至尊宝
# @Site    :
# @File    : main.py
from scrapy import cmdline

cmdline.execute('scrapy crawl csdn'.split())
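With main.py in place, the crawl can be started from the IDE or from a terminal in the project root (the directory containing scrapy.cfg); running the spider through the native command does the same thing:

python main.py
# or, equivalently
scrapy crawl csdn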

Results

Part of the execution log:

The final result is written to result.txt.

Summary

See, java is still the GOAT. I have no idea why the keyword 2021 also ranks so high, so I figured I'd add 2021 to my own title as well.

Posting the GitHub project address once more: GitHub link for this project

To be clear, the example in this article is for research and exploration only, not for any malicious attack.

A quote to share:

Ordinary people who haven't put in painstaking, back-breaking effort to actually accomplish something have no right to talk about talent at all.

——烽火戏诸侯《剑来》

If this article was useful to you, please don't be stingy with your likes. Thank you.
