Today I'm sharing the public-opinion monitoring system I built at work. It crawls posts from the various logistics-company bars on Baidu Tieba. I don't use Scrapy all that much, so the code may be a bit rough; please bear with me. Straight to the code:
The items.py file:

import scrapy


class LouzhuItem(scrapy.Item):
    """Thread starter (louzhu) information"""
    lzhu_name = scrapy.Field()     # thread starter's name
    lzhu_id = scrapy.Field()       # thread starter's ID
    lzhu_level = scrapy.Field()    # thread starter's level
    title = scrapy.Field()         # thread title
    title_id = scrapy.Field()      # ID of the thread title
    tieba_name = scrapy.Field()    # name of the tieba (forum)
    lcreate_time = scrapy.Field()  # time the thread was created
    tz_url = scrapy.Field()        # thread URL
    pages = scrapy.Field()         # total number of pages in the thread
    update_time = scrapy.Field()   # time this record was updated (crawl date)


class CengzhuItem(scrapy.Item):
    """Reply (cengzhu) information"""
    czhu_name = scrapy.Field()     # replier's name
    czhu_id = scrapy.Field()       # replier's ID
    czhu_level = scrapy.Field()    # replier's level
    content = scrapy.Field()       # text content of this floor
    content_id = scrapy.Field()    # ID of this floor's content
    tieba_name = scrapy.Field()    # name of the tieba (forum)
    ccreate_time = scrapy.Field()  # time this floor was created
    tiezi_num = scrapy.Field()     # number of replies to the post
    url = scrapy.Field()           # thread URL
    update_time = scrapy.Field()   # time this record was updated (crawl date)
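The pipeline shown further down also handles a Cn56NetItem for news articles crawled elsewhere in the same project. Its definition isn't included in this post; judging purely from the fields the pipeline reads, a minimal sketch would look something like the following (the comments are my own guesses):

class Cn56NetItem(scrapy.Item):
    """News article item; fields inferred from how MysqlPipeline reads them"""
    source = scrapy.Field()          # original source of the article
    title = scrapy.Field()           # article title
    web = scrapy.Field()             # site the article was crawled from
    column = scrapy.Field()          # column / channel
    author = scrapy.Field()          # author
    content = scrapy.Field()         # article body
    forward_amount = scrapy.Field()  # number of shares
    comment_amount = scrapy.Field()  # number of comments
    read_amount = scrapy.Field()     # number of reads
    url = scrapy.Field()             # article URL
    pub_time = scrapy.Field()        # publication time
    update_time = scrapy.Field()     # time this record was updated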
The middlewares.py file wasn't modified here; you can use the default one as-is.
The pipelines.py file handles inserting the data into the database. The code is as follows:
import re
import time
import json
import sqlalchemy
import requests
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from cn56_net.items import Cn56NetItem,LouzhuItem,CengzhuItem
from cn56_net.manage_db.models import LouZhuInfo, CengZhuInfo, News

# engine = create_engine("mysql+pymysql://root:root@ip:3306/test?charset=utf8")
engine = create_engine("mysql+pymysql://root:@ip:3306/test?charset=utf8")
Session = sessionmaker(bind=engine)


class MysqlPipeline(object):
    def __init__(self):
        self.session = Session()

    @staticmethod
    def filter_emoji(desstr, restr=''):
        # Strip emoji; not actually used here, but this approach works on Python 2
        try:
            co = re.compile('[\U00010000-\U0010ffff]')
        except re.error:
            co = re.compile(u'[\uD800-\uDBFF][\uDC00-\uDFFF]')
        return co.sub(restr, desstr)

    def process_item(self, item, spider):
        if isinstance(item, LouzhuItem):
            # safe_name = self.filter_emoji(item['lzhu_name'])
            # safe_title = self.filter_emoji(item['title'])
            safe_name = item['lzhu_name']
            safe_title = item['title']
            add_lz = LouZhuInfo(
                lz_name=safe_name,
                lz_id=item['lzhu_id'],
                lz_level=item['lzhu_level'],
                title=safe_title,
                tieba_name=item['tieba_name'],
                create_time=item['lcreate_time'],
                url=item['tz_url'],
                pages=item['pages'],
                title_id=item['title_id'],
                update_time=item['update_time'])
            self.session.add(add_lz)
            self.session.commit()
            self.session.close()
        elif isinstance(item, CengzhuItem):
            # safe_name = self.filter_emoji(item['czhu_name'])
            safe_name = item['czhu_name']
            # safe_content = self.filter_emoji(item['content'])
            safe_content = item['content']
            add_cz = CengZhuInfo(
                czhu_name=safe_name,
                czhu_id=item['czhu_id'],
                czhu_level=item['czhu_level'],
                content=safe_content,
                content_id=item['content_id'],
                tieba_name=item['tieba_name'],
                ccreate_time=item['ccreate_time'],
                tiezi_num=item['tiezi_num'],
                url=item['url'],
                update_time=item['update_time'])
            self.session.add(add_cz)
            self.session.commit()
            self.session.close()
        elif isinstance(item, Cn56NetItem):
            query = self.session.query(News)
            r = query.filter_by(url=item['url'], pub_time=item["pub_time"]).first()
            if r:
                print("\n\n", '*' * 10, 'NewsItem no changes, update', '*' * 10)
            else:
                create_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))
                add_news = News(
                    source=item["source"],
                    title=item["title"],
                    web=item["web"],
                    column=item["column"],
                    author=item["author"],
                    content=item["content"],
                    forward_amount=item["forward_amount"],
                    comment_amount=item["comment_amount"],
                    read_amount=item["read_amount"],
                    url=item["url"],
                    pub_time=item["pub_time"],
                    create_time=create_time,
                    update_time=item['update_time'])
                self.session.add(add_news)
                self.session.commit()
                self.session.close()
        return item
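The pipeline imports LouZhuInfo, CengZhuInfo and News from cn56_net.manage_db.models, which isn't shown in this post. As a rough sketch only, assuming plain SQLAlchemy declarative models whose column names mirror the keyword arguments used above (the column types and lengths are my assumptions), it could look like this:

from sqlalchemy import Column, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class LouZhuInfo(Base):
    """Thread starter table; columns mirror the kwargs passed by MysqlPipeline"""
    __tablename__ = 'louzhu_info'
    id = Column(Integer, primary_key=True, autoincrement=True)
    lz_name = Column(String(64))
    lz_id = Column(String(32))
    lz_level = Column(String(8))
    title = Column(String(255))
    title_id = Column(String(32))
    tieba_name = Column(String(64))
    create_time = Column(String(32))
    url = Column(String(255))
    pages = Column(String(8))
    update_time = Column(String(32))


class CengZhuInfo(Base):
    """Reply table; columns mirror the kwargs passed by MysqlPipeline"""
    __tablename__ = 'cengzhu_info'
    id = Column(Integer, primary_key=True, autoincrement=True)
    czhu_name = Column(String(64))
    czhu_id = Column(String(32))
    czhu_level = Column(String(8))
    content = Column(Text)
    content_id = Column(String(32))
    tieba_name = Column(String(64))
    ccreate_time = Column(String(32))
    tiezi_num = Column(String(8))
    url = Column(String(255))
    update_time = Column(String(32))

The News model used for Cn56NetItem would be built the same way and is omitted here. With models declared like this, Base.metadata.create_all(engine) creates the tables on first run.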
The changes needed in settings.py are:

# Throttle the crawl: wait 1 second between requests
DOWNLOAD_DELAY = 1

ITEM_PIPELINES = {
    # 'cn56_net.pipelines.Cn56NetPipeline': 300,
    'cn56_net.pipelines.MysqlPipeline': 300,
}

# MySQL connection settings (added by hand)
MYSQL_HOST = 'IP'
MYSQL_DBNAME = 'test'
MYSQL_USER = 'root'
MYSQL_PASSWORD = 'root'

# The crawl was getting 403 responses, so let that status code through
HTTPERROR_ALLOWED_CODES = [403]

# Shut the spider down on a timer
CLOSESPIDER_TIMEOUT = 82800  # stop the crawl after 23 hours
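One thing worth noting: as written, MysqlPipeline never reads the MYSQL_* settings above, because the connection string is hard-coded where the engine is created. If you'd rather have the pipeline pick them up from settings.py, a possible variant (not what the original code does, and reusing the imports already present in pipelines.py) looks like this:

class MysqlPipeline(object):
    def __init__(self, mysql_url):
        # Build the engine and session from the URL assembled out of the settings
        self.engine = create_engine(mysql_url)
        self.session = sessionmaker(bind=self.engine)()

    @classmethod
    def from_crawler(cls, crawler):
        # Assemble the connection URL from the MYSQL_* settings instead of hard-coding it
        s = crawler.settings
        mysql_url = "mysql+pymysql://%s:%s@%s:3306/%s?charset=utf8" % (
            s.get('MYSQL_USER'), s.get('MYSQL_PASSWORD'),
            s.get('MYSQL_HOST'), s.get('MYSQL_DBNAME'))
        return cls(mysql_url)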

With all of the above in place, we can move on to the spider itself. The spiders.py file is as follows:

# -*- coding: utf-8 -*-
import scrapy
import time
import demjson
import re
from cn56_net.items import LouzhuItem, CengzhuItem


class TiebaSpider(scrapy.Spider):
    name = 'tieba'
    allowed_domains = ['tieba.baidu.com']
    custom_settings = {
        'LOG_LEVEL': 'DEBUG',
        'LOG_FILE': 'tieba_log_%s.txt' % time.time(),
    }
    start_urls = [
        'http://tieba.baidu.com/f?ie=utf-8&kw=中通&fr=search',
        'http://tieba.baidu.com/f?ie=utf-8&kw=德邦物流&fr=search',
        'http://tieba.baidu.com/f?ie=utf-8&kw=圆通&fr=search',
        'http://tieba.baidu.com/f?ie=utf-8&kw=申通&fr=search',
        'http://tieba.baidu.com/f?ie=utf-8&kw=韵达&fr=search',
        'https://tieba.baidu.com/f?ie=utf-8&kw=顺丰&fr=search',
        'http://tieba.baidu.com/f?ie=utf-8&kw=天天快递&fr=search',
        'https://tieba.baidu.com/f?ie=utf-8&kw=ems&fr=search',
        'https://tieba.baidu.com/f?ie=utf-8&kw=百事汇通&fr=search',
    ]

    def parse(self, response):
        # Name of the tieba (forum)
        tieba_name1 = response.xpath('//div[@class="card_title"]/a/text()').extract_first().strip()
        tieba_name = {'tieba_name': tieba_name1}
        # Collect the URLs of the individual threads
        details_url = response.xpath('//div[@class="threadlist_title pull_left j_th_tit "]//a[@rel="noreferrer"]/@href').extract()
        for each_url in details_url:
            infos_url = 'http://tieba.baidu.com' + each_url
            yield scrapy.Request(url=infos_url, callback=self.details_infos, meta=tieba_name)
        # URL of the next page of the thread list
        next_url = response.xpath('//a[@class="next pagination-item "]/@href').extract_first()
        self.logger.info('Next page URL: %s' % next_url)
        if next_url:
            next_url = 'http:' + next_url
            yield scrapy.Request(url=next_url, callback=self.parse)

    def details_infos(self, details_response):
        tieba_name = details_response.meta
        # Name of the tieba this thread belongs to
        pre_tieba_name = tieba_name['tieba_name']
        print(pre_tieba_name)
        lou_zhu = LouzhuItem()
        ceng_zhu = CengzhuItem()
        # Extract the thread starter's information
        # lou_zhu_infos = details_response.xpath('//div[@class="l_post j_l_post l_post_bright noborder "]/@data-field').extract_first()
        lou_zhu_infos = details_response.xpath('//div[@id="j_p_postlist"]/div[1]/@data-field').extract_first()
        try:
            lou_zhu_infos = demjson.decode(lou_zhu_infos)
        except BaseException as e:
            self.logger.exception("json.loads error: %s" % details_response.url)
            return
        # Thread starter's ID
        lou_zhu_id = lou_zhu_infos['author']
        lou_zhu_id = lou_zhu_id.get('user_id')
        # Thread starter's name
        lou_zhu_name = lou_zhu_infos['author']
        lou_zhu_name = lou_zhu_name.get('user_name')
        # Thread starter's level
        lou_zhu_level = lou_zhu_infos['author']
        lou_zhu_level = lou_zhu_level.get('level_id')
        # Title of the thread
        # title = details_response.xpath('//h1//text()').extract_first()
        title = details_response.xpath('//div[@id="j_core_title_wrap"]//h3/text()|//div[@id="j_core_title_wrap"]//h1/text()').extract_first()
        # ID of the title content
        content = lou_zhu_infos['content']
        title_id = content.get('post_id')
        # Time the thread was created
        create_time = lou_zhu_infos['content']
        create_time = create_time.get('date')
        # Total number of reply pages in the thread
        lou_zhu_page = details_response.xpath('//span[@class="red"]//text()').extract()
        lou_zhu_page = lou_zhu_page[-1]
        lou_zhu_url = details_response.url
        # Populate the thread starter item
        lou_zhu['lzhu_name'] = lou_zhu_name
        lou_zhu['lzhu_id'] = lou_zhu_id
        lou_zhu['lzhu_level'] = lou_zhu_level
        lou_zhu['title'] = title
        lou_zhu['title_id'] = title_id
        lou_zhu['lcreate_time'] = create_time
        lou_zhu['tieba_name'] = pre_tieba_name
        lou_zhu['pages'] = lou_zhu_page
        lou_zhu['tz_url'] = lou_zhu_url
        lou_zhu['update_time'] = time.strftime('%Y-%m-%d')
        yield lou_zhu
        # Extract the replies (floors)
        reply_infos = details_response.xpath('//div[@class="l_post j_l_post l_post_bright  "]')
        if reply_infos:
            for each_cengzhu in reply_infos:
                # data-field attribute of this floor
                ceng_zhu_infos = each_cengzhu.xpath('./@data-field').extract_first()
                ceng_zhu_infos = demjson.decode(ceng_zhu_infos)
                # Replier's name
                ceng_zhu_name = ceng_zhu_infos['author']
                ceng_zhu_name = ceng_zhu_name.get('user_name') if ceng_zhu_name else ''
                # Replier's ID
                ceng_zhu_id = ceng_zhu_infos['author']
                ceng_zhu_id = ceng_zhu_id.get('user_id') if ceng_zhu_id else ''
                # Replier's level
                ceng_zhu_level = ceng_zhu_infos['author']
                ceng_zhu_level = ceng_zhu_level.get('level_id')
                # Number of comments on this floor
                ceng_zhu_comment_num = ceng_zhu_infos['content']
                ceng_zhu_comment_num = ceng_zhu_comment_num.get('comment_num')
                # Text content of this floor
                ceng_zhu_content = each_cengzhu.xpath('.//div[@class="d_post_content j_d_post_content  clearfix"]//text()').extract_first('')
                ceng_zhu_content = ceng_zhu_content.strip()     # strip leading/trailing whitespace
                # ID of this floor's content
                content_id = ceng_zhu_infos['content']
                content_id = content_id.get('post_id')
                # Time this floor was created
                ceng_zhu_time = ceng_zhu_infos['content']
                ceng_zhu_time = ceng_zhu_time.get('date')
                # URL of the thread
                ceng_zhu_adress = details_response.url
                # Populate the reply item
                ceng_zhu['tieba_name'] = pre_tieba_name
                ceng_zhu['czhu_name'] = ceng_zhu_name
                ceng_zhu['czhu_id'] = ceng_zhu_id
                ceng_zhu['czhu_level'] = ceng_zhu_level
                ceng_zhu['content'] = ceng_zhu_content
                ceng_zhu['content_id'] = content_id
                ceng_zhu['ccreate_time'] = ceng_zhu_time
                ceng_zhu['tiezi_num'] = ceng_zhu_comment_num
                ceng_zhu['url'] = ceng_zhu_adress
                ceng_zhu['update_time'] = time.strftime('%Y-%m-%d')
                yield ceng_zhu
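To make the parsing above easier to follow: every post div on a Tieba thread page carries a data-field attribute holding a JSON-like blob, which is where user_id, user_name, level_id, post_id, comment_num and date come from. The exact payload isn't shown in this post and varies by page version, so the sample below is only an illustration with made-up values; demjson.decode is used because it is more forgiving than json.loads when the attribute isn't strictly valid JSON:

import demjson

# Illustrative example only -- in the spider the string comes from
# each_cengzhu.xpath('./@data-field').extract_first()
data_field = '''{
    "author": {"user_id": 123456, "user_name": "some_user", "level_id": 7},
    "content": {"post_id": 987654321, "comment_num": 3, "date": "2018-05-20 10:23"}
}'''

infos = demjson.decode(data_field)
print(infos['author'].get('user_name'))   # some_user
print(infos['content'].get('post_id'))    # 987654321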

That's the end of the code share. My Scrapy usage may still be a bit rough, since it doesn't come up much in my day-to-day work, but I'll keep working on it.
