This example scrapes live-room information from the Douyu streaming platform:

URL: http://capi.douyucdn.cn/api/v1/getVerticalRoom?limit=20&offset=0

Fields to scrape: room ID, room name, image URL, local path where the image is stored, nickname, online viewer count, and city.
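
Before writing the spider, it helps to peek at the JSON this interface returns. A minimal sketch using the third-party requests library (my own addition; the endpoint may no longer be live, and the User-Agent simply mirrors the one configured in settings.py below):

import json
import requests

url = "http://capi.douyucdn.cn/api/v1/getVerticalRoom?limit=20&offset=0"
# mimic the Douyu mobile app, matching the User-Agent used in settings.py
headers = {"User-Agent": "DYZB/2.290 (iPhone; iOS 9.3.4; Scale/2.00)"}

resp = requests.get(url, headers=headers)
data = json.loads(resp.text)["data"]

# print the fields we plan to extract, using the first room as a sample
for key in ("room_id", "room_name", "vertical_src",
            "nickname", "online", "anchor_city"):
    print(key, "=", data[0].get(key))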

1.items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DouyuspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # room ID
    room_id = scrapy.Field()
    # room name
    room_name = scrapy.Field()
    # image URL
    vertical_src = scrapy.Field()
    # local path where the image is stored
    image_path = scrapy.Field()
    # nickname
    nickname = scrapy.Field()
    # online viewer count
    online = scrapy.Field()
    # city
    anchor_city = scrapy.Field()
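
A quick way to sanity-check the item class from a Python shell (a minimal sketch, run from the project root): a scrapy.Item behaves like a dict, but only the declared fields are allowed, which catches typos early.

from douyuSpider.items import DouyuspiderItem

item = DouyuspiderItem()
item["room_id"] = "123456"
item["nickname"] = "test"
print(dict(item))        # {'room_id': '123456', 'nickname': 'test'}

# Assigning an undeclared key fails fast:
# item["foo"] = "bar"    # KeyError: 'DouyuspiderItem does not support field: foo'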

2.spiders/douyu.py

# -*- coding: utf-8 -*-
import scrapy
from douyuSpider.items import DouyuspiderItem
import json


class DouyuSpider(scrapy.Spider):
    name = 'douyu'
    allowed_domains = ['capi.douyucdn.cn']
    url = 'http://capi.douyucdn.cn/api/v1/getVerticalRoom?limit=20&offset='
    offset = 0
    start_urls = [url + str(offset)]

    def parse(self, response):
        # flag marking whether there is a next page to crawl
        next_flag = False
        data = json.loads(response.text)["data"]
        for each in data:
            item = DouyuspiderItem()
            # room ID
            item['room_id'] = each["room_id"]
            # room name
            item['room_name'] = each["room_name"]
            # image URL
            item['vertical_src'] = each["vertical_src"]
            # nickname
            item['nickname'] = each["nickname"]
            # online viewer count
            item['online'] = each["online"]
            # city
            item['anchor_city'] = each["anchor_city"]
            next_flag = True
            yield item

        # as long as this page returned data, request the next page
        if next_flag:
            self.offset += 20
            yield scrapy.Request(self.url + str(self.offset), callback=self.parse)
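
The spider is normally started with `scrapy crawl douyu` from the project root. If you prefer launching it from a script (for debugging in an IDE, say), a minimal sketch using Scrapy's CrawlerProcess:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# load the project's settings.py, including the pipelines configured below
process = CrawlerProcess(get_project_settings())
process.crawl("douyu")   # the spider name defined in DouyuSpider.name
process.start()          # blocks until the crawl finishes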

3.pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import json
import os

import scrapy
from scrapy.pipelines.images import ImagesPipeline
from scrapy.utils.project import get_project_settings


# pipeline that writes the scraped items to a JSON file
class DouyuspiderPipeline(object):
    def __init__(self):
        self.file = open("斗鱼.json", "w", encoding="utf-8")
        self.first_flag = True

    def process_item(self, item, spider):
        # write "[" before the first item and "," before every later one,
        # so the file ends up as one valid JSON array
        if self.first_flag:
            self.first_flag = False
            content = "[\n" + json.dumps(dict(item), ensure_ascii=False)
        else:
            content = ",\n" + json.dumps(dict(item), ensure_ascii=False)
        self.file.write(content)
        return item

    def close_spider(self, spider):
        self.file.write("\n]")
        self.file.close()


# pipeline that downloads the room screenshots
class ImagesPipeline(ImagesPipeline):
    IMAGES_STORE = get_project_settings().get("IMAGES_STORE")

    def get_media_requests(self, item, info):
        image_url = item["vertical_src"]
        yield scrapy.Request(image_url)

    def item_completed(self, results, item, info):
        # standard idiom: collect the paths of the successfully downloaded
        # images (see the ImagesPipeline source for the structure of results)
        image_path = [x["path"] for ok, x in results if ok]
        # rename the download after the anchor's nickname
        os.rename(self.IMAGES_STORE + "/" + image_path[0],
                  self.IMAGES_STORE + "/" + item["nickname"] + ".jpg")
        item["image_path"] = self.IMAGES_STORE + item["nickname"] + ".jpg"
        return item
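
One caveat: `item_completed` above assumes every download succeeds and every nickname is unique. If a download fails, `image_path[0]` raises an IndexError, and two anchors with the same nickname overwrite each other's file. A more defensive drop-in replacement for that method (a sketch; the empty-string fallback and the `room_id` suffix are my own choices, not part of the original code):

    def item_completed(self, results, item, info):
        # keep only the paths of images that actually downloaded
        image_paths = [x["path"] for ok, x in results if ok]
        if not image_paths:
            # download failed: keep the item, but leave the path empty
            item["image_path"] = ""
            return item
        # append the room ID so duplicate nicknames don't collide
        new_name = "%s_%s.jpg" % (item["nickname"], item["room_id"])
        new_path = os.path.join(self.IMAGES_STORE, new_name)
        os.rename(os.path.join(self.IMAGES_STORE, image_paths[0]), new_path)
        item["image_path"] = new_path
        return item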

4.settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for douyuSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'douyuSpider'

SPIDER_MODULES = ['douyuSpider.spiders']
NEWSPIDER_MODULE = 'douyuSpider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'douyuSpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'User-Agent': 'DYZB/2.290 (iPhone; iOS 9.3.4; Scale/2.00)',
#   'Accept-Language': 'en',
}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'douyuSpider.middlewares.DouyuspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'douyuSpider.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'douyuSpider.pipelines.DouyuspiderPipeline': 300,
    'douyuSpider.pipelines.ImagesPipeline': 200,
}

# where downloaded images are stored; referenced later in pipelines.py
IMAGES_STORE = "Images\\"

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
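
After a run, the results can be checked directly: DouyuspiderPipeline wrote 斗鱼.json and ImagesPipeline renamed the downloads under IMAGES_STORE. A minimal sanity check (a sketch; assumes the crawl produced at least one item):

import json
import os

with open("斗鱼.json", encoding="utf-8") as f:
    rooms = json.load(f)

print(len(rooms), "rooms scraped")
print(rooms[0]["room_name"], rooms[0]["online"])

# a few of the renamed screenshots
print(os.listdir("Images")[:5])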
