Create the project from the terminal: scrapy startproject weipinhui

Change into the weipinhui project directory and generate a spider: scrapy genspider weipin "vip.com"

The generated project contains several files; the ones that matter here are configured below.

settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for weipinhui project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'weipinhui'

SPIDER_MODULES = ['weipinhui.spiders']
NEWSPIDER_MODULE = 'weipinhui.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'weipinhui.middlewares.WeipinhuiSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'weipinhui.middlewares.WeipinhuiDownloaderMiddleware': 543,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'weipinhui.pipelines.WeipinhuiPipeline': 300,
    'weipinhui.pipelines.MysqlPipeline': 299,
}

DB_HOST = "127.0.0.1"
DB_PORT = 3306
DB_USER = "root"
DB_PWD = 'root'
DB_NAME = 'weipin'
DB_CHARSET = "utf8"

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
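DOWNLOADER_MIDDLEWARES above registers a custom WeipinhuiDownloaderMiddleware; that is where the Selenium + PhantomJS rendering happens, though its body is not shown in this part. The following is a minimal sketch, assuming an older Selenium 3.x release that still ships webdriver.PhantomJS and a phantomjs binary on the PATH (Selenium 4 removed PhantomJS support):

```python
# middlewares.py (sketch) -- render JS-heavy vip.com pages with PhantomJS
# before Scrapy parses them.  Imports are deferred into the method so the
# module can be loaded even when Selenium is not installed.

class WeipinhuiDownloaderMiddleware(object):

    def process_request(self, request, spider):
        from selenium import webdriver          # pip install "selenium<4"
        from scrapy.http import HtmlResponse

        driver = webdriver.PhantomJS()          # phantomjs must be on PATH
        try:
            driver.get(request.url)
            body = driver.page_source           # fully rendered HTML
        finally:
            driver.quit()
        # Returning an HtmlResponse short-circuits the normal download,
        # so the spider receives the rendered page instead.
        return HtmlResponse(url=request.url, body=body,
                            encoding="utf-8", request=request)
```

Disabling the stock UserAgentMiddleware (the `None` entry above) prevents it from overriding the USER_AGENT string that PhantomJS requests are expected to match.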

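ITEM_PIPELINES registers two pipelines whose code is also not shown in this part: WeipinhuiPipeline (priority 300) for the JSON output and MysqlPipeline (priority 299) for the database writes, using the DB_* settings above. A minimal sketch of both, in which the pymysql dependency and the table name "goods" are assumptions, not from the original post:

```python
# pipelines.py (sketch).  WeipinhuiPipeline appends items to a JSON Lines
# file; MysqlPipeline inserts them into MySQL via the DB_* settings.
# The "goods" table name and pymysql are assumptions.
import json


def build_insert_sql(table, item):
    """Build a parameterized INSERT statement from an item dict."""
    keys = list(item)
    sql = "INSERT INTO {} ({}) VALUES ({})".format(
        table, ", ".join(keys), ", ".join(["%s"] * len(keys)))
    return sql, [item[k] for k in keys]


class WeipinhuiPipeline(object):
    """Write each item to weipin.json, one JSON object per line."""

    def open_spider(self, spider):
        self.fp = open("weipin.json", "a", encoding="utf-8")

    def process_item(self, item, spider):
        self.fp.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        self.fp.close()


class MysqlPipeline(object):
    """Insert each item into MySQL using the DB_* settings."""

    def open_spider(self, spider):
        import pymysql                      # pip install pymysql
        s = spider.settings
        self.conn = pymysql.connect(
            host=s.get("DB_HOST"), port=s.getint("DB_PORT"),
            user=s.get("DB_USER"), password=s.get("DB_PWD"),
            db=s.get("DB_NAME"), charset=s.get("DB_CHARSET"))
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        sql, params = build_insert_sql("goods", dict(item))
        self.cursor.execute(sql, params)
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cursor.close()
        self.conn.close()
```

Because MysqlPipeline has the lower priority number (299), each item passes through the database insert before being written to the JSON file.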
The code in items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class WeipinhuiItem(scrapy.Item):
    # define the fields for your item here like:
    brand = scrapy.Field()      # brand
    title = scrapy.Field()      # title
    old_price = scrapy.Field()  # original price
    new_price = scrapy.Field()  # current price
    discount = scrapy.Field()   # discount
    img_url = scrapy.Field()    # image URL
    url = scrapy.Field()        # product link
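The img_url field feeds the image-download step mentioned in the title. Scrapy's built-in ImagesPipeline (configured with IMAGES_STORE in settings.py) is the idiomatic tool for this; as a dependency-free illustration, a standard-library helper could look like the sketch below, where the output directory and fallback file name are arbitrary choices:

```python
# Download a product image to a local folder using only the standard
# library.  For production use, prefer Scrapy's ImagesPipeline, which
# adds deduplication, expiration, and error handling.
import os
from urllib.parse import urlparse
from urllib.request import urlretrieve


def image_filename(img_url, out_dir="images"):
    """Derive a local path from the last segment of the image URL."""
    name = os.path.basename(urlparse(img_url).path) or "image.jpg"
    return os.path.join(out_dir, name)


def download_image(img_url, out_dir="images"):
    if img_url.startswith("//"):    # vip.com often serves scheme-relative URLs
        img_url = "http:" + img_url
    os.makedirs(out_dir, exist_ok=True)
    path = image_filename(img_url, out_dir)
    urlretrieve(img_url, path)      # blocking network call
    return path
```

The spider code (weipin.py) that fills these fields and calls into the pipelines is covered in part two.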
