Goal: crawl hotel information and save it to a local MySQL database.

Target URL: https://hotels.ctrip.com/hotel/Haikou42

First, create a new Scrapy project.

At the command line, run: scrapy startproject MyScrapy

Then generate the spider (genspider takes a name and a domain): scrapy genspider hotel_spider hotels.ctrip.com

Open the MyScrapy project in PyCharm.

1. Configure the settings file

ROBOTSTXT_OBEY = False
# Close the spider automatically once this many items have been scraped
# (the setting's default is 0, i.e. disabled)
CLOSESPIDER_ITEMCOUNT = 2500
# Fixed download delay (seconds)
DOWNLOAD_DELAY = 5
# Extra random delay (seconds). This is NOT a built-in Scrapy setting; it is
# read by a custom random-delay middleware (see the sketch below), giving
# roughly 5 to 10 seconds between requests in total
RANDOM_DELAY = 5
# Maximum number of concurrent requests
CONCURRENT_REQUESTS = 10
# Maximum number of items to process in parallel (per response)
CONCURRENT_ITEMS = 2500
# The download delay setting will honor only one of:
# Maximum number of concurrent requests to a single domain
CONCURRENT_REQUESTS_PER_DOMAIN = 16
# Maximum number of concurrent requests to a single IP
# CONCURRENT_REQUESTS_PER_IP = 5
# If non-zero, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and this per-IP limit
# is used instead, i.e. concurrency is limited per IP rather than per domain.
# It also affects DOWNLOAD_DELAY: if CONCURRENT_REQUESTS_PER_IP is non-zero,
# the download delay is enforced per IP rather than per domain.

DOWNLOADER_MIDDLEWARES = {
    # 'MyScrapy.middlewares.MyScrapyDownloaderMiddleware': 543,
    # Lower numbers mean higher priority and run first
    'MyScrapy.middlewares.my_useragent': 544,
    'MyScrapy.middlewares.my_proxy': 543,
}

# MySQL connection settings
mysql_host = 'your-db-ip'
mysql_port = 3368
mysql_db = 'hotel'
mysql_user = 'username'
mysql_password = 'password'
mysql_charset = "utf8"

# Proxy IP list
PROXY_LIST = [
    {'ip_port': 'https://ip:port'},
    {'ip_port': 'https://ip:port'},
]

# User-Agent list
USER_AGENT_LIST = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 LBBROWSER",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; QQBrowser/7.0.3698.400)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
    "Mozilla/5.0 (iPad; U; CPU OS 4_2_1 like Mac OS X; zh-cn) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8C148 Safari/6533.18.5",
    "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0b13pre) Gecko/20110307 Firefox/4.0b13pre",
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
    "Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36",
]
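
One caveat: RANDOM_DELAY is not a built-in Scrapy setting, so it only takes effect if some middleware reads it. Below is a minimal sketch of such a downloader middleware; the class name RandomDelayMiddleware and the blocking time.sleep approach are my assumptions, not code from the original project.

# middlewares.py -- hypothetical random-delay middleware (sketch)
import random
import time

class RandomDelayMiddleware(object):
    def __init__(self, delay):
        self.delay = delay

    @classmethod
    def from_crawler(cls, crawler):
        # Read the custom RANDOM_DELAY setting; fall back to 0 if it is absent
        return cls(crawler.settings.getint('RANDOM_DELAY', 0))

    def process_request(self, request, spider):
        # Sleep an extra 0..RANDOM_DELAY seconds on top of DOWNLOAD_DELAY.
        # time.sleep blocks the Twisted reactor, which is tolerable for a
        # slow, low-volume crawl like this one.
        time.sleep(random.uniform(0, self.delay))

To take effect it would also need its own entry in DOWNLOADER_MIDDLEWARES. Alternatively, Scrapy's built-in RANDOMIZE_DOWNLOAD_DELAY (enabled by default) already jitters DOWNLOAD_DELAY between 0.5x and 1.5x.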

2. Configure items.py

import scrapy


# Hotel item
class HotelItem(scrapy.Item):
    # City
    city = scrapy.Field()
    # Hotel name
    hotelName = scrapy.Field()
    # Hotel address
    address = scrapy.Field()
    # Star rating
    hotelStar = scrapy.Field()
    # Creation time
    createTime = scrapy.Field()
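
For reference, a scrapy.Item is used like a dict keyed by its declared fields, and assigning to an undeclared key raises KeyError. A tiny illustrative snippet (the values here are made up):

hotel = HotelItem()
hotel['city'] = 'Haikou'            # hypothetical value
hotel['hotelName'] = 'Some Hotel'   # hypothetical value
print(dict(hotel))                  # {'city': 'Haikou', 'hotelName': 'Some Hotel'}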

3. Write middlewares.py

import random

from MyScrapy.settings import PROXY_LIST, USER_AGENT_LIST


# Disguise requests behind a rotating proxy IP
class my_proxy(object):
    def process_request(self, request, spider):
        # Pick a random proxy from the list
        proxy = random.choice(PROXY_LIST)
        proxy = proxy['ip_port']
        request.meta['proxy'] = proxy
        print(proxy)


# Disguise requests with a rotating User-Agent
class my_useragent(object):
    def process_request(self, request, spider):
        # Pick a random User-Agent from the list
        agent = random.choice(USER_AGENT_LIST)
        request.headers['User-Agent'] = agent
        # HTTP/2-style pseudo-headers and browser headers copied from a captured request
        request.headers[':authority'] = 'hotels.ctrip.com'
        request.headers[':method'] = 'POST'
        request.headers[':scheme'] = 'https'
        request.headers['accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
        request.headers['accept-encoding'] = 'gzip, deflate, br'
        request.headers['accept-language'] = 'zh-CN,zh;q=0.9'
        request.headers['cache-control'] = 'max-age=0'
        request.headers['content-type'] = 'application/x-www-form-urlencoded'
        request.headers['origin'] = 'https://hotels.ctrip.com'
        # request.headers['referer'] = 'https://hotels.ctrip.com/hotel/Haikou42'
        print(agent)
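
Free proxies go stale quickly, so it can save a wasted crawl to sanity-check PROXY_LIST first. A throwaway script along these lines works; check_proxies.py is my own hypothetical helper, and it assumes the requests package is installed:

# check_proxies.py -- hypothetical helper, not part of the project (sketch)
import requests

from MyScrapy.settings import PROXY_LIST

for entry in PROXY_LIST:
    proxy = entry['ip_port']
    try:
        # Route a test request through the proxy with a short timeout
        resp = requests.get('https://hotels.ctrip.com',
                            proxies={'https': proxy}, timeout=10)
        print(proxy, '->', resp.status_code)
    except requests.RequestException as exc:
        print(proxy, '-> failed:', exc)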

4. Configure pipelines.py to save items to the database

import pymysql
from MyScrapy.settings import (mysql_host, mysql_port, mysql_db,
                               mysql_user, mysql_password, mysql_charset)


class HotelPipeline(object):
    def __init__(self):
        dbparams = {
            'host': mysql_host,
            'port': mysql_port,
            'db': mysql_db,
            'user': mysql_user,
            'password': mysql_password,
            'charset': mysql_charset,
        }
        # Open the connection and cursor
        self.conn = pymysql.connect(**dbparams)
        self.cursor = self.conn.cursor()
        self._sql = None

    def process_item(self, item, spider):
        self.cursor.execute(self.sql, (item['city'], item['hotelName'],
                                       item['address'], item['hotelStar'],
                                       item['createTime']))
        self.conn.commit()
        return item

    @property
    def sql(self):
        # Build the INSERT statement once and cache it
        if not self._sql:
            self._sql = """insert into ly_hotel(id, city, hotel_name, address, hotel_star, create_time)
                values(null, %s, %s, %s, %s, %s)"""
        return self._sql
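
Note that the pipeline opens its connection in __init__ and never closes it. A common alternative, sketched here rather than taken from the original code, is to tie the connection to the spider's lifecycle with Scrapy's open_spider/close_spider hooks in place of __init__:

    # Sketch: lifecycle hooks inside HotelPipeline instead of __init__
    def open_spider(self, spider):
        self.conn = pymysql.connect(host=mysql_host, port=mysql_port, db=mysql_db,
                                    user=mysql_user, password=mysql_password,
                                    charset=mysql_charset)
        self.cursor = self.conn.cursor()
        self._sql = None

    def close_spider(self, spider):
        # Release the cursor and connection when the spider finishes
        self.cursor.close()
        self.conn.close()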

5. The most important step: write hotel_spider.py

# -*- coding: utf-8 -*-
import time

import scrapy
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError, TCPTimedOutError

from MyScrapy.items import HotelItem


class HotelSpider(scrapy.Spider):
    # Spider name; it must not be the same as the project name
    name = None
    # Allowed domains
    allowed_domains = ['hotels.ctrip.com']
    # Entry URLs, handed to the scheduler (set by each subclass)
    start_urls = None
    custom_settings = {
        'ITEM_PIPELINES': {
            'MyScrapy.pipelines.HotelPipeline': 300,
        }
    }

    # Default parse method
    def parse(self, response):
        # Current system time, stored with each item
        nowTime = time.strftime('%Y.%m.%d %H:%M:%S ', time.localtime(time.time()))
        yield from self.getPageInfo(nowTime, response)

    # Extract the hotel entries from a list page
    def getPageInfo(self, nowTime, response):
        # Hotel entries on the current page
        jd_list = response.xpath("//div[@id='hotel_list']/div/ul[@class='hotel_item']/li[@class='hotel_item_name']")
        # Current page number
        pageCnt = response.xpath("//div[@class='page_box']/div[@id='page_info']/div[@class='c_page_list layoutfix']/a[@class='current']/text()").extract_first()
        pageCnt = int(pageCnt)
        # Last page number
        maxPage = response.xpath("//div[@class='page_box']/div[@id='page_info']/div[@class='c_page_list layoutfix']/a[8]/text()").extract_first()
        maxPage = int(maxPage)
        for i_item in jd_list:
            # Hotel detail-page URL
            hotel_url = i_item.xpath("./h2[@class='hotel_name']/a/@href").extract_first()
            hotelUrl = "https://hotels.ctrip.com" + hotel_url
            yield scrapy.http.Request(hotelUrl, callback=HotelInfo.get_HotelInfo,
                                      meta={'nowTime': nowTime}, priority=20,
                                      errback=self.errback_hotel)
        if pageCnt != maxPage:
            pageCnt += 1
            yield scrapy.http.Request(self.pageUrl + str(pageCnt), callback=self.parse,
                                      priority=10, errback=self.errback_hotel)

    def errback_hotel(self, failure):
        # Log all failures
        self.logger.error(repr(failure))
        if failure.check(HttpError):
            response = failure.value.response
            print('----------------------------- body of the failed response: ' + response.text)
            self.logger.error('HttpError on %s', response.url)
        elif failure.check(DNSLookupError):
            # failure.request is the original request (a Request has no .text attribute)
            request = failure.request
            self.logger.error('DNSLookupError on %s', request.url)
        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            self.logger.error('TimeoutError on %s', request.url)


# Haikou
class HaikouHotelSpider(HotelSpider):
    # Spider name; it must not be the same as the project name
    name = 'haikou_spider'
    # Entry URL, handed to the scheduler
    start_urls = ['https://hotels.ctrip.com/hotel/Haikou42.html']

    def __init__(self):
        super().__init__()
        # Base URL for the paginated list pages
        self.pageUrl = 'https://hotels.ctrip.com/hotel/Haikou42/p'


# Hotel detail page
class HotelInfo:
    # Extract the hotel details; used directly as a request callback above
    @staticmethod
    def get_HotelInfo(response):
        # Load the item
        hotel_item = HotelItem()
        # City
        city = response.xpath("//div[@id='J_htl_info']/div[@class='adress']/span[@id='ctl00_MainContentPlaceHolder_commonHead_lnkCity']/text()").extract_first()
        # Strip whitespace, guarding against a missing node
        if not city:
            city = ''
        city = city.strip()
        hotel_item['city'] = city
        # Hotel name
        hotel_item['hotelName'] = response.xpath("//div[@id='J_htl_info']/div[@class='name']/h2[@class='cn_n']/text()").extract_first()
        # District
        location = response.xpath("//div[@id='J_htl_info']/div[@class='adress']/span[@id='ctl00_MainContentPlaceHolder_commonHead_lnkLocation']/text()").extract_first()
        if not location:
            location = ''
        location = location.strip()
        # Hotel address
        address = response.xpath("//div[@id='J_htl_info']/div[@class='adress']/span[@id='ctl00_MainContentPlaceHolder_commonHead_lbAddress']/text()").extract_first()
        address = city + location + (address or '').strip()
        hotel_item['address'] = address
        # Star rating
        hotel_item['hotelStar'] = response.xpath("//div[@id='J_htl_info']/div[@class='grade']/span[@id='ctl00_MainContentPlaceHolder_commonHead_imgStar']/@title").extract_first()
        # Creation time
        hotel_item['createTime'] = response.meta['nowTime']
        yield hotel_item
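
Ctrip's markup changes often, so it is worth verifying the XPath expressions interactively before a long run. An illustrative scrapy shell session against the list page (what the selectors return depends on the page version the site actually serves you):

scrapy shell "https://hotels.ctrip.com/hotel/Haikou42.html"
>>> # hotel entries on the list page
>>> response.xpath("//div[@id='hotel_list']/div/ul[@class='hotel_item']/li[@class='hotel_item_name']")
>>> # current page number
>>> response.xpath("//div[@class='page_box']/div[@id='page_info']/div[@class='c_page_list layoutfix']/a[@class='current']/text()").extract_first()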

6. Create a run.py file to make running the spider easier

from scrapy import cmdline
# Run the hotel spider
cmdline.execute('scrapy crawl haikou_spider'.split())
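
Because run.py shells out to the scrapy CLI, it has to sit in the project root next to scrapy.cfg. An alternative sketch that stays inside Python, assuming the spider module lives at MyScrapy/spiders/hotel_spider.py:

# Sketch: run the spider programmatically instead of via the CLI
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from MyScrapy.spiders.hotel_spider import HaikouHotelSpider

process = CrawlerProcess(get_project_settings())
process.crawl(HaikouHotelSpider)
process.start()  # blocks until the crawl finishes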

7. Create the database table

SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for ly_hotel
-- ----------------------------
DROP TABLE IF EXISTS `ly_hotel`;
CREATE TABLE `ly_hotel` (
  `id` bigint(8) NOT NULL AUTO_INCREMENT COMMENT 'auto-increment primary key',
  `city` varchar(20) DEFAULT NULL COMMENT 'city',
  `hotel_name` varchar(100) DEFAULT NULL COMMENT 'hotel name',
  `address` varchar(500) DEFAULT NULL COMMENT 'hotel address',
  `hotel_star` varchar(100) DEFAULT NULL COMMENT 'star rating',
  `create_time` datetime DEFAULT NULL COMMENT 'creation time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='travel hotel information';

-- Start the IDs at 10000001
ALTER TABLE ly_hotel AUTO_INCREMENT=10000001;

8. Run run.py to start the crawler.
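
Once a crawl finishes, a quick way to confirm that rows actually reached MySQL is a throwaway sketch like the one below, reusing the connection settings from step 1 and the table from step 7:

# Sketch: count the rows saved by the pipeline
import pymysql

from MyScrapy.settings import (mysql_host, mysql_port, mysql_db,
                               mysql_user, mysql_password, mysql_charset)

conn = pymysql.connect(host=mysql_host, port=mysql_port, db=mysql_db,
                       user=mysql_user, password=mysql_password,
                       charset=mysql_charset)
with conn.cursor() as cursor:
    cursor.execute("SELECT COUNT(*) FROM ly_hotel")
    print("rows saved:", cursor.fetchone()[0])
conn.close()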

----------------------------------- That's all of the code. I'm still new to Python, so corrections and suggestions for trimming redundant code are welcome.
