Since I'm planning to rent in Beijing, I browsed the housing prices on the Lianjia website and decided to crawl them and save them locally.

First, look at the Lianjia page source: the information for each listing is stored in an li element under a ul. The sketch below shows roughly how the spider's XPath maps onto that structure.
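Purely as an illustration, here is a minimal parsel snippet run against a simplified, assumed stand-in for one listing. The HTML below is not the real Lianjia markup (the real page is nested more deeply, which is why the spider's XPath expressions have extra div levels); it only shows the ul/li/div pattern the selectors rely on.

from parsel import Selector

# Assumed, simplified stand-in for one listing <li>; the real Lianjia markup differs.
html = '''
<ul class="sellListContent">
  <li>
    <div class="info clear">
      <div class="title"><a href="https://bj.lianjia.com/ershoufang/123456.html">南北通透两居室</a></div>
      <div class="address"><div class="houseInfo"><a>某小区</a> | 2室1厅 | 89平米</div></div>
      <div class="flood"></div>
      <div class="priceInfo">
        <div class="totalPrice"><span>650</span>万</div>
        <div class="unitPrice"><span>73000元/平米</span></div>
      </div>
    </div>
  </li>
</ul>
'''

sel = Selector(text=html)
for li in sel.xpath('//ul/li'):
    title = li.xpath('div/div[1]/a/text()').get()            # listing title
    url = li.xpath('div/div[1]/a/@href').get()                # listing URL
    total = li.xpath('div/div[4]/div[1]/span/text()').get()   # total price (万)
    unit = li.xpath('div/div[4]/div[2]/span/text()').get()    # price per square meter
    print(title, url, total, unit)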

The crawler's project structure:

It also wraps a database-handling module and a user-agent pool; a minimal sketch of what such a pool can look like is shown below.

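The user-agent pool itself is not shown in this post. Purely as a sketch (the class name, the agent strings, and the priority value are assumptions, not the project's actual rotate_useragent module), a rotating User-Agent downloader middleware could look like this:

# rotate_useragent.py -- a minimal sketch, not the author's actual module
import random


class RotateUserAgentMiddleware(object):
    # a tiny assumed pool; a real pool would hold many more agent strings
    user_agents = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15',
        'Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0',
    ]

    def process_request(self, request, spider):
        # attach a randomly chosen User-Agent to every outgoing request
        request.headers['User-Agent'] = random.choice(self.user_agents)


# Enable it in settings.py, for example:
# DOWNLOADER_MIDDLEWARES = {
#     'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
#     'lianjia.rotate_useragent.RotateUserAgentMiddleware': 400,
# }

This matches the commented-out 'lianjia.rotate_useragent.RotateUserAgentMiddleware' entry that appears later in settings.py.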
First, mylianjia.py:

#-*- coding: utf-8 -*-
import os

import requests
import scrapy
from parsel import Selector
from scrapy.http import Request

from ..items import LianjiaItem


class MylianjiaSpider(scrapy.Spider):
    name = 'mylianjia'
    #allowed_domains = ['lianjia.com']
    start_urls = ['https://bj.lianjia.com/ershoufang/chaoyang/pg']

    def start_requests(self):
        for i in range(1, 101):   # all 100 listing pages
            # append the page number to the base URL, e.g. .../ershoufang/chaoyang/pg1
            url = self.start_urls[0] + str(i)
            yield Request(url, self.parse)

    def parse(self, response):
        print(response.url)
        '''
        # optional: fetch the page again with requests and save the raw HTML locally
        response1 = requests.get(response.url, params={'search_text': '粉墨', 'cat': 1001})
        if response1.status_code == 200:
            print(response1.text)
            dirPath = os.path.join(os.getcwd(), 'data')
            if not os.path.exists(dirPath):
                os.makedirs(dirPath)
            with open(os.path.join(dirPath, 'lianjia.html'), 'w', encoding='utf-8') as fp:
                fp.write(response1.text)
            print('网页源码写入完毕')
        '''
        # every listing is an <li> under this <ul>
        infoall = response.xpath("//div[4]/div[1]/ul/li")
        #infos = response.xpath('//div[@class="info clear"]')
        for info in infoall:
            item = LianjiaItem()
            info1 = info.xpath('div/div[1]/a/text()').extract_first()            # listing title
            info1_url = info.xpath('div/div[1]/a/@href').extract_first()         # listing URL
            info2_dizhi = info.xpath('div/div[2]/div/a/text()').extract_first()  # location
            info2_xiangxi = info.xpath('div/div[2]/div/text()').extract()        # detail fragments
            #info3 = info.xpath('div/div[3]/div/a/text()').extract_first()
            #info4 = info.xpath('div/div[4]/text()').extract_first()
            price = info.xpath('div/div[4]/div[2]/div/span/text()').extract_first()       # total price
            perprice = info.xpath('div/div[4]/div[2]/div[2]/span/text()').extract_first() # price per square meter

            # join the detail fragments into a single string
            info2_xiangxi1 = ''
            for j1 in info2_xiangxi:
                info2_xiangxi1 += j1 + ''

            item['houseinfo'] = info1
            item['houseurl'] = info1_url
            item['housedizhi'] = info2_dizhi
            item['housexiangxi'] = info2_xiangxi1
            item['houseprice'] = price
            item['houseperprice'] = perprice
            yield item
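As a side note, newer Scrapy versions recommend .get() and .getall() as aliases for .extract_first() and .extract(), and the manual string concatenation can be written with str.join; for example:

info1 = info.xpath('div/div[1]/a/text()').get()
info2_xiangxi = info.xpath('div/div[2]/div/text()').getall()
info2_xiangxi1 = ''.join(info2_xiangxi)   # same result as the loop above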

Next, items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items

#

# See documentation in:

# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class LianjiaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    houseinfo = scrapy.Field()
    houseurl = scrapy.Field()
    housedizhi = scrapy.Field()
    housexiangxi = scrapy.Field()
    houseprice = scrapy.Field()
    houseperprice = scrapy.Field()

Next, pipelines.py:

# -*- coding: utf-8 -*-

# Define your item pipelines here

#

# Don't forget to add your pipeline to the ITEM_PIPELINES setting

# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html

class LianjiaPipeline(object):

    def process_item(self, item, spider):
        print('房屋信息:', item['houseinfo'])
        print('房屋链接:', item['houseurl'])
        print('房屋位置:', item['housedizhi'])
        print('房屋详细信息:', item['housexiangxi'])
        print('房屋总价:', item['houseprice'], '万')
        print('平方米价格:', item['houseperprice'])
        print('====' * 10)
        return item

Next, csvpipelines.py. Despite the name, it appends plain concatenated text lines rather than CSV; a sketch of an actual CSV pipeline follows the code.

import os

# print the working directory so it is clear where relative paths would resolve
print(os.getcwd())


class LianjiaPipeline(object):

    def process_item(self, item, spider):
        # raw string so the backslashes in the Windows path are not treated as escapes
        with open(r'G:\pythonAI\爬虫大全\lianjia\data\house.txt', 'a+', encoding='utf-8') as fp:
            name = str(item['houseinfo'])
            dizhi = str(item['housedizhi'])
            info = str(item['housexiangxi'])
            price = str(item['houseprice'])
            perprice = str(item['houseperprice'])
            fp.write(name + dizhi + info + price + perprice + '\n')
            fp.flush()
            # the with-block closes the file automatically; no explicit fp.close() needed
        print('写入文件成功')
        return item
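For genuine CSV output, a pipeline along these lines would work, using Python's built-in csv module. This is only a minimal sketch, not the project's code: the class name, output path, and column order are assumptions, and it would need its own entry in ITEM_PIPELINES.

import csv
import os


class LianjiaCsvPipeline(object):
    # hypothetical alternative pipeline; register 'lianjia.csvpipelines.LianjiaCsvPipeline' to use it

    def open_spider(self, spider):
        # create the output directory and write a header row once per crawl
        os.makedirs('data', exist_ok=True)
        self.fp = open('data/house.csv', 'w', encoding='utf-8', newline='')
        self.writer = csv.writer(self.fp)
        self.writer.writerow(['houseinfo', 'housedizhi', 'housexiangxi', 'houseprice', 'houseperprice'])

    def process_item(self, item, spider):
        # one CSV row per scraped item
        self.writer.writerow([
            item.get('houseinfo'),
            item.get('housedizhi'),
            item.get('housexiangxi'),
            item.get('houseprice'),
            item.get('houseperprice'),
        ])
        return item

    def close_spider(self, spider):
        self.fp.close()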

Next, settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for lianjia project

#

# For simplicity, this file contains only settings considered important or

# commonly used. You can find more settings consulting the documentation:

#

# https://doc.scrapy.org/en/latest/topics/settings.html

# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html

# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'lianjia'

SPIDER_MODULES = ['lianjia.spiders']

NEWSPIDER_MODULE = 'lianjia.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent

#USER_AGENT = 'lianjia (+http://www.yourdomain.com)'

# Obey robots.txt rules

ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)

#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)

# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay

# See also autothrottle settings and docs

DOWNLOAD_DELAY = 0.5

# The download delay setting will honor only one of:

#CONCURRENT_REQUESTS_PER_DOMAIN = 16

#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)

#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)

#TELNETCONSOLE_ENABLED = False

# Override the default request headers:

#DEFAULT_REQUEST_HEADERS = {

# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',

# 'Accept-Language': 'en',

#}

# Enable or disable spider middlewares

# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html

SPIDER_MIDDLEWARES = {
    'lianjia.middlewares.LianjiaSpiderMiddleware': 543,
    #'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    #'lianjia.rotate_useragent.RotateUserAgentMiddleware': 400,
}

# Enable or disable downloader middlewares

# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html

DOWNLOADER_MIDDLEWARES = {
    'lianjia.middlewares.LianjiaDownloaderMiddleware': 543,
}

# Enable or disable extensions

# See https://doc.scrapy.org/en/latest/topics/extensions.html

#EXTENSIONS = {

# 'scrapy.extensions.telnet.TelnetConsole': None,

#}

# Configure item pipelines

# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html

ITEM_PIPELINES = {
    'lianjia.pipelines.LianjiaPipeline': 300,
    #'lianjia.iopipelines.LianjiaPipeline': 301,
    'lianjia.csvpipelines.LianjiaPipeline': 302,
}

# Enable and configure the AutoThrottle extension (disabled by default)

# See https://doc.scrapy.org/en/latest/topics/autothrottle.html

#AUTOTHROTTLE_ENABLED = True

# The initial download delay

#AUTOTHROTTLE_START_DELAY = 5

# The maximum download delay to be set in case of high latencies

#AUTOTHROTTLE_MAX_DELAY = 60

# The average number of requests Scrapy should be sending in parallel to

# each remote server

#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:

#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)

# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings

HTTPCACHE_ENABLED = True

HTTPCACHE_EXPIRATION_SECS = 0

HTTPCACHE_DIR = 'httpcache'

HTTPCACHE_IGNORE_HTTP_CODES = []

HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

LOG_LEVEL='INFO'

LOG_FILE='lianjia.log'

Finally, starthouse.py:

from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'mylianjia'])
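As an aside, Scrapy's built-in feed export can also write the scraped items straight to a CSV file via the -o option, so the same run could be started as follows (the output filename here is just an example):

from scrapy.cmdline import execute

# equivalent to running "scrapy crawl mylianjia -o house.csv" on the command line;
# -o uses Scrapy's feed export, with the Item field names as column headers
execute(['scrapy', 'crawl', 'mylianjia', '-o', 'house.csv'])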

The result of running the code:

What the file saved locally looks like:

Done. Afterwards the total prices and per-square-meter prices can be analyzed. Since these are Haidian listings, they run to tens of thousands of yuan per square meter, with total prices of several million, and these are second-hand homes. It is clear that buying a home in Beijing is very hard.

Source code: tyutltf/lianjia (crawl Beijing housing prices from Lianjia and save them to a txt file): https://github.com/tyutltf/lianjia
