Since I've been planning to rent a place in Beijing, I browsed the listing prices on the Lianjia website and decided to crawl them and save them locally.

First, a look at the Lianjia page source: the listing information is all stored in <li> elements under a <ul>.
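To make that structure concrete, here is a minimal sketch using parsel (the selector library Scrapy itself uses). The HTML is purely illustrative, not the real Lianjia markup, but it has the same <ul> > <li> > div nesting that the spider's XPaths below rely on:

from parsel import Selector

# illustrative markup only; the real page is much richer, but the nesting has this shape
html = '''
<ul class="sellListContent">
  <li>
    <div class="info clear">
      <div class="title"><a href="https://bj.lianjia.com/ershoufang/101.html">南北通透两居室</a></div>
      <div class="address"><div class="houseInfo"><a>某小区</a> | 2室1厅 | 89平米</div></div>
    </div>
  </li>
</ul>
'''

sel = Selector(text=html)
for li in sel.xpath('//ul/li'):
    print(li.xpath('div/div[1]/a/text()').get())   # listing title
    print(li.xpath('div/div[1]/a/@href').get())    # detail-page link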

Crawler project structure:

The project also wraps a database-handling module and a user-agent pool (a sketch of such a user-agent middleware appears after settings.py below).

First, mylianjia.py:

# -*- coding: utf-8 -*-
import scrapy
import requests
import os
from scrapy.http import Request
from ..items import LianjiaItem


class MylianjiaSpider(scrapy.Spider):
    name = 'mylianjia'
    # allowed_domains = ['lianjia.com']
    start_urls = ['https://bj.lianjia.com/ershoufang/chaoyang/pg']

    def start_requests(self):
        # build the URLs for listing pages 1-100: .../pg1 ... .../pg100
        for i in range(1, 101):
            url = self.start_urls[0] + str(i)
            yield Request(url, self.parse)

    def parse(self, response):
        print(response.url)
        '''
        # (disabled) re-fetch the page with requests and save the raw HTML locally
        response1 = requests.get(response.url, params={'search_text': '粉墨', 'cat': 1001})
        if response1.status_code == 200:
            print(response1.text)
            dirPath = os.path.join(os.getcwd(), 'data')
            if not os.path.exists(dirPath):
                os.makedirs(dirPath)
            with open(os.path.join(dirPath, 'lianjia.html'), 'w', encoding='utf-8') as fp:
                fp.write(response1.text)
            print('Page source written to disk')
        '''
        # every listing on the page is an <li> under this <ul>
        infoall = response.xpath("//div[4]/div[1]/ul/li")
        # infos = response.xpath('//div[@class="info clear"]')
        for info in infoall:
            item = LianjiaItem()
            info1 = info.xpath('div/div[1]/a/text()').extract_first()               # listing title
            info1_url = info.xpath('div/div[1]/a/@href').extract_first()            # detail-page link
            info2_dizhi = info.xpath('div/div[2]/div/a/text()').extract_first()     # community / location
            info2_xiangxi = info.xpath('div/div[2]/div/text()').extract()           # layout, area, ... (list of fragments)
            price = info.xpath('div/div[4]/div[2]/div/span/text()').extract_first()         # total price (万)
            perprice = info.xpath('div/div[4]/div[2]/div[2]/span/text()').extract_first()   # price per square metre

            # join the detail fragments into one string
            info2_xiangxi1 = ''
            for j1 in info2_xiangxi:
                info2_xiangxi1 += j1

            item['houseinfo'] = info1
            item['houseurl'] = info1_url
            item['housedizhi'] = info2_dizhi
            item['housexiangxi'] = info2_xiangxi1
            item['houseprice'] = price
            item['houseperprice'] = perprice
            yield item
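Before launching the full crawl, the XPaths can be sanity-checked interactively with scrapy shell. The snippet below only illustrates that workflow; the site may refuse requests that lack a browser-like User-Agent, in which case the user-agent middleware sketched after settings.py helps:

# in a terminal:
#   scrapy shell "https://bj.lianjia.com/ershoufang/chaoyang/pg1"
# then, at the shell prompt:
lis = response.xpath("//div[4]/div[1]/ul/li")
print(len(lis))                                    # number of listings matched on the page
print(lis[0].xpath('div/div[1]/a/text()').get())   # title of the first listing, if any were matched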

Next, items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class LianjiaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    houseinfo = scrapy.Field()       # listing title
    houseurl = scrapy.Field()        # detail-page link
    housedizhi = scrapy.Field()      # location
    housexiangxi = scrapy.Field()    # detail string (layout, area, orientation, ...)
    houseprice = scrapy.Field()      # total price
    houseperprice = scrapy.Field()   # price per square metre
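A scrapy.Item behaves like a dict that only accepts the fields declared above, which is exactly how the spider fills it. A quick illustration (the values are made up):

item = LianjiaItem()
item['houseinfo'] = '南北通透两居室'
item['houseprice'] = '650'
print(dict(item))      # {'houseinfo': '南北通透两居室', 'houseprice': '650'}
# item['area'] = 89    # would raise KeyError: LianjiaItem does not support field: area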

Next, pipelines.py:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class LianjiaPipeline(object):
    def process_item(self, item, spider):
        print('房屋信息:', item['houseinfo'])
        print('房屋链接:', item['houseurl'])
        print('房屋位置:', item['housedizhi'])
        print('房屋详细信息:', item['housexiangxi'])
        print('房屋总价:', item['houseprice'], '万')
        print('平方米价格:', item['houseperprice'])
        print('====' * 10)
        return item

Next, csvpipelines.py:

import os

print(os.getcwd())


class LianjiaPipeline(object):
    # append each scraped item as one line to a local text file
    def process_item(self, item, spider):
        with open(r'G:\pythonAI\爬虫大全\lianjia\data\house.txt', 'a+', encoding='utf-8') as fp:
            name = str(item['houseinfo'])
            dizhi = str(item['housedizhi'])
            info = str(item['housexiangxi'])
            price = str(item['houseprice'])
            perprice = str(item['houseperprice'])
            fp.write(name + dizhi + info + price + perprice + '\n')
            fp.flush()
        print('写入文件成功')
        return item
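Despite its file name, this pipeline writes the fields concatenated into one plain-text line rather than real CSV. If genuinely comma-separated output were wanted, a variant built on the standard csv module might look like the sketch below; the class name and output path are examples, not part of the original project:

import csv
import os


class LianjiaCsvWriterPipeline(object):
    # hypothetical variant: write one properly quoted CSV row per item
    def process_item(self, item, spider):
        path = os.path.join('data', 'house.csv')           # example output path
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, 'a+', newline='', encoding='utf-8') as fp:
            writer = csv.writer(fp)
            writer.writerow([
                item.get('houseinfo'),
                item.get('housedizhi'),
                item.get('housexiangxi'),
                item.get('houseprice'),
                item.get('houseperprice'),
            ])
        return item

To use it, it would need its own entry in ITEM_PIPELINES, just like the two pipelines registered below.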

Next, settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for lianjia project

#

# For simplicity, this file contains only settings considered important or

# commonly used. You can find more settings consulting the documentation:

#

# https://doc.scrapy.org/en/latest/topics/settings.html

# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html

# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'lianjia'

SPIDER_MODULES = ['lianjia.spiders']

NEWSPIDER_MODULE = 'lianjia.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent

#USER_AGENT = 'lianjia (+http://www.yourdomain.com)'

# Obey robots.txt rules

ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)

#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)

# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay

# See also autothrottle settings and docs

DOWNLOAD_DELAY = 0.5

# The download delay setting will honor only one of:

#CONCURRENT_REQUESTS_PER_DOMAIN = 16

#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)

#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)

#TELNETCONSOLE_ENABLED = False

# Override the default request headers:

#DEFAULT_REQUEST_HEADERS = {

# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',

# 'Accept-Language': 'en',

#}

# Enable or disable spider middlewares

# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html

SPIDER_MIDDLEWARES = {
    'lianjia.middlewares.LianjiaSpiderMiddleware': 543,
    #'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    #'lianjia.rotate_useragent.RotateUserAgentMiddleware': 400
}

# Enable or disable downloader middlewares

# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html

DOWNLOADER_MIDDLEWARES = {
    'lianjia.middlewares.LianjiaDownloaderMiddleware': 543,
}

# Enable or disable extensions

# See https://doc.scrapy.org/en/latest/topics/extensions.html

#EXTENSIONS = {

# 'scrapy.extensions.telnet.TelnetConsole': None,

#}

# Configure item pipelines

# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html

ITEM_PIPELINES = {
    'lianjia.pipelines.LianjiaPipeline': 300,
    #'lianjia.iopipelines.LianjiaPipeline': 301,
    'lianjia.csvpipelines.LianjiaPipeline': 302,
}

# Enable and configure the AutoThrottle extension (disabled by default)

# See https://doc.scrapy.org/en/latest/topics/autothrottle.html

#AUTOTHROTTLE_ENABLED = True

# The initial download delay

#AUTOTHROTTLE_START_DELAY = 5

# The maximum download delay to be set in case of high latencies

#AUTOTHROTTLE_MAX_DELAY = 60

# The average number of requests Scrapy should be sending in parallel to

# each remote server

#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0

# Enable showing throttling stats for every response received:

#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)

# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings

HTTPCACHE_ENABLED = True

HTTPCACHE_EXPIRATION_SECS = 0

HTTPCACHE_DIR = 'httpcache'

HTTPCACHE_IGNORE_HTTP_CODES = []

HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

LOG_LEVEL='INFO'

LOG_FILE='lianjia.log'
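One more note on settings.py: the user-agent pool mentioned at the top of the post only appears here as the commented-out lianjia.rotate_useragent.RotateUserAgentMiddleware entry. A rough sketch of what such a rotating user-agent downloader middleware could look like (module path, class name and UA strings are illustrative assumptions, not the author's actual code):

# lianjia/rotate_useragent.py (hypothetical sketch)
import random


class RotateUserAgentMiddleware(object):
    # a small illustrative pool; a real one would hold many more UA strings
    user_agents = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15',
        'Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0',
    ]

    def process_request(self, request, spider):
        # choose a random User-Agent for every outgoing request
        request.headers['User-Agent'] = random.choice(self.user_agents)

To enable it, register it under DOWNLOADER_MIDDLEWARES (user-agent rotation is a downloader-middleware concern, even though the commented-out line above sits under SPIDER_MIDDLEWARES) and disable the built-in UserAgentMiddleware as the other commented-out line suggests.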

Finally, starthouse.py, which simply launches the crawl from a script (equivalent to running scrapy crawl mylianjia from the project root):

from scrapy.cmdline import execute

execute(['scrapy', 'crawl', 'mylianjia'])

Output of running the code:

What the locally saved file looks like:

Done. Afterwards the total prices and per-square-metre prices can be analysed. These are Haidian-district listings, and they run to tens of thousands of yuan per square metre, with total prices in the millions, for second-hand homes at that; it's plain that buying a home in Beijing is very hard.

Source code: tyutltf/lianjia (crawls Lianjia's Beijing listings and saves them to a txt file): https://github.com/tyutltf/lianjia
