I. Project directory structure
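
The directory screenshot from the original post is not reproduced here. Assuming the default skeleton generated by scrapy startproject zhilian, plus the main.py runner described below, the layout looks roughly like this (the spider file name is illustrative, not confirmed by the source):

zhilian/
├── scrapy.cfg              # deploy configuration generated by Scrapy
├── main.py                 # runner script (module 2 below)
└── zhilian/
    ├── __init__.py
    ├── items.py            # ZhilianItem (module 3)
    ├── middlewares.py      # default middlewares, left unchanged
    ├── pipelines.py        # MongoDB pipeline (module 5)
    ├── settings.py         # project settings (module 1)
    └── spiders/
        ├── __init__.py
        └── zhilian_spider.py   # Zhilian_Spider (module 4)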

II. Module breakdown

1.settings

# -*- coding: utf-8 -*-

# Scrapy settings for zhilian project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'zhilian'

SPIDER_MODULES = ['zhilian.spiders']
NEWSPIDER_MODULE = 'zhilian.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'zhilian (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'zhilian.middlewares.ZhilianSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'zhilian.middlewares.ZhilianDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'zhilian.pipelines.ZhilianPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
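
The only changes from the generated template are ROBOTSTXT_OBEY = False and the ITEM_PIPELINES entry. Note that no project-wide USER_AGENT is set, so the spider below attaches a browser User-Agent to every request it yields. A hypothetical alternative, since USER_AGENT is a standard Scrapy setting, is to declare it once here instead:

# Hypothetical alternative to the per-request headers dict used in the spider:
# set the browser User-Agent globally in settings.py.
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'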

2.main

from scrapy import cmdline

cmdline.execute(['scrapy', 'crawl', 'zhilian'])
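
Running python main.py from the project root has the same effect as typing scrapy crawl zhilian on the command line; cmdline.execute() simply hands the argument list to the Scrapy CLI. If you also want a local file copy of the items alongside MongoDB, the standard -o feed option can be appended; a minimal sketch (the file name zhilian.json is arbitrary):

from scrapy import cmdline

# same effect as: scrapy crawl zhilian -o zhilian.json
# the MongoDB pipeline still runs; -o only adds a feed export
cmdline.execute(['scrapy', 'crawl', 'zhilian', '-o', 'zhilian.json'])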

3.items

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class ZhilianItem(scrapy.Item):
    # define the fields for your item here like:
    position = scrapy.Field()
    company = scrapy.Field()
    city = scrapy.Field()
    salary = scrapy.Field()

4.ZhilianSpider

import scrapy
from ..items import ZhilianItem
import json

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36',
}


class Zhilian_Spider(scrapy.Spider):
    name = 'zhilian'

    def start_requests(self):
        # the start offset grows by 90 per request: 90 results per page, 12 pages
        for i in range(0, 1080, 90):
            url = 'https://fe-api.zhaopin.com/c/i/sou?start={}&pageSize=90&cityId=601&salary=0,0&workExperience=-1&education=-1&companyType=-1&employmentType=-1&jobWelfareTag=-1&kw=%E9%94%80%E5%94%AE&kt=3&=0&_v=0.41352509&x-zp-page-request-id=fabc345dbbae4931a317f751a3952ec5-1572513651825-763363&x-zp-client-id=2a5a1b79-d92c-486f-e0e7-85a94b7837b2'.format(i)
            yield scrapy.Request(url=url, headers=headers)

    def parse(self, response):
        res = response.text
        datas = json.loads(res)['data']['results']
        # print(datas)
        # print(type(datas))
        # print(len(datas))
        for data in datas:
            try:
                info = ZhilianItem()
                jobname = data.get('jobName')
                salary = data.get('salary')
                company = data.get('company').get('name')
                city = data.get('city').get('items')[0].get('name')
                print(jobname, salary, company, city)
                # fill in the item fields
                info['position'] = jobname
                info['company'] = company
                info['city'] = city
                info['salary'] = salary
                yield info
            except Exception:
                pass

5.pipelines

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
import pymongo

# open a connection to the local MongoDB server
# conn = pymongo.MongoClient("mongodb://localhost:27017")
conn = pymongo.MongoClient(host='localhost', port=27017)
# choose which database to use
mydb = conn['test']
# choose which collection to use
myset = mydb['info']


class ZhilianPipeline(object):
    def process_item(self, item, spider):
        information = {
            'position': item['position'],
            'company': item['company'],
            'city': item['city'],
            'salary': item['salary'],
        }
        # insert_one() is the current pymongo API (insert() was removed in pymongo 4)
        myset.insert_one(information)
        return item
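
The module-level MongoClient above works, but it connects at import time and hard-codes the host, port, database, and collection. A minimal sketch of the alternative pattern shown in the Scrapy item-pipeline documentation, which opens and closes the connection together with the spider and reads the connection details from settings (MONGO_URI and MONGO_DATABASE are assumed settings.py keys, not part of the original project):

import pymongo


class MongoPipeline(object):
    collection_name = 'info'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # MONGO_URI / MONGO_DATABASE would be added to settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI', 'mongodb://localhost:27017'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'test'),
        )

    def open_spider(self, spider):
        # connect when the crawl starts instead of at import time
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db[self.collection_name].insert_one(dict(item))
        return item

If used, this class would replace ZhilianPipeline in the ITEM_PIPELINES setting.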

Note: the part of the Zhaopin URL that changes from city to city is cityId. Each city has roughly 12 result pages (not guaranteed), so the start offset is increased by 90 on every loop iteration.
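
Because only cityId (and possibly the keyword) changes between runs, a hypothetical variation is to pass them in as spider arguments rather than hard-coding them; Scrapy forwards -a options to the spider constructor. The class and argument names below (ZhilianCitySpider, city_id, pages) are illustrative, not part of the original project, and the extraction simply mirrors Zhilian_Spider.parse above:

import json

import scrapy

from ..items import ZhilianItem

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36',
}


class ZhilianCitySpider(scrapy.Spider):
    # run with: scrapy crawl zhilian_city -a city_id=538 -a pages=12
    name = 'zhilian_city'

    def __init__(self, city_id='601', pages=12, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.city_id = city_id
        self.pages = int(pages)

    def start_requests(self):
        base = ('https://fe-api.zhaopin.com/c/i/sou?start={start}&pageSize=90'
                '&cityId={city}&kw=%E9%94%80%E5%94%AE&kt=3')
        for page in range(self.pages):
            url = base.format(start=page * 90, city=self.city_id)
            yield scrapy.Request(url=url, headers=headers)

    def parse(self, response):
        # same extraction as Zhilian_Spider.parse
        for data in json.loads(response.text)['data']['results']:
            try:
                item = ZhilianItem()
                item['position'] = data.get('jobName')
                item['salary'] = data.get('salary')
                item['company'] = data.get('company').get('name')
                item['city'] = data.get('city').get('items')[0].get('name')
                yield item
            except Exception:
                continue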
