1. Using APIs

1.1 How to Call an API

An API call consists of a request sent to the server and the response it returns, so when calling an API from Python you can simply use the Requests library to issue the request.

import requests
url = "http://fanyi.youdao.com/translate?&doctype=json&type=AUTO&i=你好"
res = requests.get(url)
print(res.text)
===========================================
{"type":"ZH_CN2EN","errorCode":0,"elapsedTime":1,"translateResult":[[{"src":"你好","tgt":"hello"}]]}

1.2 Parsing JSON Data

Python's standard library for parsing JSON data is brought in with: import json

Unlike some other Python parsing libraries, the json library does not parse JSON data into JSON objects or JSON nodes. Instead, it converts JSON objects into dictionaries, JSON arrays into lists, and JSON strings into Python strings, which makes the data easy to work with.
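As a quick illustration of that type mapping (a minimal sketch with made-up data, not taken from the original example):

import json
# A JSON object becomes a dict, a JSON array becomes a list,
# and JSON strings/numbers become Python str/int.
data = json.loads('{"names": ["Ann", "Bob"], "count": 2}')
print(type(data))              # <class 'dict'>
print(type(data["names"]))     # <class 'list'>
print(type(data["names"][0]))  # <class 'str'>
print(json.dumps(data))        # serialize back to a JSON string

The example below applies the same idea to a larger JSON document.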

import json
jsonstring = '''{"user_man":[{"name":"燕双嘤"},{"name":"杜马"},{"name":"步鹰"}],"user_woman":[{"name":"理慧"},{"name":"小玲"},{"name":"橘子"}]}'''
json_data = json.loads(jsonstring)
print(json_data.get("user_man"))
print(json_data.get("user_woman"))
print(json_data.get("user_man")[0].get("name"))
print(json_data.get("user_woman")[1].get("name"))
===================================================
[{'name': '燕双嘤'}, {'name': '杜马'}, {'name': '步鹰'}]
[{'name': '理慧'}, {'name': '小玲'}, {'name': '橘子'}]
燕双嘤
小玲

The extraction can also be written with plain indexing:

import json
jsonstring = '''{"user_man":[{"name":"燕双嘤"},{"name":"杜马"},{"name":"步鹰"}],"user_woman":[{"name":"理慧"},{"name":"小玲"},{"name":"橘子"}]}'''
json_data = json.loads(jsonstring)
print(json_data["user_man"])
print(json_data["user_woman"])
print(json_data["user_man"][0].get("name"))
print(json_data["user_woman"][1].get("name"))

1.3 Calling the Youdao Translation API

import json
import requests
url = "http://fanyi.youdao.com/translate?&doctype=json&type=AUTO&i=你好"
res = requests.get(url)
json_data = json.loads(res.text)
english_word = json_data['translateResult'][0][0]["tgt"]
print(english_word)
==============================================
hello

1.4 Calling the Amap (Gaode Maps) API

import requests
address = input("请输入地点:")
par = {"address":address,"key":"25889f64896d78c4dcc87d9564ee35c4"}
url = "http://restapi.amap.com/v3/geocode/geo"
res = requests.get(url,par)
print(res.text)
=============================================
请输入地点:长沙
{"status":"1","info":"OK","infocode":"10000","count":"1","geocodes":[{"formatted_address":"湖南省长沙市","country":"中国","province":"湖南省","citycode":"0731","city":"长沙市","district":[],"township":[],"neighborhood":{"name":[],"type":[]},"building":{"name":[],"type":[]},"adcode":"430100","street":[],"number":[],"location":"112.938814,28.228209","level":"市"}]}

When the returned result is large, its structure is hard to read and parsing the JSON becomes unclear, so the pprint library can be used to pretty-print the JSON data.

import requests
import json
import pprint
address = input("请输入地点:")
par = {"address":address,"key":"25889f64896d78c4dcc87d9564ee35c4"}
url = "http://restapi.amap.com/v3/geocode/geo"
res = requests.get(url,par)
json_data = json.loads(res.text)
pprint.pprint(json_data)
====================================
请输入地点:郑州
{'count': '1','geocodes': [{'adcode': '410100','building': {'name': [], 'type': []},'city': '郑州市','citycode': '0371','country': '中国','district': [],'formatted_address': '河南省郑州市','level': '市','location': '113.625368,34.746599','neighborhood': {'name': [], 'type': []},'number': [],'province': '河南省','street': [],'township': []}],'info': 'OK','infocode': '10000','status': '1'}

The longitude and latitude can then be extracted from the JSON.

import requests
import json
import pprint
address = input("请输入地点:")
par = {"address":address,"key":"25889f64896d78c4dcc87d9564ee35c4"}
url = "http://restapi.amap.com/v3/geocode/geo"
res = requests.get(url,par)
json_data = json.loads(res.text)
geo = json_data['geocodes'][0]['location']
longitude = geo.split(",")[0]
latitude = geo.split(",")[1]
print(longitude,latitude)
=============================
请输入地点:长春
125.323544 43.817071

2. Asynchronous Loading

2.1 Asynchronous Loading and How to Crawl It

With a traditional web page, updating any information means reloading the entire page, which is slow, gives a poor user experience, and wastes bandwidth because little of the transferred data actually changes. Asynchronous loading (AJAX) is a technique for building interactive web applications: by exchanging small amounts of data with the server in the background, AJAX lets a page update parts of itself asynchronously, without a full refresh.

A Jianshu user's homepage is a typical example: it has no pagination, the page just keeps loading as you scroll. Running the crawler below shows that nothing can be extracted from the initial HTML.

import requests
from lxml import etree
url = 'https://www.jianshu.com/u/c4920e664585'
html = requests.get(url)
selector =  etree.HTML(html.text)
infos = selector.xpath('//div[@id="list-container"]')
print(infos)

2.2 Jianshu User Articles (lxml)

With asynchronous loading, a page no longer loads all of its content up front, so what is displayed is not in the HTML source and the previous approach cannot capture the data. To scrape such pages you have to work out how the page actually fetches its data; this process is called reverse engineering. For the Jianshu homepage it turns out the article list can be requested page by page:

import requests
from lxml import etree
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
}
# just change the page parameter to get other pages
url = 'https://www.jianshu.com/u/c4920e664585?order_by=shared_at&page=1'
html = requests.get(url,headers=headers)
selector =  etree.HTML(html.text)
name = selector.xpath('//div[@id="list-container"]/ul/li/div/a/text()')
content = selector.xpath('//div[@id="list-container"]/ul/li/div/p/text()')
print(name,content)

2.3 Scraping JD.com Data (BeautifulSoup)

The first page is not loaded asynchronously.

import requests
from bs4 import BeautifulSoup
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
    'Referer': 'https://search.jd.com/Search?keyword=Python',
    'cookie': 'shshshfpa=6b95b204-3ac4-e77f-bedf-db5fbc0ff0d6-1632968513; shshshfpb=aR5kxPIAwyu1pDc5e4k7zPw%3D%3D; __jdu=1632968514012909987986; qrsc=3; user-key=cf106cec-5c11-42f3-9144-549b67801f4d; PCSYCityID=CN_430000_0_0; pin=jd_649a2ae57440c; unick=jd_649a2ae57440c; _tp=LV3rYbf2Bc52BLQYXibljv8HF%2BkT5d1pk2MCA2pBcwc%3D; _pst=jd_649a2ae57440c; areaId=9; unpl=V2_ZzNtbUZXRBV2Ck4HLhAMVmJWFwkRVEsQJwBOVHwcCwwwA0YKclRCFnUURlVnGFsUZwoZXUdcRxJFCENkexhdBWMGEV5EVnMlMEsWBi8FXABkCxJYRFJDEXINRFJzGV4CZh8RXENeQxN2CUZXexhsBmczE21CUEEWfQ9PUHscXQVkCxNVSl5FHXwPdmR7EVg1ZwITXENWQxV9C0dSS8%2fykbKLrYTy8pW%2f293L5K6Q99HZhSJdS1dBHXQJRlN4KV01ZjNQM5bq5cD%2blZDe1czVtL%2bXjYng55S4zdDu9a%2btwdHZil9dRVVAHXIBQlR%2bGFwGbwIaVUtRSxxyOEdkeA%3d%3d; ipLocation=%u5409%u6797; cn=0; ipLoc-djd=18-1482-48938-54596.3775197224; __jdv=122270672|baidu|-|organic|not set|1633697951431; __jda=122270672.1632968514012909987986.1632968514.1633748589.1633866965.8; __jdc=122270672; shshshfp=6281187a2efac61935e40eb3cb4cbde5; rkv=1.0; __jdb=122270672.3.1632968514012909987986|8.1633866965; shshshsID=bef6c9c4880d730782bf5125c2fadfab_3_1633868355199; 3AB9D23F7A4B3C9B=FGN7I762UJHSQHTUYJI4LLNP24WP7ESJ6DFYBRXH2SLTDS6YSTPWX7WN7NOUYJP2NRELDSCIBZZNHRBJGLGWCAZDRU'
}  # request headers

def get_info(url):
    res = requests.get(url, headers=headers)
    soup = BeautifulSoup(res.text, 'lxml')
    node_li = soup.select('.gl-item')
    for li in node_li:
        p_name = li.select('.p-name em')[0].text.strip()
        p_price = li.select('.p-price strong')[0].text.strip()
        p_shopnum = li.select('.p-shopnum a')[0].text
        print(p_name, p_price, p_shopnum)

if __name__ == '__main__':
    get_info("https://search.jd.com/Search?keyword=Python")

The remaining pages are loaded asynchronously and require being logged in.

import requests
from bs4 import BeautifulSoup
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
    'Referer': 'https://search.jd.com/Search?keyword=Python&wq=Python&pvid=a0ab812959c341b892bd6f6f8f18eb81&page=9&s=236&click=0',
    'x-requested-with': 'XMLHttpRequest',
    'cookie': 'shshshfpa=6b95b204-3ac4-e77f-bedf-db5fbc0ff0d6-1632968513; shshshfpb=aR5kxPIAwyu1pDc5e4k7zPw%3D%3D; __jdu=1632968514012909987986; qrsc=3; user-key=cf106cec-5c11-42f3-9144-549b67801f4d; PCSYCityID=CN_430000_0_0; pin=jd_649a2ae57440c; unick=jd_649a2ae57440c; _tp=LV3rYbf2Bc52BLQYXibljv8HF%2BkT5d1pk2MCA2pBcwc%3D; _pst=jd_649a2ae57440c; areaId=9; unpl=V2_ZzNtbUZXRBV2Ck4HLhAMVmJWFwkRVEsQJwBOVHwcCwwwA0YKclRCFnUURlVnGFsUZwoZXUdcRxJFCENkexhdBWMGEV5EVnMlMEsWBi8FXABkCxJYRFJDEXINRFJzGV4CZh8RXENeQxN2CUZXexhsBmczE21CUEEWfQ9PUHscXQVkCxNVSl5FHXwPdmR7EVg1ZwITXENWQxV9C0dSS8%2fykbKLrYTy8pW%2f293L5K6Q99HZhSJdS1dBHXQJRlN4KV01ZjNQM5bq5cD%2blZDe1czVtL%2bXjYng55S4zdDu9a%2btwdHZil9dRVVAHXIBQlR%2bGFwGbwIaVUtRSxxyOEdkeA%3d%3d; ipLocation=%u5409%u6797; cn=0; ipLoc-djd=18-1482-48938-54596.3775197224; __jdv=122270672|baidu|-|organic|not set|1633697951431; rkv=1.0; pinId=klEN4yI9KTjptVavHRryabV9-x-f3wj7; ceshi3.com=000; thor=014F0B89864C65355D4D9ABA36F70C8887C1A34CFFF75F0A14A4D8B3F933F047C96FD1E47788BE78930250E97FEA9781E764BEC2BB6DC4EE19C10B29ED51AFDE29B1B31522CF8D5AD518C75CBC3FB3AAE3A012BC4C03205213CF6C0549FFB719391823C99BC86E46D6D6EE4B8F5FC67A4FE96594A005D254210487FAF9B79D963DEADE9B1B19BB9D900273B5CA476BE00A13A04FEBFE1121590A573AA70215F4; shshshfp=6281187a2efac61935e40eb3cb4cbde5; __jda=122270672.1632968514012909987986.1632968514.1633748589.1633866965.8; __jdb=122270672.43.1632968514012909987986|8.1633866965; __jdc=122270672; shshshsID=bef6c9c4880d730782bf5125c2fadfab_13_1633870295571; 3AB9D23F7A4B3C9B=FGN7I762UJHSQHTUYJI4LLNP24WP7ESJ6DFYBRXH2SLTDS6YSTPWX7WN7NOUYJP2NRELDSCIBZZNHRBJGLGWCAZDRU'
}  # request headers

def get_info(url):
    res = requests.get(url, headers=headers)
    soup = BeautifulSoup(res.text, 'lxml')
    node_li = soup.select('.gl-item')
    for li in node_li:
        p_name = li.select('.p-name em')[0].text.strip()
        p_price = li.select('.p-price strong')[0].text.strip()
        p_shopnum = li.select('.p-shopnum a')[0].text
        print(p_name, p_price, p_shopnum)

if __name__ == '__main__':
    get_info("https://search.jd.com/Search?keyword=Python&wq=Python&pvid=a0ab812959c341b892bd6f6f8f18eb81&page=9&s=236&click=0")

3. Image Scraping

3.1 Image Scraping, Method 1 (urlretrieve)

Using the urlretrieve function from urllib.request:

from urllib.request import urlretrieve
import requests
from bs4 import BeautifulSoup
import time  # import the libraries

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
    'cookie': 'Hm_lvt_cb7f29be3c304cd3bb0c65a4faa96c30=1633677537; Hm_lpvt_cb7f29be3c304cd3bb0c65a4faa96c30=1633678143',
    'Referer': 'https://某子图/'
}  # request headers

dowmload_links = []

def get_info(url):  # collect the image links
    res = requests.get(url, headers=headers)
    soup = BeautifulSoup(res.text, 'lxml')
    imgs = soup.select('#pins > li > a > img')
    for img in imgs:
        print(img.get("data-original"))
        dowmload_links.append(img.get("data-original"))

if __name__ == '__main__':
    get_info("某子图")
    for item in dowmload_links:
        urlretrieve(item, "D://photo/" + item[-10:])

Although urlretrieve is convenient, it has no headers parameter, so by itself it falls foul of anti-scraping checks that rely on things like the User-Agent and cookies.
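That said, headers can still be attached indirectly by installing a global opener before calling urlretrieve; the following is a minimal sketch (the header values and URL are placeholders, not part of the original example):

from urllib.request import build_opener, install_opener, urlretrieve

# Every urllib request, including urlretrieve, goes through the installed opener,
# so the headers added here are sent along with the download request.
opener = build_opener()
opener.addheaders = [('User-Agent', 'Mozilla/5.0'), ('Referer', 'https://example.com/')]
install_opener(opener)
urlretrieve("https://example.com/sample.jpg", "D://photo/sample.jpg")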

3.2 Image Scraping, Method 2 (Fetch the Link, Then Save the Content Yourself)

BeautifulSoup

from urllib.request import urlretrieve
import requests
from bs4 import BeautifulSoup
import time  # import the libraries

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
    'cookie': 'Hm_lvt_cb7f29be3c304cd3bb0c65a4faa96c30=1633677537; Hm_lpvt_cb7f29be3c304cd3bb0c65a4faa96c30=1633678143',
    'Referer': 'https://某子图/'
}  # request headers

dowmload_links = []
path = "D://photo/"

def get_info(url):  # fetch each image and save it to disk
    res = requests.get(url, headers=headers)
    soup = BeautifulSoup(res.text, 'lxml')
    imgs = soup.select('#pins > li > a > img')
    for img in imgs:
        print(img)
        data = requests.get(img["data-original"], headers=headers)
        fp = open(path + img['data-original'][-10:], 'wb')
        fp.write(data.content)
        fp.close()
        print(img["data-original"])

if __name__ == '__main__':
    urls = ['https://某子图/page/{}/'.format(str(i)) for i in range(1, 20)]
    for url in urls:
        get_info(url)

Lxml

from urllib.request import urlretrieve
import requests
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
    'cookie': 'Hm_lvt_cb7f29be3c304cd3bb0c65a4faa96c30=1633677537; Hm_lpvt_cb7f29be3c304cd3bb0c65a4faa96c30=1633678143',
    'Referer': 'https://某子图/'
}  # request headers

dowmload_links = []
path = "D://photo/"

def get_info(url):  # fetch each image and save it to disk
    res = requests.get(url, headers=headers)
    selector = etree.HTML(res.text)
    photo_urls = selector.xpath('//*[@id="pins"]/li/a/img/@data-original')
    for photo_url in photo_urls:
        print(photo_url)
        data = requests.get(photo_url, headers=headers)
        fp = open(path + photo_url[-10:], "wb")
        fp.write(data.content)
        fp.close()

if __name__ == '__main__':
    urls = ['https://某子图/page/{}/'.format(str(i)) for i in range(1, 20)]
    for url in urls:
        get_info(url)

3.3 Scraping Qiushibaike Users' Location Data (a Three-Level Crawl)

import csv
import json
import requests
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36(KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
}  # request headers

f = open("city.csv", "wt", newline="")
writer = csv.writer(f)
writer.writerow(("city", "longitude", "latitude"))

def get_info(url):  # scrape each user's location and geocode it
    res = requests.get(url, headers=headers)
    selector = etree.HTML(res.text)
    url_infos = selector.xpath('//div[@class="col1 old-style-col1"]/div/div[1]/a[2]/@href')
    for url in url_infos:
        print("https://www.qiushibaike.com" + url)
        res = requests.get("https://www.qiushibaike.com" + url, headers=headers)
        selector = etree.HTML(res.text)
        locationDetail = selector.xpath('//div[@class="user-statis user-block"]/ul/li[5]/text()')
        try:
            if len(locationDetail) > 1:
                print(locationDetail)
                city = locationDetail[1].split(" · ")[0]
                par = {"address": city, "key": "25889f64896d78c4dcc87d9564ee35c4"}
                url = "http://restapi.amap.com/v3/geocode/geo"
                res = requests.get(url, par)
                json_data = json.loads(res.text)
                try:
                    geo = json_data['geocodes'][0]['location']
                    print(geo)
                    longitude = geo.split(",")[0]
                    latitude = geo.split(",")[1]
                    writer.writerow((city, longitude, latitude))
                except Exception as e:
                    print(e.args)
                    pass
        except Exception as e:
            print(e.args)
            pass

if __name__ == '__main__':
    urls = ['https://www.qiushibaike.com/text/page/{}/'.format(str(i)) for i in range(1, 20)]
    for url in urls:
        get_info(url)

3.4 Scraping PPT Files from ynlibs.com

import json
from multiprocessing import Pool
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
    'cookie': 'JSESSIONID=aaa_p4LeyBuk2hWtUl7Vx; rmbUser=true; username=******; password=******',
}  # request headers

path = "D://"
urls = ['http://www.ynlibs.com/yk/search?type=0&mode=1&cid=50003&qs=&count=10&page={}'.format(str(i)) for i in range(20, 40)]

for url in urls:
    print(url)
    res = requests.get(url, headers=headers)
    json_data = json.loads(res.text)
    print(res.text)
    for i in json_data.get("data").get("data").get("docList"):
        id = i.get("docId")
        res = requests.get("http://www.ynlibs.com/yk/usermodule/dDoc?repId=" + str(id), headers=headers)
        json_data = json.loads(res.text)
        fileUrl = json_data.get("fileUrl")
        for j in fileUrl:
            if j != None:
                data = requests.get(fileUrl, headers=headers)
                fp = open(path + str(id) + ".ppt", 'wb')
                fp.write(data.content)

4. Storing Data in a Database

See also: Python: File Handling and Database Operations (燕双嘤, CSDN blog).

4.1 Scraping the Douban Music Top 250
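The script below writes each record into a MySQL table named music, which must already exist. A minimal sketch for creating it (the column types are assumptions, not taken from the original article):

import mysql.connector

# Create the `music` table the scraper inserts into (assumed schema).
mydb = mysql.connector.connect(host="localhost", user="root", passwd="123456", database="test")
mycursor = mydb.cursor()
mycursor.execute("""
    CREATE TABLE IF NOT EXISTS music (
        name VARCHAR(255),
        singer VARCHAR(255),
        kind VARCHAR(64),
        publish VARCHAR(255),
        time VARCHAR(64)
    )
""")
mydb.commit()

With the table in place, the scraper can insert a row for each album: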

import csv
import json
import re
import requests
from lxml import etree
import mysql.connector

mydb = mysql.connector.connect(host="localhost", user="root", passwd="123456", database="test")
mycursor = mydb.cursor()
sql = "INSERT INTO music (name,singer,kind,publish,time) VALUES (%s, %s, %s, %s, %s)"
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36(KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
}  # request headers

def get_info(url):  # scrape one listing page and each album's detail page
    res = requests.get(url, headers=headers)
    selector = etree.HTML(res.text)
    url_infos = selector.xpath('//div[@class="indent"]/table/tr/td[2]/div/a/@href')
    for url in url_infos:
        print(url)
        res = requests.get(url, headers=headers)
        selector = etree.HTML(res.text)
        sing_name = selector.xpath('//*[@id="wrapper"]/h1/span/text()')
        print(sing_name[0])
        all_datas = selector.xpath('//*[@id="info"]')
        for data in all_datas:
            singer = data.xpath('span[1]/span/a/text()')
            if singer == []:
                singer = data.xpath('span[2]/span/a/text()')
            print(singer[0])
            styles = re.findall('<span class="pl">流派:</span>&nbsp;(.*?)<br/>', res.text, re.S)
            style = styles[0].strip()[0:2]
            print(style)
            publish = re.findall('<span class="pl">出版者:</span>&nbsp;(.*?)<br/>', res.text, re.S)
            index = re.match(r"\S*", publish[0]).span()
            publish = publish[0].strip()[0:index[1]]
            print(publish)
            time = re.findall('<span class="pl">发行时间:</span>&nbsp;(.*?)<br/>', res.text, re.S)
            index = re.match(r"\S*", time[0]).span()
            time = time[0].strip()[0:index[1]]
            print(time)
            val = (str(sing_name[0]), str(singer[0]), str(style), str(publish), str(time))
            mycursor.execute(sql, val)
            mydb.commit()  # commit whenever the table is modified

if __name__ == '__main__':
    urls = ['https://music.douban.com/top250?start={}'.format(str(i)) for i in range(0, 250, 25)]
    for url in urls:
        get_info(url)

4.2 Scraping the Douban Movie Top 250

import csv
import json
import re
import requests
from lxml import etree
import mysql.connector

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
    'Cookie': 'll="108305"; bid=uvmHZibszbw; dbcl2="247557647:VBU9nanuBDI"; push_doumail_num=0; push_noty_num=0; __utmv=30149280.24755; gr_user_id=7b7b1620-ef95-47f6-9073-839e80418af7; _ga=GA1.1.816098258.1633044506; _ga_RXNMP372GL=GS1.1.1633697920.1.0.1633697926.0; __utmz=30149280.1633697985.6.3.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; ck=UFS-; ap_v=0,6.0; __utma=30149280.816098258.1633044506.1633734972.1633788095.9; __utmc=30149280; __utma=223695111.816098258.1633044506.1633789727.1633789727.1; __utmc=223695111; __utmb=223695111.0.10.1633789727; __utmz=223695111.1633789727.1.1.utmcsr=music.douban.com|utmccn=(referral)|utmcmd=referral|utmcct=/subject/2995812/; _pk_ref.100001.4cf6=%5B%22%22%2C%22%22%2C1633789727%2C%22https%3A%2F%2Fmusic.douban.com%2Fsubject%2F2995812%2F%22%5D; _pk_ses.100001.4cf6=*; _vwo_uuid_v2=D5B68F73323D17A288F03BF8DF89E6C0C|53f9872ed02716e5ea14af32b4bd4535; __utmb=30149280.5.10.1633788095; _pk_id.100001.4cf6=d02f83d4c496bd5a.1633789727.1.1633790208.1633789727.'
}  # request headers

def get_info(url):  # scrape one listing page and each movie's detail page
    res = requests.get(url, headers=headers)
    selector = etree.HTML(res.text)
    url_infos = selector.xpath('//*[@id="content"]/div/div[1]/ol/li/div/div[2]/div[1]/a/@href')
    for url in url_infos:
        print(url)
        res = requests.get(url, headers=headers)
        selector = etree.HTML(res.text)
        sing_name = selector.xpath('//*[@id="wrapper"]/div[1]/h1/span[1]/text()')
        print(sing_name)
        all_datas = selector.xpath('//*[@id="info"]')
        for data in all_datas:
            director = data.xpath('span[1]/span[2]/a/text()')
            print(director)
            actors = data.xpath('span[3]/span[2]/a/text()')
            print(actors)
            style = re.findall('<span property="v:genre">(.*?)</span>', res.text, re.S)
            print(style)
            country = re.findall('<span class="pl">制片国家/地区:</span>(.*?)<br/>', res.text, re.S)
            print(country)
            release_time = re.findall('上映日期:</span>.*?>(.*?)</span>', res.text, re.S)
            print(release_time)
            time = re.findall('片长:</span>.*?>(.*?)</span>', res.text, re.S)
            print(time)
            score = data.xpath('//*[@id="interest_sectl"]/div[1]/div[2]/strong/text()')
            print(score)

if __name__ == '__main__':
    urls = ['https://movie.douban.com/top250?start={}'.format(str(i)) for i in range(0, 250, 25)]
    for url in urls:
        get_info(url)

4.3 Scraping Job Listings from Lagou

import csv
import requests
from lxml import etree

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36(KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36',
    'cookie': 'JSESSIONID=ABAAABAABEIABCIB43D3E4C970DD31B321CF69247A19EF7; WEBTJ-ID=20211015220328-17c8443cbb2b6-027aeed09fe4d7-5a402f16-2073600-17c8443cbb375e; RECOMMEND_TIP=true; privacyPolicyPopup=false; user_trace_token=20211015220331-d885d7e2-e8e1-4bd3-9f9a-f13a9d63b33f; PRE_UTM=; PRE_LAND=https%3A%2F%2Fwww.lagou.com%2F; LGUID=20211015220331-82e7acdf-b026-4444-a787-01f288e1861b; _ga=GA1.2.776691368.1634306610; _gid=GA1.2.1643548909.1634306610; LGSID=20211015220331-016659bb-6d09-46a7-bc50-5732a39aec3d; PRE_HOST=www.baidu.com; PRE_SITE=https%3A%2F%2Fwww.baidu.com%2Flink%3Furl%3D3wdVF4Q-oI-sCHqMoJr8iQ12y%5FrkkpG-Fgt9yX0-iPO%26wd%3D%26eqid%3Dd7d9ee120001a8150000000661698a2c; sajssdk_2015_cross_new_user=1; sensorsdata2015session=%7B%7D; index_location_city=%E5%85%A8%E5%9B%BD; Hm_lvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1634306610,1634306637; TG-TRACK-CODE=index_navigation; __lg_stoken__=951a9a09f09fe204845d58bf00941146469f5d8d9faabc414c5e9a0abc3559e9e95a378f393e6b35db2524f6f12c08e9751559b856e932dbc3eea6d05663753d37a5579a3432; X_MIDDLE_TOKEN=670e067f1f8c019f8c1536e2cb3de120; SEARCH_ID=a1f7bce5b8f54f9b858f2984780ccfc7; _gat=1; gate_login_token=c99579b65bb700ecc7bd105f8e3f2314633ea0c78ed20469f6ab380ea2733f66; LG_LOGIN_USER_ID=273889413db51bf1d48467a531bf75844c9c20d8da709117a93f7465e45c760a; LG_HAS_LOGIN=1; _putrc=98D7C7ED8192071B123F89F2B170EADC; login=true; unick=%E7%87%95%E5%8F%8C%E5%98%A4; showExpriedIndex=1; showExpriedCompanyHome=1; showExpriedMyPublish=1; hasDeliver=0; X_HTTP_TOKEN=fba6aa0e0541c2f92597034361aae666f9da56088e; __SAFETY_CLOSE_TIME__15083126=1; Hm_lpvt_4233e74dff0ae5bd0a3d81c6ccf756e6=1634307951; LGRID=20211015222553-639d5fc9-bd52-49e4-9588-dc983ffb036e; sensorsdata2015jssdkcross=%7B%22distinct_id%22%3A%2215083126%22%2C%22first_id%22%3A%2217c8443d3bc406-08f1ccd659aaa9-5a402f16-2073600-17c8443d3bd430%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24os%22%3A%22Windows%22%2C%22%24browser%22%3A%22Chrome%22%2C%22%24browser_version%22%3A%2294.0.4606.81%22%7D%2C%22%24device_id%22%3A%2217c8443d3bc406-08f1ccd659aaa9-5a402f16-2073600-17c8443d3bd430%22%7D'
}  # request headers

def get_info(url):  # scrape one listing page
    res = requests.get(url, headers=headers)
    selector = etree.HTML(res.text)
    print(res.text)
    datas = selector.xpath('//div[@class="list_item_top"]/div[@class="position"]')
    print(datas)
    for data in datas:
        name = data.xpath('div[1]/a/h3/text()')
        add = data.xpath('div[1]/span/text()')
        dolor = data.xpath('div[2]/div/span/text()')
        print(name, add, dolor)

if __name__ == '__main__':
    urls = ['https://www.lagou.com/zhaopin/Java/{}/'.format(str(i)) for i in range(2, 5)]
    for url in urls:
        get_info(url)

4.4 Scraping Sina Weibo Trending Searches

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from lxml import etree
browser = webdriver.Chrome()
wait = WebDriverWait(browser,5)
def searcher(url):
    browser.get(url)
    browser.maximize_window()
    time.sleep(10)
    # scroll down the page step by step
    js = "return action=document.body.scrollHeight"
    # the scroll position starts at 0
    height = 0
    # total scrollable height of the current page
    new_height = browser.execute_script(js)
    while height < new_height:
        # move the scrollbar toward the bottom of the page
        for i in range(height, new_height, 500):
            browser.execute_script('window.scrollTo(0, {})'.format(i))
            print(i, height, new_height)
            time.sleep(0.5)
            i += 100
        height = new_height
        time.sleep(2)
        new_height = browser.execute_script(js)
    parse_page()

def parse_page():
    html = browser.page_source
    selector = etree.HTML(html)
    name = browser.find_elements_by_xpath('//div[@class="vue-recycle-scroller__item-wrapper"]/div/div/div/a/div/div/div/div[1]/div[2]')
    for i in name:
        print(i.text)

if __name__ == '__main__':
    searcher("https://weibo.com/newlogin?tabtype=search&url=")

5. Multiprocess Crawlers

5.1 Multithreading and Multiprocessing

When a computer runs a program, it creates a process containing the code and its state. Processes are executed by the computer's one or more CPUs; at any given moment each CPU runs only one process and switches rapidly between them, which gives the impression that several programs are running at once. Likewise, within a single process, execution switches between threads, with each thread running a different part of the program.

An analogy: a large factory produces toys; the factory contains several workshops, each with a different function, producing different parts; each workshop has several workers who cooperate and share resources to produce a part. Here the factory corresponds to the crawler, each workshop to a process, and each worker to a thread. With multiple processes and threads, a crawler can run quickly and efficiently.

5.2 How to Use Multiprocessing

Multiprocess crawling in Python uses the multiprocessing library, which manages worker processes through a process pool. The basic usage is:

from multiprocessing import Pool
pool = Pool(processes=4)
pool.map(func,iterable[,chunksize])
  • The first line imports the Pool module from the multiprocessing library.
  • The second line creates the process pool; the processes parameter sets the number of worker processes.
  • The third line runs the processes through map(): the func argument is the function to run (for a crawler, the crawling function itself), and iterable is the sequence to iterate over, typically a list of URLs. A minimal runnable sketch follows this list.
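Putting those three steps together, a minimal runnable sketch looks like this (the fetch function and URL list are placeholders, not part of the original example):

import requests
from multiprocessing import Pool

def fetch(url):  # placeholder crawler function
    return requests.get(url).status_code

if __name__ == '__main__':  # required guard when using multiprocessing on Windows
    urls = ['https://www.pythontab.com/'] * 8
    pool = Pool(processes=4)         # a pool of 4 worker processes
    results = pool.map(fetch, urls)  # run fetch over the URL list in parallel
    pool.close()
    pool.join()
    print(results)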

5.3 Performance Comparison (Qiushibaike)

import time
from multiprocessing import Pool
import requests
import re

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36'
}

def get_info(url):
    res = requests.get(url)
    ids = re.findall('<h2>(.*?)</h2>', res.text, re.S)
    contents = re.findall('<div class="content">.*?<span>(.*?)</span>', res.text, re.S)
    laughs = re.findall('<span class="stats-vote"><i class="number">(\d+)</i>', res.text, re.S)
    comments = re.findall('<i class="number">(\d+)</i> 评论', res.text, re.S)
    for id, content, laugh, comment in zip(ids, contents, laughs, comments):
        info = {
            'id': id,
            'content': content,
            'laugh': laugh,
            'comment': comment
        }
        return info

if __name__ == '__main__':
    urls = ['http://www.qiushibaike.com/text/page/{}/'.format(str(i)) for i in range(1, 36)]
    start_1 = time.time()
    for url in urls:
        get_info(url)
    end_1 = time.time()
    print("串行爬虫", end_1 - start_1)
    start_2 = time.time()
    pool = Pool(processes=2)
    pool.map(get_info, urls)
    end_2 = time.time()
    print("两个进程", end_2 - start_2)
    start_3 = time.time()
    pool = Pool(processes=8)
    pool.map(get_info, urls)
    end_3 = time.time()
    print("八个进程", end_3 - start_3)
=================================================
串行爬虫 20.030537605285645
两个进程 13.452766418457031
八个进程 5.213830471038818

6. Proxy IPs

6.1 Using Proxy IPs

Against sites that detect and limit access by IP, you can use proxy IPs. A proxy IP fetches network content on the user's behalf: it lets the crawler mask its identity, get around per-IP access limits, and hide its real IP, so it is less likely to be blocked by the site's anti-scraping measures.

Using a proxy with the requests library is straightforward: build a dictionary describing the proxy and pass it via the proxies parameter when sending the HTTP request. To use several proxies, put the dictionaries in a list and pick one at random for each request.

import requests
from bs4 import BeautifulSoup
from lxml import etree

url = "https://www.pythontab.com/"
proxies = [{'http': 'http://121.232.148.167:9000'},
           {'http': 'http://39.105.28.28:8181'},
           {'http': 'http://113.195.18.133:9999'}]

for proxy in proxies:
    try:
        r = requests.get(url, proxies=proxy)
    except:
        print("此代理IP不可用")
    else:
        print("此代理IP可用", proxy)
        print("响应状态码", r.status_code)
========================================
此代理IP可用 {'http': 'http://121.232.148.167:9000'}
响应状态码 200
此代理IP可用 {'http': 'http://39.105.28.28:8181'}
响应状态码 200
此代理IP可用 {'http': 'http://113.195.18.133:9999'}
响应状态码 200

6.2 Scraping Tutorials from a Python Developer Community (pythontab.com)

import requests
from bs4 import BeautifulSoup
from lxml import etree

proxies = {'http': 'http://121.232.148.167:9000'}

def get_infos(url):
    r = requests.get(url, proxies=proxies)
    html = etree.HTML(r.text)
    all_dates = html.xpath('//ul[@id="catlist"]/li')
    for date in all_dates:
        name = date.xpath('h2/a/text()')
        content = date.xpath('p/text()')
        print(name, content)

if __name__ == '__main__':
    urls = ["https://www.pythontab.com/html/pythonhexinbiancheng/{}.html".format(str(i)) for i in range(2, 28)]
    for url in urls:
        get_infos(url)

6.3 Scraping Bilibili Video Info (lxml)

import random
import requests
from bs4 import BeautifulSoup
from lxml import etree
proxies = [{'http':'http://121.232.148.167:9000'},{'http':'http://39.105.28.28:8181'},{'http':'http://113.195.18.133:9999'}
]
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4651.0 Safari/537.36',
}
def get_infos(url):
    r = requests.get(url, proxies=random.choice(proxies))
    html = etree.HTML(r.text)
    all_dates = html.xpath('//li[@class="video-item matrix"]/div')
    for date in all_dates:
        name = date.xpath('div[1]/a/@title')
        href = date.xpath('div[1]/a/@href')
        watch = str(date.xpath('div[3]/span[1]/text()')).replace("\\n", "").replace(" ", "")
        danmu = str(date.xpath('div[3]/span[2]/text()')).replace("\\n", "").replace(" ", "")
        time = str(date.xpath('div[3]/span[3]/text()')).replace("\\n", "").replace(" ", "")
        up = date.xpath('div[3]/span[4]/a/text()')
        print(name, danmu, watch, time, up, href)

if __name__ == '__main__':
    urls = ["https://search.bilibili.com/all?keyword=Python&page={}".format(str(i)) for i in range(1, 10)]
    for url in urls:
        get_infos(url)
====================================================
['花了2万多买的Python教程全套,现在分享给大家,入门到精通(Python全栈开发教程)'] ['12.7万'] ['916.2万'] ['2020-09-07'] ['Python_子木'] ['//www.bilibili.com/video/BV1wD4y1o7AS?from=search']
['黑马程序员Python教程_600集Python从入门到精通教程(懂中文就能学会)'] ['18.7万'] ['1097.6万'] ['2017-09-05'] ['黑马程序员'] ['//www.bilibili.com/video/BV1ex411x7Em?from=search']
['[小甲鱼]零基础入门学习Python'] ['29.9万'] ['1639.6万'] ['2016-03-09'] ['IT搬運工'] ['//www.bilibili.com/video/BV1xs411Q799?from=search']
['【2021新版】全套Python教程-750集完整版(基础+高级+项目)'] ['3998'] ['28.8万'] ['2021-07-14'] ['IT峰播'] ['//www.bilibili.com/video/BV1HU4y1n7CP?from=search']
['【Python教程】《零基础入门学习Python》最新版'] ['4.4万'] ['290.1万'] ['2019-05-11'] ['鱼C-小甲鱼'] ['//www.bilibili.com/video/BV1c4411e77t?from=search']
['膜拜!清华大牛终于把Python整成了漫画,让人茅塞顿开!'] ['296'] ['10.4万'] ['2021-06-30'] ['图灵学院教程'] ['//www.bilibili.com/video/BV1264y1Q7uF?from=search']
