Table of Contents

  • Preface
  • I. Preparation
  • II. Approach
    • 1. Getting the cookie
    • 2. Opening the fans list
    • 3. Following back
    • 4. Liking
    • 5. Pagination
  • III. Implementation
  • Summary

Preface

I wanted to build an auto-like and follow-back tool for a certain review site (Dianping) in Python, since liking everything by hand wastes too much time.


I. Preparation

  1. Python: 3.7
  2. Chrome driver: chromedriver.exe (download the one that matches your Chrome version)
  3. Git repository: code

II. Approach

Since this is implemented directly with selenium, the flow is basically the same as a user's manual actions: locate an element first, then perform the corresponding action on it.

1. Getting the cookie

Since we are liking our fans' posts automatically, we have to be logged in first, which means getting hold of Chrome's cookies. (Note: an automatic login feature would also work, but that felt like too much trouble, so I skipped it.) Chrome's cookie path is C:\Users\<username>\AppData\Local\Google\Chrome\User Data\Default. (Note: close the browser before reading the cookies, otherwise the program will raise an error.)
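As a minimal sketch of the profile-reuse idea (helper names are my own, not from any library): derive the profile directory from %LOCALAPPDATA% and hand it to Chrome via --user-data-dir, so the Selenium-driven browser starts already logged in with the Dianping cookies present.

```python
import os

def chrome_user_data_path():
    # Default Chrome profile location on Windows:
    # %LOCALAPPDATA%\Google\Chrome\User Data
    return os.path.join(os.environ['LOCALAPPDATA'], 'Google', 'Chrome', 'User Data')

def profile_argument():
    # Chrome command-line switch that makes a Selenium-driven Chrome
    # start with the logged-in profile, cookies included.
    return '--user-data-dir=' + chrome_user_data_path()
```

Chrome refuses to share a profile between two running instances, which is exactly why the browser has to be closed before the script starts.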

2. Opening the fans list

This part is simple: locate the elements via XPath and simulate click events, opening Home -> Fans in turn.

Note: the click opens a new tab, so we have to switch to the newly opened tab before acting on it; otherwise the elements cannot be located.

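Selenium keeps window handles in the order the windows were opened, so jumping to the freshly opened tab is a one-liner; a minimal sketch (the helper name is my own):

```python
def switch_to_newest_window(driver):
    # window_handles is ordered by opening time, so the last handle
    # belongs to the tab that the click just opened.
    driver.switch_to.window(driver.window_handles[-1])
```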
3. Following back

Looking at the page, the follow button only appears while the mouse hovers over a fan's card, so we just need to locate the element, simulate a mouse hover, and then click the follow button.

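The hover-then-click step can be sketched as below. ActionChains is Selenium's real API for synthetic mouse movement; span.J_add is the follow-button selector the finished script uses, and the ActionChains class is injected as a parameter purely so the flow can be exercised without a real browser:

```python
def follow_fan(driver, fan_card, action_chains_cls):
    # the follow button exists in the DOM but only shows on hover
    button = fan_card.find_element_by_css_selector('span.J_add')
    # simulate the hover so the button becomes clickable
    action_chains_cls(driver).move_to_element(fan_card).perform()
    button.click()
```

With a live browser this would be called as follow_fan(driver, card, ActionChains).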
4. Liking

The liking logic is similar: open the fan's detail page, go to their reviews page, collect the articles that have not been liked yet, and like them.

Note: after opening a fan's page we would otherwise have to navigate back, so to keep things simple each fan is opened in a fresh tab, which is closed once the liking is done.

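The open-a-tab, like, close-and-return dance can be sketched as a small routine; here driver stands for the Selenium WebDriver and do_like for whatever clicks the heart icons (both assumptions for illustration):

```python
def like_in_new_tab(driver, url, do_like):
    # open the fan's page in a fresh tab instead of navigating away
    driver.execute_script('window.open("{}")'.format(url))
    driver.switch_to.window(driver.window_handles[-1])
    do_like(driver)
    # close the tab and land back on the fan list
    driver.close()
    driver.switch_to.window(driver.window_handles[-1])
```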
5. Pagination

Both the fans list and a fan's articles can span multiple pages, so we need pagination. This is also simple: locate the "next page" element and simulate a click on it. The stop condition is simple too: on the last page the "next page" element disappears, so we just check whether that element still exists.

On the last page the next-page link is gone (the original screenshot showing this is omitted).
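Stripped of the selenium specifics, the pagination loop is: process the current page, look for the a.page-next element, click it if present, stop once it is gone. A sketch with callables standing in for the element lookups:

```python
def for_each_page(get_items, handle, find_next_page):
    # crawl page after page until the "next page" element disappears
    while True:
        for item in get_items():
            handle(item)
        next_button = find_next_page()  # None once we are on the last page
        if next_button is None:
            break
        next_button.click()
```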


III. Implementation

The final code:

import abc
import base64
import getopt
import json
import logging
import os
import pickle
import random
import shutil
import signal
import sqlite3
import sys
import time
import winreg

import fake_useragent
import requests
import urllib3
from bs4 import BeautifulSoup
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from selenium import webdriver
from selenium.common.exceptions import InvalidArgumentException
from selenium.webdriver import ActionChains

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
logging.basicConfig(level=logging.INFO)

# number of articles to like per fan
thumbs_up_count = 1


class google_util():
    """Chrome utility class."""

    @classmethod
    def __dpapi_decrypt(cls, encrypted):
        import ctypes.wintypes

        class DATA_BLOB(ctypes.Structure):
            _fields_ = [('cbData', ctypes.wintypes.DWORD),
                        ('pbData', ctypes.POINTER(ctypes.c_char))]

        p = ctypes.create_string_buffer(encrypted, len(encrypted))
        blobin = DATA_BLOB(ctypes.sizeof(p), p)
        blobout = DATA_BLOB()
        retval = ctypes.windll.crypt32.CryptUnprotectData(
            ctypes.byref(blobin), None, None, None, None, 0, ctypes.byref(blobout))
        if not retval:
            raise ctypes.WinError()
        result = ctypes.string_at(blobout.pbData, blobout.cbData)
        ctypes.windll.kernel32.LocalFree(blobout.pbData)
        return result

    @classmethod
    def __aes_decrypt(cls, encrypted_txt):
        browser_data_path = google_util.get_user_data_path()
        with open(os.path.join(browser_data_path, r'Local State'), encoding='utf-8', mode="r") as f:
            jsn = json.loads(str(f.readline()))
        encoded_key = jsn["os_crypt"]["encrypted_key"]
        encrypted_key = base64.b64decode(encoded_key.encode())
        # strip the 'DPAPI' prefix
        encrypted_key = encrypted_key[5:]
        key = google_util.__dpapi_decrypt(encrypted_key)
        nonce = encrypted_txt[3:15]
        cipher = Cipher(algorithms.AES(key), None, backend=default_backend())
        cipher.mode = modes.GCM(nonce)
        decryptor = cipher.decryptor()
        return decryptor.update(encrypted_txt[15:])

    @classmethod
    def __chrome_decrypt(cls, encrypted_txt):
        if sys.platform == 'win32':
            try:
                if encrypted_txt[:4] == b'\x01\x00\x00\x00':
                    decrypted_txt = google_util.__dpapi_decrypt(encrypted_txt)
                    return decrypted_txt.decode()
                elif encrypted_txt[:3] == b'v10':
                    decrypted_txt = google_util.__aes_decrypt(encrypted_txt)
                    # drop the 16-byte GCM tag
                    return decrypted_txt[:-16].decode()
            except WindowsError:
                return None
        else:
            raise WindowsError

    @classmethod
    def get_cookie(cls, domain):
        browser_data_path = google_util.get_user_data_path()
        sql = f'SELECT name, encrypted_value as value FROM cookies where host_key like "%{domain}%"'
        file_name = os.path.join(browser_data_path, r'Default\Cookies')
        con = sqlite3.connect(file_name)
        con.row_factory = sqlite3.Row
        cur = con.cursor()
        cur.execute(sql)
        cookie = ''
        for row in cur:
            if row['value'] is not None:
                name = row['name']
                value = google_util.__chrome_decrypt(row['value'])
                if value is not None:
                    cookie += name + '=' + value + ';'
        return cookie

    @classmethod
    def get_install_path(cls):
        """Chrome install path."""
        try:
            key = winreg.OpenKey(
                winreg.HKEY_LOCAL_MACHINE,
                r"SOFTWARE\Clients\StartMenuInternet\Google Chrome\DefaultIcon")
        except FileNotFoundError:
            return ''
        value, type = winreg.QueryValueEx(key, "")  # registry default value
        full_file_name = value.split(',')[0]  # drop the part after the comma
        dir_name, file_name = os.path.split(full_file_name)  # split dir and file name
        return dir_name

    @classmethod
    def get_user_data_path(cls):
        """Chrome user-data path."""
        user_path = os.environ['LOCALAPPDATA']
        return user_path + r'\Google\Chrome\User Data'

    @classmethod
    def copy_user_data(cls, target_path=None):
        if target_path is None or target_path == '':
            target_path = os.path.split(os.path.realpath(__file__))[0]
        source_path = google_util.get_user_data_path()
        shutil.copytree(source_path, target_path)


class fan_article(object):
    """A fan's article."""

    def __init__(self, article_id, article_name):
        self.__article_id = article_id
        self.__article_name = article_name

    @property
    def article_name(self):
        return self.__article_name

    @article_name.setter
    def article_name(self, article_name):
        self.__article_name = article_name

    @property
    def article_id(self):
        return self.__article_id

    @article_id.setter
    def article_id(self, article_id):
        self.__article_id = article_id


class selenium_fan_article(fan_article):
    """Fan article as located by selenium."""

    def __init__(self, article_id, article_name, ele):
        super(selenium_fan_article, self).__init__(article_id, article_name)
        self.__ele = ele

    @property
    def ele(self):
        return self.__ele

    @ele.setter
    def ele(self, ele):
        self.__ele = ele


class requests_fan_article(fan_article):
    """Fan article as fetched by requests."""

    def __init__(self, article_id, article_name, referer_url):
        super(requests_fan_article, self).__init__(article_id, article_name)
        self.__referer_url = referer_url

    @property
    def referer_url(self):
        return self.__referer_url

    @referer_url.setter
    def referer_url(self, referer_url):
        self.__referer_url = referer_url


class fan(object):
    """A fan."""

    def __init__(self, id, name, url):
        self.__id = id
        self.__name = name
        self.__url = url

    @property
    def name(self):
        return self.__name

    @name.setter
    def name(self, name):
        self.__name = name

    @property
    def id(self):
        return self.__id

    @id.setter
    def id(self, id):
        self.__id = id

    @property
    def url(self):
        return self.__url

    @url.setter
    def url(self, url):
        self.__url = url

    def __eq__(self, other):
        return self.id == other.id


class selenium_fan(fan):
    """Fan as located by selenium."""

    def __init__(self, id, name, url, ele):
        super(selenium_fan, self).__init__(id, name, url)
        self.__ele = ele

    @property
    def ele(self):
        return self.__ele

    @ele.setter
    def ele(self, ele):
        self.__ele = ele

    def set_ele(self, ele):
        self.__ele = ele


class requests_fan(fan):
    """Fan as fetched by requests."""

    def __init__(self, id, name, url, review_url):
        super(requests_fan, self).__init__(id, name, url)
        self.__review_url = review_url

    @property
    def review_url(self):
        return self.__review_url

    @review_url.setter
    def review_url(self, review_url):
        self.__review_url = review_url


class dianping_reptile(metaclass=abc.ABCMeta):
    """Dianping crawler abstract base class."""

    def __init__(self, is_continue=False, is_follow_fans=True, is_thumbs_up=True):
        self._cookie = None
        self._ua = fake_useragent.UserAgent(path='fake_useragent.json')
        # likes per fan
        self._thumbs_up_count = thumbs_up_count
        self._base_url = 'http://www.dianping.com'
        # fans liked in previous runs
        self._already_thumbs_up_fans = []
        # fans liked in this run
        self._this_thumbs_up_fans = []
        # pickle file path
        self._ser_path = 'fans.ser'
        # initialised yet?
        self._init = False
        # resume from the previous run?
        self._is_continue = is_continue
        self._is_stop = False
        # like automatically?
        self._is_thumbs_up = is_thumbs_up
        # follow back?
        self._is_follow_fans = is_follow_fans

    def _get_cookie(self):
        if self._cookie is None:
            self._cookie = google_util.get_cookie('.dianping.com')
            logging.debug('got cookie: {}'.format(self._cookie))
        return self._cookie

    def _init_reptile(self):
        """Initialise the crawler."""
        self._init = True

    def exit(self):
        """Shut the crawler down, calling the destroy hook."""
        logging.info('############### crawler exit ###############')
        if self._is_stop is False:
            # when liking was on, persist the fans we liked
            if self._is_thumbs_up:
                self._write_fans()
            logging.info('fans liked this run: {}'.format(len(self._this_thumbs_up_fans)))
            self._destroy()
            self._is_stop = True

    def start(self):
        """Crawler entry point; defines the template (execution steps)."""
        begin_time = time.time()
        if self._init is False:
            logging.info('############### crawler init ###############')
            self._init_reptile()
            self._read_fans()
            cookie = self._get_cookie()
            self._before(cookie)
        try:
            logging.info('############### crawler start ###############')
            self._reptile()
        except Exception as e:
            raise e
        finally:
            end_time = time.time()
            run_time = end_time - begin_time
            logging.info('total run time: {0}'.format(run_time))
            self.exit()

    def _read_fans(self):
        """Load the fans that were liked in the previous run."""
        if self._is_thumbs_up is False:
            return
        if self._is_continue:
            logging.info('############### loading fans liked in the previous run')
            if os.access(self._ser_path, os.R_OK) is False:
                logging.warning('######### file missing or not readable')
                return
            if os.path.getsize(self._ser_path) == 0:
                logging.warning('######### no fans were liked in the previous run')
                return
            with open(self._ser_path, 'rb') as f:
                self._already_thumbs_up_fans = pickle.load(f)
        else:
            logging.info('############### starting over; deleting fans liked in the previous run')
            if os.path.isfile(self._ser_path):
                os.remove(self._ser_path)

    def _write_fans(self):
        """Persist the fans liked in this run."""
        logging.info('############### saving fans liked in this run')
        with open(self._ser_path, "ab+") as f:
            if len(self._this_thumbs_up_fans) > 0:
                for fn in self._this_thumbs_up_fans:
                    # selenium elements cannot be pickled
                    if hasattr(fn, 'set_ele'):
                        fn.set_ele(None)
                pickle.dump(self._this_thumbs_up_fans, f)

    def _before(self, cookie):
        """Hook: preparation before crawling."""
        pass

    def _destroy(self):
        """Hook: called when the crawler finishes."""
        pass

    def _follow_fans(self, fan):
        """Follow back."""
        if self._is_follow_fans:
            if self._do_follow_fans(fan):
                logging.info('################ followed back "{0}"'.format(fan.name))

    @abc.abstractmethod
    def _do_follow_fans(self, fan):
        pass

    def _thumbs_up_fan(self, fan):
        """Like one fan's articles."""
        if self._is_thumbs_up is False:
            return
        if self._is_continue and fan in self._already_thumbs_up_fans:
            return
        self._sleep_time()
        logging.debug('################ opening detail page of "{0}"'.format(fan.name))
        self._goto_fan_review(fan)
        self._thumbs_up_article(fan)

    def _reptile(self):
        """Crawl the fans, page by page."""
        while True:
            fans = self._get_fans()
            for fan in fans:
                self._follow_fans(fan)
                self._thumbs_up_fan(fan)
            if self._has_next_page_fan():
                self._sleep_time()
                logging.debug('################ next fans page')
                self._next_page_fan()
            else:
                break

    @abc.abstractmethod
    def _goto_fan_review(self, fan):
        """Open the fan's review page."""
        pass

    def _thumbs_up_article(self, fan):
        """Like the fan's articles."""
        logging.debug('################ before-like hook')
        self._before_thumbs_up_article(fan)
        count = 0
        while True:
            self._sleep_time()
            # all not-yet-liked articles on the current page
            articles = self._get_unthumbs_up_article(fan)
            if len(articles) == 0:
                logging.info('############## fan "{0}" has no articles, or none left to like'.format(fan.name))
                break
            # shuffle the article order
            random.shuffle(articles)
            for article in articles:
                self._sleep_time()
                if self._thumbs_up(article):
                    logging.info('############## liked article "{1}" of fan "{0}"'.format(fan.name, article.article_name))
                    count += 1
                else:
                    logging.error('############## failed to like article "{1}" of fan "{0}"'.format(fan.name, article.article_name))
                if self._is_over_thumbs_count(count):
                    break
            if self._is_over_thumbs_count(count):
                logging.info('####################### fan "{}" fully liked #######################'.format(fan.name))
                break
            # is there another page of articles?
            if self._has_next_page_article():
                logging.debug('################ next articles page')
                self._next_page_article(fan)
            else:
                logging.info('################ fan "{0}" has no further article pages; liking incomplete'.format(fan.name))
                break
        self._this_thumbs_up_fans.append(fan)
        self._sleep_time()
        logging.debug('################ after-like hook')
        self._after_thumbs_up_article(fan)

    def _before_thumbs_up_article(self, fan):
        """Hook: before liking."""
        pass

    def _after_thumbs_up_article(self, fan):
        """Hook: after liking."""
        pass

    def _is_over_thumbs_count(self, count=0):
        """Has the like quota for this fan been reached?"""
        if self._thumbs_up_count > count:
            return False
        return True

    @abc.abstractmethod
    def _next_page_fan(self):
        """Go to the next fans page."""
        pass

    @abc.abstractmethod
    def _has_next_page_fan(self):
        """Is there another fans page?"""
        pass

    @abc.abstractmethod
    def _has_next_page_article(self):
        """Is there another articles page?"""
        pass

    @abc.abstractmethod
    def _next_page_article(self, fan):
        """Go to the next articles page."""
        pass

    @abc.abstractmethod
    def _get_fans(self):
        """Fans on the current page."""
        pass

    @abc.abstractmethod
    def _thumbs_up(self, article):
        """Like one article."""
        pass

    @abc.abstractmethod
    def _get_unthumbs_up_article(self, fan):
        """Not-yet-liked articles on the current page."""
        pass

    def _sleep_time(self):
        time.sleep(random.randint(3, random.randint(10, 15)))

    @property
    def base_url(self):
        return self._base_url

    @property
    def fan_thumbs_up_count(self):
        return self._thumbs_up_count


class selenium_dianping_reptile(dianping_reptile):
    """selenium implementation."""

    def __init__(self, driver_path, is_continue=False, is_follow_fans=True, is_thumbs_up=True, is_show=True):
        super(selenium_dianping_reptile, self).__init__(is_continue, is_follow_fans, is_thumbs_up)
        self.__driver_path = driver_path
        self.__driver = None
        self.__options = None
        self.__is_show = is_show

    def __get_options(self):
        """Chrome driver options."""
        if self.__options is None:
            self.__options = webdriver.ChromeOptions()
            # reuse the logged-in profile so the cookies are present
            self.__options.add_argument(r"--user-data-dir=" + google_util.get_user_data_path())
            # keep the site from detecting selenium; also drop useless logs
            self.__options.add_argument("--disable-blink-features=AutomationControlled")
            self.__options.add_experimental_option("excludeSwitches", ['enable-automation', 'enable-logging'])
            self.__options.add_argument('User-Agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36')
            # run without a window
            if self.__is_show is False:
                self.__options.add_argument('headless')
        return self.__options

    def get_driver(self):
        """Chrome driver (lazily created)."""
        if self.__driver is None:
            try:
                self.__driver = webdriver.Chrome(executable_path=self.__driver_path, options=self.__get_options())
            except InvalidArgumentException as e:
                raise Exception('Chrome is already open, error: {}'.format(e.msg))
            # keep the site from seeing navigator.webdriver
            self.__driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", {
                "source": """Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"""
            })
        return self.__driver

    def _init_reptile(self):
        driver = self.get_driver()
        # open the home page
        driver.get(self._base_url)
        self._sleep_time()
        driver.find_element_by_xpath('//*[@id="top-nav"]/div/div[2]/span[1]/a[1]').click()
        # switch to the newest window
        self.__switch_newest_window()
        self._sleep_time()
        # open the fans page
        driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[3]/div/div[1]/div[1]/div/div[1]/ul/li[2]/a').click()
        # or open the following list instead:
        # driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[3]/div/div[1]/div[1]/div/div[1]/ul/li[1]/a').click()
        self._init = True

    def __switch_newest_window(self):
        """Switch to the newest window."""
        windows = self.__driver.window_handles
        self.__driver.switch_to.window(windows[-1])

    def _goto_fan_review(self, fan):
        cmd = 'window.open("{}")'
        self.__driver.execute_script(cmd.format(fan.url))
        self.__switch_newest_window()
        self.__driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[1]/div/div/div/div[2]/div[3]/ul/li[2]/a').click()

    def _next_page_article(self, fan):
        """Click next articles page."""
        self.__driver.find_element_by_css_selector('a.page-next').click()

    def _next_page_fan(self):
        self.__driver.find_element_by_css_selector('a.page-next').click()

    def _has_next_page_fan(self):
        try:
            self.__driver.find_element_by_css_selector('a.page-next')
            return True
        except:
            return False

    def _has_next_page_article(self):
        try:
            self.__driver.find_element_by_css_selector('a.page-next')
            return True
        except:
            return False

    def _do_follow_fans(self, fan):
        try:
            # the follow button only appears on hover
            add_ele = fan.ele.find_element_by_css_selector('span.J_add')
            ActionChains(self.__driver).move_to_element(fan.ele).perform()
            self._sleep_time()
            add_ele.click()
            return True
        except:
            return False

    def _get_fans(self):
        fan_eles = self.__driver.find_elements_by_xpath('/html/body/div[1]/div[2]/div[2]/div/div/div[2]/div/ul/li')
        fans = []
        for fan_ele in fan_eles:
            e = fan_ele.find_element_by_tag_name('h6').find_element_by_tag_name('a')
            fans.append(selenium_fan(e.get_attribute('user-id'), e.text, e.get_attribute('href'), fan_ele))
        return fans

    def _get_unthumbs_up_article(self, fan):
        article_eles = self.__driver.find_elements_by_xpath('//*[@id="J_review"]/div/ul/li/div')
        if len(article_eles) > 0:
            articles = []
            for article_ele in article_eles:
                try:
                    # the marker of a not-yet-liked article
                    article_ele.find_element_by_css_selector('i.heart-s1')
                    a_ele = article_ele.find_element_by_tag_name('a')
                    id_ele = article_ele.find_element_by_css_selector('a.J_flower')
                    articles.append(selenium_fan_article(id_ele.get_attribute('data-id'), a_ele.text, article_ele))
                except:
                    continue
            return articles
        return []

    def _after_thumbs_up_article(self, fan):
        """Close the tab once liking is done."""
        self.__driver.close()
        self.__switch_newest_window()

    def _thumbs_up(self, article):
        try:
            i_ele = article.ele.find_element_by_css_selector('i.heart-s1')
            i_ele.click()
            time.sleep(2)
            # if the not-liked style is still present, the like failed
            article.ele.find_element_by_css_selector('i.heart-s1')
        except:
            return True
        return False

    def _destroy(self):
        # close the browser
        self.__driver.quit()


class requests_dianping_reptile(dianping_reptile):
    """requests implementation (this one keeps getting rate-limited)."""

    def __init__(self, is_continue=False, is_follow_fans=True, is_thumbs_up=True):
        super(requests_dianping_reptile, self).__init__(is_continue, is_follow_fans, is_thumbs_up)
        self.__user_card_url = self._base_url + '/dpnav/userCardData'
        self.__user_fans_url = self._base_url + '/member/'
        self.__thumbs_up_url = self._base_url + '/ajax/json/shop/reviewflower'
        self.__fans_current_page = 1
        self.__fans_page_count = 0
        self.__article_current_page = 1
        self.__article_page_count = 0

    def _before(self, cookie):
        self.__get_user_info()
        self.__fans_page_count = self.__get_page_count(url=self.__user_fans_url)

    def __get_url_content(self, url, type='html', referer=None):
        """Fetch a URL and parse the result."""
        headers = self.__get_headers(referer)
        self._sleep_time()
        result = requests.get(url, headers=headers)
        if type == 'html':
            return BeautifulSoup(result.content, 'lxml')
        else:
            return json.loads(result.content.decode("utf-8"))

    def __get_headers(self, referer):
        """Randomised request headers."""
        return {'Cookie': self._cookie, 'User-Agent': self._ua.random,
                'Referer': referer, 'Host': 'www.dianping.com'}

    def __get_article_page_count(self, fan_url):
        """Page count of a fan's articles."""
        fan_url = fan_url + '/reviews'
        return self.__get_page_count(fan_url)

    def __get_page_count(self, url):
        """Total page count."""
        html = self.__get_url_content(url, referer=self._base_url)
        page_arr = html.select("div.pages-num a")
        # pagination present?
        if len(page_arr) > 0:
            # drop the "next page" link
            page_arr.pop()
            return int(page_arr[len(page_arr) - 1].getText())
        return 1

    def __get_user_info(self):
        """Fetch the user info and build the fans page URL."""
        result = self.__get_url_content(url=self.__user_card_url, type='json')
        self.__user_fans_url = self.__user_fans_url + result['msg']['userCard']['userId'] + '/fans'

    def _goto_fan_review(self, fan):
        # look up the fan's article page count so article pagination works
        self.__article_page_count = self.__get_article_page_count(fan.url)

    def _next_page_fan(self):
        self.__fans_current_page += 1

    def _has_next_page_fan(self):
        return self.__fans_page_count > self.__fans_current_page

    def _has_next_page_article(self):
        return self.__article_page_count > self.__article_current_page

    def _next_page_article(self, fan):
        self.__article_current_page += 1

    def _do_follow_fans(self, fan):
        pass

    def __get_fan_url(self):
        """URL of the current fans page."""
        return self.__user_fans_url + '?pg=' + str(self.__fans_current_page)

    def _get_fans(self):
        url = self.__get_fan_url()
        logging.info('###### fans page url: {0}'.format(url))
        html = self.__get_url_content(url, referer=self.__user_fans_url)
        fan_eles = html.select("div.fllow-list div ul li")
        fans = []
        for fan_ele in fan_eles:
            a_ele = fan_ele.select_one('div.txt div.tit h6 a')
            fan_url = self._base_url + a_ele.attrs['href']
            fans.append(requests_fan(
                id=a_ele.attrs['user-id'], name=a_ele.text, url=fan_url,
                review_url=fan_url + '/reviews?pg={0}&reviewCityId=0&reviewShopType=0&c=0&shopTypeIndex=0'))
        return fans

    def _thumbs_up(self, article):
        thumbs_up_headers = self.__get_headers(referer=article.referer_url)
        thumbs_up_headers['Origin'] = self._base_url
        params = {'t': 1, 's': 2, 'do': 'aa', 'i': article.article_id}
        result = requests.post(self.__thumbs_up_url, headers=thumbs_up_headers, params=params)
        logging.debug('like response: ' + result.content.decode("utf-8"))
        if result.status_code == 403:
            raise Exception('the crawler has been blocked')
        if result.status_code == 200:
            result_json = json.loads(result.content.decode("utf-8"))
            # code 900 means the like succeeded
            if result_json['code'] == 900:
                return True
        return False

    def __get_article_url(self, fan):
        return fan.review_url.format(self.__article_current_page)

    def _get_unthumbs_up_article(self, fan):
        url = self.__get_article_url(fan)
        html = self.__get_url_content(url, referer=fan.url)
        article_eles = html.select("div.J_rptlist")
        unthumbs = []
        for article_ele in article_eles:
            # skip articles already liked (heart-s1 marks a not-yet-liked one)
            if article_ele.find(class_="heart-s1") is not None:
                ele = article_ele.find('div', attrs={'class': 'tit'})
                unthumbs.append(requests_fan_article(
                    article_ele.find(class_="heart-s1").parent.attrs['data-id'],
                    ele.text,
                    referer_url=fan.url + '/reviews'))
        return unthumbs

    def _after_thumbs_up_article(self, fan):
        # reset article paging once a fan is done
        self.__article_current_page = 1
        self.__article_page_count = 0


def get_args():
    opts, args = getopt.getopt(sys.argv[1:], "ho:", ["help", "output="])


def exit_handler(signum, frame):
    logging.info('################### crawler stopped')
    print(signum)


if __name__ == '__main__':
    reptile = selenium_dianping_reptile(r"chromedriver.exe", is_follow_fans=True, is_thumbs_up=True)
    # reptile = requests_dianping_reptile(is_follow_fans=False, is_thumbs_up=True)
    # signal.signal(signal.SIGINT, exit_handler)
    reptile.start()

Summary

I originally went with requests, but I never figured out the site's anti-crawling mechanism and kept getting blocked (╥╯^╰╥), so I switched to selenium ( ̄▽ ̄)~*. If anyone understands why the requests version keeps getting rate-limited, please let me know (〃'▽'〃)
