Table of Contents

  • Environment Setup: YOLO V7 main Branch
  • TensorRT Environment
  • Project Source Code
    • Dummy Weight File
    • toolkit.py
    • 测试.实时检测.py
    • grab.for.apex.py
    • label.for.apex.py
    • aimbot.for.apex.py

Environment Setup: YOLO V7 main Branch

Python Apex YOLO V5 6.2 Object Detection: Full Process Record

YOLO V7 main
YOLO V7 model downloads
yolov7.pt
yolov7-tiny.pt

  • Download the code yolov7-main.zip and open it with PyCharm
  • Download the weight file yolov7-tiny.pt (small) or yolov7.pt (medium) and put it in the code's root directory; it can also be downloaded automatically at run time
  • Open the Terminal in PyCharm
    • In Settings, File | Settings | Tools | Terminal - Application Settings, set Shell path to cmd.exe instead of powershell.exe, so conda can be used directly in the Terminal, which is much more convenient
    • Run conda create -n 7 python=3.9 to create a virtual environment
    • Run conda activate 7 to activate the virtual environment
    • Run pip install -r requirements.txt to install the dependencies
  • Run detect.py for inference; at this point it should be running on the CPU: YOLOR d666f2a torch 1.13.0 CPU
    • The first run generates traced_model.pt; I'm not quite sure what it does. Change the --no-trace parameter to action='store_false', otherwise this file is regenerated on every run. Background: with store_true, the parameter parses as True only when the flag is passed on the command line, otherwise False; store_false is the opposite (see the sketch after this list)
  • Run nvidia-smi to confirm the CUDA version the current driver supports; mine supports up to CUDA 11.8, i.e. anything not above 11.8 works
  • On the PyTorch website, pick the options matching your setup to generate the install command, then copy and run it to install the CUDA environment
    • The Pip method did not work for me: it reported a successful install, but verification returned False
    • The Conda method worked right away
  • Run python, type import torch, press Enter, then torch.cuda.is_available(); if it returns True, the installation succeeded
  • Run detect.py for inference again; this time it should be running on the GPU: YOLOR d666f2a torch 1.13.0 CUDA:0 (NVIDIA GeForce RTX 2080, 8191.5625MB)
  • If you need to export models, uncomment the Export section of requirements.txt as needed and reinstall
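
A quick standalone sketch of the store_true / store_false behavior described above (plain argparse, not code from the YOLO repo):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--flag-true', action='store_true')    # defaults to False; passing the flag makes it True
parser.add_argument('--flag-false', action='store_false')  # defaults to True; passing the flag makes it False

print(parser.parse_args([]))                                # Namespace(flag_false=True, flag_true=False)
print(parser.parse_args(['--flag-true', '--flag-false']))   # Namespace(flag_false=False, flag_true=True)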

After the CUDA environment is installed and verified, if the following error appears when running detect.py, try uninstalling pillow and reinstalling it. The default installed version was 9.3.0; after I switched to 9.2.0 everything worked. The uninstall may need to be run several times before pillow is fully removed; then pip install pillow==9.2.0

UserWarning: The NumPy module was reloaded (imported a second time). This can in some cases result in small but subtle issues and is discouraged.
...
...File "C:\mrathena\develop\miniconda\envs\7\lib\site-packages\PIL\Image.py", line 100, in <module>
    from . import _imaging as core
ImportError: DLL load failed while importing _imaging: The specified module could not be found.

A warning also appears at run time; the fix is as follows.

C:\mrathena\develop\miniconda\envs\cuda\lib\site-packages\torch\functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\TensorShape.cpp:2895.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]

Docs > torch > torch.meshgrid

Open the file from the warning, find the offending line, and change it to return _VF.meshgrid(tensors, **kwargs, indexing='ij'); 'ij' is the parameter's default value anyway. Note: if V7 and V5 share the same virtual environment, then after V7 makes this change, V5 will error when running .pt weight files. So V7 and V5 are best kept in separate virtual environments.
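
Concretely, the edit in torch/functional.py looks like this (the line number may differ across PyTorch versions):

# torch/functional.py, around line 478 in torch 1.13.0
# before:
return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
# after: pass indexing explicitly to silence the UserWarning ('ij' matches the current behavior)
return _VF.meshgrid(tensors, **kwargs, indexing='ij')  # type: ignore[attr-defined]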

After integrating everything into a toolkit class, the warning below appears; I can't solve it for now (running detect.py does not warn, and toolkit.py in theory does the same thing yet warns, so I suspect something is assembled wrong, but after a long search I still haven't found where).

C:\mrathena\develop\miniconda\envs\7\lib\site-packages\torch\nn\modules\module.py:673: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at C:\cb\pytorch_1000000000000\work\build\aten\src\ATen/core/TensorBody.h:485.)
  if param.grad is not None:

TensorRT Environment

wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt
python export.py --weights ./yolov7-tiny.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640
git clone https://github.com/Linaom1214/tensorrt-python.git
python ./tensorrt-python/export.py -o yolov7-tiny.onnx -e yolov7-tiny-nms.trt -p fp16
  • First export the onnx model
    • python export.py --weights ./yolov7-tiny.pt --grid --end2end --simplify
  • Download the code at https://github.com/Linaom1214/tensorrt-python.git, which now lives at https://github.com/Linaom1214/TensorRT-For-YOLO-Series
  • Install the python-tensorrt environment
    • See the TensorRT part of the "installing the Python environment" section in Python Apex YOLO V5 6.2 Object Detection: Full Process Record
    • Run pip install pycuda. This step blew up for me and I couldn't get any further, ugh. I've seen people say it's better to build on the u5 branch? From what I can tell, the u5 branch looks like it's just yolov5 …
  • Convert the onnx model to an engine model
    • python C:\mrathena\develop\workspace\pycharm\TensorRT-For-YOLO-Series\export.py -o yolov7-tiny.onnx -e yolov7-tiny-nms.trt -p fp16
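
Once the engine is built, a quick way to sanity-check it is to deserialize it with the tensorrt Python package. A minimal sketch, assuming a working tensorrt install and the file name from above (the binding APIs below match TensorRT 8.x; newer releases renamed them):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open('yolov7-tiny-nms.trt', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
assert engine is not None, 'engine failed to deserialize'
# Print the I/O bindings to confirm the end2end NMS outputs were baked in
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))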

Project Source Code

Dummy Weight File

None; you need to train one yourself.

toolkit.py

import os
import time

import cv2
import d3dshot  # pip install git+https://github.com/fauskanger/D3DShot#egg=D3DShot
import mss as pymss  # pip install mss
import numpy as np
import torch
from win32api import GetSystemMetrics  # conda install pywin32
from win32con import SRCCOPY, SM_CXSCREEN, SM_CYSCREEN
from win32gui import GetDesktopWindow, GetWindowDC, DeleteObject, ReleaseDC
from win32ui import CreateDCFromHandle, CreateBitmap
import random

from models.experimental import attempt_load
from utils.datasets import LoadStreams, LoadImages, letterbox
from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \
    scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path
from utils.plots import plot_one_box
from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel


class Capturer:

    @staticmethod
    def win(region):
        """
        region: tuple, (left, top, width, height)
        conda install pywin32. With pip I could never import the win32ui module no matter
        what I tried; installing with conda worked on the first attempt.
        """
        left, top, width, height = region
        hWin = GetDesktopWindow()
        hWinDC = GetWindowDC(hWin)
        srcDC = CreateDCFromHandle(hWinDC)
        memDC = srcDC.CreateCompatibleDC()
        bmp = CreateBitmap()
        bmp.CreateCompatibleBitmap(srcDC, width, height)
        memDC.SelectObject(bmp)
        memDC.BitBlt((0, 0), (width, height), srcDC, (left, top), SRCCOPY)
        array = bmp.GetBitmapBits(True)
        DeleteObject(bmp.GetHandle())
        memDC.DeleteDC()
        srcDC.DeleteDC()
        ReleaseDC(hWin, hWinDC)
        img = np.frombuffer(array, dtype='uint8')
        img.shape = (height, width, 4)
        return img

    @staticmethod
    def mss(instance, region):
        """
        region: tuple, (left, top, width, height)
        pip install mss
        """
        left, top, width, height = region
        return instance.grab(monitor={'left': left, 'top': top, 'width': width, 'height': height})

    @staticmethod
    def d3d(instance, region=None):
        """
        DXGI normal mode
        region: tuple, (left, top, width, height)
        D3DShot conflicts with the pillow version on Python 3.9, so use the fork someone patched instead:
        pip install git+https://github.com/fauskanger/D3DShot#egg=D3DShot
        """
        if region:
            left, top, width, height = region
            return instance.screenshot((left, top, left + width, top + height))
        else:
            return instance.screenshot()

    @staticmethod
    def d3d_latest_frame(instance):
        """DXGI frame-buffer mode"""
        return instance.get_latest_frame()

    @staticmethod
    def instance(mss=False, d3d=False, buffer=False, frame_buffer_size=60, target_fps=60, region=None):
        if mss:
            return pymss.mss()
        elif d3d:
            """
            buffer: whether to use frame-buffer mode
            False: for dxgi.screenshot
            True: for dxgi.get_latest_frame; requires frame_buffer_size, target_fps, region
            """
            if not buffer:
                return d3dshot.create(capture_output="numpy")
            else:
                dxgi = d3dshot.create(capture_output="numpy", frame_buffer_size=frame_buffer_size)
                left, top, width, height = region
                # d3dshot's region is (left, top, right, bottom); adapt from the (left, top, width, height) input
                dxgi.capture(target_fps=target_fps, region=(left, top, left + width, top + height))
                return dxgi

    @staticmethod
    def grab(win=False, mss=False, d3d=False, instance=None, region=None, buffer=False, convert=False):
        """
        win:
            region: tuple, (left, top, width, height)
        mss:
            instance: mss instance
            region: tuple, (left, top, width, height)
        d3d:
            buffer: frame-buffer mode or not
                False: region required
                True: region not required
            instance: d3d instance, depending on buffer mode
            region: tuple, (left, top, width, height), depending on buffer mode
        convert: whether to convert to the numpy BGR format OpenCV expects; the result can be fed to OpenCV directly
        """
        # Fill in a default region
        if (win or mss or (d3d and not buffer)) and not region:
            w, h = Monitor.resolution()
            region = 0, 0, w, h
        # Grab the region
        if win:
            img = Capturer.win(region)
        elif mss:
            img = Capturer.mss(instance, region)
        elif d3d:
            if not buffer:
                img = Capturer.d3d(instance, region)
            else:
                img = Capturer.d3d_latest_frame(instance)
        else:
            img = Capturer.win(region)
            win = True
        # Convert the image
        if convert:
            if win:
                img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
            elif mss:
                img = cv2.cvtColor(np.array(img), cv2.COLOR_BGRA2BGR)
            elif d3d:
                img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
        return img


class Monitor:

    @staticmethod
    def resolution():
        """Display resolution"""
        w = GetSystemMetrics(SM_CXSCREEN)
        h = GetSystemMetrics(SM_CYSCREEN)
        return w, h

    @staticmethod
    def center():
        """Screen center point"""
        w, h = Monitor.resolution()
        return w // 2, h // 2


class Timer:

    @staticmethod
    def cost(interval):
        """Format a duration: takes nanoseconds and converts to a suitable unit"""
        if interval < 1000:
            return f'{interval}ns'
        elif interval < 1_000_000:
            return f'{round(interval / 1000, 3)}us'
        elif interval < 1_000_000_000:
            return f'{round(interval / 1_000_000, 3)}ms'
        else:
            return f'{round(interval / 1_000_000_000, 3)}s'


class Predictor:

    kf = cv2.KalmanFilter(4, 2)
    kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)

    def predict(self, point):
        x, y = point
        measured = np.array([[np.float32(x)], [np.float32(y)]])
        self.kf.correct(measured)
        predicted = self.kf.predict()
        px, py = int(predicted[0]), int(predicted[1])
        return px, py


class Detector:

    def __init__(self, weights):
        self.weights = weights
        self.source = 'inference/images'  # file/folder, 0 for webcam
        self.imgsz = 640  # inference size (pixels)
        self.conf_thres = 0.25  # object confidence threshold; must not be 0, or huge numbers of targets are detected, GPU memory explodes and the process hangs
        self.iou_thres = 0  # IOU threshold for NMS
        self.device = ''  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        self.view_img = False  # display results
        self.save_txt = False  # save results to *.txt
        self.save_conf = False  # save confidences in --save-txt labels
        self.nosave = False  # do not save images/videos
        self.classes = None  # filter by class: --class 0, or --class 0 2 3
        self.agnostic_nms = False  # class-agnostic NMS
        self.augment = False  # augmented inference
        self.update = False  # update all models
        self.project = 'runs/detect'  # save results to project/name
        self.name = 'exp'  # save results to project/name
        self.exist_ok = False  # existing project/name ok, do not increment
        self.trace = False  # trace model; keep this False, otherwise traced_model.pt (twice the size of the weight file) is regenerated on every run, which wears the SSD
        # Load the model
        self.device = select_device(self.device)
        self.half = self.device.type != 'cpu'  # half precision only supported on CUDA
        self.model = attempt_load(self.weights, map_location=self.device)  # load FP32 model
        self.stride = int(self.model.stride.max())  # model stride
        self.imgsz = check_img_size(self.imgsz, s=self.stride)  # check img_size
        if self.trace:
            self.model = TracedModel(self.model, self.device, self.imgsz)
        if self.half:
            self.model.half()  # to FP16
        self.names = self.model.module.names if hasattr(self.model, 'module') else self.model.names
        self.colors = [[random.randint(0, 255) for _ in range(3)] for _ in self.names]
        if self.device.type != 'cpu':
            self.model(torch.zeros(1, 3, self.imgsz, self.imgsz).to(self.device).type_as(next(self.model.parameters())))  # run once

    def detect(self, region, classes=None, image=False, label=True, confidence=True):
        # Grab and convert
        t1 = time.perf_counter_ns()
        # This image has been converted and has the same format as what cv2.imread returns
        img0 = Capturer.grab(win=True, region=region, convert=True)
        t2 = time.perf_counter_ns()
        # Detect
        aims = []
        img = letterbox(img0, self.imgsz, stride=self.stride)[0]
        img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x416x416
        img = np.ascontiguousarray(img)
        img = torch.from_numpy(img).to(self.device)
        img = img.half() if self.half else img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)
        old_img_w = old_img_h = self.imgsz
        old_img_b = 1
        if self.device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]):
            for i in range(3):
                self.model(img, augment=self.augment)[0]
        with torch.no_grad():  # Calculating gradients would cause a GPU memory leak
            pred = self.model(img, augment=self.augment)[0]
        pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, classes=self.classes, agnostic=self.agnostic_nms)
        det = pred[0]
        im0 = img0
        if len(det):
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
            for *xyxy, conf, cls in reversed(det):
                c = int(cls)  # integer class
                clazz = self.names[c] if not self.weights.endswith('.engine') else str(c)  # class name
                if classes and clazz not in classes:
                    continue
                # In screen coordinates: the box's ltwh and center xy
                sl = int(region[0] + xyxy[0])
                st = int(region[1] + xyxy[1])
                sw = int(xyxy[2] - xyxy[0])
                sh = int(xyxy[3] - xyxy[1])
                sx = int(sl + sw / 2)
                sy = int(st + sh / 2)
                # In grab (screenshot) coordinates: the box's ltwh and center xy
                gl = int(xyxy[0])
                gt = int(xyxy[1])
                gw = int(xyxy[2] - xyxy[0])
                gh = int(xyxy[3] - xyxy[1])
                gx = int((xyxy[0] + xyxy[2]) / 2)
                gy = int((xyxy[1] + xyxy[3]) / 2)
                # confidence
                aims.append((clazz, float(conf), (sx, sy), (gx, gy), (sl, st, sw, sh), (gl, gt, gw, gh)))
                if image:
                    label2 = (f'{clazz} {conf:.2f}' if confidence else f'{clazz}') if label else None
                    plot_one_box(xyxy, im0, label=label2, color=self.colors[int(cls)], line_thickness=3)
        t3 = time.perf_counter_ns()
        # print(f'Grab:{Timer.cost(t2 - t1)}, Detect:{Timer.cost(t3 - t2)}, Total:{Timer.cost(t3 - t1)}, Count:{len(aims)}/{len(det)}')
        return aims, img0 if image else None

    def label(self, path):
        img0 = cv2.imread(path)
        img = letterbox(img0, self.imgsz, stride=self.stride)[0]
        img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x416x416
        img = np.ascontiguousarray(img)
        img = torch.from_numpy(img).to(self.device)
        img = img.half() if self.half else img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)
        old_img_w = old_img_h = self.imgsz
        old_img_b = 1
        if self.device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]):
            for i in range(3):
                self.model(img, augment=self.augment)[0]
        with torch.no_grad():  # Calculating gradients would cause a GPU memory leak
            pred = self.model(img, augment=self.augment)[0]
        pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, classes=self.classes, agnostic=self.agnostic_nms)
        det = pred[0]
        result = []
        if len(det):
            im0 = img0
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
            for *xyxy, conf, cls in reversed(det):
                xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                c = int(cls)  # integer class
                result.append((c, xywh))
        if result:
            directory = os.path.dirname(path)
            filename = os.path.basename(path)
            basename, ext = os.path.splitext(filename)
            name = os.path.join(directory, basename + '.txt')
            print(name)
            with open(name, 'w') as file:
                for item in result:
                    index, xywh = item
                    file.write(f'{index} {xywh[0]} {xywh[1]} {xywh[2]} {xywh[3]}\n')
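
A minimal usage sketch of the classes above (assuming yolov7-tiny.pt sits in the working directory; the region values are arbitrary):

from toolkit import Capturer, Detector, Predictor

# Screen grab via win32, converted to the BGR numpy layout OpenCV expects
img = Capturer.grab(win=True, region=(0, 0, 640, 640), convert=True)

# Detection: each aim is (class, confidence, screen center, grab center, screen ltwh, grab ltwh)
detector = Detector('yolov7-tiny.pt')
aims, img = detector.detect(region=(0, 0, 640, 640), image=True)

# Kalman-filter prediction of where a tracked point will be next
kf = Predictor()
print(kf.predict((320, 320)))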

测试.实时检测.py

import time

import cv2
from win32con import HWND_TOPMOST, SWP_NOMOVE, SWP_NOSIZE
from win32gui import FindWindow, SetWindowPos

from toolkit import Detector, Timer

region = (3440 // 7 * 3, 1440 // 3, 3440 // 7, 1440 // 3)
weight = 'yolov7-tiny.pt'
detector = Detector(weight)
title = 'Realtime ScreenGrab Detect'
while True:
    t = time.perf_counter_ns()
    _, img = detector.detect(region=region, image=True)
    cv2.namedWindow(title, cv2.WINDOW_AUTOSIZE)
    cv2.putText(img, f'{Timer.cost(time.perf_counter_ns() - t)}', (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 255, 255), 1)
    cv2.imshow(title, img)
    # Find the window and pin it on top
    SetWindowPos(FindWindow(None, title), HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE)
    k = cv2.waitKey(1)  # 0: window neither auto-closes nor updates; 1: close after a 1 ms delay
    if k % 256 == 27:
        # ESC closes the window
        cv2.destroyAllWindows()
        exit('ESC ...')

grab.for.apex.py

import multiprocessing
import time
from multiprocessing import Process

import cv2
import pynput
import winsound

from toolkit import Capturer, Monitor

end = 'end'
switch = 'switch'
init = {
    end: False,
    switch: False,
}


def mouse(data):

    def down(x, y, button, pressed):
        if pressed and button == pynput.mouse.Button.x2:
            data[switch] = not data[switch]
            winsound.Beep(800 if data[switch] else 400, 200)

    with pynput.mouse.Listener(on_click=down) as m:
        m.join()


def keyboard(data):

    def release(key):
        if key == pynput.keyboard.Key.end:
            winsound.Beep(400, 200)
            data[end] = True
            return False

    with pynput.keyboard.Listener(on_release=release) as k:
        k.join()


def grab(data):
    # region = (3440 // 7 * 3, 1440 // 3, 3440 // 7, 1440 // 3)
    cx, cy = Monitor.center()
    size = 640
    region = (cx - size // 2, cy - size // 2, size, size)

    def save():
        name = f'D:\\resource\\develop\\python\\dataset.yolo\\apex\\action\\data\\{int(time.time())}.png'
        print(name)
        # img = Capturer.grab(win=True, region=region)
        # mss.tools.to_png(img.rgb, img.size, output=name)
        img = Capturer.grab(win=True, region=region, convert=True)  # grab via toolkit, save with OpenCV
        cv2.imwrite(name, img)

    while True:
        if data[end]:
            break
        if data[switch]:
            time.sleep(0.5)
            save()


if __name__ == '__main__':
    multiprocessing.freeze_support()
    manager = multiprocessing.Manager()
    data = manager.dict()
    data.update(init)
    pm = Process(target=mouse, args=(data,))
    pm.start()
    pk = Process(target=keyboard, args=(data,))
    pk.start()
    pg = Process(target=grab, args=(data,))
    pg.start()
    pk.join()
    pm.terminate()

label.for.apex.py

import os

from toolkit import Detector

detector = Detector('model.for.apex.2.engine')
directory = r'D:\resource\develop\python\dataset.yolo\apex\action\data'
files = os.listdir(directory)
print(f'total files: {len(files)}')
paths = []
for file in files:
    path = os.path.join(directory, file)
    if path.endswith('.txt'):
        continue
    paths.append(path)
print(f'image files: {len(paths)}')
for i, path in enumerate(paths):
    print(f'{i + 1}/{len(paths)}')
    detector.label(path)
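
Each generated .txt file sits next to its image and follows the standard YOLO label format: one object per line, with the class index followed by the normalized center x, center y, width, and height. An illustrative (made-up) line:

0 0.513281 0.466667 0.037500 0.101389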

aimbot.for.apex.py

import ctypes
import math
import multiprocessing
import time
from multiprocessing import Process
import cv2
import pynput
from win32gui import GetCursorPos, FindWindow, SetWindowPos
from win32con import HWND_TOPMOST, SWP_NOMOVE, SWP_NOSIZE
import winsound
from simple_pid import PID  # pip install simple-pid

fov = 'fov'
end = 'end'
box = 'box'
aim = 'aim'
show = 'show'
view = 'view'
fire = 'fire'
head = 'head'
size = 'size'
heads = {'head', '1'}
bodies = {'body', '0'}
region = 'region'
center = 'center'
radius = 'radius'
roundh = 'roundh'
roundv = 'roundv'
weights = 'weights'
predict = 'predict'
confidence = 'confidence'
sensitivity = 'sensitivity'
init = {
    center: None,  # screen center point
    fov: 110,  # in-game FOV
    roundh: 16420,  # mouse distance for a full 360° horizontal turn, measured in-game at sensitivity 1 and verified over multiple runs; testing shows it is independent of FOV. Pixels to move should equal this value divided by the mouse sensitivity
    roundv: 7710 * 2,  # vertical equivalent; only half (a 180° range) can be measured vertically, so the result is doubled
    sensitivity: 2,  # current in-game mouse sensitivity
    radius: 50,  # radius within which aiming kicks in
    weights: 'yolov7-tiny.pt',  # weight file
    size: 400,  # grab size
    confidence: 0.5,  # confidence threshold; anything below is treated as noise
    region: None,  # grab region
    end: False,  # exit flag, End
    box: False,  # display toggle, Up
    show: False,  # display state
    aim: False,  # aim toggle, Down
    fire: False,  # firing state
    view: False,  # pre-aim state, F; pistols and snipers can pre-aim ahead of time
    head: False,  # toggle between head and body, Right
    predict: False,  # crosshair jumps to target point / predicted point, Left
}


def mouse(data):

    def down(x, y, button, pressed):
        if button == pynput.mouse.Button.left:
            data[fire] = pressed

    with pynput.mouse.Listener(on_click=down) as m:
        m.join()


def keyboard(data):

    def press(key):
        if key == pynput.keyboard.KeyCode.from_char('f'):
            data[view] = True

    def release(key):
        if key == pynput.keyboard.Key.end:
            # Exit the program
            data[end] = True
            winsound.Beep(400, 200)
            return False
        elif key == pynput.keyboard.KeyCode.from_char('f'):
            data[view] = False
        elif key == pynput.keyboard.Key.up:
            data[box] = not data[box]
            winsound.Beep(800 if data[box] else 400, 200)
        elif key == pynput.keyboard.Key.down:
            data[aim] = not data[aim]
            winsound.Beep(800 if data[aim] else 400, 200)
        elif key == pynput.keyboard.Key.left:
            data[predict] = not data[predict]
            winsound.Beep(800 if data[predict] else 600, 200)
        elif key == pynput.keyboard.Key.right:
            data[head] = not data[head]
            winsound.Beep(800 if data[head] else 600, 200)

    with pynput.keyboard.Listener(on_release=release, on_press=press) as k:
        k.join()


def aimbot(data):
    # To avoid slow startup from modules being reloaded across processes, keep the
    # expensive operations and everything touching toolkit inside this one process
    from toolkit import Detector, Monitor, Predictor
    data[center] = Monitor.center()
    c1, c2 = data[center]
    data[region] = c1 - data[size] // 2, c2 - data[size] // 2, data[size], data[size]
    detector = Detector(data[weights])
    kf = Predictor()  # Kalman-filter-based point predictor from toolkit.py
    try:
        driver = ctypes.CDLL('logitech.driver.dll')
        ok = driver.device_open() == 1
        if not ok:
            print('Initialization failed: lgs/ghub driver not installed')
    except FileNotFoundError:
        print('Initialization failed: file missing')

    def move(x, y, absolute=False):
        if (x == 0) & (y == 0):
            return
        mx, my = x, y
        if absolute:
            ox, oy = GetCursorPos()
            mx = x - ox
            my = y - oy
        driver.moveR(mx, my, True)

    def oc():
        ac, _ = data[center]
        return ac / math.tan((data[fov] / 2 * math.pi / 180))

    def rx(x):
        angle = math.atan(x / oc()) * 180 / math.pi
        return int(angle * data[roundh] / data[sensitivity] / 360)

    def ry(y):
        angle = math.atan(y / oc()) * 180 / math.pi
        return int(angle * data[roundv] / data[sensitivity] / 360)

    def inner(point):
        """Whether the point is within the crosshair's trigger radius"""
        a, b = data[center]
        x, y = point
        return math.pow(x - a, 2) + math.pow(y - b, 2) < math.pow(data[radius], 2)

    def highest(targets):
        """Pick the tallest box"""
        if len(targets) == 0:
            return None
        index = 0
        maximum = 0
        for i, item in enumerate(targets):
            height, sc, _, _ = item
            if maximum == 0:
                index = i
                maximum = height
            else:
                if height > maximum:
                    index = i
                    maximum = height
        return targets[index]

    def nearest(targets):
        """Pick the box closest to the crosshair"""
        if len(targets) == 0:
            return None
        cx, cy = data[center]
        index = 0
        minimum = 0
        for i, item in enumerate(targets):
            _, sc, _, _ = item
            sx, sy = sc
            distance = math.pow(sx - cx, 2) + math.pow(sy - cy, 2)
            if minimum == 0:
                index = i
                minimum = distance
            else:
                if distance < minimum:
                    index = i
                    minimum = distance
        return targets[index]

    def follow(targets, last):
        """Pick the target closest to last"""
        if len(targets) == 0 or last is None:
            return None
        _, lsc, _, _ = last
        lx, ly = lsc
        index = 0
        minimum = 0
        for i, item in enumerate(targets):
            _, sc, _, _ = item
            sx, sy = sc
            distance = math.pow(sx - lx, 2) + math.pow(sy - ly, 2)
            if minimum == 0:
                index = i
                minimum = distance
            else:
                if distance < minimum:
                    index = i
                    minimum = distance
        return targets[index]

    pidx = PID(1, 0, 0, setpoint=0, sample_time=0.001)
    pidx.output_limits = (-100, 100)
    last = None  # the last target aimed at
    winsound.Beep(800, 200)
    title = 'Realtime ScreenGrab Detect'
    while True:
        # Check whether to exit
        if data[end]:
            break
        # Run inference only when needed
        if data[box] or data[aim]:
            t = time.time()
            aims, img = detector.detect(region=data[region], classes=heads.union(bodies), image=True, label=True)
            cv2.putText(img, f'{int((time.time() - t) * 1000)}', (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (255, 255, 255), 1)
        else:
            continue
        # Collect candidate targets
        targets = []
        # class, confidence, screen target center, grab target center, screen target rectangle, grab target rectangle
        for clazz, conf, sc, gc, sr, gr in aims:
            # Confidence filter
            if conf < data[confidence]:
                continue
            # Keep only the selected class
            _, _, _, height = sr
            if data[head]:
                if clazz in heads:
                    targets.append((height, sc, gc, gr))
            else:
                if clazz in bodies:
                    cx, cy = sc
                    targets.append((height, (cx, cy - (height // 2 - height // 3)), gc, gr))  # when detecting the body, the box center is not an ideal aim point, so shift it up a bit
        # Pick the best target of that class
        # Try to stick with one target instead of jumping around, until no target is detected, which breaks the tracking
        # If there is a tracked target, follow it; otherwise pick the one closest to the crosshair
        target = None
        if len(targets) != 0:
            target = follow(targets, last) if last else nearest(targets)
        # Remember the last aimed target
        last = target
        # Unpack the target's information
        predicted = None
        if target:
            _, sc, gc, gr = target
            sx, sy = sc  # where the target is in the current grab
            gl, gt, gw, gh = gr
            predicted = kf.predict(sc)  # predicted target point in the next grab
            px, py = predicted
            if abs(px - sx) > 50 or abs(py - sy) > 50:
                predicted = sc
            dx = predicted[0] - sx
            dy = predicted[1] - sy
            # Compute the offset and show the predicted box
            if data[box]:
                px1 = gl + dx
                py1 = gt + dy
                px2 = px1 + gw
                py2 = py1 + gh
                cv2.rectangle(img, (px1, py1), (px2, py2), (0, 256, 0), 2)
        # Check the aim toggle
        if data[aim] and (data[view] or data[fire]):
            if target:
                _, sc, gc, _ = target
                if inner(sc):
                    # Compute the pixels to move
                    cx, cy = data[center]  # where the crosshair is (screen center)
                    sx, sy = sc  # where the target is
                    # predicted  # where the target will be
                    if data[predict]:
                        x = int((predicted[0] - cx))
                        y = int((predicted[1] - cy))
                    else:
                        x = sx - cx
                        y = sy - cy
                    ox = rx(x)
                    oy = ry(y)
                    px = int(pidx(ox))
                    px = int(ox)  # immediately overrides the PID output; the PID is effectively disabled here
                    py = int(oy)
                    print(f'Target:{sc}, Predicted:{predicted}, Pixels:{(x, y)}, FOV:{(ox, oy)}, PID:{(px, py)}')
                    move(px, py)
        # Check the display toggle
        if data[box]:
            data[show] = True
            cv2.namedWindow(title, cv2.WINDOW_AUTOSIZE)
            cv2.imshow(title, img)
            SetWindowPos(FindWindow(None, title), HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE)
            cv2.waitKey(1)
        if not data[box] and data[show]:
            data[show] = False
            cv2.destroyAllWindows()


if __name__ == '__main__':
    multiprocessing.freeze_support()  # on Windows, this must be the first line in main when using multiprocessing
    manager = multiprocessing.Manager()
    data = manager.dict()  # process-safe shared state
    data.update(init)  # load the initial values into the shared dict
    # Run the aimbot and the mouse/keyboard listeners in separate processes
    pa = Process(target=aimbot, args=(data,))
    pa.start()
    pm = Process(target=mouse, args=(data,))
    pm.start()
    pk = Process(target=keyboard, args=(data,))
    pk.start()
    pk.join()  # without join, code using the dict errors with: conn = self._tls.connection, AttributeError: 'ForkAwareLocal' object has no attribute 'connection'
    pm.terminate()  # the mouse process cannot observe the stop signal on its own, so it is force-killed
    pa.terminate()
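
The rx/ry helpers above convert a pixel offset into mouse movement in two steps: project the offset onto a view angle using the FOV, then scale that angle by the measured counts-per-revolution. A worked sketch of the same math, reusing the init values above (the 3440 screen width and x = 300 are illustrative):

import math

fov = 110           # in-game horizontal FOV, degrees
half_w = 3440 // 2  # half the screen width in pixels (data[center][0])
roundh = 16420      # mouse distance for a 360° turn at sensitivity 1
sensitivity = 2

# Eye-to-screen distance in pixels: tan(fov/2) = half_w / oc
oc = half_w / math.tan(math.radians(fov / 2))
# A target x pixels right of the crosshair subtends the angle atan(x / oc)
x = 300
angle = math.degrees(math.atan(x / oc))
# Scale the angle to mouse movement: a full 360° costs roundh / sensitivity
counts = int(angle * roundh / sensitivity / 360)
print(f'{oc=:.1f} {angle=:.2f} {counts=}')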
