A recent project of mine had to be deployed on an edge module, so I started working with the Xavier. There isn't much material about it online, so I'm keeping notes from the beginning to make it easier for everyone to learn along.

Hands-On with the NVIDIA Jetson AGX Xavier

  • 1. Flashing (installing sdkmanager)
    • 1.1 The NRU industrial PC module
    • 1.2 The NVIDIA devkit (no vendor module, so no USB driver files)
    • 1.3 The Realtimes module
  • 2. Setting up and running YOLOv5
    • 2.1 Setting up a virtual environment
    • 2.2 Accelerating YOLOv5 with TensorRT
  • 3. Sending data over the CAN bus
  • 4. Streaming video out
  • 5. Device settings
  • 6. Windows-Xavier communication
  • 7. References

1. Flashing (installing sdkmanager)

I flashed three units from different vendors, and the driver files differ between them, so the steps differ slightly. I'll record each variant here.

1.1 The NRU industrial PC module

The first step after unboxing is flashing, via sdkmanager. You need a PC running Ubuntu 18.04; I used a dual-boot installation rather than a virtual machine.
What you need:
a host PC running Ubuntu 18.04
the Xavier unit (I'll call it the IPC below)
a USB cable
two Ethernet cables
1. First download NVIDIA's sdkmanager on the host PC; this requires registering an NVIDIA account.

2. Connect the host and the IPC over USB (an ordinary Android-style cable is fine), and plug the IPC into mains power.

Put the IPC into recovery mode: starting from the powered-off state, press button 1 with a pen tip; two seconds later also press button 2; one second after the PWR LED lights up, release both. The PWR LED should stay on.

In a terminal (Ctrl+Alt+T) run lsusb; an "NVIDIA Corp" entry means the IPC is now in recovery mode.
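To check for just that entry:

lsusb | grep -i nvidia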

3. With sdkmanager downloaded on the host and the IPC connected in recovery mode, install and launch it:

sudo dpkg -i <sdkmanager-package>.deb
sudo apt-get update
sudo apt-get --fix-broken install   # pull in any missing dependencies
sdkmanager                          # launch

If anything goes wrong here, the error messages plus a quick search usually sort it out. After sdkmanager starts, select the AGX as the target;
you can untick the host machine components, select JetPack 4.4, and hit Continue.

Then follow the wizard; the install directory is up to you. When the flashing dialog pops up, choose Skip for now; once the downloads finish, sdkmanager opens the download folder. Next, flash the device tree with the vendor's DTB file: put the downloaded .dtb into the JetPack install directory (nvidia_sdk/JetPack_4.4_Linux_JETSON_AGX_XAVIER/Linux_for_Tegra), open a terminal there, and run:

sudo ./flash.sh -r -d NRU_JetPack4.4_v0.8.dtb jetson-xavier mmcblk0p1

Followed step by step this should just work; if you get a "not existing" error, check whether the IPC is really in recovery mode (lsusb shows an NVIDIA entry).

When flashing finishes, plug an Ethernet cable into the IPC (the USB cable can come out at this point; I don't remember whether I actually unplugged mine) and connect a monitor to its DP port. The IPC boots into the Ubuntu first-run setup; give it a few minutes.

Accept the license and continue the setup. Once the system is up, note the IPC's IP address.

Then change the IPC's screen-blanking timeout from the default 5 minutes to Never, and go back to sdkmanager on the host.
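If you would rather do that from the terminal, this should work on the stock GNOME desktop that JetPack installs (a sketch, not something I verified on every unit):

gsettings set org.gnome.desktop.session idle-delay 0   # never blank the screen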


sdkmanager then installs the CUDA environment onto the IPC over the network. Afterwards you can check the CUDA version on the IPC:

sudo cat /usr/local/cuda/version.txt

1.2 The NVIDIA devkit (no vendor module, so no USB driver files)

I managed to break that IPC, so I later got a bare Xavier devkit and flashed it as well. The process is similar; this post covers it well, step by step: https://blog.csdn.net/qq_38679413/article/details/109398853

1.3 The Realtimes module

1. Download the files:
First download Jetson_Linux_R_aarch64.tbz2 and Tegra-Linux-Sample-Root-Filesystem_R_aarch64.tbz2 (step 2 of sdkmanager can download both to your PC), plus the vendor BSP package that carries the USB drivers, Realtimes-L4T-.tar. I used L4T 32.4.3; ask the vendor's support for the file matching your version, or use the realtimes_L4T_3243_Xavier.tar I shared if yours matches mine.
2. Flash:
2.1 With the three files in hand, create a flash directory, put the Linux Driver Package in it, and extract it from a terminal: tar -vxf Jetson_Linux_R_aarch64.tbz2
2.2 Enter the extracted Linux_for_Tegra/rootfs directory, copy the Root File System archive there, and extract it in place: sudo tar -jxpf ../../Tegra-Linux-Sample-Root-Filesystem_R_aarch64.tbz2. Then go back up to Linux_for_Tegra and run sudo ./apply_binaries.sh (this copies the NVIDIA user-space libraries into the target filesystem).
2.3 Extract Realtimes-L4T-.tar into the directory that contains Linux_for_Tegra, using: tar -xvf Realtimes-L4T-.tar. Then enter the Realtimes-L4T folder and run sudo ./install.sh
2.4 Flash the system: put the Xavier into recovery mode again (powered off, hold the middle button for two seconds, then hold the middle and right buttons together and release both after three seconds; lsusb should now show an NVIDIA entry), then run from Linux_for_Tegra: sudo ./flash.sh rtso-1001 mmcblk0p1   # flash the system

I had originally planned to flash a prebuilt image, but it didn't work well, so I had the vendor's support reinstall the system and then installed CUDA, OpenCV and the rest (just another sdkmanager pass). A few things to watch here: on the Xavier, disable sleep so it stays powered on; note its IP address; make sure it is online; don't change the Xavier's apt sources; when flashing with sdkmanager, untick the OS component; and the USB cable can be unplugged, because the two machines talk over the LAN.
1. Pinning packages so the L4T system doesn't upgrade itself:

sudo dpkg --get-selections | more           # list the status of every package
sudo apt-mark hold nvidia-l4t-kernel        # hold the kernel
sudo apt-mark hold nvidia-l4t-kernel-dtbs   # hold the kernel DTBs
sudo dpkg --get-selections | grep hold      # confirm the holds took effect
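If you ever do want to take an upgrade, release the holds the same way:

sudo apt-mark unhold nvidia-l4t-kernel
sudo apt-mark unhold nvidia-l4t-kernel-dtbs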

Disabling L4T system upgrades (example: Jetson L4T R32.4.x):
System Settings → Software & Updates → Other Software, then untick "…r32.4 main".



Mounting drives: https://blog.csdn.net/xingdou520/article/details/84309155

2. Setting up and running YOLOv5

2.1 Setting up a virtual environment

First install Archiconda3-0.2.3-Linux-aarch64.sh (an aarch64 equivalent of Anaconda); in a terminal:

sh Archiconda3-0.2.3-Linux-aarch64.sh

I use zsh (same idea as bash); after the install, edit your ~/.zshrc (or ~/.bashrc):

vim ~/.zshrc

and add the conda path on the last line.
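For example, assuming Archiconda went into its default location under your home directory:

export PATH=~/archiconda3/bin:$PATH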

Then in the terminal:

source ~/.zshrc
conda --version


Create the virtual environment:

conda create -n yolo python=3.6.9
source activate yolo

Clone the YOLOv5 and tensorrtx repositories:

git clone -b v5.0 https://github.com/ultralytics/yolov5.git
git clone https://github.com/wang-xinyu/tensorrtx.git

I took a few detours setting up the environment; in the end the requirements file is all you really need. Packages that fail to download can be installed individually through a mirror:

pip install matplotlib==3.2.2
pip install -r requirements.txt -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
python detect.py --source 0 --weights weights/yolov5s.pt --conf 0.25   # test after installing

Some package versions clash once TensorRT enters the picture, so I keep two package lists: the stock YOLOv5 environment, and the YOLOv5 packages under my TensorRT environment.


Most of the errors I hit at runtime were numpy-related; switching to a different numpy version usually helps.

To get cv2 importing successfully, copy cv2.cpython-36m-aarch64-linux-gnu.so into the virtual environment:

cp -r cv2.cpython-36m-aarch64-linux-gnu.so /media/nvidia/ab1625d4-25fe-4be1-a9da-2198e4c13bb0/softwares/envs/yolo/lib/python3.6
pip install numpy==1.18.5

Install torch:

pip install /home/nvidia/Desktop/torch-1.7.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install libopenblas-dev

Install torchvision:

git clone -b v0.8.1 https://gitee.com/zero-one-game/vision vision-0.8.1
cd vision-0.8.1
sudo /home/nvidia/archiconda3/envs/yolo/bin/python3.6 setup.py install
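A quick sanity check that the torch and torchvision builds match and that CUDA is visible:

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"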

In the yolov5 folder, with the virtual environment activated, install the remaining packages:

sudo apt install libfreetype6-dev -y
sudo apt install python3-matplotlib -y
pip install -r requirements.txt
pip install Cython matplotlib==3.2.2 numpy==1.18.5 Pillow PyYAML==5.4.1 scipy==1.5.4 tensorboard==1.15.0
pip install tqdm==4.58.0 seaborn==0.11.1 pandas thop pycocotools==2.0.2

matplotlib still wouldn't install; I tried https://github.com/yqlbu/jetson-packages-family without luck, then switched the pip source per https://www.it610.com/article/1280107338753130496.htm and ran:

pip install matplotlib==3.2.2
pip install numpy==1.18.5  -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com

On the first run it will also ask to download a .ttf font file.
For VS Code on arm64, see: https://blog.csdn.net/qq_45321807/article/details/109428777

2.2 Accelerating YOLOv5 with TensorRT

Useful reference posts for this part:
[1] https://blog.csdn.net/qq_40691868/article/details/117331162
[2] https://blog.csdn.net/xue_csdn/article/details/100748309
[3] https://github.com/wang-xinyu/tensorrtx.git
The main step is to use gen_wts.py from the project to convert the best.pt file into best.wts:


python gen_wts.py -w weights/best.pt


Then turn the .wts file into an engine. First create a build folder inside the tensorrtx project:

mkdir build
cd build
cmake ..
make
sudo ./yolov5 -s yolov5s.wts yolov5s.engine s
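The -s run serializes the .wts weights into a TensorRT engine (the trailing s selects the yolov5s variant). To sanity-check the result, the same binary should be able to run detection on a folder of images:

sudo ./yolov5 -d yolov5s.engine ../samples   # -d deserializes the engine and runs inference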


Note that the yolov5 and tensorrtx versions must match; mismatched versions show up as errors at engine build or deserialization time.


Since my training data differs from YOLO's stock 80 classes, the class count and labels need to be changed to your own, as sketched below.
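In tensorrtx the class count is a compile-time constant in yololayer.h (assuming the v5.0 branch layout), so it has to be edited before building. For my 5-class pedestrian model that amounts to something like:

# tensorrtx/yolov5/yololayer.h originally has: static constexpr int CLASS_NUM = 80;
sed -i 's/CLASS_NUM = 80/CLASS_NUM = 5/' yololayer.h
# then re-run make in build/, regenerate the .engine,
# and edit the categories list in the Python script to match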

3. Sending data over the CAN bus

I save the detected box coordinates to a log file and later send them over the CAN bus, so the inference script is modified accordingly.
This is my yolo_trt_test.py:

"""
An example that uses TensorRT's Python api to make inferences.
"""
import sys
import ctypes
import os
import shutil
import random
import sys
import threading
import time
import cv2
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
import tensorrt as trt
import torch
import torchvision
import argparse

CONF_THRESH = 0.5
IOU_THRESHOLD = 0.4


def get_img_path_batches(batch_size, img_dir):
    ret = []
    batch = []
    for root, dirs, files in os.walk(img_dir):
        for name in files:
            if len(batch) == batch_size:
                ret.append(batch)
                batch = []
            batch.append(os.path.join(root, name))
    if len(batch) > 0:
        ret.append(batch)
    return ret


def plot_one_box(x, img, color=None, label=None, line_thickness=None):
    """
    description: Plots one bounding box on image img,
                 this function comes from YoLov5 project.
    param:
        x:      a box likes [x1,y1,x2,y2]
        img:    a opencv image object
        color:  color to draw rectangle, such as (0,255,0)
        label:  str
        line_thickness: int
    return:
        no return
    """
    tl = (line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1)  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    # print the box coordinates in a fixed format (CAN frame style: ID#DATA)
    print('1F334455#' + hex(int(x[0]))[2:].zfill(4) + hex(int(x[1]))[2:].zfill(4) +
          hex(int(x[2]))[2:].zfill(4) + hex(int(x[3]))[2:].zfill(4))
    _console = sys.stdout  # keep a handle on the console stdout
    fff = open("outfile.txt", "a+")  # open the log file
    sys.stdout = fff  # redirect print output into the file
    # with open("outfile.txt", "a+") as f:
    #     sys.stdout = f
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    if label:
        tf = max(tl - 1, 1)  # font thickness
        t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
        c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
        cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA)  # filled
        cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255],
                    thickness=tf, lineType=cv2.LINE_AA)


class YoLov5TRT(object):
    """
    description: A YOLOv5 class that wraps TensorRT ops, preprocess and postprocess ops.
    """

    def __init__(self, engine_file_path):
        # Create a Context on this device
        self.ctx = cuda.Device(0).make_context()
        stream = cuda.Stream()
        TRT_LOGGER = trt.Logger(trt.Logger.INFO)
        runtime = trt.Runtime(TRT_LOGGER)

        # Deserialize the engine from file
        with open(engine_file_path, "rb") as f:
            engine = runtime.deserialize_cuda_engine(f.read())
        context = engine.create_execution_context()

        host_inputs = []
        cuda_inputs = []
        host_outputs = []
        cuda_outputs = []
        bindings = []

        for binding in engine:
            # print('bingding:', binding, engine.get_binding_shape(binding))
            size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
            dtype = trt.nptype(engine.get_binding_dtype(binding))
            # Allocate host and device buffers
            host_mem = cuda.pagelocked_empty(size, dtype)
            cuda_mem = cuda.mem_alloc(host_mem.nbytes)
            # Append the device buffer to device bindings.
            bindings.append(int(cuda_mem))
            # Append to the appropriate list.
            if engine.binding_is_input(binding):
                self.input_w = engine.get_binding_shape(binding)[-1]
                self.input_h = engine.get_binding_shape(binding)[-2]
                host_inputs.append(host_mem)
                cuda_inputs.append(cuda_mem)
            else:
                host_outputs.append(host_mem)
                cuda_outputs.append(cuda_mem)

        # Store
        self.stream = stream
        self.context = context
        self.engine = engine
        self.host_inputs = host_inputs
        self.cuda_inputs = cuda_inputs
        self.host_outputs = host_outputs
        self.cuda_outputs = cuda_outputs
        self.bindings = bindings
        self.batch_size = engine.max_batch_size

    def infer(self, input_image_path):
        threading.Thread.__init__(self)
        # Make self the active context, pushing it on top of the context stack.
        self.ctx.push()
        self.input_image_path = input_image_path
        # Restore
        stream = self.stream
        context = self.context
        host_inputs = self.host_inputs
        cuda_inputs = self.cuda_inputs
        host_outputs = self.host_outputs
        cuda_outputs = self.cuda_outputs
        bindings = self.bindings
        # Do image preprocess
        batch_image_raw = []
        batch_origin_h = []
        batch_origin_w = []
        batch_input_image = np.empty(shape=[self.batch_size, 3, self.input_h, self.input_w])
        input_image, image_raw, origin_h, origin_w = self.preprocess_image(input_image_path)
        batch_origin_h.append(origin_h)
        batch_origin_w.append(origin_w)
        np.copyto(batch_input_image, input_image)
        batch_input_image = np.ascontiguousarray(batch_input_image)

        # Copy input image to host buffer
        np.copyto(host_inputs[0], batch_input_image.ravel())
        start = time.time()
        # Transfer input data to the GPU.
        cuda.memcpy_htod_async(cuda_inputs[0], host_inputs[0], stream)
        # Run inference.
        context.execute_async(batch_size=self.batch_size,
                              bindings=bindings, stream_handle=stream.handle)
        # Transfer predictions back from the GPU.
        cuda.memcpy_dtoh_async(host_outputs[0], cuda_outputs[0], stream)
        # Synchronize the stream
        stream.synchronize()
        end = time.time()
        # Remove any context from the top of the context stack, deactivating it.
        self.ctx.pop()
        # Here we use the first row of output in that batch_size = 1
        output = host_outputs[0]
        # Do postprocess
        result_boxes, result_scores, result_classid = self.post_process(output, origin_h, origin_w)
        # Draw rectangles and labels on the original image
        for j in range(len(result_boxes)):
            box = result_boxes[j]
            plot_one_box(
                box,
                image_raw,
                label="{}:{:.2f}".format(categories[int(result_classid[j])], result_scores[j]),
            )
        return image_raw, end - start

    def destroy(self):
        # Remove any context from the top of the context stack, deactivating it.
        self.ctx.pop()

    def get_raw_image(self, image_path_batch):
        """
        description: Read an image from image path
        """
        for img_path in image_path_batch:
            yield cv2.imread(img_path)

    def get_raw_image_zeros(self, image_path_batch=None):
        """
        description: Ready data for warmup
        """
        for _ in range(self.batch_size):
            yield np.zeros([self.input_h, self.input_w, 3], dtype=np.uint8)

    def preprocess_image(self, input_image_path):
        """
        description: Convert BGR image to RGB, resize and pad it to target size,
                     normalize to [0,1], transform to NCHW format.
        param:
            input_image_path: str, image path
        return:
            image: the processed image
            image_raw: the original image
            h: original height
            w: original width
        """
        image_raw = input_image_path
        h, w, c = image_raw.shape
        image = cv2.cvtColor(image_raw, cv2.COLOR_BGR2RGB)
        # Calculate width and height and paddings
        r_w = self.input_w / w
        r_h = self.input_h / h
        if r_h > r_w:
            tw = self.input_w
            th = int(r_w * h)
            tx1 = tx2 = 0
            ty1 = int((self.input_h - th) / 2)
            ty2 = self.input_h - th - ty1
        else:
            tw = int(r_h * w)
            th = self.input_h
            tx1 = int((self.input_w - tw) / 2)
            tx2 = self.input_w - tw - tx1
            ty1 = ty2 = 0
        # Resize the image with long side while maintaining ratio
        image = cv2.resize(image, (tw, th))
        # Pad the short side with (128,128,128)
        image = cv2.copyMakeBorder(image, ty1, ty2, tx1, tx2, cv2.BORDER_CONSTANT, (128, 128, 128))
        image = image.astype(np.float32)
        # Normalize to [0,1]
        image /= 255.0
        # HWC to CHW format:
        image = np.transpose(image, [2, 0, 1])
        # CHW to NCHW format
        image = np.expand_dims(image, axis=0)
        # Convert the image to row-major order, also known as "C order":
        image = np.ascontiguousarray(image)
        return image, image_raw, h, w

    def xywh2xyxy(self, origin_h, origin_w, x):
        """
        description: Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2]
                     where xy1=top-left, xy2=bottom-right
        """
        y = torch.zeros_like(x) if isinstance(x, torch.Tensor) else np.zeros_like(x)
        r_w = self.input_w / origin_w
        r_h = self.input_h / origin_h
        if r_h > r_w:
            y[:, 0] = x[:, 0] - x[:, 2] / 2
            y[:, 2] = x[:, 0] + x[:, 2] / 2
            y[:, 1] = x[:, 1] - x[:, 3] / 2 - (self.input_h - r_w * origin_h) / 2
            y[:, 3] = x[:, 1] + x[:, 3] / 2 - (self.input_h - r_w * origin_h) / 2
            y /= r_w
        else:
            y[:, 0] = x[:, 0] - x[:, 2] / 2 - (self.input_w - r_h * origin_w) / 2
            y[:, 2] = x[:, 0] + x[:, 2] / 2 - (self.input_w - r_h * origin_w) / 2
            y[:, 1] = x[:, 1] - x[:, 3] / 2
            y[:, 3] = x[:, 1] + x[:, 3] / 2
            y /= r_h
        return y

    def post_process(self, output, origin_h, origin_w):
        """
        description: postprocess the prediction
        param:
            output: a tensor like [num_boxes, cx, cy, w, h, conf, cls_id, ...]
            origin_h: height of original image
            origin_w: width of original image
        return:
            result_boxes: final boxes, each row is a box [x1, y1, x2, y2]
            result_scores: final scores, one per box
            result_classid: final class ids, one per box
        """
        # Get the num of boxes detected
        num = int(output[0])
        # Reshape to a two dimensional ndarray
        pred = np.reshape(output[1:], (-1, 6))[:num, :]
        # to a torch Tensor
        pred = torch.Tensor(pred).cuda()
        # Get the boxes
        boxes = pred[:, :4]
        # Get the scores
        scores = pred[:, 4]
        # Get the classid
        classid = pred[:, 5]
        # Choose those boxes that score > CONF_THRESH
        si = scores > CONF_THRESH
        boxes = boxes[si, :]
        scores = scores[si]
        classid = classid[si]
        # Transform bbox from [center_x, center_y, w, h] to [x1, y1, x2, y2]
        boxes = self.xywh2xyxy(origin_h, origin_w, boxes)
        # Do nms
        indices = torchvision.ops.nms(boxes, scores, iou_threshold=IOU_THRESHOLD).cpu()
        result_boxes = boxes[indices, :].cpu()
        result_scores = scores[indices].cpu()
        result_classid = classid[indices].cpu()
        return result_boxes, result_scores, result_classid


class inferThread(threading.Thread):
    def __init__(self, yolov5_wrapper):
        threading.Thread.__init__(self)
        self.yolov5_wrapper = yolov5_wrapper

    def infer(self, frame):
        batch_image_raw, use_time = self.yolov5_wrapper.infer(frame)
        return batch_image_raw, use_time


class warmUpThread(threading.Thread):
    def __init__(self, yolov5_wrapper):
        threading.Thread.__init__(self)
        self.yolov5_wrapper = yolov5_wrapper

    def run(self):
        batch_image_raw, use_time = self.yolov5_wrapper.infer(self.yolov5_wrapper.get_raw_image_zeros())


if __name__ == "__main__":
    # load custom plugin and engine
    parser = argparse.ArgumentParser()
    parser.add_argument('--engine', nargs='+', type=str,
                        default="build/yolov5s.engine", help='.engine path(s)')
    parser.add_argument('--save', type=int, default=0, help='save?')
    opt = parser.parse_args()
    PLUGIN_LIBRARY = "build/libmyplugins.so"
    engine_file_path = opt.engine
    ctypes.CDLL(PLUGIN_LIBRARY)

    # load coco labels
    categories = ["person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat",
                  "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat",
                  "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack",
                  "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball",
                  "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket",
                  "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
                  "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair",
                  "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse",
                  "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink",
                  "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier",
                  "toothbrush"]

    # a YoLov5TRT instance
    yolov5_wrapper = YoLov5TRT(engine_file_path)
    cap = cv2.VideoCapture(-1)
    try:
        thread1 = inferThread(yolov5_wrapper)
        thread1.start()
        thread1.join()
        while 1:
            _, frame = cap.read()
            img, t = thread1.infer(frame)
            cv2.imshow("result", img)
            if cv2.waitKey(1) & 0XFF == ord('q'):  # 1 millisecond
                break
    finally:
        # destroy the instance
        cap.release()
        cv2.destroyAllWindows()
        yolov5_wrapper.destroy()

I need to push the coordinates out over can1 and then receive them on my own can0 as a loopback test. I wrote three small .sh files for this; the sketch below shows the idea.
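The three scripts boiled down to something like the following, assuming SocketCAN and the can-utils package (ip, cansend, candump):

sudo ip link set can1 type can bitrate 500000   # use whatever bitrate your bus runs at
sudo ip link set can1 up
sudo ip link set can0 type can bitrate 500000
sudo ip link set can0 up
cansend can1 1F334455#0011002200330044          # one frame in the ID#DATA format the script prints
candump can0                                    # watch the frames arrive on can0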

I've uploaded the v6.0 yolo.engine file I generated:
https://download.csdn.net/download/weixin_38226321/45462164
and the v5.0 pedestrian-detection engine my project uses:
https://download.csdn.net/download/weixin_38226321/45458756

4. Streaming video out

This part mainly follows https://blog.csdn.net/u013033845/article/details/86765598
Running the _zhangwei folder from that author's files is enough; you only need to adapt the config file.
On Linux I hit a "cannot assign requested address" error, caused by the client exhausting its connection ports; using the machine's loopback address makes the error go away.

[server]
; host that receives the video stream
host=192.168.3.105
;host=127.0.0.1
port=12340
;feed_host=127.0.0.1
; host attached to the camera, which sends the video stream
feed_host=192.168.3.78
feed_port=12341

[camera]
w=1280
h=720
d=3
pieces=10

[delay]
frame=0.01
piece=0.001

[header]
name=FIRE
data=4
index=1
time=7
data_size=60000
total_size=60016

[send]
queue_size=60
piece_limit=50
piece_min=15
fps=30
recv_fps=30

[receive]
queue_size=128
frame_limit=50
piece_limit=30
frame_delay=0.1

Then merge this into the YOLO project.
In my first working version I streamed the raw video and carried the box coordinates over the CAN bus, drawing them on the server side.
It mainly uses yolo_trt_test3copy.py and client3.py:

"""
An example that uses TensorRT's Python api to make inferences.
"""
import sys
import ctypes
import os
import shutil
import random
import sys
import threading
import time
import cv2
import numpy as np
import pycuda.autoinit
import pycuda.driver as cuda
import tensorrt as trt
import torch
import torchvision
import argparse
from client3 import SendVideo
from client3 import WebVideoStream
CONF_THRESH = 0.5
IOU_THRESHOLD = 0.4

# get_img_path_batches, plot_one_box, YoLov5TRT, inferThread and warmUpThread
# are identical to yolo_trt_test.py above, so they are not repeated here.

if __name__ == "__main__":
    # load custom plugin and engine
    parser = argparse.ArgumentParser()
    parser.add_argument('--engine', nargs='+', type=str,
                        default="build/yolov5s.engine", help='.engine path(s)')
    parser.add_argument('--save', type=int, default=0, help='save?')
    opt = parser.parse_args()
    PLUGIN_LIBRARY = "build/libmyplugins.so"
    engine_file_path = opt.engine
    ctypes.CDLL(PLUGIN_LIBRARY)

    # my own label set instead of the 80 COCO classes
    categories = ["pedestrians", "riders", "partially", "ignore", "crowd"]

    # a YoLov5TRT instance
    yolov5_wrapper = YoLov5TRT(engine_file_path)
    wvs = WebVideoStream()
    cap = wvs.stream
    wvs.start()
    try:
        thread1 = inferThread(yolov5_wrapper)
        thread1.start()
        thread1.join()
        while 1:
            _, frame = cap.read()
            # r_c(cap, _, frame)
            img, t = thread1.infer(frame)
            # wvs.frame_raw = img
            cv2.imshow("result", img)
            if cv2.waitKey(1) & 0XFF == ord('q'):  # 1 millisecond
                break
            SendVideo(wvs, img)
    finally:
        # destroy the instance
        cap.release()
        cv2.destroyAllWindows()
        yolov5_wrapper.destroy()

client3.py

from threading import Thread, Lock
# from queue import Queue
# from collections import deque as Queue
import socket
import cv2
import numpy
import time
import sys
import os
from fps import FPS
from config import Config
from packer import Packer
import logging

logging.basicConfig(level=logging.DEBUG, filename='output.log',
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


class WebVideoStream:
    def __init__(self, src="C:\\Tools\\titan_test.mp4"):
        # 1080p D:\\kankan\\backup\\Automata.2014.1080p.BluRay.x264.YIFY.mp4
        # 720p  C:\\Tools\\titan_test.mp4
        self.config = Config()
        self.packer = Packer()
        # initialize the video stream along with the boolean
        # used to indicate if the thread should be stopped or not
        os.environ["OPENCV_VIDEOIO_DEBUG"] = "1"
        os.environ["OPENCV_VIDEOIO_PRIORITY_MSMF"] = "0"
        encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 15]
        # self.stream = cv2.VideoCapture(src)
        self.stream = cv2.VideoCapture(-1)
        self.stream.set(cv2.CAP_PROP_FRAME_WIDTH, self.packer.w)   # float
        self.stream.set(cv2.CAP_PROP_FRAME_HEIGHT, self.packer.h)  # float
        # self.stream.set(cv2.CAP_PROP_MODE, cv2.CAP_MODE_YUYV)
        # print(self.stream.get(cv2.CAP_PROP_FPS))  # default fps is 30
        # self.stream.set(cv2.CAP_PROP_FPS, 20)     # cv version is 3.4.2
        # (a commented-out read/imshow debug loop from the original is omitted here)
        self.stopped = False
        self.requesting = False
        self.request = False
        self.quit = False
        self.fps = 40
        self.recv_fps = 0
        self.push_sleep = 0.01
        self.push_sleep_min = 0.001
        self.push_sleep_max = 0.2
        self.send_sleep = 0.05
        self.send_sleep_min = 0.01
        self.send_sleep_max = 0.1
        self.network_delay = 0
        self.delay_timer = int(time.time() * 1000)
        self.piece_array = []
        self.piece_time = int(time.time() * 1000)
        self.piece_fps = 40
        for i in range(self.packer.frame_pieces):
            self.piece_array.append(None)
        self.frame = numpy.zeros(self.packer.frame_size_3d, dtype=numpy.uint8)
        self.imshow = self.frame.reshape(self.packer.h, self.packer.w, self.packer.d)
        self.frame_size = 0
        self.piece_size = 0
        self.frame_pieces = 0
        self.init_config()
        self.init_connection()
        # initialize thread and lock
        self.thread = Thread(target=self.update, args=())
        self.thread.daemon = True

    def init_config(self):
        config = self.config
        # load the connection info
        host = config.get("server", "host")
        port = config.get("server", "port")
        feed_host = config.get("server", "feed_host")
        feed_port = config.get("server", "feed_port")
        self.address = (host, int(port))
        self.feed_address = (feed_host, int(feed_port))
        # load the delay info
        self.frame_delay = float(config.get("delay", "frame"))
        self.piece_delay = float(config.get("delay", "piece"))
        # load the queue size
        self.queue_size = int(config.get("receive", "queue_size"))

    def init_connection(self):
        try:
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            # self.sock.bind(self.address)
        except socket.error as msg:
            print(msg)
            sys.exit(1)

    def close_connection(self):
        self.sock.close()

    # (a commented-out init_feedback_connection method from the original is omitted here)

    def start(self):
        # start a thread to read frames from the video stream
        self.thread.start()
        recv_thread = Thread(target=self.recv_thread, args=())
        recv_thread.daemon = True
        recv_thread.start()
        return self

    def update(self):
        piece_size = self.packer.piece_size
        # keep looping infinitely until the thread is stopped
        while True:
            # if the thread indicator variable is set, stop the thread
            if self.stopped:
                return
            # self.Q_stuck_control()
            time.sleep(self.push_sleep)
            # otherwise, read the next frame from the stream
            (grabbed, frame_raw) = self.stream.read()
            now = int(time.time() * 1000)
            for i in range(self.packer.frame_pieces):
                self.packer.pack_data(i, now, frame_raw, self.piece_array,
                                      self.piece_time, self.piece_fps)
            # print("pfps:", self.piece_fps)
            return  # present in the original, so the loop body only runs once

    def Q_stuck_control(self):
        if self.piece_fps == 0:
            return False  # zero means no change yet
        if self.piece_fps > self.packer.send_fps:
            self.push_sleep = min(self.push_sleep + 0.01, self.push_sleep_max)
            return True
        if self.piece_fps < self.packer.send_fps:
            self.push_sleep = max(self.push_sleep - 0.01, self.push_sleep_min)
            return False

    def send_stuck_control(self):
        if self.recv_fps == 0:
            return False
        if self.recv_fps > self.packer.recv_fps_limit:
            self.send_sleep = min(self.send_sleep + 0.01, self.send_sleep_max)
            return True
        if self.recv_fps < self.packer.recv_fps_limit:
            self.send_sleep = max(self.send_sleep - 0.01, self.send_sleep_min)
            return False

    def get_request(self):
        if self.requesting:
            return
        print("waiting...")
        thread = Thread(target=self.get_request_thread, args=())
        thread.daemon = True
        thread.start()
        self.requesting = True

    def get_request_thread(self):
        while True:
            data = b''
            try:
                data, address = self.sock.recvfrom(4)
            except:
                pass
            if data == b"get":
                self.request = True
                break
            elif data == b"quit":
                self.quit = True
                break

    def read(self, i):
        return self.piece_array[i]

    def read_send(self, i):
        pack = self.piece_array[i]
        if pack is None:
            return
        self.sock.sendto(pack, self.address)

    def send_thread(self, i):
        pack = self.piece_array[i]
        if pack is None:
            return
        self.sock.sendto(pack, self.address)

    def recv_thread(self):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(self.feed_address)
        s.listen(1)
        conn, addr = s.accept()
        while True:
            data = conn.recv(self.packer.info_pack_len)
            if len(data) > 0:
                sname, server_fps, send_ctime = self.packer.unpack_info_data(data)
                now = int(time.time() * 1000)
                self.network_delay = int((now - send_ctime) / 2.0)
                self.recv_fps = server_fps
        conn.close()

    def stop(self):
        # indicate that the thread should be stopped
        self.stopped = True


def SendVideo(wvs, img):
    t = 0
    if t == 0:
        sock = wvs.sock
        address = wvs.address
        now = time.time()
        wvs.send_stuck_control()
        time.sleep(wvs.send_sleep)
        for i in range(wvs.packer.frame_pieces):
            wvs.read_send(i)
        now1 = time.time()
        cnow = int(now1 * 1000)
        ctime = now1 - now
        # print("frame time", ctime)
        if ctime > 0:
            send_fps = str(int(1.0 / ctime)).ljust(4)
            recv_fps = str(wvs.recv_fps).ljust(4)
            net_delay = str(wvs.network_delay).ljust(4)
            if cnow - wvs.delay_timer > 300:
                wvs.delay_timer = cnow
                font = cv2.FONT_HERSHEY_SIMPLEX
                bottomLeftCornerOfText = (10, 50)
                fontScale = 0.7
                fontColor = (255, 255, 255)
                lineType = 2
                # (a commented-out cv2.putText/imshow FPS overlay and other
                #  commented-out debug code from the original are omitted here)
    else:
        con = Config()
        host = con.get("server", "host")
        port = con.get("server", "port")
        address = (host, int(port))
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        capture = cv2.VideoCapture(0)
        capture.set(cv2.CAP_PROP_MODE, cv2.CAP_MODE_YUYV)
        # read one frame; on success ret=1 and frame holds the image, on failure ret=0
        ret, frame = capture.read()
        encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 60]
        # sleep briefly so we don't send faster than the server can process;
        # increase this if the server does a lot of work per frame
        time.sleep(0.01)
        ret, frame = capture.read()
        frame = cv2.flip(frame, 1)  # horizontal flip
        result, imgencode = cv2.imencode('.jpg', frame, encode_param)
        print(len(imgencode))
        s = frame.flatten().tostring()
        for i in range(20):
            time.sleep(0.001)
            sock.sendto(s[i * 46080:(i + 1) * 46080] + i.to_bytes(1, byteorder='big'), address)

The second approach is to stream the video with the boxes already drawn on the frames.

5. Device settings

1. Changing the fan speed:

sudo vi /sys/devices/pwm-fan/target_pwm

Edit the value; the range is 0 to 255 and the default is 77.
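Or non-interactively:

echo 128 | sudo tee /sys/devices/pwm-fan/target_pwm   # any value from 0 to 255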
2. Installing VS Code: download the arm64 .deb from https://code.visualstudio.com/Download

6. Windows-Xavier communication

The project needs Windows to send messages to the Xavier over UDP, and the Xavier to send data back to Windows over CAN.
First connect the two with an Ethernet cable and set their IP addresses: Xavier 192.168.1.124, Windows 192.168.3.123. The Xavier reads whatever Windows sends over UDP and appends it to udp_can.txt.
The Windows sender:

import socket
def send_message(udp_socket, dest_ip, dest_port):
    send_data = input("Enter the message to send: ")
    udp_socket.sendto(send_data.encode("gbk"), (dest_ip, dest_port))

# def receive_message(udp_socket):
#     udp_rece = udp_socket.recvfrom(1024)
#     print(udp_rece[0].decode("gbk"), udp_rece[1])

def main():
    # create the socket
    udp_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # dest_ip = input("Enter the destination ip: ")
    # dest_port = int(input("Enter the destination port: "))
    dest_ip = "192.168.1.124"
    dest_port = 8000
    # bind the local ip and port
    udp_socket.bind(("", 8888))
    # main loop
    while True:
        # send
        send_message(udp_socket, dest_ip, dest_port)
        # # receive
        # receive_message(udp_socket)

if __name__ == "__main__":
    main()

The Xavier receiver:

import socket

# def send_message(udp_socket, dest_ip, dest_port):
#     send_data = input("Enter the message to send: ")
#     udp_socket.sendto(send_data.encode("gbk"), (dest_ip, dest_port))

def receive_message(udp_socket):
    udp_rece = udp_socket.recvfrom(1024)
    print(udp_rece[0].decode("gbk"), udp_rece[1])
    with open("udp_can.txt", 'a') as f:
        f.write(udp_rece[0].decode("gbk") + "\n")

def main():
    # create the socket
    udp_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # bind the local ip and port
    udp_socket.bind(("", 8000))
    # main loop
    while True:
        # receive
        receive_message(udp_socket)

if __name__ == "__main__":
    main()

Step two: when new data appears in udp_can.txt, it gets sent out over the CAN bus; a sketch of that glue follows.
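A minimal sketch of the forwarding step, assuming can-utils is installed and each appended line is already in cansend's ID#DATA format:

tail -F udp_can.txt | while read -r frame; do
    cansend can1 "$frame"   # can1 is whichever bus is wired to the Windows side
done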

7. References

[1] https://blog.csdn.net/qq_40691868/article/details/114379061
[2] https://blog.csdn.net/qq_40691868/article/details/114362278
[3] https://blog.csdn.net/qq_40691868/article/details/117331162
[4] https://blog.csdn.net/xingdou520/article/details/84309155
