Preface

Recently my company needed to recognize the brand and model series of trucks in images. After training with PaddleClas I obtained a brand recognition model and a series recognition model, and both now have to be deployed on a single Huawei Cloud GPU server. To serve multiple models at the same time, PaddleServing offers two options: the Pipeline service or the C++ Serving service. Since C++ Serving requires compiling from source, which is cumbersome, the Pipeline approach is used below to deploy the models chained together.

For how to set up a GPU build of PaddlePaddle on a Huawei Cloud server, see: https://blog.csdn.net/loutengyuan/article/details/126527326

Environment Setup

Both the PaddleClas runtime environment and the Paddle Serving runtime environment are required.

  • Prepare the PaddleClas runtime environment
# clone the repository
git clone https://github.com/PaddlePaddle/PaddleClas
  • Install the PaddleServing runtime environment as follows
# install serving, used to start the service
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.8.3.post102-py3-none-any.whl
pip3 install paddle_serving_server_gpu-0.8.3.post102-py3-none-any.whl

# install the client, used to send requests to the service
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.8.3-cp38-none-any.whl
pip3 install paddle_serving_client-0.8.3-cp38-none-any.whl

# install serving-app
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.8.3-py3-none-any.whl
pip3 install paddle_serving_app-0.8.3-py3-none-any.whl
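After installation, an optional sanity check confirms that all three wheels import under the same Python 3.8 interpreter that will launch the service. This is only a minimal sketch; the file name sanity_check.py is my own and the printed paths are purely informational:

# sanity_check.py — a minimal sketch: confirm all three Serving packages import cleanly
import importlib

for pkg in ["paddle_serving_server", "paddle_serving_client", "paddle_serving_app"]:
    mod = importlib.import_module(pkg)  # raises ImportError if the wheel is missing
    print(pkg, "->", mod.__file__)

Run it with python3.8 sanity_check.py; if any import fails, re-check the pip3 installs above.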

Service Data Preparation

When deploying an image recognition service with PaddleServing, each of the saved inference models must first be converted to a Serving model.

Model Conversion

Enter the working directory:

cd PaddleClas/deploy/

Create and enter the models folder:

# create and enter the models folder
mkdir models
cd models

Place the trained inference models in this folder: the detection model (picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer), the brand recognition model (rec_brands_v1.0_infer), and the series recognition model (rec_series_v1.0_infer). The structure is as follows:

├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer
│   ├── infer_cfg.yml
│   ├── inference.pdiparams
│   ├── inference.pdiparams.info
│   └── inference.pdmodel
├── rec_brands_v1.0_infer
│   ├── inference.pdiparams
│   ├── inference.pdiparams.info
│   └── inference.pdmodel
└── rec_series_v1.0_infer
    ├── inference.pdiparams
    ├── inference.pdiparams.info
    ├── inference.pdmodel
    └── readme.txt

Convert the general detection inference model to a Serving model:

# convert the general detection model
python3.8 -m paddle_serving_client.convert --dirname ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_infer/ \
--model_filename inference.pdmodel  \
--params_filename inference.pdiparams \
--serving_server ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ \
--serving_client ./picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/

After the general detection inference model is converted, two new folders, picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/ and picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/, appear in the current directory with the following structure:

├── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── picodet_PPLCNet_x2_5_mainbody_lite_v1.0_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt
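The same conversion is repeated for all three models (the other two follow below), so it can optionally be scripted. The sketch below, run from the models/ directory, just shells out to the same paddle_serving_client.convert CLI shown above; the model name list mirroring the folders in models/ is the only assumption:

# convert_all.py — a minimal sketch: loop the CLI conversion over all three models
import subprocess

MODELS = [
    "picodet_PPLCNet_x2_5_mainbody_lite_v1.0",
    "rec_brands_v1.0",
    "rec_series_v1.0",
]

for name in MODELS:
    subprocess.run([
        "python3.8", "-m", "paddle_serving_client.convert",
        "--dirname", "./{}_infer/".format(name),
        "--model_filename", "inference.pdmodel",
        "--params_filename", "inference.pdiparams",
        "--serving_server", "./{}_serving/".format(name),
        "--serving_client", "./{}_client/".format(name),
    ], check=True)  # fail fast if any conversion errors out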

Convert the brand recognition inference model to a Serving model:

# convert the brand recognition model
python3.8 -m paddle_serving_client.convert \
--dirname ./rec_brands_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./rec_brands_v1.0_serving/ \
--serving_client ./rec_brands_v1.0_client/

After the brand recognition inference model is converted, two new folders, rec_brands_v1.0_serving/ and rec_brands_v1.0_client/, appear in the current directory with the following structure:

├── rec_brands_v1.0_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── rec_brands_v1.0_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt

Modify the alias names in serving_server_conf.prototxt under rec_brands_v1.0_serving/ and in serving_client_conf.prototxt under rec_brands_v1.0_client/: change alias_name under fetch_var to features. The modified serving_server_conf.prototxt reads:

feed_var {
  name: "x"
  alias_name: "x"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "features"
  is_lod_tensor: false
  fetch_type: 1
  shape: 512
}

Convert the series recognition inference model to a Serving model:

# convert the series recognition model
python3.8 -m paddle_serving_client.convert \
--dirname ./rec_series_v1.0_infer/ \
--model_filename inference.pdmodel \
--params_filename inference.pdiparams \
--serving_server ./rec_series_v1.0_serving/ \
--serving_client ./rec_series_v1.0_client/

After the series recognition inference model is converted, two new folders, rec_series_v1.0_serving/ and rec_series_v1.0_client/, appear in the current directory with the following structure:

├── rec_series_v1.0_serving/
│   ├── inference.pdiparams
│   ├── inference.pdmodel
│   ├── serving_server_conf.prototxt
│   └── serving_server_conf.stream.prototxt
│
└── rec_series_v1.0_client/
    ├── serving_client_conf.prototxt
    └── serving_client_conf.stream.prototxt

Likewise, modify the alias names in serving_server_conf.prototxt under rec_series_v1.0_serving/ and in serving_client_conf.prototxt under rec_series_v1.0_client/: change alias_name under fetch_var to features. The modified serving_server_conf.prototxt reads:

feed_var {
  name: "x"
  alias_name: "x"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "save_infer_model/scale_0.tmp_1"
  alias_name: "features"
  is_lod_tensor: false
  fetch_type: 1
  shape: 512
}

The parameters of the conversion command are described in the table below:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| dirname | str | - | Path of the model files to be converted; both the Program structure file and the parameter files are stored in this directory. |
| model_filename | str | None | Name of the file storing the Inference Program structure of the model to be converted. If set to None, __model__ is used as the default file name. |
| params_filename | str | None | Name of the file storing all parameters of the model to be converted. It needs to be specified if and only if all parameters are saved in a single binary file; if they are stored in separate files, set it to None. |
| serving_server | str | "serving_server" | Storage path of the converted server-side model and configuration files. Defaults to serving_server. |
| serving_client | str | "serving_client" | Storage path of the converted client configuration files. Defaults to serving_client. |

Adding the Index Libraries

Place the brand and series index libraries in the parent (deploy) directory:

# go back to the deploy directory
cd ../

The directory structure is as follows:

├── brand_dataset_v1.0/
│   └── index
│       ├── id_map.pkl
│       └── vector.index
└── series_dataset_v1.0/
    └── index
        ├── id_map.pkl
        └── vector.index
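Before starting the service it is worth verifying that both index libraries load. The minimal sketch below, run from the deploy directory, reads each vector.index with faiss together with its id_map.pkl and prints the gallery size:

# check_index.py — a minimal sketch: load each index library and report its size
import os
import pickle

import faiss

for index_dir in ["brand_dataset_v1.0/index", "series_dataset_v1.0/index"]:
    index = faiss.read_index(os.path.join(index_dir, "vector.index"))
    with open(os.path.join(index_dir, "id_map.pkl"), "rb") as f:
        id_map = pickle.load(f)
    print(index_dir, "-> vectors:", index.ntotal, "labels:", len(id_map))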

Service Deployment

Note: the recognition service involves multiple models, so for performance reasons the Pipeline deployment mode is used. Pipeline deployment does not currently support Windows.
Enter the working directory:

# from the deploy directory
cd ./paddleserving/recognition

The paddleserving directory contains the code for starting the Python Pipeline service and the C++ Serving service, and for sending prediction requests:

__init__.py
config.yml                  # configuration file for starting the Python Pipeline service
pipeline_http_client.py     # script that sends pipeline prediction requests over HTTP
pipeline_rpc_client.py      # script that sends pipeline prediction requests over RPC
recognition_web_service.py  # script that starts the pipeline server
readme.md                   # documentation for the recognition serving deployment
run_cpp_serving.sh          # script that starts the C++ Serving deployment
test_cpp_serving_client.py  # script that sends C++ Serving prediction requests over RPC

Modify the config.yml file as follows:

#worker_num: maximum concurrency.
#When build_dag_each_worker=True, the framework creates worker_num processes, each building its own gRPC server and DAG.
##When build_dag_each_worker=False, the framework sets max_workers=worker_num for the main thread's gRPC thread pool.
worker_num: 1

#HTTP port. rpc_port and http_port must not both be empty. When rpc_port is valid and http_port is empty, an HTTP port is not generated automatically.
http_port: 8899
#rpc_port: 9994

dag:
    #op resource type: True for the thread model, False for the process model
    is_thread_op: False

op:
    rec_brands:
        #concurrency: thread-level when is_thread_op=True, otherwise process-level
        concurrency: 1
        #when the op has no server_endpoints, the local service configuration is read from local_service_conf
        local_service_conf:
            #model path
            model_config: ../../models/rec_brands_v1.0_serving
            #device type: when empty, devices decides CPU/GPU; 0=cpu, 1=gpu, 2=tensorRT, 3=arm cpu, 4=kunlun xpu
            device_type: 1
            #device IDs: "" or unset means CPU prediction; "0" or "0,1,2" means GPU prediction on the given cards
            devices: "0" # "0,1"
            #client type: brpc, grpc, or local_predictor; local_predictor predicts in-process without starting a Serving service
            client_type: local_predictor
            #fetch list, using the alias_name of fetch_var in client_config
            fetch_list: ["features"]
    rec_series:
        #concurrency: thread-level when is_thread_op=True, otherwise process-level
        concurrency: 1
        #when the op has no server_endpoints, the local service configuration is read from local_service_conf
        local_service_conf:
            #model path
            model_config: ../../models/rec_series_v1.0_serving
            #device type: when empty, devices decides CPU/GPU; 0=cpu, 1=gpu, 2=tensorRT, 3=arm cpu, 4=kunlun xpu
            device_type: 1
            #device IDs: "" or unset means CPU prediction; "0" or "0,1,2" means GPU prediction on the given cards
            devices: "0" # "0,1"
            #client type: brpc, grpc, or local_predictor; local_predictor predicts in-process without starting a Serving service
            client_type: local_predictor
            #fetch list, using the alias_name of fetch_var in client_config
            fetch_list: ["features"]
    det:
        concurrency: 1
        local_service_conf:
            client_type: local_predictor
            device_type: 1
            devices: '0'
            fetch_list:
            - save_infer_model/scale_0.tmp_1
            model_config: ../../models/picodet_PPLCNet_x2_5_mainbody_lite_v1.0_serving/
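A quick way to catch indentation mistakes after editing is to parse the file and check that every model_config path exists. This sketch assumes PyYAML is installed (pip3 install pyyaml) and is run from the recognition directory:

# check_config.py — a minimal sketch: parse config.yml and verify the model paths
import os

import yaml

with open("config.yml") as f:
    cfg = yaml.safe_load(f)

print("http_port:", cfg.get("http_port"))
for name, op_conf in cfg["op"].items():
    model_config = op_conf["local_service_conf"]["model_config"]
    print(name, "->", model_config, "exists:", os.path.isdir(model_config))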

Modify the recognition_web_service.py file as follows:

# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
from paddle_serving_server.web_service import WebService, Op
from paddle_serving_server.pipeline import RequestOp, ResponseOp
from paddle_serving_server.pipeline import PipelineServer
from paddle_serving_server.pipeline.proto import pipeline_service_pb2
from paddle_serving_server.pipeline.channel import ChannelDataErrcode
import logging
import numpy as np
import sys
import cv2
from paddle_serving_app.reader import *
import base64
import os
import faiss
import pickle
import json


# Custom request op: unpacks the key/value payload of the incoming request.
class TestRequestOp(RequestOp):
    def init_op(self):
        pass

    def unpack_request_package(self, request):
        # print(str(request.method))
        dict_data = {}
        log_id = None
        if request is None:
            raise ValueError("request is None")
        for idx, key in enumerate(request.key):
            dict_data[key] = request.value[idx]
        log_id = request.logid
        return dict_data, log_id, None, ""


# Main-body detection op: finds candidate object boxes in the input image.
class DetOp(Op):
    def init_op(self):
        self.img_preprocess = Sequential([
            BGR2RGB(), Div(255.0),
            Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
            Resize((640, 640)), Transpose((2, 0, 1))
        ])
        self.img_postprocess = RCNNPostprocess("label_list.txt", "output")
        self.threshold = 0.2
        self.max_det_results = 5

    def generate_scale(self, im):
        """
        Args:
            im (np.ndarray): image (np.ndarray)
        Returns:
            im_scale_x: the resize ratio of X
            im_scale_y: the resize ratio of Y
        """
        target_size = [640, 640]
        origin_shape = im.shape[:2]
        resize_h, resize_w = target_size
        im_scale_y = resize_h / float(origin_shape[0])
        im_scale_x = resize_w / float(origin_shape[1])
        return im_scale_y, im_scale_x

    def preprocess(self, input_dicts, data_id, log_id):
        print("{} detect begin --> data_id: {}".format(
            datetime.datetime.now(), data_id))
        (_, input_dict), = input_dicts.items()
        imgs = []
        raw_imgs = []
        for key in input_dict.keys():
            data = base64.b64decode(input_dict[key].encode('utf8'))
            raw_imgs.append(data)
            data = np.frombuffer(data, np.uint8)
            raw_im = cv2.imdecode(data, cv2.IMREAD_COLOR)
            im_scale_y, im_scale_x = self.generate_scale(raw_im)
            im = self.img_preprocess(raw_im)
            im_shape = np.array(im.shape[1:]).reshape(-1)
            scale_factor = np.array([im_scale_y, im_scale_x]).reshape(-1)
            imgs.append({
                "image": im[np.newaxis, :],
                "im_shape": im_shape[np.newaxis, :],
                "scale_factor": scale_factor[np.newaxis, :],
            })
        self.raw_img = raw_imgs
        feed_dict = {
            "image": np.concatenate([x["image"] for x in imgs], axis=0),
            "im_shape": np.concatenate([x["im_shape"] for x in imgs], axis=0),
            "scale_factor": np.concatenate(
                [x["scale_factor"] for x in imgs], axis=0)
        }
        return feed_dict, False, None, ""

    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        boxes = self.img_postprocess(fetch_dict, visualize=False)
        boxes.sort(key=lambda x: x["score"], reverse=True)
        boxes = filter(lambda x: x["score"] >= self.threshold,
                       boxes[:self.max_det_results])
        boxes = list(boxes)
        # convert [xmin, ymin, w, h] to [xmin, ymin, xmax, ymax]
        for i in range(len(boxes)):
            boxes[i]["bbox"][2] += boxes[i]["bbox"][0] - 1
            boxes[i]["bbox"][3] += boxes[i]["bbox"][1] - 1
        result = json.dumps(boxes)
        res_dict = {"bbox_result": result, "image": self.raw_img}
        print("{} detect finish --> data_id: {}".format(
            datetime.datetime.now(), data_id))
        return res_dict, None, ""


# Brand retrieval op: crops the detected boxes and searches the brand index.
class BrandsRecOp(Op):
    def init_op(self):
        self.seq = Sequential([
            BGR2RGB(), Resize((224, 224)), Div(255),
            Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
            Transpose((2, 0, 1))
        ])
        index_dir = "../../brand_dataset_v1.0/index"
        assert os.path.exists(os.path.join(
            index_dir, "vector.index")), "vector.index not found ..."
        assert os.path.exists(os.path.join(
            index_dir, "id_map.pkl")), "id_map.pkl not found ..."
        self.searcher = faiss.read_index(
            os.path.join(index_dir, "vector.index"))
        with open(os.path.join(index_dir, "id_map.pkl"), "rb") as fd:
            self.id_map = pickle.load(fd)
        self.rec_nms_thresold = 0.05
        self.rec_score_thres = 0.5
        self.feature_normalize = True
        self.return_k = 1
        self.area_ratio_thresold = 0.1

    def preprocess(self, input_dicts, data_id, log_id):
        (_, input_dict), = input_dicts.items()
        raw_img = input_dict["image"][0]
        data = np.frombuffer(raw_img, np.uint8)
        origin_img = cv2.imdecode(data, cv2.IMREAD_COLOR)
        dt_boxes = input_dict["bbox_result"]
        boxes = json.loads(dt_boxes)
        # append the whole image as an extra candidate box
        boxes.append({
            "category_id": 0,
            "score": 1.0,
            "bbox": [0, 0, origin_img.shape[1], origin_img.shape[0]]
        })
        self.det_boxes = boxes
        # construct batch images for rec
        imgs = []
        for box in boxes:
            box = [int(x) for x in box["bbox"]]
            im = origin_img[box[1]:box[3], box[0]:box[2]].copy()
            img = self.seq(im)
            imgs.append(img[np.newaxis, :].copy())
        input_imgs = np.concatenate(imgs, axis=0)
        return {"x": input_imgs}, False, None, ""

    def nms_to_rec_results(self, results, thresh=0.1):
        filtered_results = []
        x1 = np.array([r["bbox"][0] for r in results]).astype("float32")
        y1 = np.array([r["bbox"][1] for r in results]).astype("float32")
        x2 = np.array([r["bbox"][2] for r in results]).astype("float32")
        y2 = np.array([r["bbox"][3] for r in results]).astype("float32")
        scores = np.array([r["rec_scores"] for r in results])
        areas = (x2 - x1 + 1) * (y2 - y1 + 1)
        order = scores.argsort()[::-1]
        while order.size > 0:
            i = order[0]
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            w = np.maximum(0.0, xx2 - xx1 + 1)
            h = np.maximum(0.0, yy2 - yy1 + 1)
            inter = w * h
            ovr = inter / (areas[i] + areas[order[1:]] - inter)
            inds = np.where(ovr <= thresh)[0]
            order = order[inds + 1]
            filtered_results.append(results[i])
        return filtered_results

    def check_boxes(self, results, area_ratio_thresh=0.1):
        # drop boxes whose area ratio is too small, unless that empties the list
        filtered_results = []
        for result in results:
            if result["area_ratio"] >= area_ratio_thresh:
                filtered_results.append(result)
        if len(filtered_results) > 0:
            return filtered_results
        else:
            return results

    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        batch_features = fetch_dict["features"]
        if self.feature_normalize:
            feas_norm = np.sqrt(
                np.sum(np.square(batch_features), axis=1, keepdims=True))
            batch_features = np.divide(batch_features, feas_norm)
        scores, docs = self.searcher.search(batch_features, self.return_k)
        # the last box is the full image, so this is the total pixel count
        origin_img_box = self.det_boxes[len(self.det_boxes) - 1]["bbox"]
        total_pixes = origin_img_box[2] * origin_img_box[3]
        results = []
        for i in range(scores.shape[0]):
            pred = {}
            xmin, ymin, xmax, ymax = self.det_boxes[i]["bbox"]
            area_pix = (xmax - xmin) * (ymax - ymin)
            ratio = 0.0
            if total_pixes > 0:
                ratio = area_pix * 1.0 / total_pixes
            if scores[i][0] >= self.rec_score_thres:
                pred["bbox"] = [int(x) for x in self.det_boxes[i]["bbox"]]
                pred["rec_docs"] = self.id_map[docs[i][0]].split()[1]
                pred["rec_scores"] = scores[i][0]
                pred["area_ratio"] = round(ratio, 4)
                results.append(pred)
        # do nms
        results = self.nms_to_rec_results(results, self.rec_nms_thresold)
        print("{} BrandsRecOp data_id: {} --> Nms Result: {}".format(
            datetime.datetime.now(), data_id, results))
        results = self.check_boxes(results, self.area_ratio_thresold)
        print("{} BrandsRecOp data_id: {} --> Out Result: {}".format(
            datetime.datetime.now(), data_id, results))
        return {"result": str(results)}, None, ""


# Series retrieval op: same flow as BrandsRecOp but against the series index.
class SeriesRecOp(Op):
    def init_op(self):
        self.seq = Sequential([
            BGR2RGB(), Resize((224, 224)), Div(255),
            Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225], False),
            Transpose((2, 0, 1))
        ])
        index_dir = "../../series_dataset_v1.0/index"
        assert os.path.exists(os.path.join(
            index_dir, "vector.index")), "vector.index not found ..."
        assert os.path.exists(os.path.join(
            index_dir, "id_map.pkl")), "id_map.pkl not found ..."
        self.searcher = faiss.read_index(
            os.path.join(index_dir, "vector.index"))
        with open(os.path.join(index_dir, "id_map.pkl"), "rb") as fd:
            self.id_map = pickle.load(fd)
        self.rec_nms_thresold = 0.05
        self.rec_score_thres = 0.5
        self.feature_normalize = True
        self.return_k = 1
        self.area_ratio_thresold = 0.1

    def preprocess(self, input_dicts, data_id, log_id):
        (_, input_dict), = input_dicts.items()
        raw_img = input_dict["image"][0]
        data = np.frombuffer(raw_img, np.uint8)
        origin_img = cv2.imdecode(data, cv2.IMREAD_COLOR)
        dt_boxes = input_dict["bbox_result"]
        boxes = json.loads(dt_boxes)
        boxes.append({
            "category_id": 0,
            "score": 1.0,
            "bbox": [0, 0, origin_img.shape[1], origin_img.shape[0]]
        })
        self.det_boxes = boxes
        # construct batch images for rec
        imgs = []
        for box in boxes:
            box = [int(x) for x in box["bbox"]]
            im = origin_img[box[1]:box[3], box[0]:box[2]].copy()
            img = self.seq(im)
            imgs.append(img[np.newaxis, :].copy())
        input_imgs = np.concatenate(imgs, axis=0)
        return {"x": input_imgs}, False, None, ""

    def nms_to_rec_results(self, results, thresh=0.1):
        filtered_results = []
        x1 = np.array([r["bbox"][0] for r in results]).astype("float32")
        y1 = np.array([r["bbox"][1] for r in results]).astype("float32")
        x2 = np.array([r["bbox"][2] for r in results]).astype("float32")
        y2 = np.array([r["bbox"][3] for r in results]).astype("float32")
        scores = np.array([r["rec_scores"] for r in results])
        areas = (x2 - x1 + 1) * (y2 - y1 + 1)
        order = scores.argsort()[::-1]
        while order.size > 0:
            i = order[0]
            xx1 = np.maximum(x1[i], x1[order[1:]])
            yy1 = np.maximum(y1[i], y1[order[1:]])
            xx2 = np.minimum(x2[i], x2[order[1:]])
            yy2 = np.minimum(y2[i], y2[order[1:]])
            w = np.maximum(0.0, xx2 - xx1 + 1)
            h = np.maximum(0.0, yy2 - yy1 + 1)
            inter = w * h
            ovr = inter / (areas[i] + areas[order[1:]] - inter)
            inds = np.where(ovr <= thresh)[0]
            order = order[inds + 1]
            filtered_results.append(results[i])
        return filtered_results

    def check_boxes(self, results, area_ratio_thresh=0.1):
        filtered_results = []
        for result in results:
            if result["area_ratio"] >= area_ratio_thresh:
                filtered_results.append(result)
        if len(filtered_results) > 0:
            return filtered_results
        else:
            return results

    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        batch_features = fetch_dict["features"]
        if self.feature_normalize:
            feas_norm = np.sqrt(
                np.sum(np.square(batch_features), axis=1, keepdims=True))
            batch_features = np.divide(batch_features, feas_norm)
        scores, docs = self.searcher.search(batch_features, self.return_k)
        origin_img_box = self.det_boxes[len(self.det_boxes) - 1]["bbox"]
        total_pixes = origin_img_box[2] * origin_img_box[3]
        results = []
        for i in range(scores.shape[0]):
            pred = {}
            xmin, ymin, xmax, ymax = self.det_boxes[i]["bbox"]
            area_pix = (xmax - xmin) * (ymax - ymin)
            ratio = 0.0
            if total_pixes > 0:
                ratio = area_pix * 1.0 / total_pixes
            if scores[i][0] >= self.rec_score_thres:
                pred["bbox"] = [int(x) for x in self.det_boxes[i]["bbox"]]
                pred["rec_docs"] = self.id_map[docs[i][0]].split()[1]
                pred["rec_scores"] = scores[i][0]
                pred["area_ratio"] = round(ratio, 4)
                results.append(pred)
        # do nms
        results = self.nms_to_rec_results(results, self.rec_nms_thresold)
        print("{} SeriesRecOp data_id: {} --> Nms Result: {}".format(
            datetime.datetime.now(), data_id, results))
        results = self.check_boxes(results, self.area_ratio_thresold)
        print("{} SeriesRecOp data_id: {} --> Out Result: {}".format(
            datetime.datetime.now(), data_id, results))
        return {"result": str(results)}, None, ""


# Combine op: merges the brand and series results into a single response.
class CombineOp(Op):
    def preprocess(self, input_data, data_id, log_id):
        return None, False, None, ""

    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        print("{} CombineOp data_id: {} --> input_dicts: {}".format(
            datetime.datetime.now(), data_id, input_dicts))
        results = {}
        for op_name, data in input_dicts.items():
            if "brands" in op_name:
                ret = data["result"]
                if ret is not None:
                    results["brands"] = json.loads(ret.replace("'", "\""))
                else:
                    results["brands"] = "[]"
            elif "series" in op_name:
                ret = data["result"]
                if ret is not None:
                    results["series"] = json.loads(ret.replace("'", "\""))
                else:
                    results["series"] = "[]"
        print("{} CombineOp data_id: {} --> Out Result: {}".format(
            datetime.datetime.now(), data_id, results))
        return {"result": str(results)}, None, ""


class RecognitionService(WebService):
    def get_pipeline_response(self, read_op):
        # a custom TestRequestOp replaces the default read_op;
        # det feeds both recognition ops, which are merged by combine
        read_op2 = TestRequestOp()
        det_op = DetOp(name="det", input_ops=[read_op2])
        rec_brands_op = BrandsRecOp(name="rec_brands", input_ops=[det_op])
        rec_series_op = SeriesRecOp(name="rec_series", input_ops=[det_op])
        combine_op = CombineOp(
            "combine", input_ops=[rec_brands_op, rec_series_op])
        return combine_op


product_recog_service = RecognitionService(name="recognition")
product_recog_service.prepare_pipeline_config("config.yml")
product_recog_service.run_service()

Start the service:

# start the service; the run log is saved to log.txt
nohup python3.8 recognition_web_service.py &>log.txt &
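Because the pipeline loads three models plus two faiss indexes, startup can take a moment. A minimal sketch like the following polls port 8899 (the http_port configured above) until the service accepts TCP connections:

# wait_ready.py — a minimal sketch: poll the service port until it accepts connections
import socket
import time

for _ in range(60):  # wait up to ~60 seconds
    try:
        with socket.create_connection(("127.0.0.1", 8899), timeout=1):
            print("service is up")
            break
    except OSError:
        time.sleep(1)
else:
    print("service did not come up; check log.txt")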

If faiss is reported as missing, see: https://blog.csdn.net/weixin_43882112/article/details/107614217

Check the process:

ps -ef|grep python

Kill the process (use the PID shown by ps):

kill -9 19913

View the logs:

tail -n 1000 -f log.txt

Check which process is using a port:

$: netstat -anp | grep 8888
tcp        0      0 127.0.0.1:8888          0.0.0.0:*               LISTEN      13404/python3
tcp        0      1 172.17.0.10:34036       115.42.35.84:8888       SYN_SENT    14586/python3

Force-kill the processes by PID:

$: kill -9 13404
$: kill -9 14586
$: netstat -anp | grep 8888
$:

Service Testing

Modify the pipeline_http_client.py file as follows:

import requests
import json
import base64
import os

imgpath = "your_image.jpg"  # path to the test image


def cv2_to_base64(image):
    return base64.b64encode(image).decode('utf8')


if __name__ == "__main__":
    url = "http://127.0.0.1:8899/recognition/prediction"
    with open(os.path.join(".", imgpath), 'rb') as file:
        image_data1 = file.read()
    image = cv2_to_base64(image_data1)
    data = {"key": ["image"], "value": [image]}
    for i in range(1):
        r = requests.post(url=url, data=json.dumps(data))
        print(r.json())

Send a request:

python3.8 pipeline_http_client.py

After a successful run, the model's predictions are printed on the client, as shown below:

{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["{'brands': [{'bbox': [16, 19, 492, 565], 'rec_docs': '1', 'rec_scores': 0.98805684, 'area_ratio': 0.7432}], 'series': [{'bbox': [16, 19, 492, 565], 'rec_docs': '6', 'rec_scores': 0.9267364, 'area_ratio': 0.7432}]}"], 'tensors': []}
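Note that the returned value is the str() of a Python dict (single quotes), not strict JSON, so on the client side it is easiest to parse with ast.literal_eval. A minimal sketch, using the payload shown above as example data:

# parse_result.py — a minimal sketch: decode the 'value' field of a pipeline response
import ast

# example payload copied from the response above
value = "{'brands': [{'bbox': [16, 19, 492, 565], 'rec_docs': '1', 'rec_scores': 0.98805684, 'area_ratio': 0.7432}], 'series': [{'bbox': [16, 19, 492, 565], 'rec_docs': '6', 'rec_scores': 0.9267364, 'area_ratio': 0.7432}]}"

result = ast.literal_eval(value)  # the value is a Python-repr dict, not strict JSON
print("brands:", result["brands"])
print("series:", result["series"])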
