Object detection library: https://github.com/open-mmlab/mmdetection

1. Deployment:

Set up the environment by following https://github.com/open-mmlab/mmdetection/blob/master/docs/get_started.md

Mind the versions; the combination I verified to work is as follows:

conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.1 -c pytorch -y
# install the latest mmcv (cu101 matches cudatoolkit=10.1 above; adjust if your CUDA version differs)
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html
# install mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .
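
A quick way to confirm the environment works is a minimal import check (run inside the open-mmlab environment; the printed versions depend on what was installed):

import torch
import mmcv
import mmdet

print(torch.__version__, torch.cuda.is_available())  # PyTorch version and whether a GPU is visible
print(mmcv.__version__)                               # mmcv-full version
print(mmdet.__version__)                              # mmdetection version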

2. Generate the COCO-format JSON annotation file; reference code below:

"""
Dataset: VinBigData Chest X-ray Abnormalities Detection
https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/data
1) 150,000 X-ray images with disease labels and bounding boxes
2) Labels: ['Aortic enlargement', 'Atelectasis', 'Calcification', 'Cardiomegaly', 'Consolidation', 'ILD', 'Infiltration', 'Lung Opacity', 'Nodule/Mass', 'Other lesion', 'Pleural effusion', 'Pleural thickening', 'Pneumothorax', 'Pulmonary fibrosis', 'No Finding']
"""
# -*- coding: utf-8 -*-
'''
@date: 2021/03/01
@author: Jason.Fang
'''
import os
import sys
import json
import glob
import shutil
import cv2
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# class name -> category id; 'No finding'/background: 0
classname_to_id = {'Aortic enlargement': 1, 'Atelectasis': 2, 'Calcification': 3, 'Cardiomegaly': 4,
                   'Consolidation': 5, 'ILD': 6, 'Infiltration': 7, 'Lung Opacity': 8, 'Nodule/Mass': 9,
                   'Other lesion': 10, 'Pleural effusion': 11, 'Pleural thickening': 12, 'Pneumothorax': 13,
                   'Pulmonary fibrosis': 14}

# https://github.com/Klawens/dataset_prepare/blob/main/csv2coco.py
class Csv2CoCo:

    def __init__(self, image_dir, total_annos):
        self.images = []
        self.annotations = []
        self.categories = []
        self.img_id = 0
        self.ann_id = 0
        self.image_dir = image_dir
        self.total_annos = total_annos

    def save_coco_json(self, instance, save_path):
        json.dump(instance, open(save_path, 'w'), ensure_ascii=False, indent=2)  # indent=2 for readable output

    # build the COCO instance from the collected per-image annotations
    def to_coco(self, keys):
        self._init_categories()
        for key in keys:
            self.images.append(self._image(key))
            shapes = self.total_annos[key]
            for shape in shapes:
                annotation = self._annotation(shape, key)
                self.annotations.append(annotation)
                self.ann_id += 1
            self.img_id += 1
        instance = {}
        instance['info'] = 'Jason.Fang created'
        instance['license'] = ['J.F']
        instance['images'] = self.images
        instance['annotations'] = self.annotations
        instance['categories'] = self.categories
        return instance

    # build the categories field
    def _init_categories(self):
        for k, v in classname_to_id.items():
            category = {}
            category['id'] = v
            category['name'] = k
            self.categories.append(category)

    # build the COCO image field
    def _image(self, path):
        image = {}
        img = cv2.imread(self.image_dir + path + '.jpeg')
        image['height'] = img.shape[0]
        image['width'] = img.shape[1]
        image['id'] = path
        image['file_name'] = path + '.jpeg'
        return image

    # build the COCO annotation field
    def _annotation(self, shape, path):
        label = shape[0]
        points = shape[3:]
        annotation = {}
        annotation['id'] = self.ann_id
        annotation['image_id'] = path
        annotation['category_id'] = int(classname_to_id[str(label)])
        annotation['segmentation'] = self._get_seg(points)
        annotation['bbox'] = self._get_box(points)
        annotation['iscrowd'] = 0
        annotation['area'] = self._get_area(points)
        return annotation

    # COCO bbox format: [x1, y1, w, h]
    def _get_box(self, points):
        min_x = points[0]
        min_y = points[1]
        max_x = points[2]
        max_y = points[3]
        return [min_x, min_y, max_x - min_x, max_y - min_y]

    # compute the box area
    def _get_area(self, points):
        min_x = points[0]
        min_y = points[1]
        max_x = points[2]
        max_y = points[3]
        return (max_x - min_x + 1) * (max_y - min_y + 1)

    # segmentation polygon derived from the box corners
    def _get_seg(self, points):
        min_x = points[0]
        min_y = points[1]
        max_x = points[2]
        max_y = points[3]
        h = max_y - min_y
        w = max_x - min_x
        a = []
        a.append([min_x, min_y, min_x, min_y + 0.5 * h, min_x, max_y, min_x + 0.5 * w, max_y,
                  max_x, max_y, max_x, max_y - 0.5 * h, max_x, min_y, max_x - 0.5 * w, min_y])
        return a

def main():
    vin_csv_file = '/data/fjsdata/Vin-CXR/train.csv'
    vin_image_dir = '/data/fjsdata/Vin-CXR/train_val_jpg/'
    vin_coco_path = '/data/comcode/mmdetection/vincxr/data/'
    # merge the CSV annotations by image
    total_csv_annotations = {}
    annotations = pd.read_csv(vin_csv_file, sep=',')
    annotations.fillna(0, inplace=True)
    annotations.loc[annotations["class_id"] == 14, ['x_max', 'y_max']] = 1.0
    annotations["class_id"] = annotations["class_id"] + 1
    annotations.loc[annotations["class_id"] == 15, ["class_id"]] = 0
    annotations = annotations[annotations.class_name != 'No finding'].reset_index(drop=True)
    annotations = annotations.values  # dataframe -> numpy
    for annotation in annotations:
        key = annotation[0].split(os.sep)[-1]
        value = np.array([annotation[1:]])
        if key in total_csv_annotations.keys():
            total_csv_annotations[key] = np.concatenate((total_csv_annotations[key], value), axis=0)
        else:
            total_csv_annotations[key] = value
        sys.stdout.write('\r key {} completed'.format(key))
        sys.stdout.flush()
    # collect the image keys
    total_keys = list(total_csv_annotations.keys())
    # convert the training set to COCO json format
    l2c_train = Csv2CoCo(image_dir=vin_image_dir, total_annos=total_csv_annotations)
    train_instance = l2c_train.to_coco(total_keys)
    l2c_train.save_coco_json(train_instance, vin_coco_path + 'vin_coco_ann.json')

def check():
    json_file = '/data/comcode/mmdetection/vincxr/data/vin_coco_ann.json'
    annos = json.loads(open(json_file).read())
    print(annos.keys())           # top-level keys
    print(annos["info"])
    print(annos["license"])
    print(annos["categories"])
    print(annos["images"][0])
    print(annos["annotations"][0])

if __name__ == "__main__":
    main()
    #check()
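
As an optional check of the generated file, the JSON can also be loaded with pycocotools (installed as an mmdetection dependency); a small sketch, using the path written by main() above:

from pycocotools.coco import COCO

coco = COCO('/data/comcode/mmdetection/vincxr/data/vin_coco_ann.json')
print(len(coco.getImgIds()), 'images,', len(coco.getAnnIds()), 'annotations')
print([c['name'] for c in coco.loadCats(coco.getCatIds())])  # the 14 category names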

3. Configure the detection model; Mask R-CNN is used here, as follows:

# The new config inherits a base config to highlight the necessary modification
_base_ = '/data/comcode/mmdetection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py'

# We also need to change the num_classes in head to match the dataset's annotation
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=14),
        mask_head=dict(num_classes=14)))

# Modify dataset related settings
dataset_type = 'COCODataset'
classes = ('Aortic enlargement', 'Atelectasis', 'Calcification', 'Cardiomegaly',
           'Consolidation', 'ILD', 'Infiltration', 'Lung Opacity', 'Nodule/Mass',
           'Other lesion', 'Pleural effusion', 'Pleural thickening', 'Pneumothorax',
           'Pulmonary fibrosis')
data = dict(
    train=dict(
        img_prefix='/data/fjsdata/Vin-CXR/train_val_jpg/',
        classes=classes,
        ann_file='/data/comcode/mmdetection/vincxr/data/vin_coco_ann.json'),
    val=dict(
        img_prefix='/data/fjsdata/Vin-CXR/train_val_jpg/',
        classes=classes,
        ann_file='/data/comcode/mmdetection/vincxr/data/vin_coco_ann.json'),
    test=dict(
        img_prefix='/data/fjsdata/Vin-CXR/train_val_jpg/',
        classes=classes,
        ann_file='/data/comcode/mmdetection/vincxr/data/vin_coco_ann.json'))

work_dir = '/data/comcode/mmdetection/vincxr/workdir/'
load_from = '/data/comcode/mmdetection/vincxr/workdir/epoch_100_best.pth' #latest.pth
evaluation = dict(metric=['bbox', 'segm'], interval=50)
checkpoint_config = dict(interval=20)
gpu_ids = range(0,5) #gpus = 6
runner = dict(type='EpochBasedRunner', max_epochs=200)
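
Before launching training, it can help to print the merged config and confirm the overrides took effect; a small sketch using mmcv's Config, assuming the config above is saved as vincxr/code/maskrcnn.py:

from mmcv import Config

cfg = Config.fromfile('vincxr/code/maskrcnn.py')
print(cfg.model.roi_head.bbox_head.num_classes)  # should print 14
print(cfg.data.train.ann_file)                   # the COCO json generated in step 2
# print(cfg.pretty_text)                         # dump the whole merged config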

Reference commands for running:

# SingleGPU for training: python tools/train.py vincxr/code/maskrcnn.py
# MultiGPU for training: ./tools/dist_train.sh vincxr/code/maskrcnn.py 6
# Test: python tools/test.py vincxr/code/maskrcnn.py vincxr/workdir/latest.pth --eval bbox segm
# Evaluation: https://cocodataset.org/#detection-eval
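
mmdetection also ships a distributed test script next to dist_train.sh; a usage sketch, assuming the same config/checkpoint and 6 GPUs as above:

# MultiGPU for testing: ./tools/dist_test.sh vincxr/code/maskrcnn.py vincxr/workdir/latest.pth 6 --eval bbox segm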

4. Test and inference code after training:

"""
Dataset: VinBigData Chest X-ray Abnormalities Detection
https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection/data
1) 150,000 X-ray images with disease labels and bounding boxes
2) Labels: ['Aortic enlargement', 'Atelectasis', 'Calcification', 'Cardiomegaly', 'Consolidation', 'ILD', 'Infiltration', 'Lung Opacity', 'Nodule/Mass', 'Other lesion', 'Pleural effusion', 'Pleural thickening', 'Pneumothorax', 'Pulmonary fibrosis', 'No Finding']
"""
# -*- coding: utf-8 -*-
'''
@date: 2021/03/01
@author: Jason.Fang
'''
import os
import sys
import json
import numpy as np
import pandas as pd
import glob
import cv2
import shutil
import torch
from sklearn.metrics import roc_auc_score, roc_curve, auc, f1_score, confusion_matrix
import matplotlib.patches as patches
import matplotlib.pyplot as plt
from PIL import Image
from mmdet.apis import init_detector, inference_detector
import mmcv

np.set_printoptions(suppress=True)
np.set_printoptions(precision=4)

CLASS_NAMES_Vin = ['Aortic enlargement', 'Atelectasis', 'Calcification', 'Cardiomegaly', 'Consolidation',
                   'ILD', 'Infiltration', 'Lung Opacity', 'Nodule/Mass', 'Other lesion', 'Pleural effusion',
                   'Pleural thickening', 'Pneumothorax', 'Pulmonary fibrosis']

def format_prediction_string(labels, boxes, scores):
    pred_strings = []
    for j in zip(labels, scores, boxes):
        pred_strings.append("{0} {1:.4f} {2} {3} {4} {5}".format(
            j[0], j[1], int(j[2][0]), int(j[2][1]), int(j[2][2]), int(j[2][3])))
    return " ".join(pred_strings)

def TestInfer(score_thr=0.5):
    vin_test_file = '/data/pycode/CXRAD/dataset/VinCXR_test.txt'
    vin_test_image = '/data/fjsdata/Vin-CXR/test_jpg/'
    vin_test_data = '/data/comcode/mmdetection/vincxr/test/'
    # specify the paths to the model config and checkpoint file
    config_file = 'vincxr/code/maskrcnn.py'
    checkpoint_file = 'vincxr/workdir/latest.pth'
    # build the model from a config file and a checkpoint file
    model = init_detector(config_file, checkpoint_file, device='cuda:6')
    # run inference on the test images
    images = pd.read_csv(vin_test_file, sep=',', header=None).values
    sub_res = []
    for image in images:
        img = vin_test_image + image[0] + '.jpeg'
        result = inference_detector(model, img)
        # extract result
        if isinstance(result, tuple):
            bbox_result, segm_result = result
            if isinstance(segm_result, tuple):
                segm_result = segm_result[0]  # ms rcnn
        else:
            bbox_result, segm_result = result, None
        bboxes = np.vstack(bbox_result)
        labels = [np.full(bbox.shape[0], i, dtype=np.int32) for i, bbox in enumerate(bbox_result)]
        labels = np.concatenate(labels)
        # prediction
        assert bboxes.shape[1] == 5
        scores = bboxes[:, -1]
        sub_tmp = {'image_id': image[0], 'PredictionString': '14 1.0 0 0 1 1'}
        if len(scores) > 0:
            inds = scores > score_thr
            bboxes = bboxes[inds, :]
            labels = labels[inds]
            scores = scores[inds]
            if len(scores) > 0:
                sub_tmp['PredictionString'] = format_prediction_string(labels, bboxes, scores)
        sub_res.append(sub_tmp)
        sys.stdout.write('\r process: = {}'.format(len(sub_res)))
        sys.stdout.flush()
    # save submission file
    test_df = pd.DataFrame(sub_res, columns=['image_id', 'PredictionString'])
    print("\r set shape: {}".format(test_df.shape))
    print("\r set Columns: {}".format(test_df.columns))
    test_df.to_csv(vin_test_data + 'submission.csv', index=False)

def compute_IoUs(xywh1, xywh2):
    # both boxes are given as [x_min, y_min, x_max, y_max] and converted to widths/heights here
    x1, y1, w1, h1 = xywh1
    w1 = w1 - x1
    h1 = h1 - y1
    x2, y2, w2, h2 = xywh2
    w2 = w2 - x2
    h2 = h2 - y2
    dx = min(x1 + w1, x2 + w2) - max(x1, x2)
    dy = min(y1 + h1, y2 + h2) - max(y1, y2)
    intersection = dx * dy if (dx >= 0 and dy >= 0) else 0.
    union = w1 * h1 + w2 * h2 - intersection
    IoUs = intersection / union
    return IoUs

def ValInfer(score_thr=0.5, show_thr=0.80):
    vin_val_file = '/data/pycode/CXRAD/dataset/VinCXR_val.txt'
    vin_val_image = '/data/fjsdata/Vin-CXR/train_val_jpg/'
    vin_val_data = '/data/comcode/mmdetection/vincxr/val/'
    # specify the paths to the model config and checkpoint file
    config_file = 'vincxr/code/maskrcnn.py'
    checkpoint_file = 'vincxr/workdir/latest.pth'
    # build the model from a config file and a checkpoint file
    model = init_detector(config_file, checkpoint_file, device='cuda:6')
    # run inference on the validation images
    images = pd.read_csv(vin_val_file, sep=',', header=None).values
    IoUs = []
    Acc = 0.0  # number of images whose ground-truth class appears among the predictions
    for image in images:
        img = vin_val_image + image[0] + '.jpeg'
        gtlbl = image[1]
        gtbox = [float(eval(i)) for i in image[3].split(' ')]
        # extract result
        result = inference_detector(model, img)
        if isinstance(result, tuple):
            bbox_result, segm_result = result
            if isinstance(segm_result, tuple):
                segm_result = segm_result[0]  # ms rcnn
        else:
            bbox_result, segm_result = result, None
        bboxes = np.vstack(bbox_result)
        labels = [np.full(bbox.shape[0], i, dtype=np.int32) for i, bbox in enumerate(bbox_result)]
        labels = np.concatenate(labels)
        # prediction
        assert bboxes.shape[1] == 5
        scores = bboxes[:, -1]
        IoU = 0.0
        if len(scores) > 0:
            inds = scores > score_thr
            bboxes = bboxes[inds, :]
            labels = labels[inds]
            scores = scores[inds]
            if gtlbl in labels:  # hit
                Acc += 1
                inds = labels == gtlbl
                bboxes = bboxes[inds, :]
                for box in bboxes:
                    IoU_tmp = compute_IoUs(gtbox, box[:-1])
                    if IoU_tmp > IoU:
                        IoU = IoU_tmp
                    if IoU_tmp > show_thr:  # visualize
                        fig, ax = plt.subplots()  # create figure and axes
                        ax.imshow(Image.open(img))
                        rect = patches.Rectangle((gtbox[0], gtbox[1]), gtbox[2] - gtbox[0], gtbox[3] - gtbox[1],
                                                 linewidth=2, edgecolor='b', facecolor='none')
                        ax.add_patch(rect)  # add ground truth
                        rect = patches.Rectangle((box[0], box[1]), box[2] - box[0], box[3] - box[1],
                                                 linewidth=2, edgecolor='r', facecolor='none')
                        ax.add_patch(rect)  # add predicted bounding box
                        ax.text(gtbox[0], gtbox[1], CLASS_NAMES_Vin[gtlbl])
                        ax.axis('off')
                        fig.savefig(vin_val_data + image[0] + '.jpeg')
        IoUs.append(IoU)
        sys.stdout.write('\r testing process: = {}'.format(len(IoUs)))
        sys.stdout.flush()
    # evaluation
    print('The average IoU is {:.4f}'.format(np.array(IoUs).mean()))
    print('The Accuracy is {:.4f}'.format(Acc / len(images)))

def main():
    #ValInfer()
    TestInfer()

if __name__ == "__main__":
    main()
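
For a quick visual spot-check of a single image, mmdet 2.x also provides show_result_pyplot in mmdet.apis; a short sketch (the sample image path below is just a placeholder):

from mmdet.apis import init_detector, inference_detector, show_result_pyplot

model = init_detector('vincxr/code/maskrcnn.py', 'vincxr/workdir/latest.pth', device='cuda:0')
img = '/data/fjsdata/Vin-CXR/test_jpg/sample.jpeg'  # placeholder image path
result = inference_detector(model, img)
show_result_pyplot(model, img, result, score_thr=0.5)  # draws detections above the score threshold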

5. Since my images use the .jpeg extension, modify the source code in loading.py accordingly:
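
The exact diff is not included here; as a rough, hedged sketch of the kind of change, assuming mmdet 2.x's LoadImageFromFile in mmdet/datasets/pipelines/loading.py (only needed if the file_name stored in the annotations lacks the extension; details may differ in your version):

# inside LoadImageFromFile.__call__
filename = results['img_info']['filename']
if not filename.endswith('.jpeg'):       # hypothetical tweak for .jpeg images
    filename = filename + '.jpeg'
if results['img_prefix'] is not None:
    filename = osp.join(results['img_prefix'], filename)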

More details can be found on GitHub.
