6. Tracking a Specific Target Vehicle

Project overview:

Tracking a moving target has long been a challenging task and remains an active research topic. With steadily improving hardware and the rapid progress of artificial intelligence, target-tracking technology is becoming ever more important. It has many real-world applications, including traffic video surveillance, athlete performance analysis, intelligent human-computer interaction, and tracking-system design. Because a tracker must cope with appearance changes, low image resolution, and cluttered backgrounds, building a robust, high-performance tracker is essential.

Early tracking algorithms worked either by modeling the target or by tracking hand-crafted target features. The main approaches were:

1) Methods based on target appearance modeling: build a model of the target's appearance and then locate it in subsequent frames. Examples include region matching, feature-point tracking, active-contour tracking, and optical flow. The most common variant is feature matching: extract the target's features, then find the most similar features in later frames to localize it. Commonly used features include SIFT, SURF, and Harris corners.

2) Search-based methods: as research progressed, it became clear that appearance-modeling methods process the entire image and therefore run slowly. Adding a prediction step lets the tracker search only near the predicted location, shrinking the search region; typical predictors are the Kalman filter and the particle filter (a minimal Kalman-filter sketch follows below). Another way to reduce the search region is the kernel approach: following the principle of steepest descent, the target template is iterated along the gradient-descent direction until it reaches the optimal position.
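To make the prediction idea concrete, here is a minimal constant-velocity Kalman filter in NumPy (my own toy illustration, not code from this project): the state is [cx, cy, vx, vy], only the box center is measured, and the predicted center tells the tracker where to search in the next frame.

import numpy as np

F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])  # constant-velocity transition
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])  # only the center (cx, cy) is observed
Q, R = np.eye(4) * 1e-2, np.eye(2)  # process / measurement noise

x, P = np.zeros(4), np.eye(4)       # state [cx, cy, vx, vy] and covariance
for z in [np.array([10., 10.]), np.array([12., 11.]), np.array([14., 12.])]:
    # update with the measured center of the current frame
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    # predict where to search in the next frame
    x = F @ x
    P = F @ P @ F.T + Q
    print("search the next frame around", x[:2])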

The approach presented here tracks the target with a deep-learning model: SiamMask.

The network's workflow is shown below:

The target is selected in the first frame of the video and then tracked through the remaining frames:

The project's code lives mainly in the following folders:

Specifically:

data: the required data, both training and test sets

utils: helper functions, including bounding-box handling and learning-rate updates

datasets: data processing, augmentation, and training-set construction

models: model definitions

experiments: experiment configurations hosting the model variants

tools: model training and testing scripts

The rest of this section walks through the project.


SiamMask\datasets

siam_mask_dataset.py

# --------------------------------------------------------
# SiamMask
# Licensed under The MIT License
# Written by Qiang Wang (wangqiang2015 at ia.ac.cn)
# Dataset processing
# --------------------------------------------------------
from __future__ import division
from torch.utils.data import Dataset
import numpy as np
import json
import random
import logging
from os.path import join
from utils.bbox_helper import *
from utils.anchors import Anchors
import math
import sys
pyv = sys.version[0]
import cv2
if pyv[0] == '3':
    cv2.ocl.setUseOpenCL(False)

logger = logging.getLogger('global')

sample_random = random.Random()
sample_random.seed(123456)


class SubDataSet(object):
    def __init__(self, cfg):
        for string in ['root', 'anno']:
            if string not in cfg:
                raise Exception('SubDataSet need "{}"'.format(string))

        # open the annotation file
        with open(cfg['anno']) as fin:
            logger.info("loading " + cfg['anno'])
            # load annotations, filtering zero-sized boxes
            self.labels = self.filter_zero(json.load(fin), cfg)

            # check whether a value is an integer
            def isint(x):
                try:
                    int(x)
                    return True
                except:
                    return False

            # add frames args into labels
            to_del = []
            for video in self.labels:          # iterate over videos
                for track in self.labels[video]:  # iterate over tracks
                    frames = self.labels[video][track]
                    frames = list(map(int, filter(lambda x: isint(x), frames.keys())))
                    frames.sort()
                    self.labels[video][track]['frames'] = frames
                    if len(frames) <= 0:
                        logger.info("warning {}/{} has no frames.".format(video, track))
                        to_del.append((video, track))

            # delete tracks with no frames
            for video, track in to_del:
                del self.labels[video][track]

            # delete videos with no valid track
            to_del = []
            for video in self.labels:
                if len(self.labels[video]) <= 0:
                    logger.info("warning {} has no tracks".format(video))
                    to_del.append(video)

            for video in to_del:
                del self.labels[video]

            self.videos = list(self.labels.keys())
            logger.info(cfg['anno'] + " loaded.")

        # default args
        self.root = "/"
        self.start = 0
        self.num = len(self.labels)
        self.num_use = self.num
        self.frame_range = 100
        self.mark = "vid"
        self.path_format = "{}.{}.{}.jpg"   # image format
        self.mask_format = "{}.{}.m.png"    # mask format
        self.pick = []

        # input args
        self.__dict__.update(cfg)

        # whether this subset has masks
        self.has_mask = self.mark in ['coco', 'ytb_vos']
        self.num_use = int(self.num_use)

        # shuffle
        self.shuffle()

    def filter_zero(self, anno, cfg):
        """Filter out annotations whose bounding box has zero width or height."""
        name = cfg.get('mark', '')
        out = {}
        tot = 0   # total number of boxes
        new = 0   # boxes kept
        zero = 0  # zero-sized boxes dropped

        for video, tracks in anno.items():
            new_tracks = {}
            for trk, frames in tracks.items():
                new_frames = {}
                for frm, bbox in frames.items():
                    tot += 1
                    # get the box width and height from either representation
                    if len(bbox) == 4:
                        x1, y1, x2, y2 = bbox
                        w, h = x2 - x1, y2 - y1
                    else:
                        w, h = bbox
                    if w == 0 or h == 0:
                        logger.info('Error, {name} {video} {trk} {bbox}'.format(**locals()))
                        zero += 1
                        continue
                    new += 1
                    new_frames[frm] = bbox

                if len(new_frames) > 0:
                    new_tracks[trk] = new_frames

            if len(new_tracks) > 0:
                out[video] = new_tracks

        return out

    def log(self):
        logger.info('SubDataSet {name} start-index {start} select [{select}/{num}] path {format}'.format(
            name=self.mark, start=self.start, select=self.num_use, num=self.num, format=self.path_format))

    def shuffle(self):
        """Shuffle the dataset indices, in chunks, until num_use indices are drawn."""
        lists = list(range(self.start, self.start + self.num))
        m = 0
        pick = []
        while m < self.num_use:
            sample_random.shuffle(lists)
            pick += lists
            m += self.num
        self.pick = pick[:self.num_use]
        return self.pick

    def get_image_anno(self, video, track, frame):
        """Return image path, annotation, and mask path for one frame."""
        frame = "{:06d}".format(frame)
        image_path = join(self.root, video, self.path_format.format(frame, track, 'x'))
        image_anno = self.labels[video][track][frame]
        mask_path = join(self.root, video, self.mask_format.format(frame, track))
        return image_path, image_anno, mask_path

    def get_positive_pair(self, index):
        """Sample a positive (template, search) pair from one track."""
        video_name = self.videos[index]
        video = self.labels[video_name]
        track = random.choice(list(video.keys()))
        track_info = video[track]
        frames = track_info['frames']

        if 'hard' not in track_info:
            template_frame = random.randint(0, len(frames)-1)
            # search range around the template frame
            left = max(template_frame - self.frame_range, 0)
            right = min(template_frame + self.frame_range, len(frames)-1) + 1
            search_range = frames[left:right]
            template_frame = frames[template_frame]
            search_frame = random.choice(search_range)
        else:
            # pick the search frame from the hard examples
            search_frame = random.choice(track_info['hard'])
            left = max(search_frame - self.frame_range, 0)
            right = min(search_frame + self.frame_range, len(frames)-1) + 1  # python [left:right+1) = [left:right]
            template_range = frames[left:right]
            template_frame = random.choice(template_range)
            search_frame = frames[search_frame]

        return self.get_image_anno(video_name, track, template_frame), \
               self.get_image_anno(video_name, track, search_frame)

    def get_random_target(self, index=-1):
        """Sample a random frame from a random track."""
        if index == -1:
            index = random.randint(0, self.num-1)
        video_name = self.videos[index]
        video = self.labels[video_name]
        track = random.choice(list(video.keys()))
        track_info = video[track]
        frames = track_info['frames']
        frame = random.choice(frames)
        return self.get_image_anno(video_name, track, frame)


def crop_hwc(image, bbox, out_sz, padding=(0, 0, 0)):
    """Crop `bbox` out of `image` and resample it to out_sz x out_sz."""
    bbox = [float(x) for x in bbox]
    a = (out_sz-1) / (bbox[2]-bbox[0])
    b = (out_sz-1) / (bbox[3]-bbox[1])
    c = -a * bbox[0]
    d = -b * bbox[1]
    mapping = np.array([[a, 0, c],
                        [0, b, d]]).astype(np.float)
    # affine warp
    crop = cv2.warpAffine(image, mapping, (out_sz, out_sz),
                          borderMode=cv2.BORDER_CONSTANT, borderValue=padding)
    return crop


class Augmentation:
    """Image augmentation: shift, scale, blur, resize, flip, color jitter."""
    def __init__(self, cfg):
        # default args
        self.shift = 0
        self.scale = 0
        self.blur = 0  # False
        self.resize = False
        self.rgbVar = np.array([[-0.55919361, 0.98062831, -0.41940627],
                                [1.72091413, 0.19879334, -1.82968581],
                                [4.64467907, 4.73710203, 4.88324118]], dtype=np.float32)
        self.flip = 0
        self.__dict__.update(cfg)

    @staticmethod
    def random():
        return random.random() * 2 - 1.0

    def blur_image(self, image):
        """Blur the image with a random separable kernel."""
        def rand_kernel():
            size = np.random.randn(1)
            size = int(np.round(size)) * 2 + 1
            if size < 0: return None
            if random.random() < 0.5: return None
            size = min(size, 45)
            kernel = np.zeros((size, size))
            c = int(size/2)
            wx = random.random()
            kernel[:, c] += 1. / size * wx
            kernel[c, :] += 1. / size * (1-wx)
            return kernel

        kernel = rand_kernel()
        if kernel is not None:
            image = cv2.filter2D(image, -1, kernel)
        return image

    def __call__(self, image, bbox, size, gray=False, mask=None):
        if gray:
            # convert to gray, then replicate the gray values over three channels
            grayed = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            image = np.zeros((grayed.shape[0], grayed.shape[1], 3), np.uint8)
            image[:, :, 0] = image[:, :, 1] = image[:, :, 2] = grayed

        shape = image.shape
        # crop box centered on the image
        crop_bbox = center2corner((shape[0]//2, shape[1]//2, size-1, size-1))

        param = {}
        if self.shift:
            param['shift'] = (Augmentation.random() * self.shift, Augmentation.random() * self.shift)
        if self.scale:
            param['scale'] = ((1.0 + Augmentation.random() * self.scale),
                              (1.0 + Augmentation.random() * self.scale))

        # shift/scale the crop box accordingly
        crop_bbox, _ = aug_apply(Corner(*crop_bbox), param, shape)

        # express the target box relative to the crop's top-left corner
        x1 = crop_bbox.x1
        y1 = crop_bbox.y1
        bbox = BBox(bbox.x1 - x1, bbox.y1 - y1,
                    bbox.x2 - x1, bbox.y2 - y1)

        if self.scale:
            scale_x, scale_y = param['scale']
            bbox = Corner(bbox.x1 / scale_x, bbox.y1 / scale_y,
                          bbox.x2 / scale_x, bbox.y2 / scale_y)

        # crop the image (and the mask, if present)
        image = crop_hwc(image, crop_bbox, size)
        if not mask is None:
            mask = crop_hwc(mask, crop_bbox, size)

        # random color offset
        offset = np.dot(self.rgbVar, np.random.randn(3, 1))
        offset = offset[::-1]  # bgr 2 rgb
        offset = offset.reshape(3)
        image = image - offset

        if self.blur > random.random():
            image = self.blur_image(image)

        if self.resize:
            imageSize = image.shape[:2]
            ratio = max(math.pow(random.random(), 0.5), 0.2)  # 25 ~ 255
            rand_size = (int(round(ratio*imageSize[0])), int(round(ratio*imageSize[1])))
            # downscale then upscale back to simulate low resolution
            image = cv2.resize(image, rand_size)
            image = cv2.resize(image, tuple(imageSize))

        if self.flip and self.flip > Augmentation.random():
            image = cv2.flip(image, 1)
            mask = cv2.flip(mask, 1)
            width = image.shape[1]
            # flip the box as well
            bbox = Corner(width - 1 - bbox.x2, bbox.y1, width - 1 - bbox.x1, bbox.y2)

        return image, bbox, mask


class AnchorTargetLayer:
    """Assign classification labels, regression targets, and IoU to anchors."""
    def __init__(self, cfg):
        # thresholds and positive/negative sample counts
        self.thr_high = 0.6
        self.thr_low = 0.3
        self.negative = 16
        self.rpn_batch = 64
        self.positive = 16
        self.__dict__.update(cfg)

    def __call__(self, anchor, target, size, neg=False, need_iou=False):
        anchor_num = anchor.anchors.shape[0]

        # anchor labels: -1 ignore, 0 negative, 1 positive
        cls = np.zeros((anchor_num, size, size), dtype=np.int64)
        cls[...] = -1
        # regression targets and their weights
        delta = np.zeros((4, anchor_num, size, size), dtype=np.float32)
        delta_weight = np.zeros((anchor_num, size, size), dtype=np.float32)

        # randomly keep at most keep_num of the given positions
        def select(position, keep_num=16):
            num = position[0].shape[0]
            if num <= keep_num:
                return position, num
            slt = np.arange(num)
            np.random.shuffle(slt)
            slt = slt[:keep_num]
            return tuple(p[slt] for p in position), keep_num

        # negative pair: sample negatives near the center only
        if neg:
            l = size // 2 - 3
            r = size // 2 + 3 + 1
            cls[:, l:r, l:r] = 0
            neg, neg_num = select(np.where(cls == 0), self.negative)
            cls[:] = -1
            cls[neg] = 0
            if not need_iou:
                # labels, regression targets, regression weights
                return cls, delta, delta_weight
            else:
                overlap = np.zeros((anchor_num, size, size), dtype=np.float32)
                return cls, delta, delta_weight, overlap

        # convert box representation
        tcx, tcy, tw, th = corner2center(target)

        anchor_box = anchor.all_anchors[0]
        anchor_center = anchor.all_anchors[1]
        x1, y1, x2, y2 = anchor_box[0], anchor_box[1], anchor_box[2], anchor_box[3]
        cx, cy, w, h = anchor_center[0], anchor_center[1], anchor_center[2], anchor_center[3]

        # regression targets for the RPN
        delta[0] = (tcx - cx) / w
        delta[1] = (tcy - cy) / h
        delta[2] = np.log(tw / w)
        delta[3] = np.log(th / h)

        # IoU between anchors and the target box
        overlap = IoU([x1, y1, x2, y2], target)

        pos = np.where(overlap > self.thr_high)
        neg = np.where(overlap < self.thr_low)

        pos, pos_num = select(pos, self.positive)
        neg, neg_num = select(neg, self.rpn_batch - pos_num)

        cls[pos] = 1
        delta_weight[pos] = 1. / (pos_num + 1e-6)
        cls[neg] = 0

        if not need_iou:
            return cls, delta, delta_weight
        else:
            return cls, delta, delta_weight, overlap


class DataSets(Dataset):
    """The full training dataset: sub-datasets + augmentation + anchor targets."""
    def __init__(self, cfg, anchor_cfg, num_epoch=1):
        super(DataSets, self).__init__()
        global logger
        logger = logging.getLogger('global')

        # anchors
        self.anchors = Anchors(anchor_cfg)

        # default sizes
        self.template_size = 127
        self.origin_size = 127
        self.search_size = 255
        self.size = 17
        self.base_size = 0
        self.crop_size = 0

        # override from the config file
        if 'template_size' in cfg:
            self.template_size = cfg['template_size']
        if 'origin_size' in cfg:
            self.origin_size = cfg['origin_size']
        if 'search_size' in cfg:
            self.search_size = cfg['search_size']
        if 'base_size' in cfg:
            self.base_size = cfg['base_size']
        if 'size' in cfg:
            self.size = cfg['size']

        if (self.search_size - self.template_size) / self.anchors.stride + 1 + self.base_size != self.size:
            raise Exception("size not match!")  # TODO: calculate size online
        if 'crop_size' in cfg:
            self.crop_size = cfg['crop_size']
        self.template_small = False
        if 'template_small' in cfg and cfg['template_small']:
            self.template_small = True

        # generate anchors
        self.anchors.generate_all_anchors(im_c=self.search_size//2, size=self.size)
        if 'anchor_target' not in cfg:
            cfg['anchor_target'] = {}
        # produces per-anchor cls/reg/mask targets
        self.anchor_target = AnchorTargetLayer(cfg['anchor_target'])

        # data sets
        if 'datasets' not in cfg:
            raise(Exception('DataSet need "{}"'.format('datasets')))

        self.all_data = []
        start = 0
        self.num = 0
        for name in cfg['datasets']:
            dataset = cfg['datasets'][name]
            dataset['mark'] = name
            dataset['start'] = start
            dataset = SubDataSet(dataset)
            dataset.log()
            self.all_data.append(dataset)
            start += dataset.num          # real video number
            self.num += dataset.num_use   # the number used for subset shuffle

        # data augmentation
        aug_cfg = cfg['augmentation']
        self.template_aug = Augmentation(aug_cfg['template'])
        self.search_aug = Augmentation(aug_cfg['search'])
        self.gray = aug_cfg['gray']
        self.neg = aug_cfg['neg']
        self.inner_neg = 0 if 'inner_neg' not in aug_cfg else aug_cfg['inner_neg']

        self.pick = None  # list to save id for each img
        if 'num' in cfg:  # number used in training for all dataset
            self.num = int(cfg['num'])
        self.num *= num_epoch
        self.shuffle()

        self.infos = {
            'template': self.template_size,
            'search': self.search_size,
            'template_small': self.template_small,
            'gray': self.gray,
            'neg': self.neg,
            'inner_neg': self.inner_neg,
            'crop_size': self.crop_size,
            'anchor_target': self.anchor_target.__dict__,
            'num': self.num // num_epoch
        }
        logger.info('dataset informations: \n{}'.format(json.dumps(self.infos, indent=4)))

    def imread(self, path):
        img = cv2.imread(path)

        if self.origin_size == self.template_size:
            return img, 1.0

        def map_size(exe, size):
            return int(round(((exe + 1) / (self.origin_size + 1) * (size+1) - 1)))

        # resize so the template area matches template_size
        nsize = map_size(self.template_size, img.shape[1])
        img = cv2.resize(img, (nsize, nsize))
        # return the image and the scale factor
        return img, nsize / img.shape[1]

    def shuffle(self):
        """Interleave and shuffle the picks of all sub-datasets."""
        pick = []
        m = 0
        while m < self.num:
            p = []
            for subset in self.all_data:
                sub_p = subset.shuffle()
                p += sub_p
            sample_random.shuffle(p)
            pick += p
            m = len(pick)
        self.pick = pick
        logger.info("shuffle done!")
        logger.info("dataset length {}".format(self.num))

    def __len__(self):
        return self.num

    def find_dataset(self, index):
        """Map a global index to (sub-dataset, local index)."""
        for dataset in self.all_data:
            if dataset.start + dataset.num > index:
                return dataset, index - dataset.start

    def __getitem__(self, index, debug=False):
        index = self.pick[index]
        dataset, index = self.find_dataset(index)

        gray = self.gray and self.gray > random.random()
        neg = self.neg and self.neg > random.random()

        if neg:
            template = dataset.get_random_target(index)
            # draw the negative search image from the same dataset or from any dataset
            if self.inner_neg and self.inner_neg > random.random():
                search = dataset.get_random_target()
            else:
                search = random.choice(self.all_data).get_random_target()
        else:
            # positive pair
            template, search = dataset.get_positive_pair(index)

        # crop the central size x size part of an image
        def center_crop(img, size):
            shape = img.shape[1]
            if shape == size: return img
            c = shape // 2
            l = c - size // 2
            r = c + size // 2 + 1
            return img[l:r, l:r]

        template_image, scale_z = self.imread(template[0])
        if self.template_small:
            template_image = center_crop(template_image, self.template_size)

        search_image, scale_x = self.imread(search[0])
        if dataset.has_mask and not neg:
            search_mask = (cv2.imread(search[2], 0) > 0).astype(np.float32)
        else:
            # no mask available: use an all-zero array
            search_mask = np.zeros(search_image.shape[:2], dtype=np.float32)

        if self.crop_size > 0:
            search_image = center_crop(search_image, self.crop_size)
            search_mask = center_crop(search_mask, self.crop_size)

        # build a bbox centered on the image; `shape` is the annotated box
        def toBBox(image, shape):
            imh, imw = image.shape[:2]
            if len(shape) == 4:
                w, h = shape[2]-shape[0], shape[3]-shape[1]
            else:
                w, h = shape
            context_amount = 0.5
            exemplar_size = self.template_size  # 127
            wc_z = w + context_amount * (w+h)
            hc_z = h + context_amount * (w+h)
            s_z = np.sqrt(wc_z * hc_z)   # equivalent side length
            scale_z = exemplar_size / s_z
            w = w*scale_z
            h = h*scale_z
            cx, cy = imw//2, imh//2
            bbox = center2corner(Center(cx, cy, w, h))
            return bbox

        template_box = toBBox(template_image, template[1])
        search_box = toBBox(search_image, search[1])

        # augmentation for template and search images
        template, _, _ = self.template_aug(template_image, template_box, self.template_size, gray=gray)
        search, bbox, mask = self.search_aug(search_image, search_box, self.search_size, gray=gray, mask=search_mask)

        # per-anchor targets
        cls, delta, delta_weight = self.anchor_target(self.anchors, bbox, self.size, neg)
        if dataset.has_mask and not neg:
            mask_weight = cls.max(axis=0, keepdims=True)
        else:
            mask_weight = np.zeros([1, cls.shape[1], cls.shape[2]], dtype=np.float32)

        # HWC -> CHW
        template, search = map(lambda x: np.transpose(x, (2, 0, 1)).astype(np.float32), [template, search])

        # mask in {-1, +1}, shape 1*H*W
        mask = (np.expand_dims(mask, axis=0) > 0.5) * 2 - 1

        return template, search, cls, delta, delta_weight, np.array(bbox, np.float32), \
               np.array(mask, np.float32), np.array(mask_weight, np.float32)
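To see how this class is consumed, here is a minimal sketch (not from the repo) of feeding DataSets to a standard PyTorch DataLoader. The cfg/anchor_cfg dicts mirror the config.json shown in the next section; paths and batch settings are placeholders.

from torch.utils.data import DataLoader
from datasets.siam_mask_dataset import DataSets

anchor_cfg = {"stride": 8, "ratios": [0.33, 0.5, 1, 2, 3], "scales": [8], "round_dight": 0}
train_cfg = {
    "datasets": {"coco": {"root": "../../data/coco/crop511",
                          "anno": "../../data/coco/train2017.json",
                          "frame_range": 1}},
    "template_size": 127, "search_size": 255, "base_size": 8, "size": 25,  # (255-127)/8+1+8 = 25
    "num": 6000,
    "augmentation": {"template": {"shift": 4, "scale": 0.05},
                     "search": {"shift": 64, "scale": 0.18, "blur": 0.18},
                     "neg": 0.2, "gray": 0.25},
}

train_set = DataSets(train_cfg, anchor_cfg, num_epoch=1)
loader = DataLoader(train_set, batch_size=16, num_workers=4, pin_memory=True)
# each batch unpacks exactly as __getitem__ returns
template, search, cls, delta, delta_weight, bbox, mask, mask_weight = next(iter(loader))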

SiamMask\experiments\siammask_base

config.json

{"network": {"arch": "Custom"},"hp": {"instance_size": 255,"base_size": 8},"lr": {"type": "log","start_lr": 0.005,"end_lr": 0.0025,"warmup": {"start_lr": 0.001,"end_lr": 0.005,"type": "step","step": 1,"epoch": 5}},"loss": {"weight": [1.0, 1.2, 36]},"train_datasets": {"datasets": {"coco": {"root": "../../data/coco/crop511","anno": "../../data/coco/train2017.json","frame_range": 1}},"template_size": 127,"search_size": 255,"base_size": 8,"size": 25,"num" : 6000,"augmentation": {"template": {"shift": 4, "scale": 0.05},"search": {"shift": 64, "scale": 0.18, "blur": 0.18},"neg": 0.2,"gray": 0.25}},"val_datasets": {"datasets": {"vid": {"root": "../../data/coco/crop511","anno": "../../data/coco/val2017.json","num_use": 1000}},"template_size": 127,"search_size": 255,"size": 17,"num" : 10,"augmentation": {"template": {"shift": 0, "scale": 0.00},"search": {"shift": 12, "scale": 0.18},"neg": 0,"gray": 0}},"anchors": {"stride": 8,"ratios": [0.33, 0.5, 1, 2, 3],"scales": [8],"round_dight": 0}
}
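The "lr" block describes a warm-up followed by log-scale decay. One plausible reading of those numbers (a sketch of my own, not the repo's lr_helper implementation) is: step the learning rate from 0.001 up to 0.005 over the first 5 epochs, then decay geometrically from start_lr to end_lr.

import numpy as np

def log_schedule(start_lr, end_lr, epochs):
    # geometric interpolation from start_lr down to end_lr
    return start_lr * (end_lr / start_lr) ** (np.arange(epochs) / max(epochs - 1, 1))

warmup = np.linspace(0.001, 0.005, 5)   # "warmup": 0.001 -> 0.005 over 5 epochs
main = log_schedule(0.005, 0.0025, 15)  # "log" decay: start_lr -> end_lr
print(np.round(np.concatenate([warmup, main]), 5))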

custom.py

from models.siammask import SiamMask
from models.features import MultiStageFeature
from models.rpn import RPN, DepthCorr
from models.mask import Mask
import torch
import torch.nn as nn
from utils.load_helper import load_pretrain
from resnet import resnet50


class ResDownS(nn.Module):
    def __init__(self, inplane, outplane):
        super(ResDownS, self).__init__()
        self.downsample = nn.Sequential(
            nn.Conv2d(inplane, outplane, kernel_size=1, bias=False),
            nn.BatchNorm2d(outplane))

    def forward(self, x):
        x = self.downsample(x)
        if x.size(3) < 20:
            l = 4
            r = -4
            x = x[:, :, l:r, l:r]
        return x


class ResDown(MultiStageFeature):
    def __init__(self, pretrain=False):
        super(ResDown, self).__init__()
        self.features = resnet50(layer3=True, layer4=False)
        if pretrain:
            load_pretrain(self.features, 'resnet.model')

        self.downsample = ResDownS(1024, 256)

        self.layers = [self.downsample, self.features.layer2, self.features.layer3]
        self.train_nums = [1, 3]
        self.change_point = [0, 0.5]

        self.unfix(0.0)

    def param_groups(self, start_lr, feature_mult=1):
        lr = start_lr * feature_mult

        def _params(module, mult=1):
            params = list(filter(lambda x: x.requires_grad, module.parameters()))
            if len(params):
                return [{'params': params, 'lr': lr * mult}]
            else:
                return []

        groups = []
        groups += _params(self.downsample)
        groups += _params(self.features, 0.1)
        return groups

    def forward(self, x):
        output = self.features(x)
        p3 = self.downsample(output[1])
        return p3


class UP(RPN):
    def __init__(self, anchor_num=5, feature_in=256, feature_out=256):
        super(UP, self).__init__()
        self.anchor_num = anchor_num
        self.feature_in = feature_in
        self.feature_out = feature_out
        self.cls_output = 2 * self.anchor_num
        self.loc_output = 4 * self.anchor_num
        self.cls = DepthCorr(feature_in, feature_out, self.cls_output)
        self.loc = DepthCorr(feature_in, feature_out, self.loc_output)

    def forward(self, z_f, x_f):
        cls = self.cls(z_f, x_f)
        loc = self.loc(z_f, x_f)
        return cls, loc


class MaskCorr(Mask):
    def __init__(self, oSz=63):
        super(MaskCorr, self).__init__()
        self.oSz = oSz
        self.mask = DepthCorr(256, 256, self.oSz**2)

    def forward(self, z, x):
        return self.mask(z, x)


class Custom(SiamMask):
    """Custom wires the sub-modules into a concrete SiamMask network."""
    def __init__(self, pretrain=False, **kwargs):
        super(Custom, self).__init__(**kwargs)
        self.features = ResDown(pretrain=pretrain)  # feature extractor
        self.rpn_model = UP(anchor_num=self.anchor_num, feature_in=256, feature_out=256)  # rpn
        self.mask_model = MaskCorr()                # mask branch

    def template(self, template):
        """Extract features from the template."""
        self.zf = self.features(template)

    def track(self, search):
        """Extract features from the search image and run the RPN."""
        search = self.features(search)
        rpn_pred_cls, rpn_pred_loc = self.rpn(self.zf, search)
        return rpn_pred_cls, rpn_pred_loc

    def track_mask(self, search):
        """Track the target and predict its segmentation mask."""
        search = self.features(search)
        rpn_pred_cls, rpn_pred_loc = self.rpn(self.zf, search)
        pred_mask = self.mask(self.zf, search)
        return rpn_pred_cls, rpn_pred_loc, pred_mask

resnet.py

import torch.nn as nn
import torch
from torch.autograd import Variable
import math
import torch.utils.model_zoo as model_zoo
from models.features import Features
from utils.log_helper import log_once

__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152']

model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}


def conv3x3(in_planes, out_planes, stride=1):
    "3x3 convolution with padding"
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)
        return out


class Bottleneck(Features):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        # padding = (2 - stride) + (dilation // 2 - 1)
        padding = 2 - stride
        assert stride == 1 or dilation == 1, "stride and dilation must have one equals to zero at least"
        if dilation > 1:
            padding = dilation
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=padding, bias=False, dilation=dilation)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        if out.size() != residual.size():
            print(out.size(), residual.size())

        out += residual
        out = self.relu(out)
        return out


class Bottleneck_nop(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck_nop, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=0, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        s = residual.size(3)
        residual = residual[:, :, 1:s-1, 1:s-1]

        out += residual
        out = self.relu(out)
        return out


class ResNet(nn.Module):
    def __init__(self, block, layers, layer4=False, layer3=False):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=0,  # 3
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)  # 31x31, 15x15
        self.feature_size = 128 * block.expansion

        if layer3:
            self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2)  # 15x15, 7x7
            self.feature_size = (256 + 128) * block.expansion
        else:
            self.layer3 = lambda x: x  # identity

        if layer4:
            self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4)  # 7x7, 3x3
            self.feature_size = 512 * block.expansion
        else:
            self.layer4 = lambda x: x  # identity

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
        downsample = None
        dd = dilation
        if stride != 1 or self.inplanes != planes * block.expansion:
            if stride == 1 and dilation == 1:
                downsample = nn.Sequential(
                    nn.Conv2d(self.inplanes, planes * block.expansion,
                              kernel_size=1, stride=stride, bias=False),
                    nn.BatchNorm2d(planes * block.expansion),
                )
            else:
                if dilation > 1:
                    dd = dilation // 2
                    padding = dd
                else:
                    dd = 1
                    padding = 0
                downsample = nn.Sequential(
                    nn.Conv2d(self.inplanes, planes * block.expansion,
                              kernel_size=3, stride=stride, bias=False,
                              padding=padding, dilation=dd),
                    nn.BatchNorm2d(planes * block.expansion),
                )

        layers = []
        # layers.append(block(self.inplanes, planes, stride, downsample, dilation=dilation))
        layers.append(block(self.inplanes, planes, stride, downsample, dilation=dd))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes, dilation=dilation))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        p1 = self.layer1(x)
        p2 = self.layer2(p1)
        p3 = self.layer3(p2)
        # p3 = torch.cat([p2, p3], 1)

        log_once("p3 {}".format(p3.size()))
        p4 = self.layer4(p3)

        return p2, p3, p4


class ResAdjust(nn.Module):
    def __init__(self,
                 block=Bottleneck,
                 out_channels=256,
                 adjust_number=1,
                 fuse_layers=[2, 3, 4]):
        super(ResAdjust, self).__init__()
        self.fuse_layers = set(fuse_layers)

        if 2 in self.fuse_layers:
            self.layer2 = self._make_layer(block, 128, 1, out_channels, adjust_number)
        if 3 in self.fuse_layers:
            self.layer3 = self._make_layer(block, 256, 2, out_channels, adjust_number)
        if 4 in self.fuse_layers:
            self.layer4 = self._make_layer(block, 512, 4, out_channels, adjust_number)

        self.feature_size = out_channels * len(self.fuse_layers)

    def _make_layer(self, block, plances, dilation, out, number=1):
        layers = []
        for _ in range(number):
            layer = block(plances * block.expansion, plances, dilation=dilation)
            layers.append(layer)

        downsample = nn.Sequential(
            nn.Conv2d(plances * block.expansion, out, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out)
        )
        layers.append(downsample)

        return nn.Sequential(*layers)

    def forward(self, p2, p3, p4):
        outputs = []
        if 2 in self.fuse_layers:
            outputs.append(self.layer2(p2))
        if 3 in self.fuse_layers:
            outputs.append(self.layer3(p3))
        if 4 in self.fuse_layers:
            outputs.append(self.layer4(p4))
        # return torch.cat(outputs, 1)
        return outputs


def resnet18(pretrained=False, **kwargs):
    """Constructs a ResNet-18 model; `pretrained` loads ImageNet weights."""
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
    return model


def resnet34(pretrained=False, **kwargs):
    """Constructs a ResNet-34 model; `pretrained` loads ImageNet weights."""
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
    return model


def resnet50(pretrained=False, **kwargs):
    """Constructs a ResNet-50 model; `pretrained` loads ImageNet weights."""
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
    return model


def resnet101(pretrained=False, **kwargs):
    """Constructs a ResNet-101 model; `pretrained` loads ImageNet weights."""
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
    return model


def resnet152(pretrained=False, **kwargs):
    """Constructs a ResNet-152 model; `pretrained` loads ImageNet weights."""
    model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
    return model


if __name__ == '__main__':
    net = resnet50()
    print(net)
    net = net.cuda()

    var = torch.FloatTensor(1, 3, 127, 127).cuda()
    var = Variable(var)
    net(var)
    print('*************')
    var = torch.FloatTensor(1, 3, 255, 255).cuda()
    var = Variable(var)
    net(var)

SiamMask\experiments\siammask_sharp

config.json

{"network": {"arch": "Custom"},"hp": {"instance_size": 255,"base_size": 8,"out_size": 127,"seg_thr": 0.35,"penalty_k": 0.04,"window_influence": 0.4,"lr": 1.0},"lr": {"type": "log","start_lr": 0.01,"end_lr": 0.0025},"loss": {"weight": [0, 0, 36]},"train_datasets": {"datasets": {"coco": {"root": "../../data/coco/crop511","anno": "../../data/coco/train2017.json","frame_range": 1}},"template_size": 127,"search_size": 143,"base_size": 0,"size": 3,"num" : 2000,"augmentation": {"template": {"shift": 4, "scale": 0.05},"search": {"shift": 8, "scale": 0.18, "blur": 0.18},"neg": 0,"gray": 0.25}},"anchors": {"stride": 8,"ratios": [0.33, 0.5, 1, 2, 3],"scales": [8],"round_dight": 0}
}
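The "hp" block drives inference-time post-processing. Below is a hedged sketch of how penalty_k, window_influence, and lr are typically applied in SiamRPN-style trackers such as this one; the variable values are illustrative dummies, not repo code.

import numpy as np

def change(r):
    return np.maximum(r, 1. / r)

def sz(w, h):
    pad = (w + h) * 0.5
    return np.sqrt((w + pad) * (h + pad))

# dummy per-anchor predictions
score = np.array([0.3, 0.9, 0.6])          # RPN confidences
pred_w = np.array([40., 44., 80.])
pred_h = np.array([42., 43., 20.])
target_w, target_h = 42., 42.              # current target size
window = np.array([1.0, 0.8, 0.2])         # cosine-window weight per anchor

# penalize large scale/aspect-ratio changes (penalty_k = 0.04)
penalty = np.exp(-(change((target_w / target_h) / (pred_w / pred_h)) *
                   change(sz(pred_w, pred_h) / sz(target_w, target_h)) - 1) * 0.04)
pscore = penalty * score
# bias toward the previous location (window_influence = 0.4)
pscore = pscore * (1 - 0.4) + window * 0.4
best = int(np.argmax(pscore))
# hp "lr" scales the exponential smoothing of the box size update
box_lr = penalty[best] * score[best] * 1.0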

custom.py

from models.siammask_sharp import SiamMask
from models.features import MultiStageFeature
from models.rpn import RPN, DepthCorr
from models.mask import Mask
import torch
import torch.nn as nn
import torch.nn.functional as F
from utils.load_helper import load_pretrain
from resnet import resnet50


class ResDownS(nn.Module):
    """Corresponds to the "adjust" layer of the network."""
    def __init__(self, inplane, outplane):
        # inplane: input channels, outplane: output channels
        super(ResDownS, self).__init__()
        self.downsample = nn.Sequential(
            nn.Conv2d(inplane, outplane, kernel_size=1, bias=False),
            nn.BatchNorm2d(outplane))

    def forward(self, x):
        # adjust, then for maps narrower than 20 px keep only the central crop
        x = self.downsample(x)
        if x.size(3) < 20:
            l = 4
            r = -4
            x = x[:, :, l:r, l:r]
        return x


class ResDown(MultiStageFeature):
    """Siamese feature extractor: ResNet-50 followed by the adjust layer."""
    def __init__(self, pretrain=False):
        super(ResDown, self).__init__()
        # ResNet-50 backbone; optionally initialised from a pretrained model
        self.features = resnet50(layer3=True, layer4=False)
        if pretrain:
            load_pretrain(self.features, 'resnet.model')

        self.downsample = ResDownS(1024, 256)  # adjust

        self.layers = [self.downsample, self.features.layer2, self.features.layer3]
        self.train_nums = [1, 3]
        self.change_point = [0, 0.5]

        self.unfix(0.0)

    def param_groups(self, start_lr, feature_mult=1):
        lr = start_lr * feature_mult

        def _params(module, mult=1):
            params = list(filter(lambda x: x.requires_grad, module.parameters()))
            if len(params):
                return [{'params': params, 'lr': lr * mult}]
            else:
                return []

        groups = []
        groups += _params(self.downsample)
        groups += _params(self.features, 0.1)
        return groups

    def forward(self, x):
        """Forward pass; returns only the adjusted feature."""
        output = self.features(x)
        p3 = self.downsample(output[-1])
        return p3

    def forward_all(self, x):
        """Forward pass; returns all backbone features plus the adjusted one."""
        output = self.features(x)
        p3 = self.downsample(output[-1])
        return output, p3


class UP(RPN):
    """Classification and box-regression heads."""
    def __init__(self, anchor_num=5, feature_in=256, feature_out=256):
        super(UP, self).__init__()
        self.anchor_num = anchor_num
        self.feature_in = feature_in
        self.feature_out = feature_out
        self.cls_output = 2 * self.anchor_num
        self.loc_output = 4 * self.anchor_num
        self.cls = DepthCorr(feature_in, feature_out, self.cls_output)
        self.loc = DepthCorr(feature_in, feature_out, self.loc_output)

    def forward(self, z_f, x_f):
        """Return classification and regression maps."""
        cls = self.cls(z_f, x_f)
        loc = self.loc(z_f, x_f)
        return cls, loc


class MaskCorr(Mask):
    """Mask branch, implemented with DepthCorr."""
    def __init__(self, oSz=63):
        super(MaskCorr, self).__init__()
        self.oSz = oSz
        self.mask = DepthCorr(256, 256, self.oSz**2)

    def forward(self, z, x):
        return self.mask(z, x)


class Refine(nn.Module):
    """Mask refinement: fuses backbone features into the coarse mask."""
    def __init__(self):
        super(Refine, self).__init__()
        # v0/v1/v2: vertical branches that compress channels
        self.v0 = nn.Sequential(nn.Conv2d(64, 16, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(16, 4, 3, padding=1), nn.ReLU())
        self.v1 = nn.Sequential(nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(64, 16, 3, padding=1), nn.ReLU())
        self.v2 = nn.Sequential(nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(128, 32, 3, padding=1), nn.ReLU())
        # h0/h1/h2: horizontal branches that digest the fused result
        self.h2 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.h1 = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.h0 = nn.Sequential(nn.Conv2d(4, 4, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(4, 4, 3, padding=1), nn.ReLU())

        # 2D transposed convolution (a.k.a. fractionally-strided convolution or
        # deconvolution): the gradient of Conv2d with respect to its input
        self.deconv = nn.ConvTranspose2d(256, 32, 15, 15)

        # post0/post1/post2 correspond to U2/U3/U4
        self.post0 = nn.Conv2d(32, 16, 3, padding=1)
        self.post1 = nn.Conv2d(16, 4, 3, padding=1)
        self.post2 = nn.Conv2d(4, 1, 3, padding=1)

        # initialise every conv layer with the Kaiming scheme
        for modules in [self.v0, self.v1, self.v2, self.h2, self.h1, self.h0,
                        self.deconv, self.post0, self.post1, self.post2]:
            for l in modules.modules():
                if isinstance(l, nn.Conv2d):
                    nn.init.kaiming_uniform_(l.weight, a=1)

    def forward(self, f, corr_feature, pos=None, test=False):
        if test:
            # At test time f is the tuple of backbone feature maps:
            # f[0]: [1, 64, 125, 125], f[1]: [1, 256, 63, 63], f[2]: [1, 512, 31, 31].
            # p0/p1/p2 zero-pad the maps and crop the patch at the target position.
            p0 = torch.nn.functional.pad(f[0], [16, 16, 16, 16])[:, :, 4*pos[0]:4*pos[0]+61, 4*pos[1]:4*pos[1]+61]
            p1 = torch.nn.functional.pad(f[1], [8, 8, 8, 8])[:, :, 2*pos[0]:2*pos[0]+31, 2*pos[1]:2*pos[1]+31]
            p2 = torch.nn.functional.pad(f[2], [4, 4, 4, 4])[:, :, pos[0]:pos[0]+15, pos[1]:pos[1]+15]
        else:
            # At training time extract the patches with a sliding window
            p0 = F.unfold(f[0], (61, 61), padding=0, stride=4).permute(0, 2, 1).contiguous().view(-1, 64, 61, 61)
            if not (pos is None): p0 = torch.index_select(p0, 0, pos)
            p1 = F.unfold(f[1], (31, 31), padding=0, stride=2).permute(0, 2, 1).contiguous().view(-1, 256, 31, 31)
            if not (pos is None): p1 = torch.index_select(p1, 0, pos)
            p2 = F.unfold(f[2], (15, 15), padding=0, stride=1).permute(0, 2, 1).contiguous().view(-1, 512, 15, 15)
            if not (pos is None): p2 = torch.index_select(p2, 0, pos)

        if not (pos is None):
            # training: feature vector of the correlation map at pos
            p3 = corr_feature[:, :, pos[0], pos[1]].view(-1, 256, 1, 1)
        else:
            # test
            p3 = corr_feature.permute(0, 2, 3, 1).contiguous().view(-1, 256, 1, 1)

        out = self.deconv(p3)
        # fuse the upsampled mask with progressively finer backbone features
        out = self.post0(F.upsample(self.h2(out) + self.v2(p2), size=(31, 31)))
        out = self.post1(F.upsample(self.h1(out) + self.v1(p1), size=(61, 61)))
        out = self.post2(F.upsample(self.h0(out) + self.v0(p0), size=(127, 127)))
        out = out.view(-1, 127*127)
        return out

    def param_groups(self, start_lr, feature_mult=1):
        params = filter(lambda x: x.requires_grad, self.parameters())
        params = [{'params': params, 'lr': start_lr * feature_mult}]
        return params


class Custom(SiamMask):
    def __init__(self, pretrain=False, **kwargs):
        super(Custom, self).__init__(**kwargs)
        self.features = ResDown(pretrain=pretrain)
        self.rpn_model = UP(anchor_num=self.anchor_num, feature_in=256, feature_out=256)
        self.mask_model = MaskCorr()
        self.refine_model = Refine()

    def refine(self, f, pos=None):
        """Fuse features to refine the mask."""
        return self.refine_model(f, pos)

    def template(self, template):
        """Extract features from the template."""
        self.zf = self.features(template)

    def track(self, search):
        """Track: run the RPN on the search patch to get class and location."""
        search = self.features(search)
        rpn_pred_cls, rpn_pred_loc = self.rpn(self.zf, search)
        return rpn_pred_cls, rpn_pred_loc

    def track_mask(self, search):
        """Track and segment: returns class, location, and mask predictions."""
        self.feature, self.search = self.features.forward_all(search)
        rpn_pred_cls, rpn_pred_loc = self.rpn(self.zf, self.search)
        # depthwise correlation features shared with the mask head
        self.corr_feature = self.mask_model.mask.forward_corr(self.zf, self.search)
        pred_mask = self.mask_model.mask.head(self.corr_feature)
        return rpn_pred_cls, rpn_pred_loc, pred_mask

    def track_refine(self, pos):
        """Refine the mask at position pos."""
        pred_mask = self.refine_model(self.feature, self.corr_feature, pos=pos, test=True)
        return pred_mask

resnet.py

import torch.nn as nn
import torch
from torch.autograd import Variable
import math
import torch.utils.model_zoo as model_zoo
from models.features import Features

__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152']

# URLs of pretrained resnet models
model_urls = {
    'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
    'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
    'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth',
}


def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding.
    :param in_planes: input channels
    :param out_planes: output channels
    :param stride: stride
    """
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)


class BasicBlock(nn.Module):
    """Basic residual block: two stacked 3x3 convolutions, used by res18/res34."""
    expansion = 1  # output-depth multiplier

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        # 3x3 conv, BN, ReLU
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        # shortcut
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        # when a shortcut exists, project x through downsample so the
        # input and output depths match before adding
        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)
        return out


class Bottleneck(Features):
    """Bottleneck block of 1x1, 3x3, 1x1 convolutions (reduce, convolve, expand).
    Inherits from Features, so it can serve as a feature extractor."""
    expansion = 4  # e.g. an input depth of 64 expands to 256

    def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
        super(Bottleneck, self).__init__()
        # 1x1 conv + BN
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        # padding = (2 - stride) + (dilation // 2 - 1)
        padding = 2 - stride
        assert stride == 1 or dilation == 1, "stride and dilation must have one equals to zero at least"
        if dilation > 1:
            padding = dilation
        # 3x3 conv
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=padding, bias=False, dilation=dilation)
        self.bn2 = nn.BatchNorm2d(planes)
        # 1x1 conv
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        # shortcut
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        if out.size() != residual.size():
            print(out.size(), residual.size())

        out += residual
        out = self.relu(out)
        return out


class Bottleneck_nop(nn.Module):
    """The original upstream bottleneck block."""
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck_nop, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=0, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        s = residual.size(3)
        residual = residual[:, :, 1:s-1, 1:s-1]

        out += residual
        out = self.relu(out)
        return out


class ResNet(nn.Module):
    """Main body of the ResNet backbone."""
    def __init__(self, block, layers, layer4=False, layer3=False):
        """
        :param block: BasicBlock or Bottleneck
        :param layers: number of blocks in each big layer
        :param layer3/layer4: whether to build layer3 and layer4
        """
        self.inplanes = 64  # input depth
        super(ResNet, self).__init__()
        # stem: conv + BN + ReLU + max-pooling
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=0,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # stack the blocks into layers
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)  # 31x31, 15x15
        self.feature_size = 128 * block.expansion

        # optional layer3
        if layer3:
            self.layer3 = self._make_layer(block, 256, layers[2], stride=1, dilation=2)  # 15x15, 7x7
            self.feature_size = (256 + 128) * block.expansion
        else:
            self.layer3 = lambda x: x  # identity

        # optional layer4
        if layer4:
            self.layer4 = self._make_layer(block, 512, layers[3], stride=1, dilation=4)  # 7x7, 3x3
            self.feature_size = 512 * block.expansion
        else:
            self.layer4 = lambda x: x  # identity

        # parameter initialisation
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
        """
        :param block: BasicBlock or Bottleneck
        :param planes: channel count
        :param blocks: number of blocks to add
        """
        downsample = None
        dd = dilation
        # configure the shortcut
        if stride != 1 or self.inplanes != planes * block.expansion:
            if stride == 1 and dilation == 1:
                downsample = nn.Sequential(
                    nn.Conv2d(self.inplanes, planes * block.expansion,
                              kernel_size=1, stride=stride, bias=False),
                    nn.BatchNorm2d(planes * block.expansion),
                )
            else:
                if dilation > 1:
                    dd = dilation // 2
                    padding = dd
                else:
                    dd = 1
                    padding = 0
                downsample = nn.Sequential(
                    nn.Conv2d(self.inplanes, planes * block.expansion,
                              kernel_size=3, stride=stride, bias=False,
                              padding=padding, dilation=dd),
                    nn.BatchNorm2d(planes * block.expansion),
                )

        layers = []
        # layers.append(block(self.inplanes, planes, stride, downsample, dilation=dilation))
        layers.append(block(self.inplanes, planes, stride, downsample, dilation=dd))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes, dilation=dilation))

        # build the network
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        p0 = self.relu(x)
        x = self.maxpool(p0)

        p1 = self.layer1(x)
        p2 = self.layer2(p1)
        p3 = self.layer3(p2)

        return p0, p1, p2, p3


class ResAdjust(nn.Module):
    """Adjust module for fusing layers; unused."""
    def __init__(self,
                 block=Bottleneck,
                 out_channels=256,
                 adjust_number=1,
                 fuse_layers=[2, 3, 4]):
        super(ResAdjust, self).__init__()
        self.fuse_layers = set(fuse_layers)

        if 2 in self.fuse_layers:
            self.layer2 = self._make_layer(block, 128, 1, out_channels, adjust_number)
        if 3 in self.fuse_layers:
            self.layer3 = self._make_layer(block, 256, 2, out_channels, adjust_number)
        if 4 in self.fuse_layers:
            self.layer4 = self._make_layer(block, 512, 4, out_channels, adjust_number)

        self.feature_size = out_channels * len(self.fuse_layers)

    def _make_layer(self, block, plances, dilation, out, number=1):
        layers = []
        for _ in range(number):
            layer = block(plances * block.expansion, plances, dilation=dilation)
            layers.append(layer)

        downsample = nn.Sequential(
            nn.Conv2d(plances * block.expansion, out, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out)
        )
        layers.append(downsample)

        return nn.Sequential(*layers)

    def forward(self, p2, p3, p4):
        outputs = []
        if 2 in self.fuse_layers:
            outputs.append(self.layer2(p2))
        if 3 in self.fuse_layers:
            outputs.append(self.layer3(p3))
        if 4 in self.fuse_layers:
            outputs.append(self.layer4(p4))
        # return torch.cat(outputs, 1)
        return outputs


def resnet18(pretrained=False, **kwargs):
    """Constructs a ResNet-18 model; `pretrained` loads ImageNet weights."""
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
    return model


def resnet34(pretrained=False, **kwargs):
    """Constructs a ResNet-34 model; `pretrained` loads ImageNet weights."""
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
    return model


def resnet50(pretrained=False, **kwargs):
    """Constructs the (modified) ResNet-50 model; `pretrained` loads ImageNet weights."""
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
    return model


def resnet101(pretrained=False, **kwargs):
    """Constructs a ResNet-101 model; `pretrained` loads ImageNet weights."""
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
    return model


def resnet152(pretrained=False, **kwargs):
    """Constructs a ResNet-152 model; `pretrained` loads ImageNet weights."""
    model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
    return model


if __name__ == '__main__':
    net = resnet50()
    print(net)

    # CPU smoke test
    var = torch.FloatTensor(1, 3, 127, 127)
    var = Variable(var)
    print(var)
    out = net(var)
    print(out)

SiamMask\models

features.py

# --------------------------------------------------------
# Base classes for feature extraction
# --------------------------------------------------------
import torch.nn as nn
import torch
import logging

logger = logging.getLogger('global')


class Features(nn.Module):
    """Base feature-extraction module, used by the resnet blocks."""
    def __init__(self):
        super(Features, self).__init__()
        self.feature_size = -1

    def forward(self, x):
        raise NotImplementedError

    def param_groups(self, start_lr, feature_mult=1):
        """Return the parameters that require gradient updates."""
        params = filter(lambda x: x.requires_grad, self.parameters())
        params = [{'params': params, 'lr': start_lr * feature_mult}]
        return params

    def load_model(self, f='pretrain.model'):
        with open(f, 'rb') as f:
            pretrained_dict = torch.load(f)
            model_dict = self.state_dict()
            print(pretrained_dict.keys())
            # keep only the keys that exist in this model
            pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
            print(pretrained_dict.keys())
            model_dict.update(pretrained_dict)
            self.load_state_dict(model_dict)


class MultiStageFeature(Features):
    """Multi-stage feature extractor, used by ResDown in custom.py."""
    def __init__(self):
        super(MultiStageFeature, self).__init__()
        self.layers = []
        self.train_num = -1
        self.change_point = []
        self.train_nums = []

    def unfix(self, ratio=0.0):
        """Decide which layers train and which stay frozen, given training progress `ratio`."""
        if self.train_num == -1:
            self.train_num = 0
            self.unlock()
            self.eval()
        for p, t in reversed(list(zip(self.change_point, self.train_nums))):
            if ratio >= p:
                if self.train_num != t:
                    self.train_num = t
                    self.unlock()
                    return True
                break
        return False

    def train_layers(self):
        """Layers whose parameters are currently being updated."""
        return self.layers[:self.train_num]

    def unlock(self):
        """Enable gradients only for the layers selected by train_num."""
        # freeze all parameters first
        for p in self.parameters():
            p.requires_grad = False
        logger.info('Current training {} layers:\n\t{}'.format(self.train_num, self.train_layers()))
        # then unfreeze the layers returned by train_layers
        for m in self.train_layers():
            for p in m.parameters():
                p.requires_grad = True

    def train(self, mode):
        """Switch training mode for the trainable layers only."""
        self.training = mode
        if mode == False:
            super(MultiStageFeature, self).train(False)
        else:
            for m in self.train_layers():
                m.train(True)
        return self
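To make the unfix/unlock mechanics tangible, here is a toy subclass of my own (not part of the repo) that mirrors ResDown's settings (train_nums=[1, 3], change_point=[0, 0.5]): at the start only the first entry of `layers` trains; once training progress reaches 50%, all three do.

import torch.nn as nn
from models.features import MultiStageFeature

class ToyBackbone(MultiStageFeature):
    def __init__(self):
        super(ToyBackbone, self).__init__()
        self.stage1 = nn.Linear(4, 4)
        self.stage2 = nn.Linear(4, 4)
        self.stage3 = nn.Linear(4, 4)
        # layers[:train_num] are the modules that get unfrozen
        self.layers = [self.stage3, self.stage2, self.stage1]
        self.train_nums = [1, 3]      # train 1 layer, later 3 layers
        self.change_point = [0, 0.5]  # switch once progress passes these ratios
        self.unfix(0.0)               # progress 0.0: only stage3 trains

    def forward(self, x):
        return self.stage3(self.stage2(self.stage1(x)))

net = ToyBackbone()
net.unfix(0.5)  # progress 0.5: all three stages now train
print([p.requires_grad for p in net.parameters()])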

mask.py

# --------------------------------------------------------
# Mask
# --------------------------------------------------------
import torch.nn as nn


class Mask(nn.Module):
    """Base class for the mask branch."""
    def __init__(self):
        super(Mask, self).__init__()

    def forward(self, z_f, x_f):
        raise NotImplementedError

    def template(self, template):
        raise NotImplementedError

    def track(self, search):
        raise NotImplementedError

    def param_groups(self, start_lr, feature_mult=1):
        """Keep only the parameters that require gradients."""
        params = filter(lambda x: x.requires_grad, self.parameters())
        params = [{'params': params, 'lr': start_lr * feature_mult}]
        return params

rpn.py

# --------------------------------------------------------
# SiamMask
# Licensed under The MIT License
# Written by Qiang Wang (wangqiang2015 at ia.ac.cn)
# --------------------------------------------------------
import torch.nn as nn
import torch.nn.functional as F


class RPN(nn.Module):
    """Base class for the RPN."""
    def __init__(self):
        super(RPN, self).__init__()

    def forward(self, z_f, x_f):
        raise NotImplementedError

    def template(self, template):
        raise NotImplementedError

    def track(self, search):
        raise NotImplementedError

    def param_groups(self, start_lr, feature_mult=1, key=None):
        # with no key, return every parameter that requires gradients;
        # otherwise return only the matching named parameters
        if key is None:
            params = filter(lambda x: x.requires_grad, self.parameters())
        else:
            params = [v for k, v in self.named_parameters() if (key in k) and v.requires_grad]
        params = [{'params': params, 'lr': start_lr * feature_mult}]
        return params


def conv2d_dw_group(x, kernel):
    """Depthwise cross-correlation between search features and the template kernel."""
    batch, channel = kernel.shape[:2]
    x = x.view(1, batch*channel, x.size(2), x.size(3))                       # 1 * (b*c) * H * W
    kernel = kernel.view(batch*channel, 1, kernel.size(2), kernel.size(3))   # (b*c) * 1 * k * k
    # grouped convolution: each channel is correlated with its own kernel
    out = F.conv2d(x, kernel, groups=batch*channel)
    out = out.view(batch, channel, out.size(2), out.size(3))
    return out


class DepthCorr(nn.Module):
    """Correlates template and search features to produce confidence/location/mask maps."""
    def __init__(self, in_channels, hidden, out_channels, kernel_size=3):
        super(DepthCorr, self).__init__()
        # adjust layer for asymmetrical features
        # template branch
        self.conv_kernel = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=kernel_size, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
        )
        # search branch
        self.conv_search = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=kernel_size, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
        )
        # output head: location, class, or mask logits
        self.head = nn.Sequential(
            nn.Conv2d(hidden, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, out_channels, kernel_size=1)
        )

    def forward_corr(self, kernel, input):
        # transform both features, then correlate them
        kernel = self.conv_kernel(kernel)
        input = self.conv_search(input)
        feature = conv2d_dw_group(input, kernel)
        return feature

    def forward(self, kernel, search):
        """Forward pass.
        :param kernel: template features
        :param search: search-image features
        """
        feature = self.forward_corr(kernel, search)
        out = self.head(feature)
        return out
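A quick shape check of the depthwise cross-correlation trick (a standalone sketch, not repo code): every one of the b*c search channels is correlated with its own template channel via a grouped convolution.

import torch
import torch.nn.functional as F

b, c = 2, 256
z = torch.randn(b, c, 4, 4)      # template feature (the "kernel")
x = torch.randn(b, c, 20, 20)    # search feature

x_ = x.view(1, b * c, 20, 20)
z_ = z.view(b * c, 1, 4, 4)
out = F.conv2d(x_, z_, groups=b * c)         # depthwise cross-correlation
out = out.view(b, c, out.size(2), out.size(3))
print(out.shape)                             # torch.Size([2, 256, 17, 17])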

siammask.py

# --------------------------------------------------------
# SiamMask: defines the main modules of the network
# --------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from utils.anchors import Anchors


class SiamMask(nn.Module):
    """Defines the overall SiamMask network architecture and its main modules"""
    def __init__(self, anchors=None, o_sz=63, g_sz=127):
        super(SiamMask, self).__init__()
        self.anchors = anchors  # anchor configuration (anchor_cfg)
        self.anchor_num = len(self.anchors["ratios"]) * len(self.anchors["scales"])  # number of anchors per position
        self.anchor = Anchors(anchors)  # anchor generator
        self.features = None  # feature-extraction backbone
        self.rpn_model = None  # RPN network
        self.mask_model = None  # mask (segmentation) branch
        self.o_sz = o_sz  # output size of the mask branch
        self.g_sz = g_sz  # ground-truth mask size
        self.upSample = nn.UpsamplingBilinear2d(size=[g_sz, g_sz])  # 2D bilinear upsampling
        self.all_anchors = None

    def set_all_anchors(self, image_center, size):
        """
        Initialize all anchors (this method is unused)
        :param image_center: image center
        :param size:
        :return:
        """
        # cx, cy, w, h
        if not self.anchor.generate_all_anchors(image_center, size):
            return
        all_anchors = self.anchor.all_anchors[1]  # cx, cy, w, h
        self.all_anchors = torch.from_numpy(all_anchors).float().cuda()
        self.all_anchors = [self.all_anchors[i] for i in range(4)]

    def feature_extractor(self, x):
        """
        Feature extraction
        :param x: input data
        :return: extracted features
        """
        return self.features(x)

    def rpn(self, template, search):
        """
        RPN network
        :param template: template features
        :param search: search-image features
        :return: predicted classification and localization results
        """
        pred_cls, pred_loc = self.rpn_model(template, search)
        return pred_cls, pred_loc

    def mask(self, template, search):
        """
        Segmentation prediction
        :param template: template features
        :param search: search-image features
        :return: predicted mask
        """
        pred_mask = self.mask_model(template, search)
        return pred_mask

    def _add_rpn_loss(self, label_cls, label_loc, lable_loc_weight, label_mask, label_mask_weight,
                      rpn_pred_cls, rpn_pred_loc, rpn_pred_mask):
        """RPN loss functions"""
        # classification loss (cross entropy)
        rpn_loss_cls = select_cross_entropy_loss(rpn_pred_cls, label_cls)
        # regression loss
        rpn_loss_loc = weight_l1_loss(rpn_pred_loc, label_loc, lable_loc_weight)
        # segmentation loss and accuracies
        rpn_loss_mask, iou_m, iou_5, iou_7 = select_mask_logistic_loss(rpn_pred_mask, label_mask,
                                                                       label_mask_weight)
        return rpn_loss_cls, rpn_loss_loc, rpn_loss_mask, iou_m, iou_5, iou_7

    def run(self, template, search, softmax=False):
        """
        Run the network
        :param template: template image
        :param search: search image
        :param softmax: whether to apply (log-)softmax to the classification output
        :return:
        """
        # extract template features
        template_feature = self.feature_extractor(template)
        # extract search-image features
        search_feature = self.feature_extractor(search)
        # predictions
        rpn_pred_cls, rpn_pred_loc = self.rpn(template_feature, search_feature)
        rpn_pred_mask = self.mask(template_feature, search_feature)  # (b, 63*63, w, h)
        # classification via softmax
        if softmax:
            rpn_pred_cls = self.softmax(rpn_pred_cls)
        return rpn_pred_cls, rpn_pred_loc, rpn_pred_mask, template_feature, search_feature

    def softmax(self, cls):
        """
        Log-softmax over the two-class (foreground/background) scores
        :param cls: raw classification scores
        :return:
        """
        # batch size, channels (2 * number of anchors), height, width
        b, a2, h, w = cls.size()
        # split the channel axis into (class, anchor)
        cls = cls.view(b, 2, a2//2, h, w)
        # move the class axis to the end
        cls = cls.permute(0, 2, 3, 4, 1).contiguous()
        # log-softmax along the class axis (dim=4)
        cls = F.log_softmax(cls, dim=4)
        return cls

    def forward(self, input):
        """
        Forward pass; overrides nn.Module.forward
        :param input: dict of input with keys of:
                'template': [b, 3, h1, w1], input template image.
                'search': [b, 3, h2, w2], input search image.
                'label_cls': [b, max_num_gts, 5] or None(self.training==False),
                             each gt contains x1,y1,x2,y2,class.
        :return: dict of loss, predict, accuracy
        """
        # unpack inputs
        template = input['template']
        search = input['search']
        if self.training:
            label_cls = input['label_cls']
            label_loc = input['label_loc']
            lable_loc_weight = input['label_loc_weight']
            label_mask = input['label_mask']
            label_mask_weight = input['label_mask_weight']

        # run the network
        rpn_pred_cls, rpn_pred_loc, rpn_pred_mask, template_feature, search_feature = \
            self.run(template, search, softmax=self.training)

        outputs = dict()
        # predictions
        outputs['predict'] = [rpn_pred_loc, rpn_pred_cls, rpn_pred_mask, template_feature, search_feature]

        if self.training:
            # losses
            rpn_loss_cls, rpn_loss_loc, rpn_loss_mask, iou_acc_mean, iou_acc_5, iou_acc_7 = \
                self._add_rpn_loss(label_cls, label_loc, lable_loc_weight, label_mask, label_mask_weight,
                                   rpn_pred_cls, rpn_pred_loc, rpn_pred_mask)
            # output losses and accuracy results
            outputs['losses'] = [rpn_loss_cls, rpn_loss_loc, rpn_loss_mask]
            outputs['accuracy'] = [iou_acc_mean, iou_acc_5, iou_acc_7]
        return outputs

    def template(self, z):
        """
        Process the template image
        :param z: template of the tracked target
        :return: classification and localization kernels of the template
        """
        self.zf = self.feature_extractor(z)
        cls_kernel, loc_kernel = self.rpn_model.template(self.zf)
        return cls_kernel, loc_kernel

    def track(self, x, cls_kernel=None, loc_kernel=None, softmax=False):
        """
        Target tracking
        :param x: search image
        :param cls_kernel:
        :param loc_kernel:
        :param softmax:
        :return:
        """
        # feature extraction
        xf = self.feature_extractor(x)
        # tracking results
        rpn_pred_cls, rpn_pred_loc = self.rpn_model.track(xf, cls_kernel, loc_kernel)
        if softmax:
            rpn_pred_cls = self.softmax(rpn_pred_cls)
        # return the classification and localization results
        return rpn_pred_cls, rpn_pred_loc


def get_cls_loss(pred, label, select):
    """
    Classification loss over the selected positions
    :param pred: predictions
    :param label: ground truth
    :param select: indices of the selected positions
    :return:
    """
    # no selected positions: return 0
    if select.nelement() == 0:
        return pred.sum() * 0.
    # gather predictions and labels at the selected positions
    pred = torch.index_select(pred, 0, select)
    label = torch.index_select(label, 0, select)
    # negative log-likelihood loss
    return F.nll_loss(pred, label)


def select_cross_entropy_loss(pred, label):
    """
    Cross-entropy loss
    :param pred: predicted values
    :param label: ground-truth labels
    :return: average of the positive- and negative-sample losses
    """
    # flatten predictions to ... x 2
    pred = pred.view(-1, 2)
    # flatten labels to one dimension
    label = label.view(-1)
    # indices of positive and negative samples
    # GPU
    # pos = Variable(label.data.eq(1).nonzero().squeeze()).cuda()
    # neg = Variable(label.data.eq(0).nonzero().squeeze()).cuda()
    pos = Variable(label.data.eq(1).nonzero().squeeze())
    neg = Variable(label.data.eq(0).nonzero().squeeze())
    # classification losses of the positive and negative samples
    loss_pos = get_cls_loss(pred, label, pos)
    loss_neg = get_cls_loss(pred, label, neg)
    return loss_pos * 0.5 + loss_neg * 0.5


def weight_l1_loss(pred_loc, label_loc, loss_weight):
    """
    Weighted L1 loss
    :param pred_loc: [b, 4k, h, w]
    :param label_loc: [b, 4k, h, w]
    :param loss_weight:  [b, k, h, w]
    :return: loc loss value
    """
    b, _, sh, sw = pred_loc.size()
    # reshape so the 4 box offsets form their own dimension
    pred_loc = pred_loc.view(b, 4, -1, sh, sw)
    # absolute difference between prediction and ground truth
    diff = (pred_loc - label_loc).abs()
    # sum the differences over the 4 offsets
    diff = diff.sum(dim=1).view(b, -1, sh, sw)
    # weighted loss
    loss = diff * loss_weight
    return loss.sum().div(b)


def select_mask_logistic_loss(p_m, mask, weight, o_sz=63, g_sz=127):
    """
    Loss and accuracy of the segmentation branch
    :param p_m: predicted segmentation result
    :param mask: ground-truth mask
    :param weight:
    :param o_sz: size of the predicted mask
    :param g_sz: size of the ground-truth window
    :return:
    """
    weight = weight.view(-1)
    pos = Variable(weight.data.eq(1).nonzero().squeeze())
    if pos.nelement() == 0:
        return p_m.sum() * 0, p_m.sum() * 0, p_m.sum() * 0, p_m.sum() * 0
    # reshape predictions and keep only the positive positions
    p_m = p_m.permute(0, 2, 3, 1).contiguous().view(-1, 1, o_sz, o_sz)
    p_m = torch.index_select(p_m, 0, pos)
    # 2D bilinear upsampling to the ground-truth size
    p_m = nn.UpsamplingBilinear2d(size=[g_sz, g_sz])(p_m)
    p_m = p_m.view(-1, g_sz * g_sz)
    # extract the corresponding ground-truth windows
    mask_uf = F.unfold(mask, (g_sz, g_sz), padding=32, stride=8)
    mask_uf = torch.transpose(mask_uf, 1, 2).contiguous().view(-1, g_sz * g_sz)
    mask_uf = torch.index_select(mask_uf, 0, pos)
    # loss
    loss = F.soft_margin_loss(p_m, mask_uf)
    # accuracies
    iou_m, iou_5, iou_7 = iou_measure(p_m, mask_uf)
    # return the results
    return loss, iou_m, iou_5, iou_7


def iou_measure(pred, label):
    """
    IoU computation
    :param pred: predicted values
    :param label: ground truth
    :return: mean IoU, fraction with IoU > 0.5, fraction with IoU > 0.7
    """
    # binarize predictions: values >= 0 become 1
    pred = pred.ge(0)
    # add the foreground indicators of prediction and label
    mask_sum = pred.eq(1).add(label.eq(1))
    # entries equal to 2 belong to the intersection
    intxn = torch.sum(mask_sum == 2, dim=1).float()
    # entries greater than 0 belong to the union
    union = torch.sum(mask_sum > 0, dim=1).float()
    # intersection over union
    iou = intxn / union
    return torch.mean(iou), (torch.sum(iou > 0.5).float() / iou.shape[0]), (torch.sum(iou > 0.7).float() / iou.shape[0])


if __name__ == "__main__":
    p_m = torch.randn(4, 63*63, 25, 25)
    cls = torch.randn(4, 1, 25, 25) > 0.9
    mask = torch.randn(4, 1, 255, 255) * 2 - 1
    loss = select_mask_logistic_loss(p_m, mask, cls)
    print(loss)
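
To make the reshaping inside SiamMask.softmax concrete, here is a minimal standalone sketch. It is not part of the project code, and the sizes b=2, k=5, h=w=25 are assumed toy values for illustration. It shows how raw scores of shape (b, 2k, h, w) become log-probabilities of shape (b, k, h, w, 2), whose last dimension normalizes to 1 after exponentiation:

import torch
import torch.nn.functional as F

b, k, h, w = 2, 5, 25, 25                      # assumed toy sizes
cls = torch.randn(b, 2 * k, h, w)              # raw scores: 2 channels per anchor

cls = cls.view(b, 2, k, h, w)                  # split the channel axis into (class, anchor)
cls = cls.permute(0, 2, 3, 4, 1).contiguous()  # move the class axis to the end
log_prob = F.log_softmax(cls, dim=4)           # normalize along the class axis

print(log_prob.shape)                          # torch.Size([2, 5, 25, 25, 2])
print(log_prob.exp().sum(dim=4).allclose(torch.ones(b, k, h, w)))  # True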

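The classification loss can be sanity-checked the same way. The sketch below uses toy tensors (assumed values, not project data) and feeds log-probabilities, which is what F.nll_loss inside get_cls_loss expects, to the select_cross_entropy_loss function defined above:

import torch

# eight flattened positions, two classes; nll_loss expects log-probabilities
pred = torch.log_softmax(torch.randn(8, 2), dim=1)
label = torch.tensor([1, 0, 0, 1, 0, 0, 0, 1])  # 1 = positive sample, 0 = negative sample

# averages the NLL over the positive and negative positions separately,
# then weights the two means by 0.5 each
print(select_cross_entropy_loss(pred, label))
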
siammask_sharp.py

# --------------------------------------------------------
# SiamMask
# Licensed under The MIT License
# Written by Qiang Wang (wangqiang2015 at ia.ac.cn)
# --------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from utils.anchors import Anchors

"""
This file defines the SiamMask-refine network architecture. It is essentially the
same as the architecture in siammask.py; the run function additionally applies
the refine model.
"""


class SiamMask(nn.Module):
    def __init__(self, anchors=None, o_sz=127, g_sz=127):
        super(SiamMask, self).__init__()
        self.anchors = anchors  # anchor_cfg
        self.anchor_num = len(self.anchors["ratios"]) * len(self.anchors["scales"])
        self.anchor = Anchors(anchors)
        self.features = None
        self.rpn_model = None
        self.mask_model = None
        self.o_sz = o_sz
        self.g_sz = g_sz
        self.upSample = nn.UpsamplingBilinear2d(size=[g_sz, g_sz])
        self.all_anchors = None

    def set_all_anchors(self, image_center, size):
        # cx, cy, w, h
        if not self.anchor.generate_all_anchors(image_center, size):
            return
        all_anchors = self.anchor.all_anchors[1]  # cx, cy, w, h
        self.all_anchors = torch.from_numpy(all_anchors).float().cuda()
        self.all_anchors = [self.all_anchors[i] for i in range(4)]

    def feature_extractor(self, x):
        return self.features(x)

    def rpn(self, template, search):
        pred_cls, pred_loc = self.rpn_model(template, search)
        return pred_cls, pred_loc

    def mask(self, template, search):
        pred_mask = self.mask_model(template, search)
        return pred_mask

    def _add_rpn_loss(self, label_cls, label_loc, lable_loc_weight, label_mask, label_mask_weight,
                      rpn_pred_cls, rpn_pred_loc, rpn_pred_mask):
        rpn_loss_cls = select_cross_entropy_loss(rpn_pred_cls, label_cls)
        rpn_loss_loc = weight_l1_loss(rpn_pred_loc, label_loc, lable_loc_weight)
        rpn_loss_mask, iou_m, iou_5, iou_7 = select_mask_logistic_loss(rpn_pred_mask, label_mask,
                                                                       label_mask_weight)
        return rpn_loss_cls, rpn_loss_loc, rpn_loss_mask, iou_m, iou_5, iou_7

    def run(self, template, search, softmax=False):
        """
        run network
        """
        template_feature = self.feature_extractor(template)
        feature, search_feature = self.features.forward_all(search)
        rpn_pred_cls, rpn_pred_loc = self.rpn(template_feature, search_feature)
        corr_feature = self.mask_model.mask.forward_corr(template_feature, search_feature)  # (b, 256, w, h)
        rpn_pred_mask = self.refine_model(feature, corr_feature)

        if softmax:
            rpn_pred_cls = self.softmax(rpn_pred_cls)
        return rpn_pred_cls, rpn_pred_loc, rpn_pred_mask, template_feature, search_feature

    def softmax(self, cls):
        b, a2, h, w = cls.size()
        cls = cls.view(b, 2, a2//2, h, w)
        cls = cls.permute(0, 2, 3, 4, 1).contiguous()
        cls = F.log_softmax(cls, dim=4)
        return cls

    def forward(self, input):
        """
        :param input: dict of input with keys of:
                'template': [b, 3, h1, w1], input template image.
                'search': [b, 3, h2, w2], input search image.
                'label_cls': [b, max_num_gts, 5] or None(self.training==False),
                             each gt contains x1,y1,x2,y2,class.
        :return: dict of loss, predict, accuracy
        """
        template = input['template']
        search = input['search']
        if self.training:
            label_cls = input['label_cls']
            label_loc = input['label_loc']
            lable_loc_weight = input['label_loc_weight']
            label_mask = input['label_mask']
            label_mask_weight = input['label_mask_weight']

        rpn_pred_cls, rpn_pred_loc, rpn_pred_mask, template_feature, search_feature = \
            self.run(template, search, softmax=self.training)

        outputs = dict()
        outputs['predict'] = [rpn_pred_loc, rpn_pred_cls, rpn_pred_mask, template_feature, search_feature]

        if self.training:
            rpn_loss_cls, rpn_loss_loc, rpn_loss_mask, iou_acc_mean, iou_acc_5, iou_acc_7 = \
                self._add_rpn_loss(label_cls, label_loc, lable_loc_weight, label_mask, label_mask_weight,
                                   rpn_pred_cls, rpn_pred_loc, rpn_pred_mask)
            outputs['losses'] = [rpn_loss_cls, rpn_loss_loc, rpn_loss_mask]
            outputs['accuracy'] = [iou_acc_mean, iou_acc_5, iou_acc_7]
        return outputs

    def template(self, z):
        self.zf = self.feature_extractor(z)
        cls_kernel, loc_kernel = self.rpn_model.template(self.zf)
        return cls_kernel, loc_kernel

    def track(self, x, cls_kernel=None, loc_kernel=None, softmax=False):
        xf = self.feature_extractor(x)
        rpn_pred_cls, rpn_pred_loc = self.rpn_model.track(xf, cls_kernel, loc_kernel)
        if softmax:
            rpn_pred_cls = self.softmax(rpn_pred_cls)
        return rpn_pred_cls, rpn_pred_loc


def get_cls_loss(pred, label, select):
    if select.nelement() == 0:
        return pred.sum() * 0.
    pred = torch.index_select(pred, 0, select)
    label = torch.index_select(label, 0, select)
    return F.nll_loss(pred, label)


def select_cross_entropy_loss(pred, label):
    pred = pred.view(-1, 2)
    label = label.view(-1)
    # GPU
    # pos = Variable(label.data.eq(1).nonzero().squeeze()).cuda()
    # neg = Variable(label.data.eq(0).nonzero().squeeze()).cuda()
    pos = Variable(label.data.eq(1).nonzero().squeeze())
    neg = Variable(label.data.eq(0).nonzero().squeeze())
    loss_pos = get_cls_loss(pred, label, pos)
    loss_neg = get_cls_loss(pred, label, neg)
    return loss_pos * 0.5 + loss_neg * 0.5


def weight_l1_loss(pred_loc, label_loc, loss_weight):
    """
    :param pred_loc: [b, 4k, h, w]
    :param label_loc: [b, 4k, h, w]
    :param loss_weight:  [b, k, h, w]
    :return: loc loss value
    """
    b, _, sh, sw = pred_loc.size()
    pred_loc = pred_loc.view(b, 4, -1, sh, sw)
    diff = (pred_loc - label_loc).abs()
    diff = diff.sum(dim=1).view(b, -1, sh, sw)
    loss = diff * loss_weight
    return loss.sum().div(b)


def select_mask_logistic_loss(p_m, mask, weight, o_sz=63, g_sz=127):
    weight = weight.view(-1)
    pos = Variable(weight.data.eq(1).nonzero().squeeze())
    if pos.nelement() == 0:
        return p_m.sum() * 0, p_m.sum() * 0, p_m.sum() * 0, p_m.sum() * 0

    if len(p_m.shape) == 4:
        p_m = p_m.permute(0, 2, 3, 1).contiguous().view(-1, 1, o_sz, o_sz)
        p_m = torch.index_select(p_m, 0, pos)
        p_m = nn.UpsamplingBilinear2d(size=[g_sz, g_sz])(p_m)
        p_m = p_m.view(-1, g_sz * g_sz)
    else:
        p_m = torch.index_select(p_m, 0, pos)

    mask_uf = F.unfold(mask, (g_sz, g_sz), padding=0, stride=8)
    mask_uf = torch.transpose(mask_uf, 1, 2).contiguous().view(-1, g_sz * g_sz)
    mask_uf = torch.index_select(mask_uf, 0, pos)

    loss = F.soft_margin_loss(p_m, mask_uf)
    iou_m, iou_5, iou_7 = iou_measure(p_m, mask_uf)
    return loss, iou_m, iou_5, iou_7


def iou_measure(pred, label):
    pred = pred.ge(0)
    mask_sum = pred.eq(1).add(label.eq(1))
    intxn = torch.sum(mask_sum == 2, dim=1).float()
    union = torch.sum(mask_sum > 0, dim=1).float()
    iou = intxn / union
    return torch.mean(iou), (torch.sum(iou > 0.5).float() / iou.shape[0]), (torch.sum(iou > 0.7).float() / iou.shape[0])


if __name__ == "__main__":
    p_m = torch.randn(4, 63*63, 25, 25)
    cls = torch.randn(4, 1, 25, 25) > 0.9
    mask = torch.randn(4, 1, 255, 255) * 2 - 1
    loss = select_mask_logistic_loss(p_m, mask, cls)
    print(loss)
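
One detail worth a closer look is the F.unfold call in select_mask_logistic_loss: it slides a g_sz x g_sz window over the full ground-truth mask, so that every position of the score map gets its own 127x127 ground-truth patch. The standalone sketch below (a toy mask with an assumed 255x255 search-image size, not project data) contrasts the padding used by the two files above:

import torch
import torch.nn.functional as F

mask = torch.randn(1, 1, 255, 255)  # toy 255x255 ground-truth mask

# siammask.py variant: padding=32, stride=8 -> (255 + 2*32 - 127) / 8 + 1 = 25 windows per side
uf = F.unfold(mask, (127, 127), padding=32, stride=8)
print(uf.shape)   # torch.Size([1, 16129, 625]) -> 127*127 values x 25*25 windows

# siammask_sharp.py variant: padding=0 -> (255 - 127) / 8 + 1 = 17 windows per side
uf0 = F.unfold(mask, (127, 127), padding=0, stride=8)
print(uf0.shape)  # torch.Size([1, 16129, 289]) -> 127*127 values x 17*17 windows

Each column of the unfolded tensor is then transposed and flattened so that torch.index_select can pick out exactly the windows at the positive anchor positions, which are compared against the upsampled mask predictions.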
