First, git clone mmpose: GitHub - open-mmlab/mmpose: OpenMMLab Pose Estimation Toolbox and Benchmark.

Then install the dependencies following the instructions on GitHub; I won't repeat them here.

Background

The official COCO keypoints metric is computed over 17 points; COCO-WholeBody extends this to 133 keypoints.

My goal is a comparison with the 25-point model from OpenPose 2019. Nobody seems to have reproduced OpenPose 2019 so far; I am in the middle of reproducing it myself and will publish that later. In the meantime I tried every way I could think of to compute the mAP of the officially released caffemodel, and kept digging through the related papers.

The mAP I finally got was 52.3%, far from the officially reported numbers. Looking more closely, the dataset the paper evaluates on is not the full val2017 split I tested on, but a hand-picked subset of COCO.

I also saw an article that reports a body mAP of 56.3% on COCO-WholeBody.

I also measured OpenPose on COCO-WholeBody myself: the body mAP came out to 50.2.

That accuracy is low, but what I need are the body + foot keypoints.

With that background, the plan is to adapt the model to 23 keypoints (body + foot).

hrnet body+foot

Back to mmpose: HRNet has been trained on COCO-WholeBody, and its config file is provided:

_base_ = [
    '../../../../_base_/default_runtime.py',
    '../../../../_base_/datasets/coco_wholebody.py'
]
evaluation = dict(interval=10, metric='mAP', save_best='AP')

optimizer = dict(
    type='Adam',
    lr=5e-4,
)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
    policy='step',
    warmup=None,
    # warmup='linear',
    # warmup_iters=500,
    # warmup_ratio=0.001,
    step=[170, 200])
total_epochs = 210
channel_cfg = dict(
    num_output_channels=133,
    dataset_joints=133,
    dataset_channel=[
        list(range(133)),
    ],
    inference_channel=list(range(133)))

# model settings
model = dict(
    type='TopDown',
    pretrained='https://download.openmmlab.com/mmpose/'
    'pretrain_models/hrnet_w48-8ef0771d.pth',
    backbone=dict(
        type='HRNet',
        in_channels=3,
        extra=dict(
            stage1=dict(
                num_modules=1,
                num_branches=1,
                block='BOTTLENECK',
                num_blocks=(4, ),
                num_channels=(64, )),
            stage2=dict(
                num_modules=1,
                num_branches=2,
                block='BASIC',
                num_blocks=(4, 4),
                num_channels=(48, 96)),
            stage3=dict(
                num_modules=4,
                num_branches=3,
                block='BASIC',
                num_blocks=(4, 4, 4),
                num_channels=(48, 96, 192)),
            stage4=dict(
                num_modules=3,
                num_branches=4,
                block='BASIC',
                num_blocks=(4, 4, 4, 4),
                num_channels=(48, 96, 192, 384))),
    ),
    keypoint_head=dict(
        type='TopdownHeatmapSimpleHead',
        in_channels=48,
        out_channels=channel_cfg['num_output_channels'],
        num_deconv_layers=0,
        extra=dict(final_conv_kernel=1, ),
        loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
    train_cfg=dict(),
    test_cfg=dict(
        flip_test=True,
        post_process='default',
        shift_heatmap=True,
        modulate_kernel=11))

data_cfg = dict(
    image_size=[288, 384],
    heatmap_size=[72, 96],
    num_output_channels=channel_cfg['num_output_channels'],
    num_joints=channel_cfg['dataset_joints'],
    dataset_channel=channel_cfg['dataset_channel'],
    inference_channel=channel_cfg['inference_channel'],
    soft_nms=False,
    nms_thr=1.0,
    oks_thr=0.9,
    vis_thr=0.2,
    use_gt_bbox=False,
    det_bbox_thr=0.0,
    bbox_file='data/coco/person_detection_results/'
    'COCO_val2017_detections_AP_H_56_person.json',
)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='TopDownGetBboxCenterScale', padding=1.25),
    dict(type='TopDownRandomShiftBboxCenter', shift_factor=0.16, prob=0.3),
    dict(type='TopDownRandomFlip', flip_prob=0.5),
    dict(
        type='TopDownHalfBodyTransform',
        num_joints_half_body=8,
        prob_half_body=0.3),
    dict(type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
    dict(type='TopDownAffine'),
    dict(type='ToTensor'),
    dict(
        type='NormalizeTensor',
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]),
    dict(type='TopDownGenerateTarget', sigma=3),
    dict(
        type='Collect',
        keys=['img', 'target', 'target_weight'],
        meta_keys=[
            'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
            'rotation', 'bbox_score', 'flip_pairs'
        ]),
]

val_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='TopDownGetBboxCenterScale', padding=1.25),
    dict(type='TopDownAffine'),
    dict(type='ToTensor'),
    dict(
        type='NormalizeTensor',
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]),
    dict(
        type='Collect',
        keys=['img'],
        meta_keys=[
            'image_file', 'center', 'scale', 'rotation', 'bbox_score',
            'flip_pairs'
        ]),
]

test_pipeline = val_pipeline

data_root = 'data/coco'
data = dict(
    samples_per_gpu=32,
    workers_per_gpu=2,
    val_dataloader=dict(samples_per_gpu=32),
    test_dataloader=dict(samples_per_gpu=32),
    train=dict(
        type='TopDownCocoWholeBodyDataset',
        ann_file=f'{data_root}/annotations/coco_wholebody_train_v1.0.json',
        img_prefix=f'{data_root}/train2017/',
        data_cfg=data_cfg,
        pipeline=train_pipeline,
        dataset_info={{_base_.dataset_info}}),
    val=dict(
        type='TopDownCocoWholeBodyDataset',
        ann_file=f'{data_root}/annotations/coco_wholebody_val_v1.0.json',
        img_prefix=f'{data_root}/val2017/',
        data_cfg=data_cfg,
        pipeline=val_pipeline,
        dataset_info={{_base_.dataset_info}}),
    test=dict(
        type='TopDownCocoWholeBodyDataset',
        ann_file=f'{data_root}/annotations/coco_wholebody_val_v1.0.json',
        img_prefix=f'{data_root}/val2017/',
        data_cfg=data_cfg,
        pipeline=test_pipeline,
        dataset_info={{_base_.dataset_info}}),
)

This config, however, targets all 133 keypoints. Since they are grouped into body, foot, face and hand parts, the change is straightforward: switch the dataset description to body + foot, switch the HRNet config to body + foot, and change the mAP computation to body + foot as well. Create a new body23 folder under the config directory and copy the config file into it; a sketch of the key changes follows below.
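To make the plan concrete, here is a rough overview sketch of what the copied config mainly needs to change. It is not the final file; the sections below walk through each piece, and the file and class names (coco_body23.py, TopDownCocoBody25Dataset, the body23 folder) are simply the ones used later in this post.

# Sketch of the body23 config changes (overview only).
_base_ = [
    '../../../../_base_/default_runtime.py',
    '../../../../_base_/datasets/coco_body23.py'  # the trimmed dataset description
]

channel_cfg = dict(
    num_output_channels=23,             # 17 body + 6 foot keypoints
    dataset_joints=23,
    dataset_channel=[list(range(23))],
    inference_channel=list(range(23)))

# keypoint_head.out_channels already reads channel_cfg['num_output_channels'],
# so the head shrinks to 23 heatmaps automatically. In the data dict, switch
# train/val/test to the new dataset class registered in part 2 below:
#     type='TopDownCocoBody25Dataset'
# The annotation files stay the same, since COCO-WholeBody already contains
# the foot keypoints.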

1. Check the dataset keypoint configuration items:


The train/val/test pipeline and data sections shown above need no changes. What matters for the dataset is the data-loading part, i.e. the _base_ entry at the very top of the config:

_base_ = [
    '../../../../_base_/default_runtime.py',
    '../../../../_base_/datasets/coco_wholebody.py'
]

Find the coco_wholebody.py file:

dataset_info = dict(dataset_name='coco_wholebody',paper_info=dict(author='Jin, Sheng and Xu, Lumin and Xu, Jin and ''Wang, Can and Liu, Wentao and ''Qian, Chen and Ouyang, Wanli and Luo, Ping',title='Whole-Body Human Pose Estimation in the Wild',container='Proceedings of the European ''Conference on Computer Vision (ECCV)',year='2020',homepage='https://github.com/jin-s13/COCO-WholeBody/',),keypoint_info={0:dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),1:dict(name='left_eye',id=1,color=[51, 153, 255],type='upper',swap='right_eye'),2:dict(name='right_eye',id=2,color=[51, 153, 255],type='upper',swap='left_eye'),3:dict(name='left_ear',id=3,color=[51, 153, 255],type='upper',swap='right_ear'),4:dict(name='right_ear',id=4,color=[51, 153, 255],type='upper',swap='left_ear'),5:dict(name='left_shoulder',id=5,color=[0, 255, 0],type='upper',swap='right_shoulder'),6:dict(name='right_shoulder',id=6,color=[255, 128, 0],type='upper',swap='left_shoulder'),7:dict(name='left_elbow',id=7,color=[0, 255, 0],type='upper',swap='right_elbow'),8:dict(name='right_elbow',id=8,color=[255, 128, 0],type='upper',swap='left_elbow'),9:dict(name='left_wrist',id=9,color=[0, 255, 0],type='upper',swap='right_wrist'),10:dict(name='right_wrist',id=10,color=[255, 128, 0],type='upper',swap='left_wrist'),11:dict(name='left_hip',id=11,color=[0, 255, 0],type='lower',swap='right_hip'),12:dict(name='right_hip',id=12,color=[255, 128, 0],type='lower',swap='left_hip'),13:dict(name='left_knee',id=13,color=[0, 255, 0],type='lower',swap='right_knee'),14:dict(name='right_knee',id=14,color=[255, 128, 0],type='lower',swap='left_knee'),15:dict(name='left_ankle',id=15,color=[0, 255, 0],type='lower',swap='right_ankle'),16:dict(name='right_ankle',id=16,color=[255, 128, 0],type='lower',swap='left_ankle'),17:dict(name='left_big_toe',id=17,color=[255, 128, 0],type='lower',swap='right_big_toe'),18:dict(name='left_small_toe',id=18,color=[255, 128, 0],type='lower',swap='right_small_toe'),19:dict(name='left_heel',id=19,color=[255, 128, 0],type='lower',swap='right_heel'),20:dict(name='right_big_toe',id=20,color=[255, 128, 0],type='lower',swap='left_big_toe'),21:dict(name='right_small_toe',id=21,color=[255, 128, 0],type='lower',swap='left_small_toe'),22:dict(name='right_heel',id=22,color=[255, 128, 0],type='lower',swap='left_heel'),23:dict(name='face-0',id=23,color=[255, 255, 255],type='',swap='face-16'),24:dict(name='face-1',id=24,color=[255, 255, 255],type='',swap='face-15'),25:dict(name='face-2',id=25,color=[255, 255, 255],type='',swap='face-14'),26:dict(name='face-3',id=26,color=[255, 255, 255],type='',swap='face-13'),27:dict(name='face-4',id=27,color=[255, 255, 255],type='',swap='face-12'),28:dict(name='face-5',id=28,color=[255, 255, 255],type='',swap='face-11'),29:dict(name='face-6',id=29,color=[255, 255, 255],type='',swap='face-10'),30:dict(name='face-7',id=30,color=[255, 255, 255],type='',swap='face-9'),31:dict(name='face-8', id=31, color=[255, 255, 255], type='', swap=''),32:dict(name='face-9',id=32,color=[255, 255, 255],type='',swap='face-7'),33:dict(name='face-10',id=33,color=[255, 255, 255],type='',swap='face-6'),34:dict(name='face-11',id=34,color=[255, 255, 255],type='',swap='face-5'),35:dict(name='face-12',id=35,color=[255, 255, 255],type='',swap='face-4'),36:dict(name='face-13',id=36,color=[255, 255, 255],type='',swap='face-3'),37:dict(name='face-14',id=37,color=[255, 255, 255],type='',swap='face-2'),38:dict(name='face-15',id=38,color=[255, 255, 255],type='',swap='face-1'),39:dict(name='face-16',id=39,color=[255, 255, 
255],type='',swap='face-0'),40:dict(name='face-17',id=40,color=[255, 255, 255],type='',swap='face-26'),41:dict(name='face-18',id=41,color=[255, 255, 255],type='',swap='face-25'),42:dict(name='face-19',id=42,color=[255, 255, 255],type='',swap='face-24'),43:dict(name='face-20',id=43,color=[255, 255, 255],type='',swap='face-23'),44:dict(name='face-21',id=44,color=[255, 255, 255],type='',swap='face-22'),45:dict(name='face-22',id=45,color=[255, 255, 255],type='',swap='face-21'),46:dict(name='face-23',id=46,color=[255, 255, 255],type='',swap='face-20'),47:dict(name='face-24',id=47,color=[255, 255, 255],type='',swap='face-19'),48:dict(name='face-25',id=48,color=[255, 255, 255],type='',swap='face-18'),49:dict(name='face-26',id=49,color=[255, 255, 255],type='',swap='face-17'),50:dict(name='face-27', id=50, color=[255, 255, 255], type='', swap=''),51:dict(name='face-28', id=51, color=[255, 255, 255], type='', swap=''),52:dict(name='face-29', id=52, color=[255, 255, 255], type='', swap=''),53:dict(name='face-30', id=53, color=[255, 255, 255], type='', swap=''),54:dict(name='face-31',id=54,color=[255, 255, 255],type='',swap='face-35'),55:dict(name='face-32',id=55,color=[255, 255, 255],type='',swap='face-34'),56:dict(name='face-33', id=56, color=[255, 255, 255], type='', swap=''),57:dict(name='face-34',id=57,color=[255, 255, 255],type='',swap='face-32'),58:dict(name='face-35',id=58,color=[255, 255, 255],type='',swap='face-31'),59:dict(name='face-36',id=59,color=[255, 255, 255],type='',swap='face-45'),60:dict(name='face-37',id=60,color=[255, 255, 255],type='',swap='face-44'),61:dict(name='face-38',id=61,color=[255, 255, 255],type='',swap='face-43'),62:dict(name='face-39',id=62,color=[255, 255, 255],type='',swap='face-42'),63:dict(name='face-40',id=63,color=[255, 255, 255],type='',swap='face-47'),64:dict(name='face-41',id=64,color=[255, 255, 255],type='',swap='face-46'),65:dict(name='face-42',id=65,color=[255, 255, 255],type='',swap='face-39'),66:dict(name='face-43',id=66,color=[255, 255, 255],type='',swap='face-38'),67:dict(name='face-44',id=67,color=[255, 255, 255],type='',swap='face-37'),68:dict(name='face-45',id=68,color=[255, 255, 255],type='',swap='face-36'),69:dict(name='face-46',id=69,color=[255, 255, 255],type='',swap='face-41'),70:dict(name='face-47',id=70,color=[255, 255, 255],type='',swap='face-40'),71:dict(name='face-48',id=71,color=[255, 255, 255],type='',swap='face-54'),72:dict(name='face-49',id=72,color=[255, 255, 255],type='',swap='face-53'),73:dict(name='face-50',id=73,color=[255, 255, 255],type='',swap='face-52'),74:dict(name='face-51', id=74, color=[255, 255, 255], type='', swap=''),75:dict(name='face-52',id=75,color=[255, 255, 255],type='',swap='face-50'),76:dict(name='face-53',id=76,color=[255, 255, 255],type='',swap='face-49'),77:dict(name='face-54',id=77,color=[255, 255, 255],type='',swap='face-48'),78:dict(name='face-55',id=78,color=[255, 255, 255],type='',swap='face-59'),79:dict(name='face-56',id=79,color=[255, 255, 255],type='',swap='face-58'),80:dict(name='face-57', id=80, color=[255, 255, 255], type='', swap=''),81:dict(name='face-58',id=81,color=[255, 255, 255],type='',swap='face-56'),82:dict(name='face-59',id=82,color=[255, 255, 255],type='',swap='face-55'),83:dict(name='face-60',id=83,color=[255, 255, 255],type='',swap='face-64'),84:dict(name='face-61',id=84,color=[255, 255, 255],type='',swap='face-63'),85:dict(name='face-62', id=85, color=[255, 255, 255], type='', swap=''),86:dict(name='face-63',id=86,color=[255, 255, 
255],type='',swap='face-61'),87:dict(name='face-64',id=87,color=[255, 255, 255],type='',swap='face-60'),88:dict(name='face-65',id=88,color=[255, 255, 255],type='',swap='face-67'),89:dict(name='face-66', id=89, color=[255, 255, 255], type='', swap=''),90:dict(name='face-67',id=90,color=[255, 255, 255],type='',swap='face-65'),91:dict(name='left_hand_root',id=91,color=[255, 255, 255],type='',swap='right_hand_root'),92:dict(name='left_thumb1',id=92,color=[255, 128, 0],type='',swap='right_thumb1'),93:dict(name='left_thumb2',id=93,color=[255, 128, 0],type='',swap='right_thumb2'),94:dict(name='left_thumb3',id=94,color=[255, 128, 0],type='',swap='right_thumb3'),95:dict(name='left_thumb4',id=95,color=[255, 128, 0],type='',swap='right_thumb4'),96:dict(name='left_forefinger1',id=96,color=[255, 153, 255],type='',swap='right_forefinger1'),97:dict(name='left_forefinger2',id=97,color=[255, 153, 255],type='',swap='right_forefinger2'),98:dict(name='left_forefinger3',id=98,color=[255, 153, 255],type='',swap='right_forefinger3'),99:dict(name='left_forefinger4',id=99,color=[255, 153, 255],type='',swap='right_forefinger4'),100:dict(name='left_middle_finger1',id=100,color=[102, 178, 255],type='',swap='right_middle_finger1'),101:dict(name='left_middle_finger2',id=101,color=[102, 178, 255],type='',swap='right_middle_finger2'),102:dict(name='left_middle_finger3',id=102,color=[102, 178, 255],type='',swap='right_middle_finger3'),103:dict(name='left_middle_finger4',id=103,color=[102, 178, 255],type='',swap='right_middle_finger4'),104:dict(name='left_ring_finger1',id=104,color=[255, 51, 51],type='',swap='right_ring_finger1'),105:dict(name='left_ring_finger2',id=105,color=[255, 51, 51],type='',swap='right_ring_finger2'),106:dict(name='left_ring_finger3',id=106,color=[255, 51, 51],type='',swap='right_ring_finger3'),107:dict(name='left_ring_finger4',id=107,color=[255, 51, 51],type='',swap='right_ring_finger4'),108:dict(name='left_pinky_finger1',id=108,color=[0, 255, 0],type='',swap='right_pinky_finger1'),109:dict(name='left_pinky_finger2',id=109,color=[0, 255, 0],type='',swap='right_pinky_finger2'),110:dict(name='left_pinky_finger3',id=110,color=[0, 255, 0],type='',swap='right_pinky_finger3'),111:dict(name='left_pinky_finger4',id=111,color=[0, 255, 0],type='',swap='right_pinky_finger4'),112:dict(name='right_hand_root',id=112,color=[255, 255, 255],type='',swap='left_hand_root'),113:dict(name='right_thumb1',id=113,color=[255, 128, 0],type='',swap='left_thumb1'),114:dict(name='right_thumb2',id=114,color=[255, 128, 0],type='',swap='left_thumb2'),115:dict(name='right_thumb3',id=115,color=[255, 128, 0],type='',swap='left_thumb3'),116:dict(name='right_thumb4',id=116,color=[255, 128, 0],type='',swap='left_thumb4'),117:dict(name='right_forefinger1',id=117,color=[255, 153, 255],type='',swap='left_forefinger1'),118:dict(name='right_forefinger2',id=118,color=[255, 153, 255],type='',swap='left_forefinger2'),119:dict(name='right_forefinger3',id=119,color=[255, 153, 255],type='',swap='left_forefinger3'),120:dict(name='right_forefinger4',id=120,color=[255, 153, 255],type='',swap='left_forefinger4'),121:dict(name='right_middle_finger1',id=121,color=[102, 178, 255],type='',swap='left_middle_finger1'),122:dict(name='right_middle_finger2',id=122,color=[102, 178, 255],type='',swap='left_middle_finger2'),123:dict(name='right_middle_finger3',id=123,color=[102, 178, 255],type='',swap='left_middle_finger3'),124:dict(name='right_middle_finger4',id=124,color=[102, 178, 
255],type='',swap='left_middle_finger4'),125:dict(name='right_ring_finger1',id=125,color=[255, 51, 51],type='',swap='left_ring_finger1'),126:dict(name='right_ring_finger2',id=126,color=[255, 51, 51],type='',swap='left_ring_finger2'),127:dict(name='right_ring_finger3',id=127,color=[255, 51, 51],type='',swap='left_ring_finger3'),128:dict(name='right_ring_finger4',id=128,color=[255, 51, 51],type='',swap='left_ring_finger4'),129:dict(name='right_pinky_finger1',id=129,color=[0, 255, 0],type='',swap='left_pinky_finger1'),130:dict(name='right_pinky_finger2',id=130,color=[0, 255, 0],type='',swap='left_pinky_finger2'),131:dict(name='right_pinky_finger3',id=131,color=[0, 255, 0],type='',swap='left_pinky_finger3'),132:dict(name='right_pinky_finger4',id=132,color=[0, 255, 0],type='',swap='left_pinky_finger4')},skeleton_info={0:dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),1:dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),2:dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),3:dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),4:dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),5:dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),6:dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),7:dict(link=('left_shoulder', 'right_shoulder'),id=7,color=[51, 153, 255]),8:dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),9:dict(link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),10:dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),11:dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),12:dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),13:dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),14:dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),15:dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),16:dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),17:dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),18:dict(link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255]),19:dict(link=('left_ankle', 'left_big_toe'), id=19, color=[0, 255, 0]),20:dict(link=('left_ankle', 'left_small_toe'), id=20, color=[0, 255, 0]),21:dict(link=('left_ankle', 'left_heel'), id=21, color=[0, 255, 0]),22:dict(link=('right_ankle', 'right_big_toe'), id=22, color=[255, 128, 0]),23:dict(link=('right_ankle', 'right_small_toe'),id=23,color=[255, 128, 0]),24:dict(link=('right_ankle', 'right_heel'), id=24, color=[255, 128, 0]),25:dict(link=('left_hand_root', 'left_thumb1'), id=25, color=[255, 128,0]),26:dict(link=('left_thumb1', 'left_thumb2'), id=26, color=[255, 128, 0]),27:dict(link=('left_thumb2', 'left_thumb3'), id=27, color=[255, 128, 0]),28:dict(link=('left_thumb3', 'left_thumb4'), id=28, color=[255, 128, 0]),29:dict(link=('left_hand_root', 'left_forefinger1'),id=29,color=[255, 153, 255]),30:dict(link=('left_forefinger1', 'left_forefinger2'),id=30,color=[255, 153, 255]),31:dict(link=('left_forefinger2', 'left_forefinger3'),id=31,color=[255, 153, 255]),32:dict(link=('left_forefinger3', 'left_forefinger4'),id=32,color=[255, 153, 255]),33:dict(link=('left_hand_root', 'left_middle_finger1'),id=33,color=[102, 178, 255]),34:dict(link=('left_middle_finger1', 'left_middle_finger2'),id=34,color=[102, 178, 255]),35:dict(link=('left_middle_finger2', 'left_middle_finger3'),id=35,color=[102, 178, 255]),36:dict(link=('left_middle_finger3', 'left_middle_finger4'),id=36,color=[102, 178, 
255]),37:dict(link=('left_hand_root', 'left_ring_finger1'),id=37,color=[255, 51, 51]),38:dict(link=('left_ring_finger1', 'left_ring_finger2'),id=38,color=[255, 51, 51]),39:dict(link=('left_ring_finger2', 'left_ring_finger3'),id=39,color=[255, 51, 51]),40:dict(link=('left_ring_finger3', 'left_ring_finger4'),id=40,color=[255, 51, 51]),41:dict(link=('left_hand_root', 'left_pinky_finger1'),id=41,color=[0, 255, 0]),42:dict(link=('left_pinky_finger1', 'left_pinky_finger2'),id=42,color=[0, 255, 0]),43:dict(link=('left_pinky_finger2', 'left_pinky_finger3'),id=43,color=[0, 255, 0]),44:dict(link=('left_pinky_finger3', 'left_pinky_finger4'),id=44,color=[0, 255, 0]),45:dict(link=('right_hand_root', 'right_thumb1'),id=45,color=[255, 128, 0]),46:dict(link=('right_thumb1', 'right_thumb2'), id=46, color=[255, 128, 0]),47:dict(link=('right_thumb2', 'right_thumb3'), id=47, color=[255, 128, 0]),48:dict(link=('right_thumb3', 'right_thumb4'), id=48, color=[255, 128, 0]),49:dict(link=('right_hand_root', 'right_forefinger1'),id=49,color=[255, 153, 255]),50:dict(link=('right_forefinger1', 'right_forefinger2'),id=50,color=[255, 153, 255]),51:dict(link=('right_forefinger2', 'right_forefinger3'),id=51,color=[255, 153, 255]),52:dict(link=('right_forefinger3', 'right_forefinger4'),id=52,color=[255, 153, 255]),53:dict(link=('right_hand_root', 'right_middle_finger1'),id=53,color=[102, 178, 255]),54:dict(link=('right_middle_finger1', 'right_middle_finger2'),id=54,color=[102, 178, 255]),55:dict(link=('right_middle_finger2', 'right_middle_finger3'),id=55,color=[102, 178, 255]),56:dict(link=('right_middle_finger3', 'right_middle_finger4'),id=56,color=[102, 178, 255]),57:dict(link=('right_hand_root', 'right_ring_finger1'),id=57,color=[255, 51, 51]),58:dict(link=('right_ring_finger1', 'right_ring_finger2'),id=58,color=[255, 51, 51]),59:dict(link=('right_ring_finger2', 'right_ring_finger3'),id=59,color=[255, 51, 51]),60:dict(link=('right_ring_finger3', 'right_ring_finger4'),id=60,color=[255, 51, 51]),61:dict(link=('right_hand_root', 'right_pinky_finger1'),id=61,color=[0, 255, 0]),62:dict(link=('right_pinky_finger1', 'right_pinky_finger2'),id=62,color=[0, 255, 0]),63:dict(link=('right_pinky_finger2', 'right_pinky_finger3'),id=63,color=[0, 255, 0]),64:dict(link=('right_pinky_finger3', 'right_pinky_finger4'),id=64,color=[0, 255, 0])},joint_weights=[1.] * 133,# 'https://github.com/jin-s13/COCO-WholeBody/blob/master/'# 'evaluation/myeval_wholebody.py#L175'sigmas=[0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089, 0.068, 0.066, 0.066,0.092, 0.094, 0.094, 0.042, 0.043, 0.044, 0.043, 0.040, 0.035, 0.031,0.025, 0.020, 0.023, 0.029, 0.032, 0.037, 0.038, 0.043, 0.041, 0.045,0.013, 0.012, 0.011, 0.011, 0.012, 0.012, 0.011, 0.011, 0.013, 0.015,0.009, 0.007, 0.007, 0.007, 0.012, 0.009, 0.008, 0.016, 0.010, 0.017,0.011, 0.009, 0.011, 0.009, 0.007, 0.013, 0.008, 0.011, 0.012, 0.010,0.034, 0.008, 0.008, 0.009, 0.008, 0.008, 0.007, 0.010, 0.008, 0.009,0.009, 0.009, 0.007, 0.007, 0.008, 0.011, 0.008, 0.008, 0.008, 0.01,0.008, 0.029, 0.022, 0.035, 0.037, 0.047, 0.026, 0.025, 0.024, 0.035,0.018, 0.024, 0.022, 0.026, 0.017, 0.021, 0.021, 0.032, 0.02, 0.019,0.022, 0.031, 0.029, 0.022, 0.035, 0.037, 0.047, 0.026, 0.025, 0.024,0.035, 0.018, 0.024, 0.022, 0.026, 0.017, 0.021, 0.021, 0.032, 0.02,0.019, 0.022, 0.031])

Copy it and name the copy coco_body23.py.

Now edit it: I only need body + foot, so everything else is dropped. Before showing the modified file, one reminder:

The sigma value is the per-keypoint annotation standard deviation of the dataset; for COCO it was derived from 5000 redundant annotations of the same instances. The larger the value, the less consistently that keypoint is annotated across the dataset; the smaller the value, the more consistent the annotations are.
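To make the role of these sigmas concrete, here is a small illustration of how they enter the OKS (object keypoint similarity) that the mAP is built on. This is only a sketch of the metric's definition, not mmpose's or xtcocotools' actual implementation:

import numpy as np

def oks(gt_kpts, dt_kpts, visibility, area, sigmas):
    """Object keypoint similarity between one GT instance and one detection.

    gt_kpts, dt_kpts: (K, 2) arrays of keypoint coordinates.
    visibility:       (K,) GT visibility flags (> 0 means labelled).
    area:             object area used for scale normalisation.
    sigmas:           (K,) per-keypoint standard deviations, e.g. the 23
                      values kept in coco_body23.py below.
    """
    sigmas = np.asarray(sigmas, dtype=float)
    d2 = np.sum((np.asarray(gt_kpts) - np.asarray(dt_kpts)) ** 2, axis=1)
    k2 = (2 * sigmas) ** 2                          # per-keypoint tolerance
    e = d2 / (2 * (area + np.spacing(1)) * k2)      # normalised squared error
    labelled = np.asarray(visibility) > 0
    return float(np.exp(-e)[labelled].mean()) if labelled.any() else 0.0

A larger sigma therefore makes the metric more forgiving for that keypoint. With that in mind, the trimmed-down coco_body23.py looks like this: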

dataset_info = dict(
    dataset_name='coco_body25',
    paper_info=dict(
        author='Jin, Sheng and Xu, Lumin and Xu, Jin and '
        'Wang, Can and Liu, Wentao and '
        'Qian, Chen and Ouyang, Wanli and Luo, Ping',
        title='Whole-Body Human Pose Estimation in the Wild',
        container='Proceedings of the European '
        'Conference on Computer Vision (ECCV)',
        year='2020',
        homepage='https://github.com/jin-s13/COCO-WholeBody/',
    ),
    keypoint_info={
        0: dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
        1: dict(name='left_eye', id=1, color=[51, 153, 255], type='upper', swap='right_eye'),
        2: dict(name='right_eye', id=2, color=[51, 153, 255], type='upper', swap='left_eye'),
        3: dict(name='left_ear', id=3, color=[51, 153, 255], type='upper', swap='right_ear'),
        4: dict(name='right_ear', id=4, color=[51, 153, 255], type='upper', swap='left_ear'),
        5: dict(name='left_shoulder', id=5, color=[0, 255, 0], type='upper', swap='right_shoulder'),
        6: dict(name='right_shoulder', id=6, color=[255, 128, 0], type='upper', swap='left_shoulder'),
        7: dict(name='left_elbow', id=7, color=[0, 255, 0], type='upper', swap='right_elbow'),
        8: dict(name='right_elbow', id=8, color=[255, 128, 0], type='upper', swap='left_elbow'),
        9: dict(name='left_wrist', id=9, color=[0, 255, 0], type='upper', swap='right_wrist'),
        10: dict(name='right_wrist', id=10, color=[255, 128, 0], type='upper', swap='left_wrist'),
        11: dict(name='left_hip', id=11, color=[0, 255, 0], type='lower', swap='right_hip'),
        12: dict(name='right_hip', id=12, color=[255, 128, 0], type='lower', swap='left_hip'),
        13: dict(name='left_knee', id=13, color=[0, 255, 0], type='lower', swap='right_knee'),
        14: dict(name='right_knee', id=14, color=[255, 128, 0], type='lower', swap='left_knee'),
        15: dict(name='left_ankle', id=15, color=[0, 255, 0], type='lower', swap='right_ankle'),
        16: dict(name='right_ankle', id=16, color=[255, 128, 0], type='lower', swap='left_ankle'),
        17: dict(name='left_big_toe', id=17, color=[255, 128, 0], type='lower', swap='right_big_toe'),
        18: dict(name='left_small_toe', id=18, color=[255, 128, 0], type='lower', swap='right_small_toe'),
        19: dict(name='left_heel', id=19, color=[255, 128, 0], type='lower', swap='right_heel'),
        20: dict(name='right_big_toe', id=20, color=[255, 128, 0], type='lower', swap='left_big_toe'),
        21: dict(name='right_small_toe', id=21, color=[255, 128, 0], type='lower', swap='left_small_toe'),
        22: dict(name='right_heel', id=22, color=[255, 128, 0], type='lower', swap='left_heel')
    },
    skeleton_info={
        0: dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
        1: dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
        2: dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),
        3: dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),
        4: dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),
        5: dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),
        6: dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),
        7: dict(link=('left_shoulder', 'right_shoulder'), id=7, color=[51, 153, 255]),
        8: dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),
        9: dict(link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),
        10: dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),
        11: dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),
        12: dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),
        13: dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
        14: dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
        15: dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),
        16: dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),
        17: dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),
        18: dict(link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255]),
        19: dict(link=('left_ankle', 'left_big_toe'), id=19, color=[0, 255, 0]),
        20: dict(link=('left_ankle', 'left_small_toe'), id=20, color=[0, 255, 0]),
        21: dict(link=('left_ankle', 'left_heel'), id=21, color=[0, 255, 0]),
        22: dict(link=('right_ankle', 'right_big_toe'), id=22, color=[255, 128, 0]),
        23: dict(link=('right_ankle', 'right_small_toe'), id=23, color=[255, 128, 0]),
        24: dict(link=('right_ankle', 'right_heel'), id=24, color=[255, 128, 0]),
    },
    joint_weights=[1.] * 23,
    # 'https://github.com/jin-s13/COCO-WholeBody/blob/master/'
    # 'evaluation/myeval_wholebody.py#L175'
    sigmas=[
        0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
        0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089, 0.068, 0.066, 0.066,
        0.092, 0.094, 0.094
    ])
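As an optional sanity check (just a sketch; the path is whatever you saved the file as), the new dataset description can be loaded with mmcv's Config, the same loader the dataset class below uses, to confirm the counts are consistent:

from mmcv import Config

cfg = Config.fromfile('configs/_base_/datasets/coco_body23.py')  # assumed path
info = cfg.dataset_info
assert len(info['keypoint_info']) == 23   # 17 body + 6 foot
assert len(info['sigmas']) == 23
assert len(info['joint_weights']) == 23
print(len(info['skeleton_info']), 'skeleton links')  # expect 25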

2. Dataset mAP

Because mmpose uses a registry mechanism, I suggest not running setup.py first; make the changes and then run setup.py. I did not touch the copy under build/; instead, work directly in the mmpose folder at the repository root. Under mmpose/datasets/datasets/top_down, create a new dataset file, here named top-down-cocobody23-datasets.py (note that the module name must use underscores, e.g. topdown_cocobody23_dataset.py, so that it can be imported later):

# Copyright (c) OpenMMLab. All rights reserved.
import os
import warnings

import numpy as np
from mmcv import Config
from xtcocotools.cocoeval import COCOeval

from ...builder import DATASETS
from .topdown_coco_dataset import TopDownCocoDataset


@DATASETS.register_module()
class TopDownCocoBody25Dataset(TopDownCocoDataset):
    """CocoWholeBodyDataset dataset for top-down pose estimation.

    "Whole-Body Human Pose Estimation in the Wild", ECCV'2020.
    More details can be found in the `paper
    <https://arxiv.org/abs/2007.11858>`__ .

    The dataset loads raw features and apply specified transforms
    to return a dict containing the image tensors and other information.

    COCO-WholeBody keypoint indexes::

        0-16: 17 body keypoints,
        17-22: 6 foot keypoints,

    In total, we have 23 keypoints for body25 pose estimation.

    Args:
        ann_file (str): Path to the annotation file.
        img_prefix (str): Path to a directory where images are held.
            Default: None.
        data_cfg (dict): config
        pipeline (list[dict | callable]): A sequence of data transforms.
        dataset_info (DatasetInfo): A class containing all dataset info.
        test_mode (bool): Store True when building test or
            validation dataset. Default: False.
    """

    def __init__(self,
                 ann_file,
                 img_prefix,
                 data_cfg,
                 pipeline,
                 dataset_info=None,
                 test_mode=False):
        if dataset_info is None:
            warnings.warn(
                'dataset_info is missing. '
                'Check https://github.com/open-mmlab/mmpose/pull/663 '
                'for details.', DeprecationWarning)
            cfg = Config.fromfile('configs/_base_/datasets/coco_body25.py')
            dataset_info = cfg._cfg_dict['dataset_info']

        super(TopDownCocoDataset, self).__init__(
            ann_file,
            img_prefix,
            data_cfg,
            pipeline,
            dataset_info=dataset_info,
            test_mode=test_mode)

        self.use_gt_bbox = data_cfg['use_gt_bbox']
        self.bbox_file = data_cfg['bbox_file']
        self.det_bbox_thr = data_cfg.get('det_bbox_thr', 0.0)
        self.use_nms = data_cfg.get('use_nms', True)
        self.soft_nms = data_cfg['soft_nms']
        self.nms_thr = data_cfg['nms_thr']
        self.oks_thr = data_cfg['oks_thr']
        self.vis_thr = data_cfg['vis_thr']

        self.body_num = 17
        self.foot_num = 6

        self.db = self._get_db()

        print(f'=> num_images: {self.num_images}')
        print(f'=> load {len(self.db)} samples')

    def _load_coco_keypoint_annotation_kernel(self, img_id):
        """load annotation from COCOAPI.

        Note:
            bbox:[x1, y1, w, h]

        Args:
            img_id: coco image id

        Returns:
            dict: db entry
        """
        img_ann = self.coco.loadImgs(img_id)[0]
        width = img_ann['width']
        height = img_ann['height']
        num_joints = self.ann_info['num_joints']

        ann_ids = self.coco.getAnnIds(imgIds=img_id, iscrowd=False)
        objs = self.coco.loadAnns(ann_ids)

        # sanitize bboxes
        valid_objs = []
        for obj in objs:
            if 'bbox' not in obj:
                continue
            x, y, w, h = obj['bbox']
            x1 = max(0, x)
            y1 = max(0, y)
            x2 = min(width - 1, x1 + max(0, w))
            y2 = min(height - 1, y1 + max(0, h))
            if ('area' not in obj or obj['area'] > 0) and x2 > x1 and y2 > y1:
                obj['clean_bbox'] = [x1, y1, x2 - x1, y2 - y1]
                valid_objs.append(obj)
        objs = valid_objs

        rec = []
        bbox_id = 0
        for obj in objs:
            if 'keypoints' not in obj:
                continue
            if max(obj['keypoints']) == 0:
                continue
            joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
            joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)

            keypoints = np.array(obj['keypoints'] + obj['foot_kpts']).reshape(-1, 3)
            joints_3d[:, :2] = keypoints[:, :2]
            joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3] > 0)

            image_file = os.path.join(self.img_prefix, self.id2name[img_id])
            rec.append({
                'image_file': image_file,
                'bbox': obj['clean_bbox'][:4],
                'rotation': 0,
                'joints_3d': joints_3d,
                'joints_3d_visible': joints_3d_visible,
                'dataset': self.dataset_name,
                'bbox_score': 1,
                'bbox_id': bbox_id
            })
            bbox_id = bbox_id + 1

        return rec

    def _coco_keypoint_results_one_category_kernel(self, data_pack):
        """Get coco keypoint results."""
        cat_id = data_pack['cat_id']
        keypoints = data_pack['keypoints']
        cat_results = []

        for img_kpts in keypoints:
            if len(img_kpts) == 0:
                continue

            _key_points = np.array(
                [img_kpt['keypoints'] for img_kpt in img_kpts])
            key_points = _key_points.reshape(-1,
                                             self.ann_info['num_joints'] * 3)

            cuts = np.cumsum([0, self.body_num, self.foot_num]) * 3

            result = [{
                'image_id': img_kpt['image_id'],
                'category_id': cat_id,
                'keypoints': key_point[cuts[0]:cuts[1]].tolist(),
                'foot_kpts': key_point[cuts[1]:cuts[2]].tolist(),
                'score': float(img_kpt['score']),
                'center': img_kpt['center'].tolist(),
                'scale': img_kpt['scale'].tolist()
            } for img_kpt, key_point in zip(img_kpts, key_points)]

            cat_results.extend(result)

        return cat_results

    def _do_python_keypoint_eval(self, res_file):
        """Keypoint evaluation using COCOAPI."""
        coco_det = self.coco.loadRes(res_file)
        cuts = np.cumsum([0, self.body_num, self.foot_num])

        coco_eval = COCOeval(
            self.coco,
            coco_det,
            'keypoints_body',
            self.sigmas[cuts[0]:cuts[1]],
            use_area=True)
        coco_eval.params.useSegm = None
        coco_eval.evaluate()
        coco_eval.accumulate()
        coco_eval.summarize()

        coco_eval = COCOeval(
            self.coco,
            coco_det,
            'keypoints_foot',
            self.sigmas[cuts[1]:cuts[2]],
            use_area=True)
        coco_eval.params.useSegm = None
        coco_eval.evaluate()
        coco_eval.accumulate()
        coco_eval.summarize()

        stats_names = [
            'AP', 'AP .5', 'AP .75', 'AP (M)', 'AP (L)', 'AR', 'AR .5',
            'AR .75', 'AR (M)', 'AR (L)'
        ]

        info_str = list(zip(stats_names, coco_eval.stats))

        return info_str

Then import this module in the __init__.py at each package level; a sketch of the edit is shown below.
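A sketch of what that __init__.py edit might look like, assuming the new module was saved with an importable underscore name such as topdown_cocobody23_dataset.py under mmpose/datasets/datasets/top_down/ (only a few of the existing entries are shown; the real file lists many more datasets):

# mmpose/datasets/datasets/top_down/__init__.py (abridged sketch)
from .topdown_coco_dataset import TopDownCocoDataset
from .topdown_coco_wholebody_dataset import TopDownCocoWholeBodyDataset
from .topdown_cocobody23_dataset import TopDownCocoBody25Dataset  # new

__all__ = [
    'TopDownCocoDataset',
    'TopDownCocoWholeBodyDataset',
    'TopDownCocoBody25Dataset',  # new
    # ... keep the remaining dataset classes unchanged
]

Depending on the mmpose version, the parent mmpose/datasets/datasets/__init__.py may also enumerate dataset classes; if it does, add the new class there as well.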

Then train with the training script that mmpose provides (tools/train.py).

I trained for 120 epochs.

The body results are roughly on par with the paper's.

I also aggregated a single mAP over all 23 body + foot keypoints, rather than reporting a separate body mAP plus foot mAP.
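One possible way to set up such a combined 23-point evaluation (a sketch only, not necessarily how it was done here): if both the ground-truth annotations and the result file store all 23 values, body 17 plus foot 6 concatenated, under 'keypoints', then a single COCOeval call with the 23 sigmas from coco_body23.py yields one aggregated AP/AR:

from xtcocotools.coco import COCO
from xtcocotools.cocoeval import COCOeval

def eval_body_foot_combined(gt_file, res_file, sigmas):
    """Single evaluation over all 23 keypoints (body + foot together).

    Assumes 'keypoints' in both GT and results holds 23 * 3 values,
    i.e. the foot keypoints are already concatenated to the body ones.
    """
    coco_gt = COCO(gt_file)
    coco_det = coco_gt.loadRes(res_file)
    coco_eval = COCOeval(coco_gt, coco_det, 'keypoints', sigmas, use_area=True)
    coco_eval.params.useSegm = None
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()
    return coco_eval.stats  # AP, AP .5, AP .75, AP (M), AP (L), AR, ...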

Next post, mmpose (2): reproducing a Zhihu expert's ShuffleNetV2 + DeepPose approach in mmpose.
