Deep Learning AI Beauty Series --- AI Skin-Smoothing Algorithm, Part 1

Reposted from: https://blog.csdn.net/trent1985/article/details/80661230

First, a note: why is this section titled "AI Skin-Smoothing Algorithm, Part 1" rather than simply "AI Skin-Smoothing Algorithm"?

There is as yet no standard definition of an AI skin-smoothing algorithm; the major companies are all still exploring. The split here simply reflects my own implementations: the algorithm in this article differs substantially from the one in the follow-up, "AI Skin-Smoothing Algorithm, Part 2", hence the two parts.

Let us first look at the general pipeline of a skin-smoothing algorithm:

This flowchart describes a typical traditional skin-smoothing algorithm; in this article we start from this pipeline and improve it with deep learning.

The pipeline has two main modules: the filtering module and the skin-region detection module.

The filtering module uses one of three approaches:

1. Edge-preserving filtering

This approach flattens the image with filters that are able to preserve edges, smoothing the skin while keeping facial contours intact.

Filters of this kind include:

① Bilateral filter

② Guided filter

③ Surface blur

④ Local mean filter

⑤ Weighted least squares (WLS) filter

⑥ Smart blur, and so on; see my other blog posts for details.

This approach leaves the skin region very smooth and almost detail-free, so detail must be added back afterwards to restore a natural texture.
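As a concrete illustration of the edge-preserving family, here is a minimal single-channel guided filter in NumPy, used self-guided (the image is its own guide). The radius `r` and regularization `eps` below are illustrative values, not settings from this article:

```python
import numpy as np

def box_mean(img, r):
    # Mean over a (2r+1) x (2r+1) window, computed with an integral image.
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    c = np.pad(np.cumsum(np.cumsum(pad, axis=0), axis=1), ((1, 0), (1, 0)))
    k = 2 * r + 1
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / (k * k)

def guided_filter(I, p, r=8, eps=0.02):
    # Guided filter: the output is locally a linear function a*I + b of the guide I.
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I * mean_I
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # flat region -> a ~ 0 (smooth away texture)
    b = mean_p - a * mean_I      # strong edge -> a ~ 1 (edge survives)
    return box_mean(a, r) * I + box_mean(b, r)
```

Smoothing an image with itself as the guide (`guided_filter(img, img)`) flattens low-contrast skin texture while strong edges such as facial contours survive.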

2. High-pass attenuation

The high-pass ("high contrast retain") approach uses a high-pass residual to build a MASK of skin detail; wherever the MASK marks detail, such as blemish spots, the corresponding pixels of the original are color-dodged (lightened), weakening the spots and beautifying the skin.

Because it lightens blemishes and spots while leaving the surrounding texture in place, this method makes the skin look smooth yet natural.
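A minimal sketch of this idea in NumPy, for a grayscale image with values in [0, 1]. The blur radius, mask normalization, and `strength` are illustrative choices, not this article's exact recipe:

```python
import numpy as np

def box_blur(img, r):
    # Simple box blur standing in for a Gaussian low-pass.
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def attenuate_spots(img, r=4, strength=0.6):
    base = box_blur(img, r)
    detail = img - base                               # high-pass residual
    mask = np.abs(detail) / (np.abs(detail).max() + 1e-6)
    # Dark blemishes have a negative residual; lighten them toward the base.
    lift = strength * mask * np.maximum(-detail, 0.0)
    return np.clip(img + lift, 0.0, 1.0)
```

Flat skin areas have a near-zero residual and are left untouched; only pixels that stand out dark against their neighborhood are lightened, which is why the surrounding texture is preserved.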

3. Other algorithms

This covers further, as yet unpublished methods, as well as known hybrids. One example combines edge-preserving filtering with the high-pass approach: apply steps 1 and 2 to the original to obtain a smoothed image and a high-pass detail MASK, then use the MASK as an alpha channel to blend the original with the smoothed image. The result smooths the skin and removes spots while preserving texture.
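The blend described above is ordinary alpha compositing; a sketch, where `mask` is the high-pass detail MASK scaled to [0, 1]:

```python
import numpy as np

def alpha_blend(orig, smoothed, mask):
    # mask = 1 keeps the original texture; mask = 0 takes the smoothed image.
    mask = np.asarray(mask, dtype=float)
    if mask.ndim == 2 and orig.ndim == 3:
        mask = mask[..., None]            # broadcast over color channels
    return mask * orig + (1.0 - mask) * smoothed
```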

Skin-region detection module

The most common skin detection method today is color statistics in some color space.

Such methods have a high false-positive rate: skin-like colors are easily classified as skin, so non-skin regions get flattened by the filter, that is, areas that should not be smoothed end up blurred.
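A typical example of such a statistical detector is fixed thresholds in the YCrCb color space. The sketch below uses the commonly cited Cr ∈ (133, 173), Cb ∈ (77, 127) ranges; these thresholds are a textbook choice, not this article's, and they indeed fire on many skin-like colors (hair, wood, sand), which is exactly the false-positive problem described above:

```python
import numpy as np

def skin_mask_ycrcb(rgb):
    # rgb: H x W x 3 array with channels in [0, 255].
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    # RGB -> CrCb conversion (ITU-R BT.601 coefficients).
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b
    cb = 128 - 0.1687 * r - 0.3313 * g + 0.5 * b
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
```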

Now for the key point: we use deep learning inside the traditional pipeline to improve the smoothing quality, specifically by using a deep network for skin-region segmentation. The more precise skin mask it yields lets the final smoothing result surpass the traditional algorithm.

Below we introduce deep-learning-based skin segmentation.

There are many segmentation approaches (CNN-, FCN-, U-Net-, DenseNet-based, and so on); here we use U-Net:

U-Net for image segmentation is described in the paper "U-Net: Convolutional Networks for Biomedical Image Segmentation".

Its original network architecture is shown below:

It is a fully convolutional network: input and output are both images, and there are no fully connected layers. The shallow, high-resolution layers solve the pixel-localization problem, while the deeper layers solve the pixel-classification problem.

The left half performs convolutions and downsampling while keeping each stage's result; during upsampling, the right half fuses each upsampled result with the corresponding result from the left, which improves segmentation quality.
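The upsample-and-concatenate step can be illustrated with shapes alone; here `UpSampling2D` is mimicked by nearest-neighbor repetition, matching the Keras layers used in the implementation later:

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling, like Keras UpSampling2D((2, 2)).
    return x.repeat(2, axis=0).repeat(2, axis=1)

enc = np.zeros((16, 16, 64))      # encoder feature map kept from the left path
dec = np.zeros((8, 8, 128))       # deeper decoder feature map
# Skip connection: concatenate along the channel axis after upsampling.
fused = np.concatenate([enc, upsample2x(dec)], axis=-1)
assert fused.shape == (16, 16, 192)   # resolutions match, channels add up
```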

The two halves of the original network are not symmetric; later U-Net variants are largely symmetric in image resolution. Here we implement such a variant in Keras; the network structure is:

```
Layer (type)                    Output Shape          Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 256, 256, 3)   0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 256, 256, 32)  896         input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 256, 256, 32)  128         conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 256, 256, 32)  0           batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 256, 256, 32)  9248        activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 256, 256, 32)  128         conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 256, 256, 32)  0           batch_normalization_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 128, 128, 32)  0           activation_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 128, 128, 64)  18496       max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 128, 128, 64)  256         conv2d_3[0][0]
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 128, 128, 64)  0           batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 128, 128, 64)  36928       activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 128, 128, 64)  256         conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 128, 128, 64)  0           batch_normalization_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 64, 64, 64)    0           activation_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 64, 64, 128)   73856       max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 64, 64, 128)   512         conv2d_5[0][0]
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 64, 64, 128)   0           batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 64, 64, 128)   147584      activation_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 64, 64, 128)   512         conv2d_6[0][0]
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 64, 64, 128)   0           batch_normalization_6[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 32, 32, 128)   0           activation_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 32, 32, 256)   295168      max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 32, 32, 256)   1024        conv2d_7[0][0]
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 32, 32, 256)   0           batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 32, 32, 256)   590080      activation_7[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 32, 32, 256)   1024        conv2d_8[0][0]
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 32, 32, 256)   0           batch_normalization_8[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D)  (None, 16, 16, 256)   0           activation_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 16, 16, 512)   1180160     max_pooling2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 16, 16, 512)   2048        conv2d_9[0][0]
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 16, 16, 512)   0           batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 16, 16, 512)   2359808     activation_9[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 16, 16, 512)   2048        conv2d_10[0][0]
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 16, 16, 512)   0           batch_normalization_10[0][0]
__________________________________________________________________________________________________
max_pooling2d_5 (MaxPooling2D)  (None, 8, 8, 512)     0           activation_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 8, 8, 1024)    4719616     max_pooling2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 8, 8, 1024)    4096        conv2d_11[0][0]
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 8, 8, 1024)    0           batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 8, 8, 1024)    9438208     activation_11[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 8, 8, 1024)    4096        conv2d_12[0][0]
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 8, 8, 1024)    0           batch_normalization_12[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D)  (None, 16, 16, 1024)  0           activation_12[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 16, 16, 1536)  0           activation_10[0][0]
                                                                  up_sampling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 16, 16, 512)   7078400     concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 16, 16, 512)   2048        conv2d_13[0][0]
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 16, 16, 512)   0           batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 16, 16, 512)   2359808     activation_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 16, 16, 512)   2048        conv2d_14[0][0]
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 16, 16, 512)   0           batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 16, 16, 512)   2359808     activation_14[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 16, 16, 512)   2048        conv2d_15[0][0]
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 16, 16, 512)   0           batch_normalization_15[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D)  (None, 32, 32, 512)   0           activation_15[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 32, 32, 768)   0           activation_8[0][0]
                                                                  up_sampling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 32, 32, 256)   1769728     concatenate_2[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 32, 32, 256)   1024        conv2d_16[0][0]
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 32, 32, 256)   0           batch_normalization_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 32, 32, 256)   590080      activation_16[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 32, 32, 256)   1024        conv2d_17[0][0]
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 32, 32, 256)   0           batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 32, 32, 256)   590080      activation_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 32, 32, 256)   1024        conv2d_18[0][0]
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 32, 32, 256)   0           batch_normalization_18[0][0]
__________________________________________________________________________________________________
up_sampling2d_3 (UpSampling2D)  (None, 64, 64, 256)   0           activation_18[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate)     (None, 64, 64, 384)   0           activation_6[0][0]
                                                                  up_sampling2d_3[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 64, 64, 128)   442496      concatenate_3[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 64, 64, 128)   512         conv2d_19[0][0]
__________________________________________________________________________________________________
activation_19 (Activation)      (None, 64, 64, 128)   0           batch_normalization_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, 64, 64, 128)   147584      activation_19[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 64, 64, 128)   512         conv2d_20[0][0]
__________________________________________________________________________________________________
activation_20 (Activation)      (None, 64, 64, 128)   0           batch_normalization_20[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, 64, 64, 128)   147584      activation_20[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 64, 64, 128)   512         conv2d_21[0][0]
__________________________________________________________________________________________________
activation_21 (Activation)      (None, 64, 64, 128)   0           batch_normalization_21[0][0]
__________________________________________________________________________________________________
up_sampling2d_4 (UpSampling2D)  (None, 128, 128, 128) 0           activation_21[0][0]
__________________________________________________________________________________________________
concatenate_4 (Concatenate)     (None, 128, 128, 192) 0           activation_4[0][0]
                                                                  up_sampling2d_4[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D)              (None, 128, 128, 64)  110656      concatenate_4[0][0]
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 128, 128, 64)  256         conv2d_22[0][0]
__________________________________________________________________________________________________
activation_22 (Activation)      (None, 128, 128, 64)  0           batch_normalization_22[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (None, 128, 128, 64)  36928       activation_22[0][0]
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, 128, 128, 64)  256         conv2d_23[0][0]
__________________________________________________________________________________________________
activation_23 (Activation)      (None, 128, 128, 64)  0           batch_normalization_23[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D)              (None, 128, 128, 64)  36928       activation_23[0][0]
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, 128, 128, 64)  256         conv2d_24[0][0]
__________________________________________________________________________________________________
activation_24 (Activation)      (None, 128, 128, 64)  0           batch_normalization_24[0][0]
__________________________________________________________________________________________________
up_sampling2d_5 (UpSampling2D)  (None, 256, 256, 64)  0           activation_24[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate)     (None, 256, 256, 96)  0           activation_2[0][0]
                                                                  up_sampling2d_5[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D)              (None, 256, 256, 32)  27680       concatenate_5[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 256, 256, 32)  128         conv2d_25[0][0]
__________________________________________________________________________________________________
activation_25 (Activation)      (None, 256, 256, 32)  0           batch_normalization_25[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, 256, 256, 32)  9248        activation_25[0][0]
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, 256, 256, 32)  128         conv2d_26[0][0]
__________________________________________________________________________________________________
activation_26 (Activation)      (None, 256, 256, 32)  0           batch_normalization_26[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (None, 256, 256, 32)  9248        activation_26[0][0]
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, 256, 256, 32)  128         conv2d_27[0][0]
__________________________________________________________________________________________________
activation_27 (Activation)      (None, 256, 256, 32)  0           batch_normalization_27[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D)              (None, 256, 256, 1)   33          activation_27[0][0]
==================================================================================================
```

The U-Net code is as follows:

```python
from keras.models import Model
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, UpSampling2D, concatenate)


def get_unet_256(input_shape=(256, 256, 3), num_classes=1):
    inputs = Input(shape=input_shape)
    # 256
    down0 = Conv2D(32, (3, 3), padding='same')(inputs)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0 = Conv2D(32, (3, 3), padding='same')(down0)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0)
    # 128
    down1 = Conv2D(64, (3, 3), padding='same')(down0_pool)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1 = Conv2D(64, (3, 3), padding='same')(down1)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1_pool = MaxPooling2D((2, 2), strides=(2, 2))(down1)
    # 64
    down2 = Conv2D(128, (3, 3), padding='same')(down1_pool)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2 = Conv2D(128, (3, 3), padding='same')(down2)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2_pool = MaxPooling2D((2, 2), strides=(2, 2))(down2)
    # 32
    down3 = Conv2D(256, (3, 3), padding='same')(down2_pool)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3 = Conv2D(256, (3, 3), padding='same')(down3)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3_pool = MaxPooling2D((2, 2), strides=(2, 2))(down3)
    # 16
    down4 = Conv2D(512, (3, 3), padding='same')(down3_pool)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4 = Conv2D(512, (3, 3), padding='same')(down4)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4_pool = MaxPooling2D((2, 2), strides=(2, 2))(down4)
    # 8
    center = Conv2D(1024, (3, 3), padding='same')(down4_pool)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    center = Conv2D(1024, (3, 3), padding='same')(center)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    # center
    up4 = UpSampling2D((2, 2))(center)
    up4 = concatenate([down4, up4], axis=3)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    # 16
    up3 = UpSampling2D((2, 2))(up4)
    up3 = concatenate([down3, up3], axis=3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    # 32
    up2 = UpSampling2D((2, 2))(up3)
    up2 = concatenate([down2, up2], axis=3)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    # 64
    up1 = UpSampling2D((2, 2))(up2)
    up1 = concatenate([down1, up1], axis=3)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    # 128
    up0 = UpSampling2D((2, 2))(up1)
    up0 = concatenate([down0, up0], axis=3)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    # 256
    classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(up0)
    model = Model(inputs=inputs, outputs=classify)
    # model.compile(optimizer=RMSprop(lr=0.0001), loss=bce_dice_loss, metrics=[dice_coeff])
    return model
```

The input is a 256x256x3 color image and the output a 256x256x1 MASK; training is configured as follows:

```python
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(image_train, label_train, epochs=100, verbose=1,
          validation_split=0.2, shuffle=True, batch_size=8)
```
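The commented-out compile line in the network code mentions a `bce_dice_loss`/`dice_coeff` pair. As a frame of reference, the dice coefficient for a binary mask can be sketched in NumPy as follows; the `smooth` term is the usual stabilizer, and the exact form varies by implementation:

```python
import numpy as np

def dice_coeff(y_true, y_pred, smooth=1.0):
    # Overlap measure in [0, 1]: 1.0 means the two masks agree perfectly.
    inter = np.sum(y_true * y_pred)
    return (2.0 * inter + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)
```

A dice-based loss (e.g. `1 - dice_coeff`, often combined with binary cross-entropy) is frequently preferred over plain accuracy when skin pixels are a minority of the image.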

The results are shown below:

In my training set the labels treat the whole face, facial features included, as skin, so eyes, brows and mouth are not excluded; if you need a skin region without the facial features, simply swap in samples labeled accordingly.

With an accurate skin mask in hand we can update the smoothing algorithm; here is a set of comparison results:

As the comparison shows, the traditional color-space-based smoothing can never precisely separate skin from skin-like regions, so it also smooths the hair and loses its texture detail. The smoothing based on U-Net skin segmentation cleanly distinguishes skin from skin-like regions such as hair, keeps the hair texture, and smooths only where smoothing is wanted, clearly outperforming the traditional method.

Mainstream products such as Meitu XiuXiu and Tian Tian P Tu already use deep-learning skin segmentation to improve their smoothing; this article is a brief introduction to help you understand the approach.

Of course, using deep learning to improve a traditional method is only one pattern, which is why this article is titled "AI Skin-Smoothing Algorithm, Part 1"; in "AI Skin-Smoothing Algorithm, Part 2" I will drop the traditional pipeline entirely and implement the smoothing effect purely with deep learning.

Finally, my training samples come from the LFW dataset available online, which is easy to find. If you need precisely labeled samples with the facial features excluded, you are better off annotating your own. My QQ: 1358009172.
