Introduction: Recently, a project that makes the Mona Lisa portrait move went viral on social media. Today we will do something similar: make the person in a still image move along with the person in a driving video.


Generating videos by animating objects in still images has countless applications across many domains of interest, including film production, photography, and e-commerce. More precisely, image animation refers to the task of automatically synthesizing a video by combining the appearance extracted from a source image with motion patterns derived from a driving video.

In recent years, deep generative models have emerged as effective techniques for image animation and video retargeting. In particular, Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) have been used to transfer facial expressions or motion patterns between human subjects in videos.

According to the paper First Order Motion Model for Image Animation, within the broader task of pose transfer, Monkey-Net first attempted to represent pose information by predicting keypoints under a self-supervised paradigm; at test time it estimates the pose keypoints of the driving video to carry out the transfer. Building on this, FOMM models object motion with local affine transformations around neighboring keypoints, and additionally accounts for occluded regions, which can be filled in with image inpainting.

Today, using the source code released with the paper, we will build the model and create the character motion we need. The detailed workflow is as follows.

Preparation before the experiment

First, the Python version we use is 3.6.5, and the required modules are listed below (a quick environment check is sketched after the list):

  • The imageio module handles image and video input/output.
  • The Matplotlib module is used for plotting.
  • The numpy module is used for matrix operations.
  • The Pillow library is used for loading and processing image data.
  • The pytorch module is used to build and train the models.
  • See the requirements.txt file for the full list of dependencies.
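A minimal environment-check sketch; the pip package names below are assumptions inferred from the list above and the imports used later, and the exact pinned versions live in the repository's requirements.txt:

# Assumed installation command; adjust versions to match requirements.txt.
#   pip install imageio matplotlib numpy pillow torch torchvision pyyaml scikit-image scipy tqdm

import torch
import imageio
import numpy as np

print(torch.__version__)           # confirm PyTorch is importable
print(torch.cuda.is_available())   # True if a GPU is usable; otherwise run the demo with --cpu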

Loading and calling the model

Command-line arguments are defined to load the model, the images, and so on.

(1) First, read the trained model, including how the checkpoint is loaded:

# Imports assumed from the repository's demo.py
import yaml
import torch
from modules.generator import OcclusionAwareGenerator
from modules.keypoint_detector import KPDetector
from sync_batchnorm import DataParallelWithCallback


def load_checkpoints(config_path, checkpoint_path, cpu=False):
    # Read the model configuration (with newer PyYAML you may need yaml.load(f, Loader=yaml.FullLoader))
    with open(config_path) as f:
        config = yaml.load(f)

    generator = OcclusionAwareGenerator(**config['model_params']['generator_params'],
                                        **config['model_params']['common_params'])
    if not cpu:
        generator.cuda()

    kp_detector = KPDetector(**config['model_params']['kp_detector_params'],
                             **config['model_params']['common_params'])
    if not cpu:
        kp_detector.cuda()

    # Load the trained weights for both networks
    if cpu:
        checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu'))
    else:
        checkpoint = torch.load(checkpoint_path)
    generator.load_state_dict(checkpoint['generator'])
    kp_detector.load_state_dict(checkpoint['kp_detector'])

    if not cpu:
        generator = DataParallelWithCallback(generator)
        kp_detector = DataParallelWithCallback(kp_detector)

    # Inference only: switch both networks to evaluation mode
    generator.eval()
    kp_detector.eval()

    return generator, kp_detector
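For orientation, a minimal sketch of how load_checkpoints is typically called; the file names below are placeholders, not paths from the original post, so substitute whichever config and checkpoint you actually downloaded:

# Hypothetical paths -- substitute your own downloaded files.
generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml',
                                          checkpoint_path='vox-cpk.pth.tar',
                                          cpu=not torch.cuda.is_available())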

(2) Next, use the model to produce the generated frames, and find the driving frame whose facial pose best matches the source image:

# Further imports assumed from demo.py: numpy as np, tqdm, scipy.spatial.ConvexHull,
# and normalize_kp from the repository's animate module.
def make_animation(source_image, driving_video, generator, kp_detector,
                   relative=True, adapt_movement_scale=True, cpu=False):
    with torch.no_grad():
        predictions = []
        # Source image: (1, 3, H, W); driving video: (1, 3, num_frames, H, W)
        source = torch.tensor(source_image[np.newaxis].astype(np.float32)).permute(0, 3, 1, 2)
        if not cpu:
            source = source.cuda()
        driving = torch.tensor(np.array(driving_video)[np.newaxis].astype(np.float32)).permute(0, 4, 1, 2, 3)
        kp_source = kp_detector(source)
        kp_driving_initial = kp_detector(driving[:, :, 0])

        for frame_idx in tqdm(range(driving.shape[2])):
            driving_frame = driving[:, :, frame_idx]
            if not cpu:
                driving_frame = driving_frame.cuda()
            kp_driving = kp_detector(driving_frame)
            # Normalize driving keypoints relative to the first driving frame
            kp_norm = normalize_kp(kp_source=kp_source, kp_driving=kp_driving,
                                   kp_driving_initial=kp_driving_initial,
                                   use_relative_movement=relative,
                                   use_relative_jacobian=relative,
                                   adapt_movement_scale=adapt_movement_scale)
            out = generator(source, kp_source=kp_source, kp_driving=kp_norm)
            predictions.append(np.transpose(out['prediction'].data.cpu().numpy(), [0, 2, 3, 1])[0])
    return predictions


def find_best_frame(source, driving, cpu=False):
    import face_alignment

    def normalize_kp(kp):
        # Center and scale the landmarks so faces of different sizes are comparable
        kp = kp - kp.mean(axis=0, keepdims=True)
        area = ConvexHull(kp[:, :2]).volume
        area = np.sqrt(area)
        kp[:, :2] = kp[:, :2] / area
        return kp

    fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=True,
                                      device='cpu' if cpu else 'cuda')
    kp_source = fa.get_landmarks(255 * source)[0]
    kp_source = normalize_kp(kp_source)
    norm = float('inf')
    frame_num = 0
    # Pick the driving frame whose landmarks are closest to the source's
    for i, image in tqdm(enumerate(driving)):
        kp_driving = fa.get_landmarks(255 * image)[0]
        kp_driving = normalize_kp(kp_driving)
        new_norm = (np.abs(kp_source - kp_driving) ** 2).sum()
        if new_norm < norm:
            norm = new_norm
            frame_num = i
    return frame_num
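A minimal usage sketch of the two functions above, assuming source_image and driving_video have already been resized to 256x256 float arrays in [0, 1], exactly as the command-line block in the next step does:

frames = make_animation(source_image, driving_video, generator, kp_detector,
                        relative=True, adapt_movement_scale=True)
# make_animation returns a list of HxWx3 float frames in [0, 1]; convert them before saving
# (img_as_ubyte comes from skimage, as in the command-line block below).
imageio.mimsave('result.mp4', [img_as_ubyte(f) for f in frames], fps=fps)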

(3) Then define the command-line arguments that load the image, the video, and so on:

# Additional imports assumed from demo.py: ArgumentParser (argparse), imageio,
# resize (skimage.transform) and img_as_ubyte (skimage).
parser = ArgumentParser()
parser.add_argument("--config", required=True, help="path to config")
parser.add_argument("--checkpoint", default='vox-cpk.pth.tar', help="path to checkpoint to restore")
parser.add_argument("--source_image", default='sup-mat/source.png', help="path to source image")
parser.add_argument("--driving_video", default='sup-mat/source.png', help="path to driving video")
parser.add_argument("--result_video", default='result.mp4', help="path to output")
parser.add_argument("--relative", dest="relative", action="store_true",
                    help="use relative or absolute keypoint coordinates")
parser.add_argument("--adapt_scale", dest="adapt_scale", action="store_true",
                    help="adapt movement scale based on convex hull of keypoints")
parser.add_argument("--find_best_frame", dest="find_best_frame", action="store_true",
                    help="Generate from the frame that is the most aligned with source. "
                         "(Only for faces, requires face_alignment lib)")
parser.add_argument("--best_frame", dest="best_frame", type=int, default=None, help="Set frame to start from.")
parser.add_argument("--cpu", dest="cpu", action="store_true", help="cpu mode.")
parser.set_defaults(relative=False)
parser.set_defaults(adapt_scale=False)
opt = parser.parse_args()

# Read the source image and every frame of the driving video
source_image = imageio.imread(opt.source_image)
reader = imageio.get_reader(opt.driving_video)
fps = reader.get_meta_data()['fps']
driving_video = []
try:
    for im in reader:
        driving_video.append(im)
except RuntimeError:
    pass
reader.close()

# Resize everything to 256x256 and drop any alpha channel
source_image = resize(source_image, (256, 256))[..., :3]
driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video]
generator, kp_detector = load_checkpoints(config_path=opt.config, checkpoint_path=opt.checkpoint, cpu=opt.cpu)

if opt.find_best_frame or opt.best_frame is not None:
    # Start generating from the driving frame closest to the source pose,
    # then stitch the backward and forward passes together
    i = opt.best_frame if opt.best_frame is not None else find_best_frame(source_image, driving_video, cpu=opt.cpu)
    print("Best frame: " + str(i))
    driving_forward = driving_video[i:]
    driving_backward = driving_video[:(i + 1)][::-1]
    predictions_forward = make_animation(source_image, driving_forward, generator, kp_detector,
                                         relative=opt.relative, adapt_movement_scale=opt.adapt_scale, cpu=opt.cpu)
    predictions_backward = make_animation(source_image, driving_backward, generator, kp_detector,
                                          relative=opt.relative, adapt_movement_scale=opt.adapt_scale, cpu=opt.cpu)
    predictions = predictions_backward[::-1] + predictions_forward[1:]
else:
    predictions = make_animation(source_image, driving_video, generator, kp_detector,
                                 relative=opt.relative, adapt_movement_scale=opt.adapt_scale, cpu=opt.cpu)
imageio.mimsave(opt.result_video, [img_as_ubyte(frame) for frame in predictions], fps=fps)

Building the model

The whole training process is an image reconstruction process: the inputs are a source image and a driving image, and the output is a new image that keeps the object identity of the source image while taking on the pose of the driving image. Since the two input images come from the same video (i.e., the same object), training amounts to reconstructing the driving image. Broadly speaking, the model is split into two modules: a motion estimation module and an image generation module.
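To make the two-module split concrete, here is a schematic sketch of one reconstruction step. This is illustrative pseudocode under the assumptions above, not the repository's actual training loop, and reconstruction_loss is a hypothetical stand-in for the perceptual and pyramid losses described next:

# Schematic training step: reconstruct the driving frame from the source frame.
kp_source = kp_detector(source)      # motion estimation: keypoints (and local affine params) of the source
kp_driving = kp_detector(driving)    # motion estimation: keypoints of the driving frame
out = generator(source, kp_source=kp_source, kp_driving=kp_driving)   # image generation module
loss = reconstruction_loss(out['prediction'], driving)                # driving frame is the reconstruction target
loss.backward()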

(1) A VGG19 network is defined, and its intermediate layer activations are used for the perceptual loss. The code is as follows:

class Vgg19(torch.nn.Module):
    """
    Vgg19 network for perceptual loss. See Sec 3.3.
    """
    def __init__(self, requires_grad=False):
        super(Vgg19, self).__init__()
        vgg_pretrained_features = models.vgg19(pretrained=True).features
        self.slice1 = torch.nn.Sequential()
        self.slice2 = torch.nn.Sequential()
        self.slice3 = torch.nn.Sequential()
        self.slice4 = torch.nn.Sequential()
        self.slice5 = torch.nn.Sequential()
        for x in range(2):
            self.slice1.add_module(str(x), vgg_pretrained_features[x])
        for x in range(2, 7):
            self.slice2.add_module(str(x), vgg_pretrained_features[x])
        for x in range(7, 12):
            self.slice3.add_module(str(x), vgg_pretrained_features[x])
        for x in range(12, 21):
            self.slice4.add_module(str(x), vgg_pretrained_features[x])
        for x in range(21, 30):
            self.slice5.add_module(str(x), vgg_pretrained_features[x])
        self.mean = torch.nn.Parameter(data=torch.Tensor(np.array([0.485, 0.456, 0.406]).reshape((1, 3, 1, 1))),
                                       requires_grad=False)
        self.std = torch.nn.Parameter(data=torch.Tensor(np.array([0.229, 0.224, 0.225]).reshape((1, 3, 1, 1))),
                                      requires_grad=False)
        if not requires_grad:
            for param in self.parameters():
                param.requires_grad = False

    def forward(self, X):
        X = (X - self.mean) / self.std
        h_relu1 = self.slice1(X)
        h_relu2 = self.slice2(h_relu1)
        h_relu3 = self.slice3(h_relu2)
        h_relu4 = self.slice4(h_relu3)
        h_relu5 = self.slice5(h_relu4)
        out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5]
        return out
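For context, a minimal sketch of how these VGG feature maps can be turned into a perceptual loss. The plain L1 sum below is an illustrative assumption, not the exact weighting used in the repository:

vgg = Vgg19().eval()

def perceptual_loss(generated, target):
    # Compare generated and target frames in VGG feature space instead of pixel space.
    feats_gen = vgg(generated)
    feats_tgt = vgg(target)
    loss = 0
    for fg, ft in zip(feats_gen, feats_tgt):
        loss = loss + torch.mean(torch.abs(fg - ft.detach()))
    return loss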

(2) Create an image pyramid for computing the pyramid perceptual loss:

class ImagePyramide(torch.nn.Module):
    """
    Create image pyramide for computing pyramide perceptual loss. See Sec 3.3
    """
    def __init__(self, scales, num_channels):
        super(ImagePyramide, self).__init__()
        downs = {}
        for scale in scales:
            downs[str(scale).replace('.', '-')] = AntiAliasInterpolation2d(num_channels, scale)
        self.downs = nn.ModuleDict(downs)

    def forward(self, x):
        out_dict = {}
        for scale, down_module in self.downs.items():
            out_dict['prediction_' + str(scale).replace('-', '.')] = down_module(x)
        return out_dict
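A sketch of how the pyramid and the perceptual loss above might be combined. The scale list is an illustrative assumption; the actual scales come from the training config:

scales = [1, 0.5, 0.25, 0.125]                       # assumed scales for illustration
pyramid = ImagePyramide(scales, num_channels=3)

pyr_generated = pyramid(out['prediction'])           # downsampled versions of the generated frame
pyr_driving = pyramid(driving)                       # downsampled versions of the ground-truth driving frame
loss = sum(perceptual_loss(pyr_generated['prediction_' + str(s)],
                           pyr_driving['prediction_' + str(s)])
           for s in scales)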

(3) Random TPS (thin-plate spline) transformation for the equivariance constraint:

class Transform:
    """
    Random tps transformation for equivariance constraints. See Sec 3.3
    """
    def __init__(self, bs, **kwargs):
        noise = torch.normal(mean=0, std=kwargs['sigma_affine'] * torch.ones([bs, 2, 3]))
        self.theta = noise + torch.eye(2, 3).view(1, 2, 3)
        self.bs = bs

        if ('sigma_tps' in kwargs) and ('points_tps' in kwargs):
            self.tps = True
            self.control_points = make_coordinate_grid((kwargs['points_tps'], kwargs['points_tps']),
                                                       type=noise.type())
            self.control_points = self.control_points.unsqueeze(0)
            self.control_params = torch.normal(mean=0,
                                               std=kwargs['sigma_tps'] * torch.ones([bs, 1, kwargs['points_tps'] ** 2]))
        else:
            self.tps = False

    def transform_frame(self, frame):
        grid = make_coordinate_grid(frame.shape[2:], type=frame.type()).unsqueeze(0)
        grid = grid.view(1, frame.shape[2] * frame.shape[3], 2)
        grid = self.warp_coordinates(grid).view(self.bs, frame.shape[2], frame.shape[3], 2)
        return F.grid_sample(frame, grid, padding_mode="reflection")

    def warp_coordinates(self, coordinates):
        # Affine part of the random transformation
        theta = self.theta.type(coordinates.type())
        theta = theta.unsqueeze(1)
        transformed = torch.matmul(theta[:, :, :, :2], coordinates.unsqueeze(-1)) + theta[:, :, :, 2:]
        transformed = transformed.squeeze(-1)

        if self.tps:
            # Thin-plate spline part, driven by random control point parameters
            control_points = self.control_points.type(coordinates.type())
            control_params = self.control_params.type(coordinates.type())
            distances = coordinates.view(coordinates.shape[0], -1, 1, 2) - control_points.view(1, 1, -1, 2)
            distances = torch.abs(distances).sum(-1)

            result = distances ** 2
            result = result * torch.log(distances + 1e-6)
            result = result * control_params
            result = result.sum(dim=2).view(self.bs, coordinates.shape[1], 1)
            transformed = transformed + result

        return transformed

    def jacobian(self, coordinates):
        new_coordinates = self.warp_coordinates(coordinates)
        grad_x = grad(new_coordinates[..., 0].sum(), coordinates, create_graph=True)
        grad_y = grad(new_coordinates[..., 1].sum(), coordinates, create_graph=True)
        jacobian = torch.cat([grad_x[0].unsqueeze(-2), grad_y[0].unsqueeze(-2)], dim=-2)
        return jacobian
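A sketch of how this transform is typically used to enforce equivariance on the keypoint detector. Here x stands for a batch of input frames, and the sigma/points values are illustrative assumptions rather than values taken from the original post:

# Randomly warp the input frame and require the detected keypoints to move consistently.
transform = Transform(x.shape[0], sigma_affine=0.05, sigma_tps=0.005, points_tps=5)  # assumed parameters
transformed_frame = transform.transform_frame(x)           # warped copy of the input frame
transformed_kp = kp_detector(transformed_frame)            # keypoints detected on the warped frame
kp_x = kp_detector(x)                                      # keypoints detected on the original frame
# Equivariance: warping the keypoints of the warped frame back should recover the original keypoints.
equivariance_value = torch.abs(kp_x['value'] - transform.warp_coordinates(transformed_kp['value'])).mean()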

(4) Definition of the generator: given a source image and keypoints, the generator tries to transform the image according to the motion implied by the keypoints. Part of the code is as follows:

class OcclusionAwareGenerator(nn.Module):
    def __init__(self, num_channels, num_kp, block_expansion, max_features, num_down_blocks,
                 num_bottleneck_blocks, estimate_occlusion_map=False, dense_motion_params=None,
                 estimate_jacobian=False):
        super(OcclusionAwareGenerator, self).__init__()

        if dense_motion_params is not None:
            self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, num_channels=num_channels,
                                                           estimate_occlusion_map=estimate_occlusion_map,
                                                           **dense_motion_params)
        else:
            self.dense_motion_network = None

        self.first = SameBlock2d(num_channels, block_expansion, kernel_size=(7, 7), padding=(3, 3))

        # Encoder: progressively downsample the source image features
        down_blocks = []
        for i in range(num_down_blocks):
            in_features = min(max_features, block_expansion * (2 ** i))
            out_features = min(max_features, block_expansion * (2 ** (i + 1)))
            down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1)))
        self.down_blocks = nn.ModuleList(down_blocks)

        # Decoder: upsample back to the original resolution
        up_blocks = []
        for i in range(num_down_blocks):
            in_features = min(max_features, block_expansion * (2 ** (num_down_blocks - i)))
            out_features = min(max_features, block_expansion * (2 ** (num_down_blocks - i - 1)))
            up_blocks.append(UpBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1)))
        self.up_blocks = nn.ModuleList(up_blocks)

        self.bottleneck = torch.nn.Sequential()
        in_features = min(max_features, block_expansion * (2 ** num_down_blocks))
        for i in range(num_bottleneck_blocks):
            self.bottleneck.add_module('r' + str(i), ResBlock2d(in_features, kernel_size=(3, 3), padding=(1, 1)))

        self.final = nn.Conv2d(block_expansion, num_channels, kernel_size=(7, 7), padding=(3, 3))
        self.estimate_occlusion_map = estimate_occlusion_map
        self.num_channels = num_channels
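The generator's forward pass is not reproduced here; for orientation, this is how it is invoked at inference time, as already seen in the make_animation function above:

out = generator(source, kp_source=kp_source, kp_driving=kp_norm)
frame = out['prediction']   # (batch, 3, H, W) tensor holding the animated frame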

(5) The discriminator, which is similar to the one used in Pix2Pix:

class Discriminator(nn.Module):
    """
    Discriminator, similar to Pix2Pix.
    """
    def __init__(self, num_channels=3, block_expansion=64, num_blocks=4, max_features=512,
                 sn=False, use_kp=False, num_kp=10, kp_variance=0.01, **kwargs):
        super(Discriminator, self).__init__()

        down_blocks = []
        for i in range(num_blocks):
            down_blocks.append(
                DownBlock2d(num_channels + num_kp * use_kp if i == 0 else min(max_features, block_expansion * (2 ** i)),
                            min(max_features, block_expansion * (2 ** (i + 1))),
                            norm=(i != 0), kernel_size=4, pool=(i != num_blocks - 1), sn=sn))
        self.down_blocks = nn.ModuleList(down_blocks)
        self.conv = nn.Conv2d(self.down_blocks[-1].conv.out_channels, out_channels=1, kernel_size=1)
        if sn:
            self.conv = nn.utils.spectral_norm(self.conv)
        self.use_kp = use_kp
        self.kp_variance = kp_variance

    def forward(self, x, kp=None):
        feature_maps = []
        out = x
        if self.use_kp:
            # Optionally concatenate keypoint heatmaps to the input image
            heatmap = kp2gaussian(kp, x.shape[2:], self.kp_variance)
            out = torch.cat([out, heatmap], dim=1)
        for down_block in self.down_blocks:
            feature_maps.append(down_block(out))
            out = feature_maps[-1]
        prediction_map = self.conv(out)
        return feature_maps, prediction_map
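For context, a hedged sketch of how such a patch-discriminator output is commonly plugged into a least-squares GAN loss. This is an assumption for illustration only; the exact adversarial formulation in the repository may differ:

# The discriminator scores real driving frames and generated frames (optionally with keypoint heatmaps).
_, map_real = discriminator(driving, kp=kp_driving)
_, map_fake = discriminator(out['prediction'].detach(), kp=kp_driving)
disc_loss = ((1 - map_real) ** 2).mean() + (map_fake ** 2).mean()   # discriminator objective

_, map_fake_for_gen = discriminator(out['prediction'], kp=kp_driving)
gen_adv_loss = ((1 - map_fake_for_gen) ** 2).mean()                 # adversarial term for the generator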

Finally, the demo is run and the result video generated with the following command: "python demo.py --config config/vox-adv-256.yaml --driving_video path/to/driving/1.mp4 --source_image path/to/source/7.jpg --checkpoint path/to/checkpoint/vox-adv-cpk.pth.tar --relative --adapt_scale"

The result is as follows:
