Contents

1. Open the AI face editing case page and complete the basic configuration

2. Download the code and data and install the dependencies

3. Run the code


1. Open the AI face editing case page and complete the basic configuration

Case page: AI人脸编辑 (AI Face Editing) on huaweicloud.com

Click Run in ModelArts to open the JupyterLab page.

Wait for the environment to initialize.

Switch the instance specification and select [Limited-time free] GPU: 1*V100 | CPU: 8 cores, 64 GB.

When the resource switch completes, click OK.

Click Select Kernel in the upper-right corner and choose PyTorch-1.4.

2. Download the code and data and install the dependencies

Install the dependencies:

!pip install ninja
!pip install dlib
!pip uninstall -y torch
!pip uninstall -y torchvision
!pip install torch==1.6.0
!pip install torchvision==0.7.0

After the installation finishes, restart the kernel by clicking Restart the kernel at the top of the page.
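After the restart, you can optionally confirm that the expected PyTorch build is active and that the GPU is visible; this quick sanity check is not part of the original case:

import torch, torchvision
print(torch.__version__, torchvision.__version__)  # expect 1.6.0 / 0.7.0 after the reinstall
print(torch.cuda.is_available())                   # should print True on the V100 spec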

Change into the HFGI directory:

%cd HFGI
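If you like, list the repository contents to confirm the working directory is correct; the later cells assume folders such as checkpoint, test_imgs, and editings (all referenced below) are present:

!ls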

3. Run the code

#@title Setup Repository
import os
import sys
import time
from argparse import Namespace

import numpy as np
from PIL import Image
import torch
import torchvision.transforms as transforms

# from utils.common import tensor2im
from models.psp import pSp  # we use the pSp framework to load the e4e encoder.

%load_ext autoreload
%autoreload 2


def tensor2im(var):
    # var shape: (3, H, W)
    var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy()
    var = ((var + 1) / 2)
    var[var < 0] = 0
    var[var > 1] = 1
    var = var * 255
    return Image.fromarray(var.astype('uint8'))

Load the pretrained model

model_path = "checkpoint/ckpt.pt"
ckpt = torch.load(model_path, map_location='cpu')
opts = ckpt['opts']
opts['is_train'] = False
opts['checkpoint_path'] = model_path
opts = Namespace(**opts)
net = pSp(opts)
net.eval()
net.cuda()
print('Model successfully loaded!')

Set up the input image

# Setup required image transformations
EXPERIMENT_ARGS = {"image_path": "test_imgs/Lina.jpg"}
EXPERIMENT_ARGS['transform'] = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
resize_dims = (256, 256)

Drag a photo here to upload it.
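If you upload your own photo instead of the bundled test image, point the experiment at it before running the next cell; the filename below is only a placeholder:

# Hypothetical example: switch to an uploaded photo instead of the bundled test image
# EXPERIMENT_ARGS["image_path"] = "test_imgs/my_photo.jpg"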

image_path = EXPERIMENT_ARGS["image_path"]
original_image = Image.open(image_path)
original_image = original_image.convert("RGB")
run_align = True

Face alignment

import numpy as np
import PIL
import PIL.Image
import scipy
import scipy.ndimage
import dlib


def get_landmark(filepath, predictor):
    """get landmark with dlib
    :return: np.array shape=(68, 2)
    """
    detector = dlib.get_frontal_face_detector()
    img = dlib.load_rgb_image(filepath)
    dets = detector(img, 1)
    for k, d in enumerate(dets):
        shape = predictor(img, d)
    t = list(shape.parts())
    a = []
    for tt in t:
        a.append([tt.x, tt.y])
    lm = np.array(a)
    return lm


def align_face(filepath, predictor):
    """
    :param filepath: str
    :return: PIL Image
    """
    lm = get_landmark(filepath, predictor)

    lm_chin = lm[0: 17]  # left-right
    lm_eyebrow_left = lm[17: 22]  # left-right
    lm_eyebrow_right = lm[22: 27]  # left-right
    lm_nose = lm[27: 31]  # top-down
    lm_nostrils = lm[31: 36]  # top-down
    lm_eye_left = lm[36: 42]  # left-clockwise
    lm_eye_right = lm[42: 48]  # left-clockwise
    lm_mouth_outer = lm[48: 60]  # left-clockwise
    lm_mouth_inner = lm[60: 68]  # left-clockwise

    # Calculate auxiliary vectors.
    eye_left = np.mean(lm_eye_left, axis=0)
    eye_right = np.mean(lm_eye_right, axis=0)
    eye_avg = (eye_left + eye_right) * 0.5
    eye_to_eye = eye_right - eye_left
    mouth_left = lm_mouth_outer[0]
    mouth_right = lm_mouth_outer[6]
    mouth_avg = (mouth_left + mouth_right) * 0.5
    eye_to_mouth = mouth_avg - eye_avg

    # Choose oriented crop rectangle.
    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
    x /= np.hypot(*x)
    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
    y = np.flipud(x) * [-1, 1]
    c = eye_avg + eye_to_mouth * 0.1
    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
    qsize = np.hypot(*x) * 2

    # read image
    img = PIL.Image.open(filepath)

    output_size = 256
    transform_size = 256
    enable_padding = True

    # Shrink.
    shrink = int(np.floor(qsize / output_size * 0.5))
    if shrink > 1:
        rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))
        img = img.resize(rsize, PIL.Image.ANTIALIAS)
        quad /= shrink
        qsize /= shrink

    # Crop.
    border = max(int(np.rint(qsize * 0.1)), 3)
    crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
            int(np.ceil(max(quad[:, 1]))))
    crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
            min(crop[3] + border, img.size[1]))
    if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
        img = img.crop(crop)
        quad -= crop[0:2]

    # Pad.
    pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
           int(np.ceil(max(quad[:, 1]))))
    pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
           max(pad[3] - img.size[1] + border, 0))
    if enable_padding and max(pad) > border - 4:
        pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
        img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
        h, w, _ = img.shape
        y, x, _ = np.ogrid[:h, :w, :1]
        mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
                          1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
        blur = qsize * 0.02
        img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
        img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
        img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
        quad += pad[:2]

    # Transform.
    img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)
    if output_size < transform_size:
        img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS)

    # Return aligned image.
    return img


if 'shape_predictor_68_face_landmarks.dat' not in os.listdir():
    # !wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
    !bzip2 -dk shape_predictor_68_face_landmarks.dat.bz2


def run_alignment(image_path):
    import dlib
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    aligned_image = align_face(filepath=image_path, predictor=predictor)
    print("Aligned image has shape: {}".format(aligned_image.size))
    return aligned_image


if run_align:
    input_image = run_alignment(image_path)
else:
    input_image = original_image

input_image.resize(resize_dims)

High-fidelity inversion

def display_alongside_source_image(result_image, source_image):
    res = np.concatenate([np.array(source_image.resize(resize_dims)),
                          np.array(result_image.resize(resize_dims))], axis=1)
    return Image.fromarray(res)


def get_latents(net, x, is_cars=False):
    codes = net.encoder(x)
    if net.opts.start_from_latent_avg:
        if codes.ndim == 2:
            codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1)[:, 0, :]
        else:
            codes = codes + net.latent_avg.repeat(codes.shape[0], 1, 1)
    if codes.shape[1] == 18 and is_cars:
        codes = codes[:, :16, :]
    return codes


# apply the transform defined above to the (aligned) input image
transformed_image = EXPERIMENT_ARGS['transform'](input_image)

with torch.no_grad():
    x = transformed_image.unsqueeze(0).cuda()
    tic = time.time()
    latent_codes = get_latents(net, x)

    # calculate the distortion map
    imgs, _ = net.decoder([latent_codes[0].unsqueeze(0).cuda()], None,
                          input_is_latent=True, randomize_noise=False, return_latents=True)
    res = x - torch.nn.functional.interpolate(torch.clamp(imgs, -1., 1.), size=(256, 256), mode='bilinear')

    # ADA
    img_edit = torch.nn.functional.interpolate(torch.clamp(imgs, -1., 1.), size=(256, 256), mode='bilinear')
    res_align = net.grid_align(torch.cat((res, img_edit), 1))

    # consultation fusion
    conditions = net.residue(res_align)
    result_image, _ = net.decoder([latent_codes], conditions,
                                  input_is_latent=True, randomize_noise=False, return_latents=True)
    toc = time.time()
    print('Inference took {:.4f} seconds.'.format(toc - tic))

# Display inversion:
display_alongside_source_image(tensor2im(result_image[0]), input_image)

Output:
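The cell shows the original image next to its high-fidelity inversion. If you also want to keep the comparison on disk, a minimal sketch (the output filename is an arbitrary example):

# Save the side-by-side inversion comparison produced above (filename is just an example)
display_alongside_source_image(tensor2im(result_image[0]), input_image).save('inversion_result.jpg')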

High-fidelity image editing

from editings import latent_editor
editor = latent_editor.LatentEditor(net.decoder)
# InterFaceGAN editing directions
interfacegan_directions = {'age': './editings/interfacegan_directions/age.pt',
                           'smile': './editings/interfacegan_directions/smile.pt'}
edit_direction = torch.load(interfacegan_directions['smile']).cuda()
edit_degree = 1.5  # strength of the smile edit

img_edit, edit_latents = editor.apply_interfacegan(latent_codes[0].unsqueeze(0).cuda(), edit_direction, factor=edit_degree)

# align the distortion map
img_edit = torch.nn.functional.interpolate(torch.clamp(img_edit, -1., 1.), size=(256, 256), mode='bilinear')
res_align = net.grid_align(torch.cat((res, img_edit), 1))

# fusion
conditions = net.residue(res_align)
result, _ = net.decoder([edit_latents], conditions, input_is_latent=True, randomize_noise=False, return_latents=True)
result = torch.nn.functional.interpolate(result, size=(256, 256), mode='bilinear')
display_alongside_source_image(tensor2im(result[0]), input_image)

Output:
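The same pipeline works with the other direction bundled in interfacegan_directions. Below is a minimal sketch of an age edit that reuses the variables defined above; the factor value is only an illustration and its sign and magnitude should be tuned by eye:

# A sketch of an age edit with the same HFGI pipeline; factor=2.0 is illustrative
age_direction = torch.load(interfacegan_directions['age']).cuda()
img_edit, edit_latents = editor.apply_interfacegan(latent_codes[0].unsqueeze(0).cuda(), age_direction, factor=2.0)

# align the distortion map and fuse, exactly as in the smile edit above
img_edit = torch.nn.functional.interpolate(torch.clamp(img_edit, -1., 1.), size=(256, 256), mode='bilinear')
res_align = net.grid_align(torch.cat((res, img_edit), 1))
conditions = net.residue(res_align)
result, _ = net.decoder([edit_latents], conditions, input_is_latent=True, randomize_noise=False, return_latents=True)
result = torch.nn.functional.interpolate(result, size=(256, 256), mode='bilinear')
display_alongside_source_image(tensor2im(result[0]), input_image)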
