Link to the original GitHub repo
Link to my personal study case of HR-VITON

Content

  • Pre
  • 1、OpenPose(On colab, need GPU)
  • 2、Human Parse
    • Method 1: Colab
    • Method 2: Local or Server
  • 3、DensePose (On colab, GPU or CPU)
  • 4、Cloth Mask (On colab, GPU or CPU)
  • 5、Parse Agnostic (On colab)
  • 6、Human Agnostic
  • 7、Conclusion

Pre

According to the authors' explanation in Preprocessing.md, several steps are needed to obtain all required inputs for the model.

  • OpenPose
  • Human Parse
  • DensePose
  • Cloth Mask
  • Parse Agnostic
  • Human Agnostic

Most of these steps are reproduced on Colab, except Human Parse, which needs TensorFlow 1.15 and for which a GPU is highly preferred.

1、OpenPose(On colab, need GPU)

(1) Install OpenPose, taking about 15 minutes

import os
from os.path import exists, join, basename, splitext

git_repo_url = 'https://github.com/CMU-Perceptual-Computing-Lab/openpose.git'
project_name = splitext(basename(git_repo_url))[0]
if not exists(project_name):
  # see: https://github.com/CMU-Perceptual-Computing-Lab/openpose/issues/949
  # install new CMake because of CUDA 10
  !wget -q https://cmake.org/files/v3.13/cmake-3.13.0-Linux-x86_64.tar.gz
  !tar xfz cmake-3.13.0-Linux-x86_64.tar.gz --strip-components=1 -C /usr/local
  # clone openpose
  !git clone -q --depth 1 $git_repo_url
  !sed -i 's/execute_process(COMMAND git checkout master WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}\/3rdparty\/caffe)/execute_process(COMMAND git checkout f019d0dfe86f49d1140961f8c7dec22130c83154 WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}\/3rdparty\/caffe)/g' openpose/CMakeLists.txt
  # install system dependencies
  !apt-get -qq install -y libatlas-base-dev libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler libgflags-dev libgoogle-glog-dev liblmdb-dev opencl-headers ocl-icd-opencl-dev libviennacl-dev
  # install python dependencies
  !pip install -q youtube-dl
  # build openpose
  !cd openpose && rm -rf build || true && mkdir build && cd build && cmake .. && make -j`nproc`

Now, OpenPose will be installed under your current path.

(2) Get all needed models

!. ./openpose/models/getModels.sh

(3) Prepare your test data

# for storing input image
!mkdir ./image_path
# copy official provided data to image_path, you may need to download and unzip it in advance
!cp ./test/image/000* ./image_path/
# create directories for generated results of OpenPose
!mkdir ./json_path
!mkdir ./img_path

(4) Run

# go to openpose directory
%cd openpose
# run openpose.bin
!./build/examples/openpose/openpose.bin --image_dir ../image_path --hand --disable_blending --display 0 --write_json ../json_path --write_images ../img_path --num_gpu 1 --num_gpu_start 0

Then JSON files will be saved under ../json_path and rendered images will be saved under ../img_path.
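Each JSON file stores the keypoints of one image as a flat list; here is a minimal sketch of reading it back (the file name is just an example from the test set, and the 25×3 shape assumes OpenPose's default BODY_25 model):

import json
import numpy as np

# hypothetical example file; use one of your own results under ../json_path
with open('../json_path/06868_00_keypoints.json', 'r') as f:
    pose_label = json.load(f)

# flat list [x0, y0, c0, x1, y1, c1, ...] -> (num_keypoints, 3)
pose_data = np.array(pose_label['people'][0]['pose_keypoints_2d']).reshape((-1, 3))
print(pose_data.shape)   # (25, 3) with the default BODY_25 model
print(pose_data[:, :2])  # the x, y coordinates used by the agnostic scripts later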

The image result looks like

More details about results can be found at openpose

2、Human Parse

In this section you can work on Colab, on a cloud server, or locally. Unfortunately, I did not manage to use the GPU on Colab and could only use the CPU, which is very slow for 768 × 1024 images (about 13 minutes per image).

Method 1: Colab

If that is acceptable to you, install TensorFlow 1.15; before doing so you have to switch the Python version to 3.7 or 3.6.

(1) Get pretrained model

%%bash
FILE_NAME='./CIHP_pgn.zip'
FILE_ID='1Mqpse5Gen4V4403wFEpv3w3JAsWw2uhk'
curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=$FILE_ID" > /dev/null
CODE="$(awk '/_warning_/ {print $NF}' /tmp/cookie)"
curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${CODE}&id=$FILE_ID" -o $FILE_NAME

Then unzip it:

!unzip CIHP_pgn.zip

(2) Get repo

!cp -r /content/drive/MyDrive/CIHP_PGN ./
%cd CIHP_PGN

Note: I just saved the repo and cleaned it for my own purposes, but you can use the officially provided code as well.

(3) Prepare data and model

!mkdir -p ./checkpoint
!mkdir -p ./datasets/images
# You also need to download the provided dataset or use your own images
!mv ../CIHP_pgn ./checkpoint/CIHP_pgn
!cp ../test/image/0000* ./datasets/images

(4) Configuration

Change to Python 3.6

!sudo update-alternatives --config python3
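You can quickly confirm the switch took effect:

!python3 --version  # should now report Python 3.6.x (or 3.7.x)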

Install dependencies (Tensorflow 1.15)

!sudo apt-get install python3-pip
!python -m pip install --upgrade pip
!pip install matplotlib opencv-python==4.2.0.32 Pillow scipy tensorflow==1.15
!pip install ipykernel

(5) Run

Now you can run the inference script:

!python ./inference_pgn.py

Note: in the official repo the script is named inf_pgn.py; it produces the same results as mine.

Finally, you get a result that looks like

More details can be found at CIHP_PGN

Method 2: Local or Server

In this section, I will explain in more detail what we actually need.

You need conda for this part; at least that is what I used.

(1) Create a new env for old-school TensorFlow

conda create -n tf python=3.7

(2) Configuration

conda activate tf

install GPU dependencies: cudatoolkit=10.0 cudnn=7.6.5

conda install -c conda-forge cudatoolkit=10.0 cudnn=7.6.5

install Tensorflow 1.15 GPU

pip install tensorflow-gpu==1.15

You may also need to install the following packages in the new env:

pip install scipy==1.7.3 opencv-python==4.5.5.62 protobuf==3.19.1 Pillow==9.0.1 matplotlib==3.5.1

More info about compatibility between TensorFlow and CUDA can be found here.
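Before running inference, you can check whether this env actually sees the GPU; a small sketch using the TF 1.x API:

# run inside the tf env; prints True only if TensorFlow 1.15 can use the GPU
import tensorflow as tf
print(tf.__version__)
print(tf.test.is_gpu_available())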

(3) Prepare data, repo and model as mentioned before

The final directory layout looks like

So you basically just put model under checkpoint/CIHP_pgn

And put data under datasets/images

The data can be just a few images of people. My cleaned version of the repo can be found on Google Drive; feel free to download it. If you use the officially provided inf_pgn.py, the same results will be generated.

(4) Run

python inference_pgn.py

Then you should see the output. Unfortunately, I did not manage to run inference on the GPU, neither on the server nor locally.

Locally, my GPU is an MX250 with 2 GB of memory, which is not enough for inference.
On the server, the GPU is an RTX A5000, but for some unknown reason, probably an incompatibility, the GPU is not used for inference, even though the model is successfully loaded onto it.

Fortunately, the server I used has 24 cores with 2 threads per core, which keeps inference reasonably fast (20 to 30 seconds per 768×1024 image) even on the CPU.

Final result looks like

However, the result inferred from a 768×1024 input is not the same as from a 192×256 input; the former looks worse, as shown above.

Note: The black images are what we actually need. The values in the colored one are, for example, 0, 51, 85, 128, 170, 221, 255, which are not in the 0–20 range and are inconsistent with HR-VITON. The values in the black one are, for example, 0, 2, 5, 10, 12, 13, 14, 15, which are exactly the labels needed for generating the agnostic images.

One thing to mention: the images provided in the official dataset keep both the visualization (colors) and the labels (0–20). I don't know how they did that. I also tried P mode in PIL, but found nothing.
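For what it's worth, a paletted ("P" mode) PNG can in principle store the label values as pixel data while displaying them as colors through a palette; whether the official dataset was produced exactly this way is an assumption. A minimal sketch:

import numpy as np
from PIL import Image

# label map with values 0-19 (the "black" output of CIHP_PGN); file name is just an example
labels = np.array(Image.open('./test/image-parse-v3/06868_00.png')).astype(np.uint8)

# colors taken from the label/color table below, in label order 0-19
colors = [(0,0,0), (128,0,0), (255,0,0), (0,85,0), (170,0,51), (255,85,0), (0,0,85),
          (0,119,221), (85,85,0), (0,85,85), (85,51,0), (52,86,128), (0,128,0),
          (0,0,255), (51,170,221), (0,255,255), (85,255,170), (170,255,85),
          (255,255,0), (255,170,0)]
palette = [c for rgb in colors for c in rgb]
palette += [0] * (768 - len(palette))  # pad to 256 * 3 entries

im = Image.fromarray(labels, mode='P')
im.putpalette(palette)
im.save('palette_example.png')
# np.unique(np.array(Image.open('palette_example.png'))) still returns the label values,
# but image viewers render the image with the colors above.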

The colors and corresponding labels (see the GitHub issue):

0 Background (0,0,0)
1 Hat (128,0,0)
2 Hair (255,0,0)
3 Glove (0,85,0)
4 Sunglasses (170,0,51)
5 Upper-clothes (255,85,0)
6 Dress (0,0,85)
7 Coat (0,119,221)
8 Socks (85,85,0)
9 Pants (0,85,85)
10 Torso-skin (85,51,0)
11 Scarf (52,86,128)
12 Skirt (0,128,0)
13 Face (0,0,255)
14 Left-arm (51,170,221)
15 Right-arm (0,255,255)
16 Left-leg (85,255,170)
17 Right-leg (170,255,85)
18 Left-shoe (255,255,0)
19 Right-shoe (255,170,0)

3、DensePose (On colab, GPU or CPU)

(1) get repo of detectron2

!git clone https://github.com/facebookresearch/detectron2

(2) install dependencies

!python -m pip install -e detectron2

(3) install packages for DensePose

%cd detectron2/projects/DensePose
!pip install av>=8.0.3 opencv-python-headless>=4.5.3.56 scipy>=1.5.4

(4) Prepare your images

!mkdir ./image_path
!cp /content/test/image/0000* ./image_path/

(5) Modify code

At the time I used DensePose there were some bugs, and I had to modify some code to make it work the way I wanted. By the time you follow this tutorial, things may have changed.

  • To get the same kind of input as HR-VITON, change ./densepose/vis/densepose_results.py at line 320: alpha=0.7 to alpha=1, and inplace=True to inplace=False.
  • Change ./densepose/vis/base.py at line 38. The change above is not enough on its own, because image_target_bgr = image_bgr * 0 makes a copy instead of a reference, so our result gets lost. Replace

image_target_bgr = image_bgr * 0

with

image_target_bgr = image_bgr
image_target_bgr *= 0

  • To keep the original file names and save into a dedicated directory, change apply_net.py at lines 286 and 287 to

out_fname = './image-densepose/' + image_fpath.split('/')[-1]
out_dir = './image-densepose'
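The copy-vs-reference distinction is plain NumPy behavior, not anything DensePose-specific; a tiny illustration:

import numpy as np

image_bgr = np.ones((2, 2, 3), dtype=np.uint8) * 255

a = image_bgr * 0      # new array: later writes to `a` never reach image_bgr
b = image_bgr          # same underlying buffer
b *= 0                 # in-place: image_bgr is now all zeros as well

print(a.base is image_bgr)  # False
print(image_bgr.sum())      # 0, because b aliases image_bgr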

(6) Run

If you are using the CPU, add --opts MODEL.DEVICE cpu to the end of the command below.

!python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml \
https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl \
image_path dp_segm -v

Then you get results that look like

4、Cloth Mask (On colab, GPU or CPU)

This is a lot easier.

(1) Install

!pip install carvekit_colab

(2) Download models

from carvekit.ml.files.models_loc import download_all
download_all();

(3) Prepare cloth images

!mkdir ./cloth
!cp ./test/cloth/0000* ./cloth/

prepare dir for results

!mkdir ./cloth_mask

(4) Run

#@title Upload images from your computer
#@markdown Description of parameters
#@markdown - `SHOW_FULLSIZE`  - Shows image in full size (may take a long time to load)
#@markdown - `PREPROCESSING_METHOD`  - Preprocessing method
#@markdown - `SEGMENTATION_NETWORK`  - Segmentation network. Use `u2net` for hairs-like objects and `tracer_b7` for objects
#@markdown - `POSTPROCESSING_METHOD`  - Postprocessing method
#@markdown - `SEGMENTATION_MASK_SIZE` - Segmentation mask size. Use 640 for Tracer B7 and 320 for U2Net
#@markdown - `TRIMAP_DILATION`  - The size of the offset radius from the object mask in pixels when forming an unknown area
#@markdown - `TRIMAP_EROSION`  - The number of iterations of erosion that the object's mask will be subjected to before forming an unknown area

import os
import numpy as np
from PIL import Image, ImageOps
from carvekit.web.schemas.config import MLConfig
from carvekit.web.utils.init_utils import init_interface

SHOW_FULLSIZE = False #@param {type:"boolean"}
PREPROCESSING_METHOD = "none" #@param ["stub", "none"]
SEGMENTATION_NETWORK = "tracer_b7" #@param ["u2net", "deeplabv3", "basnet", "tracer_b7"]
POSTPROCESSING_METHOD = "fba" #@param ["fba", "none"]
SEGMENTATION_MASK_SIZE = 640 #@param ["640", "320"] {type:"raw", allow-input: true}
TRIMAP_DILATION = 30 #@param {type:"integer"}
TRIMAP_EROSION = 5 #@param {type:"integer"}
DEVICE = 'cpu' # or 'cuda'

config = MLConfig(segmentation_network=SEGMENTATION_NETWORK,
                  preprocessing_method=PREPROCESSING_METHOD,
                  postprocessing_method=POSTPROCESSING_METHOD,
                  seg_mask_size=SEGMENTATION_MASK_SIZE,
                  trimap_dilation=TRIMAP_DILATION,
                  trimap_erosion=TRIMAP_EROSION,
                  device=DEVICE)
interface = init_interface(config)

# collect cloth images
imgs = []
root = '/content/cloth'
for name in os.listdir(root):
    imgs.append(root + '/' + name)

# run background removal
images = interface(imgs)

# binarize: cloth -> 255, background -> 0
for i, im in enumerate(images):
    img = np.array(im)
    img = img[..., :3]  # drop transparency
    idx = (img[..., 0] == 0) & (img[..., 1] == 0) & (img[..., 2] == 0)  # background 0 or 130, just try it
    img = np.ones(idx.shape) * 255
    img[idx] = 0
    im = Image.fromarray(np.uint8(img), 'L')
    im.save(f'./cloth_mask/{imgs[i].split("/")[-1].split(".")[0]}.jpg')

Make sure your cloth mask results are the same size as the input cloth images (768×1024). They look like

Note: you may have to tweak the code above to get the right results, because the generated output sometimes differs and I did not investigate this tool too deeply. In particular, for the line idx = (img[...,0]==0)&(img[...,1]==0)&(img[...,2]==0), you may get 0 or 130 as the background value depending on the model and settings you use.
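A quick way to see which background value you are actually getting is to inspect a corner pixel of the composited output before binarizing (this assumes the cloth never touches the image corners):

import numpy as np

# `images` comes from the carvekit cell above
sample = np.array(images[0])[..., :3]
print(sample[0, 0], sample[-1, -1])  # corner pixels: typically the background value (0 or 130)
print(np.unique(sample[..., 0]))     # full range of red-channel values, to sanity-check the threshold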

5、Parse Agnostic (On colab)

Here are the parse labels and the corresponding body parts; you may or may not need this.

Label (0–19) and body part:

0 Background
1 Hat
2 Hair
3 Glove
4 Sunglasses
5 Upper-clothes
6 Dress
7 Coat
8 Socks
9 Pants
10 Torso-skin
11 Scarf
12 Skirt
13 Face
14 Left-arm
15 Right-arm
16 Left-leg
17 Right-leg
18 Left-shoe
19 Right-shoe

(1) Install packages

!pip install Pillow tqdm

(2) Prepare data

After all the steps above you should have a data structure like this, under the test directory. If you are not sure which results go in which directory, check the official dataset structure; you can download it from here.

You can zip them into test.zip and unzip them on Colab with !unzip test.zip.

Note: the images under image-parse-v3 (black images carrying the labels) do not look the same as the official data (colored images carrying the labels), for the reason mentioned before.

(3) Run

import json
from os import path as osp
import os
import numpy as np
from PIL import Image, ImageDraw
from tqdm import tqdm


def get_im_parse_agnostic(im_parse, pose_data, w=768, h=1024):
    label_array = np.array(im_parse)
    parse_upper = ((label_array == 5).astype(np.float32) +
                   (label_array == 6).astype(np.float32) +
                   (label_array == 7).astype(np.float32))
    parse_neck = (label_array == 10).astype(np.float32)

    r = 10
    agnostic = im_parse.copy()

    # mask arms
    for parse_id, pose_ids in [(14, [2, 5, 6, 7]), (15, [5, 2, 3, 4])]:
        mask_arm = Image.new('L', (w, h), 'black')
        mask_arm_draw = ImageDraw.Draw(mask_arm)
        i_prev = pose_ids[0]
        for i in pose_ids[1:]:
            if (pose_data[i_prev, 0] == 0.0 and pose_data[i_prev, 1] == 0.0) or (pose_data[i, 0] == 0.0 and pose_data[i, 1] == 0.0):
                continue
            mask_arm_draw.line([tuple(pose_data[j]) for j in [i_prev, i]], 'white', width=r*10)
            pointx, pointy = pose_data[i]
            radius = r*4 if i == pose_ids[-1] else r*15
            mask_arm_draw.ellipse((pointx-radius, pointy-radius, pointx+radius, pointy+radius), 'white', 'white')
            i_prev = i
        parse_arm = (np.array(mask_arm) / 255) * (label_array == parse_id).astype(np.float32)
        agnostic.paste(0, None, Image.fromarray(np.uint8(parse_arm * 255), 'L'))

    # mask torso & neck
    agnostic.paste(0, None, Image.fromarray(np.uint8(parse_upper * 255), 'L'))
    agnostic.paste(0, None, Image.fromarray(np.uint8(parse_neck * 255), 'L'))

    return agnostic


if __name__ == "__main__":
    data_path = './test'
    output_path = './test/parse'
    os.makedirs(output_path, exist_ok=True)

    for im_name in tqdm(os.listdir(osp.join(data_path, 'image'))):
        # load pose image
        pose_name = im_name.replace('.jpg', '_keypoints.json')
        try:
            with open(osp.join(data_path, 'openpose_json', pose_name), 'r') as f:
                pose_label = json.load(f)
                pose_data = pose_label['people'][0]['pose_keypoints_2d']
                pose_data = np.array(pose_data)
                pose_data = pose_data.reshape((-1, 3))[:, :2]
        except IndexError:
            print(pose_name)
            continue

        # load parsing image
        parse_name = im_name.replace('.jpg', '.png')
        im_parse = Image.open(osp.join(data_path, 'image-parse-v3', parse_name))

        agnostic = get_im_parse_agnostic(im_parse, pose_data)
        agnostic.save(osp.join(output_path, parse_name))

You can check the results under ./test/parse, but they are all black as well. To make sure you are getting the right agnostic parse images, do the following:

import numpy as np
from PIL import Image

im_ori = Image.open('./test/image-parse-v3/06868_00.png')
im = Image.open('./test/parse/06868_00.png')
print(np.unique(np.array(im_ori)))
print(np.unique(np.array(im)))

The output may look like

[ 0  2  5  9 10 13 14 15]
[ 0  2  9 13 14 15]

The first row (the original parse) should be longer than the second (the agnostic parse), because the upper-clothes (5) and torso-skin/neck (10) labels, and parts of the arms, are masked out by the agnostic step.

You can also visualize it by

np_im = np.array(im)
np_im[np_im==2] = 151
np_im[np_im==9] = 178
np_im[np_im==13] = 191
np_im[np_im==14] = 221
np_im[np_im==15] = 246
Image.fromarray(np_im)

The result may look like the following, which is cloth-agnostic.

Save all the images under parse to image-parse-agnostic-v3.2
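For example, on Colab (assuming the output directory used by the script above):

!mkdir -p ./test/image-parse-agnostic-v3.2
!cp ./test/parse/*.png ./test/image-parse-agnostic-v3.2/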

6、Human Agnostic

The steps are almost the same as in the previous section.

(1) install

!pip install Pillow tqdm

(2) Prepare data

Now it looks like

(3) Run

import json
from os import path as osp
import os
import numpy as np
from PIL import Image, ImageDraw
from tqdm import tqdm


def get_img_agnostic(img, parse, pose_data):
    parse_array = np.array(parse)
    parse_head = ((parse_array == 4).astype(np.float32) +
                  (parse_array == 13).astype(np.float32))
    parse_lower = ((parse_array == 9).astype(np.float32) +
                   (parse_array == 12).astype(np.float32) +
                   (parse_array == 16).astype(np.float32) +
                   (parse_array == 17).astype(np.float32) +
                   (parse_array == 18).astype(np.float32) +
                   (parse_array == 19).astype(np.float32))

    agnostic = img.copy()
    agnostic_draw = ImageDraw.Draw(agnostic)

    length_a = np.linalg.norm(pose_data[5] - pose_data[2])
    length_b = np.linalg.norm(pose_data[12] - pose_data[9])
    point = (pose_data[9] + pose_data[12]) / 2
    pose_data[9] = point + (pose_data[9] - point) / length_b * length_a
    pose_data[12] = point + (pose_data[12] - point) / length_b * length_a
    r = int(length_a / 16) + 1

    # mask arms
    agnostic_draw.line([tuple(pose_data[i]) for i in [2, 5]], 'gray', width=r*10)
    for i in [2, 5]:
        pointx, pointy = pose_data[i]
        agnostic_draw.ellipse((pointx-r*5, pointy-r*5, pointx+r*5, pointy+r*5), 'gray', 'gray')
    for i in [3, 4, 6, 7]:
        if (pose_data[i - 1, 0] == 0.0 and pose_data[i - 1, 1] == 0.0) or (pose_data[i, 0] == 0.0 and pose_data[i, 1] == 0.0):
            continue
        agnostic_draw.line([tuple(pose_data[j]) for j in [i - 1, i]], 'gray', width=r*10)
        pointx, pointy = pose_data[i]
        agnostic_draw.ellipse((pointx-r*5, pointy-r*5, pointx+r*5, pointy+r*5), 'gray', 'gray')

    # mask torso
    for i in [9, 12]:
        pointx, pointy = pose_data[i]
        agnostic_draw.ellipse((pointx-r*3, pointy-r*6, pointx+r*3, pointy+r*6), 'gray', 'gray')
    agnostic_draw.line([tuple(pose_data[i]) for i in [2, 9]], 'gray', width=r*6)
    agnostic_draw.line([tuple(pose_data[i]) for i in [5, 12]], 'gray', width=r*6)
    agnostic_draw.line([tuple(pose_data[i]) for i in [9, 12]], 'gray', width=r*12)
    agnostic_draw.polygon([tuple(pose_data[i]) for i in [2, 5, 12, 9]], 'gray', 'gray')

    # mask neck
    pointx, pointy = pose_data[1]
    agnostic_draw.rectangle((pointx-r*7, pointy-r*7, pointx+r*7, pointy+r*7), 'gray', 'gray')

    agnostic.paste(img, None, Image.fromarray(np.uint8(parse_head * 255), 'L'))
    agnostic.paste(img, None, Image.fromarray(np.uint8(parse_lower * 255), 'L'))

    return agnostic


if __name__ == "__main__":
    data_path = './test'
    output_path = './test/parse'
    os.makedirs(output_path, exist_ok=True)

    for im_name in tqdm(os.listdir(osp.join(data_path, 'image'))):
        # load pose image
        pose_name = im_name.replace('.jpg', '_keypoints.json')
        try:
            with open(osp.join(data_path, 'openpose_json', pose_name), 'r') as f:
                pose_label = json.load(f)
                pose_data = pose_label['people'][0]['pose_keypoints_2d']
                pose_data = np.array(pose_data)
                pose_data = pose_data.reshape((-1, 3))[:, :2]
        except IndexError:
            print(pose_name)
            continue

        # load parsing image
        im = Image.open(osp.join(data_path, 'image', im_name))
        label_name = im_name.replace('.jpg', '.png')
        im_label = Image.open(osp.join(data_path, 'image-parse-v3', label_name))

        agnostic = get_img_agnostic(im, im_label, pose_data)
        agnostic.save(osp.join(output_path, im_name))

Results look like

Save them to the agnostic-v3.2 directory. Now you are almost done. The final structure of the preprocessing results is
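For reference, the test directory should end up looking roughly like this (a sketch based on the directories listed in the conclusion; the original post showed this as a screenshot):

test/
├── agnostic-v3.2/
├── cloth/
├── cloth-mask/
├── image/
├── image-densepose/
├── image-parse-agnostic-v3.2/
├── image-parse-v3/
├── openpose_img/
└── openpose_json/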

7、Conclusion

Thanks for reading. It is not easy to get all of this done. Before you run HR-VITON with your preprocessed dataset, note that each person image needs a corresponding cloth image even though it is not used during inference. If you don't want this behavior, you can either change the source code manually or just add some random images with the same names as the person images, as shown below. When everything is done, suppose you are testing 5 person images and 3 cloth images, all unpaired: you should end up with 3 images under the cloth dir and 3 under cloth-mask, and 5 images under each of the other dirs: agnostic-v3.2, image, image-densepose, image-parse-agnostic-v3.2, image-parse-v3, openpose_img, and openpose_json.
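If you go the placeholder route instead of editing the code, here is a minimal sketch (the donor file name is hypothetical; it copies one existing cloth and its mask under each person image's name):

import os
import shutil

person_dir = './test/image'
donor = '00001_00.jpg'  # hypothetical existing cloth image used as the placeholder

for name in os.listdir(person_dir):
    for d in ['cloth', 'cloth-mask']:
        dst = os.path.join('./test', d, name)
        if not os.path.exists(dst):
            shutil.copy(os.path.join('./test', d, donor), dst)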

Final test result
