Previous post in this series: 3D Vision — 3. Pose Estimation: Algorithm Comparison and Results (MediaPipe vs. OpenPose) https://blog.csdn.net/XiaoyYidiaodiao/article/details/125571632?spm=1001.2014.3001.5502
This post covers hand gesture recognition with MediaPipe, for both single frames and real-time video.


Single-Frame Gesture Recognition Code

Key Code, Briefly Explained

1. solutions.hands

import mediapipe as mp
mp_hands = mp.solutions.hands

The MediaPipe hands module (.solutions.hands) models a hand as 21 keypoints (indices 0-20), as shown in Figure 1. A gesture can be recognized by checking the angles formed at these keypoints. Keypoint 8 (the index fingertip) is especially important: HCI (human-computer interaction) applications are usually driven by it.
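As a minimal sketch of that angle idea (the `joint_angle` helper, the sample coordinates, and the 160-degree threshold below are my own illustrative assumptions, not part of the MediaPipe API): three consecutive keypoints along a finger, e.g. 6, 7, 8 on the index finger, form a joint angle near 180 degrees when the finger is straight and a much smaller angle when it is bent.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between the segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos))

# A straight finger gives a joint angle close to 180 degrees.
straight = joint_angle((0.0, 0.0), (0.0, 1.0), (0.0, 2.0))  # 180.0
bent = joint_angle((0.0, 0.0), (0.0, 1.0), (1.0, 1.0))      # 90.0

def finger_extended(pip, dip, tip, threshold=160.0):
    # assumed threshold: treat the finger as extended above ~160 degrees
    return joint_angle(pip, dip, tip) > threshold
```

A full classifier would evaluate this for every finger and map the resulting extended/bent pattern to a gesture name.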


2. mp_hands.Hands()

static_image_mode: whether the input is a static image (True) or a continuous video stream (False);

max_num_hands: the maximum number of hands to detect; the more hands, the slower the inference;

min_detection_confidence: the detection confidence threshold;

min_tracking_confidence: the confidence threshold for tracking landmarks across frames;

hands = mp_hands.Hands(static_image_mode=True,
                       max_num_hands=2,
                       min_detection_confidence=0.5,
                       min_tracking_confidence=0.5)

3. mp.solutions.drawing_utils

Drawing utilities:

draw = mp.solutions.drawing_utils
draw.draw_landmarks(img, hand, mp_hands.HAND_CONNECTIONS)

Full Code

import cv2
import mediapipe as mp
import matplotlib.pyplot as plt

if __name__ == '__main__':
    mp_hands = mp.solutions.hands
    hands = mp_hands.Hands(static_image_mode=True,
                           max_num_hands=2,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5)
    draw = mp.solutions.drawing_utils
    img = cv2.imread("3.jpg")
    # flip horizontally
    img = cv2.flip(img, 1)
    # BGR to RGB
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    results = hands.process(img)
    # detected hands
    if results.multi_hand_landmarks:
        for hand_idx, _ in enumerate(results.multi_hand_landmarks):
            hand = results.multi_hand_landmarks[hand_idx]
            draw.draw_landmarks(img, hand, mp_hands.HAND_CONNECTIONS)
    plt.imshow(img)
    plt.show()

Run result


Parsing the Output

import cv2
import mediapipe as mp
import matplotlib.pyplot as plt
if __name__ == "__main__":
    mp_hands = mp.solutions.hands
    hands = mp_hands.Hands(static_image_mode=True,
                           max_num_hands=2,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5)
    draw = mp.solutions.drawing_utils
    img = cv2.imread("3.jpg")
    img = cv2.flip(img, 1)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    results = hands.process(img)
    print(results.multi_handedness)

Handedness and Confidence

print(results.multi_handedness)

The output is shown below, where label indicates which hand (left or right) and score is the confidence.

[classification {
  index: 0
  score: 0.9686367511749268
  label: "Left"
}
, classification {
  index: 1
  score: 0.9293265342712402
  label: "Right"
}
]

Accessing the label (left/right) or confidence of the hand at index 1

print(results.multi_handedness[1].classification[0].label)
print(results.multi_handedness[1].classification[0].score)

Output:

Right
0.9293265342712402

Keypoint Coordinates

Print the coordinates of all hand keypoints:

print(results.multi_hand_landmarks)

The output is below. Note that z is neither a true metric depth nor a plain normalized value: it is measured relative to keypoint 0 (the wrist root), with more negative values being closer to the camera. In that sense the output is "2.5D" rather than full 3D.

[landmark {
  x: 0.2627016007900238
  y: 0.6694213151931763
  z: 5.427047540251806e-07
}
landmark {
  x: 0.33990585803985596
  y: 0.6192424297332764
  z: -0.03650109842419624
}
... (landmarks 2-19 of the first hand, in the same format) ...
landmark {
  x: 0.05292154848575592
  y: 0.3294071555137634
  z: -0.10204877704381943
}
, landmark {
  x: 0.7470765113830566
  y: 0.6501559019088745
  z: 5.611524898085918e-07
}
... (landmarks 1-19 of the second hand, in the same format) ...
landmark {
  x: 0.9464077949523926
  y: 0.297884464263916
  z: -0.109173983335495
}
]
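To make the relative-z point concrete, here is a quick arithmetic sketch using the wrist (keypoint 0) and index-fingertip (keypoint 8) z values of the first detected hand in this run; the `depth_z` convention (wrist z minus keypoint z) is the same one the optimized code later in this post uses:

```python
# z values copied from the detector output for the first hand
wrist_z = 5.427047540251806e-07      # keypoint 0 (wrist); the reference, essentially 0
index_tip_z = -0.08096525818109512   # keypoint 8 (index fingertip)

# More negative z means closer to the camera, so a positive depth_z
# says the fingertip sits in front of the wrist.
depth_z = wrist_z - index_tip_z
print("{:.3f}".format(depth_z))  # 0.081
```

These are relative units, not centimeters, which is exactly why the text calls the output 2.5D.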

Counting the Number of Hands

print(len(results.multi_hand_landmarks))

Output:

2

Getting the Coordinates of Keypoint 3 of the Hand at Index 1

print(results.multi_hand_landmarks[1].landmark[3])

Output:

x: 0.5764827728271484
y: 0.4352443814277649
z: -0.05264268442988396

The x and y values above are normalized by the image width and height. To convert them into pixel coordinates in the image:

height, width, _ = img.shape
print("height:{},width:{}".format(height, width))
cx = results.multi_hand_landmarks[1].landmark[3].x * width
cy = results.multi_hand_landmarks[1].landmark[3].y * height
print("cx:{}, cy:{}".format(cx, cy))

Output:

height:512,width:720
cx:415.0675964355469, cy:222.84512329101562

Keypoint Connections

print(mp_hands.HAND_CONNECTIONS)

Output:

frozenset({(3, 4), (0, 5), (17, 18), (0, 17), (13, 14), (13, 17), (18, 19), (5, 6), (5, 9), (14, 15), (0, 1), (9, 10), (1, 2), (9, 13), (10, 11), (19, 20), (6, 7), (15, 16), (2, 3), (11, 12), (7, 8)})
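Each pair in that frozenset is one edge of the hand skeleton (a start and an end keypoint index), which is what `draw_landmarks` traverses when it draws the connecting lines. As a hedged sketch of doing that by hand: only the connection pairs below are real MediaPipe values; the normalized `points` are made-up stand-ins for detected landmarks.

```python
# A few real edges from mp_hands.HAND_CONNECTIONS (the thumb chain);
# the `points` below are invented normalized coordinates, not real output.
connections = {(0, 1), (1, 2), (2, 3), (3, 4)}
points = [(0.5, 0.9), (0.45, 0.8), (0.42, 0.7), (0.40, 0.62), (0.39, 0.55)]
width, height = 720, 512  # image size from the example above

def to_pixels(p):
    # normalized coordinates -> integer pixel coordinates
    return int(p[0] * width), int(p[1] * height)

segments = [(to_pixels(points[a]), to_pixels(points[b])) for a, b in sorted(connections)]
for start, end in segments:
    # on a real image each segment would be drawn with:
    # cv2.line(img, start, end, (0, 255, 0), 2)
    print(start, "->", end)
```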

Displaying Left/Right Hand Info

    if results.multi_hand_landmarks:
        hand_info = ""
        for h_idx, hand in enumerate(results.multi_hand_landmarks):
            hand_info += str(h_idx) + ":" + str(results.multi_handedness[h_idx].classification[0].label) + ", "
        print(hand_info)

Output:

0:Left, 1:Right,

Optimized Single-Frame Gesture Recognition

Full Code

import cv2
import mediapipe as mp
import matplotlib.pyplot as plt

if __name__ == '__main__':
    mp_hands = mp.solutions.hands
    hands = mp_hands.Hands(static_image_mode=True,
                           max_num_hands=2,
                           min_detection_confidence=0.5,
                           min_tracking_confidence=0.5)
    draw = mp.solutions.drawing_utils
    img = cv2.imread("3.jpg")
    img = cv2.flip(img, 1)
    height, width, _ = img.shape
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    results = hands.process(img)
    if results.multi_hand_landmarks:
        handness_str = ''
        index_finger_tip_str = ''
        for h_idx, hand in enumerate(results.multi_hand_landmarks):
            # get the 21 hand keypoints
            hand_info = results.multi_hand_landmarks[h_idx]
            draw.draw_landmarks(img, hand_info, mp_hands.HAND_CONNECTIONS)
            # log left/right hand info
            temp_handness = results.multi_handedness[h_idx].classification[0].label
            handness_str += "{}:{} ".format(h_idx, temp_handness)
            cz0 = hand_info.landmark[0].z
            for idx, keypoint in enumerate(hand_info.landmark):
                cx = int(keypoint.x * width)
                cy = int(keypoint.y * height)
                cz = keypoint.z
                depth_z = cz0 - cz
                # scale the circle radius by relative depth
                radius = int(6 * (1 + depth_z))
                if idx == 0:
                    img = cv2.circle(img, (cx, cy), radius * 2, (0, 0, 255), -1)
                elif idx == 8:
                    img = cv2.circle(img, (cx, cy), radius * 2, (193, 182, 255), -1)
                    index_finger_tip_str += '{}:{:.2f} '.format(h_idx, depth_z)
                elif idx in [1, 5, 9, 13, 17]:
                    img = cv2.circle(img, (cx, cy), radius, (16, 144, 247), -1)
                elif idx in [2, 6, 10, 14, 18]:
                    img = cv2.circle(img, (cx, cy), radius, (1, 240, 255), -1)
                elif idx in [3, 7, 11, 15, 19]:
                    img = cv2.circle(img, (cx, cy), radius, (140, 47, 240), -1)
                elif idx in [4, 12, 16, 20]:
                    img = cv2.circle(img, (cx, cy), radius, (223, 155, 60), -1)
        img = cv2.putText(img, handness_str, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
        img = cv2.putText(img, index_finger_tip_str, (10, 80), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 255), 2)
    plt.imshow(img)
    plt.show()

I dropped the Morandi color palette this time; a friend said the Morandi colors were actually less legible than the defaults.

Run result


Video Gesture Recognition

1. Gesture Recognition from a Webcam

Simplified code

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(static_image_mode=False,
                       max_num_hands=2,
                       min_tracking_confidence=0.5,
                       min_detection_confidence=0.5)
draw = mp.solutions.drawing_utils


def process_frame(img):
    img = cv2.flip(img, 1)
    height, width, _ = img.shape
    # MediaPipe expects RGB input; cv2 frames are BGR
    results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for h_idx, hand in enumerate(results.multi_hand_landmarks):
            draw.draw_landmarks(img, hand, mp_hands.HAND_CONNECTIONS)
    return img


if __name__ == '__main__':
    cap = cv2.VideoCapture()
    cap.open(0)
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            raise ValueError("Error")
        frame = process_frame(frame)
        cv2.imshow("hand", frame)
        if cv2.waitKey(1) in [ord('q'), 27]:
            break
    cap.release()
    cv2.destroyAllWindows()

Optimized code

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(static_image_mode=False,
                       max_num_hands=2,
                       min_tracking_confidence=0.5,
                       min_detection_confidence=0.5)
draw = mp.solutions.drawing_utils


def process_frame(img):
    img = cv2.flip(img, 1)
    height, width, _ = img.shape
    # MediaPipe expects RGB input; cv2 frames are BGR
    results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    handness_str = ''
    index_finger_tip_str = ''
    if results.multi_hand_landmarks:
        for h_idx, hand in enumerate(results.multi_hand_landmarks):
            draw.draw_landmarks(img, hand, mp_hands.HAND_CONNECTIONS)
            # get the 21 hand keypoints
            hand_info = results.multi_hand_landmarks[h_idx]
            # log left/right hand info
            temp_handness = results.multi_handedness[h_idx].classification[0].label
            handness_str += "{}:{} ".format(h_idx, temp_handness)
            cz0 = hand_info.landmark[0].z
            for idx, keypoint in enumerate(hand_info.landmark):
                cx = int(keypoint.x * width)
                cy = int(keypoint.y * height)
                cz = keypoint.z
                depth_z = cz0 - cz
                # scale the circle radius by relative depth
                radius = int(6 * (1 + depth_z))
                if idx == 0:
                    img = cv2.circle(img, (cx, cy), radius * 2, (0, 0, 255), -1)
                elif idx == 8:
                    img = cv2.circle(img, (cx, cy), radius * 2, (193, 182, 255), -1)
                    index_finger_tip_str += '{}:{:.2f} '.format(h_idx, depth_z)
                elif idx in [1, 5, 9, 13, 17]:
                    img = cv2.circle(img, (cx, cy), radius, (16, 144, 247), -1)
                elif idx in [2, 6, 10, 14, 18]:
                    img = cv2.circle(img, (cx, cy), radius, (1, 240, 255), -1)
                elif idx in [3, 7, 11, 15, 19]:
                    img = cv2.circle(img, (cx, cy), radius, (140, 47, 240), -1)
                elif idx in [4, 12, 16, 20]:
                    img = cv2.circle(img, (cx, cy), radius, (223, 155, 60), -1)
        img = cv2.putText(img, handness_str, (25, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
        img = cv2.putText(img, index_finger_tip_str, (25, 100), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 255), 2)
    return img


if __name__ == '__main__':
    cap = cv2.VideoCapture()
    cap.open(0)
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            raise ValueError("Error")
        frame = process_frame(frame)
        cv2.imshow("hand", frame)
        if cv2.waitKey(1) in [ord('q'), 27]:
            break
    cap.release()
    cv2.destroyAllWindows()

This code runs correctly; the output is not shown here.


2. Real-Time Gesture Recognition on a Video File

Simplified full code

import cv2
from tqdm import tqdm
import mediapipe as mp

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(static_image_mode=False,
                       max_num_hands=2,
                       min_tracking_confidence=0.5,
                       min_detection_confidence=0.5)
draw = mp.solutions.drawing_utils


def process_frame(img):
    img = cv2.flip(img, 1)
    height, width, _ = img.shape
    # MediaPipe expects RGB input; cv2 frames are BGR
    results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for h_idx, hand in enumerate(results.multi_hand_landmarks):
            draw.draw_landmarks(img, hand, mp_hands.HAND_CONNECTIONS)
    return img


def out_video(input):
    file = input.split("/")[-1]
    output = "out-" + file
    print("It will start processing video: {}".format(input))
    cap = cv2.VideoCapture(input)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frame_size = (cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    # create a VideoWriter; the fourcc code selects the video codec
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    fps = cap.get(cv2.CAP_PROP_FPS)
    out = cv2.VideoWriter(output, fourcc, fps, (int(frame_size[0]), int(frame_size[1])))
    with tqdm(range(frame_count)) as pbar:
        while cap.isOpened():
            success, frame = cap.read()
            if not success:
                break
            frame = process_frame(frame)
            out.write(frame)
            pbar.update(1)
        pbar.close()
    cv2.destroyAllWindows()
    out.release()
    cap.release()
    print("{} finished!".format(output))


if __name__ == '__main__':
    video_dirs = "7.mp4"
    out_video(video_dirs)

Run result

The test video is just one I found online for this experiment.

Gesture recognition


Optimized full code

import cv2
import time
from tqdm import tqdm
import mediapipe as mp

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(static_image_mode=False,
                       max_num_hands=2,
                       min_tracking_confidence=0.5,
                       min_detection_confidence=0.5)
draw = mp.solutions.drawing_utils


def process_frame(img):
    t0 = time.time()
    img = cv2.flip(img, 1)
    height, width, _ = img.shape
    # MediaPipe expects RGB input; cv2 frames are BGR
    results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    handness_str = ''
    index_finger_tip_str = ''
    if results.multi_hand_landmarks:
        for h_idx, hand in enumerate(results.multi_hand_landmarks):
            draw.draw_landmarks(img, hand, mp_hands.HAND_CONNECTIONS)
            # get the 21 hand keypoints
            hand_info = results.multi_hand_landmarks[h_idx]
            # log left/right hand info
            temp_handness = results.multi_handedness[h_idx].classification[0].label
            handness_str += "{}:{} ".format(h_idx, temp_handness)
            cz0 = hand_info.landmark[0].z
            for idx, keypoint in enumerate(hand_info.landmark):
                cx = int(keypoint.x * width)
                cy = int(keypoint.y * height)
                cz = keypoint.z
                depth_z = cz0 - cz
                # scale the circle radius by relative depth
                radius = int(6 * (1 + depth_z))
                if idx == 0:
                    img = cv2.circle(img, (cx, cy), radius * 2, (0, 0, 255), -1)
                elif idx == 8:
                    img = cv2.circle(img, (cx, cy), radius * 2, (193, 182, 255), -1)
                    index_finger_tip_str += '{}:{:.2f} '.format(h_idx, depth_z)
                elif idx in [1, 5, 9, 13, 17]:
                    img = cv2.circle(img, (cx, cy), radius, (16, 144, 247), -1)
                elif idx in [2, 6, 10, 14, 18]:
                    img = cv2.circle(img, (cx, cy), radius, (1, 240, 255), -1)
                elif idx in [3, 7, 11, 15, 19]:
                    img = cv2.circle(img, (cx, cy), radius, (140, 47, 240), -1)
                elif idx in [4, 12, 16, 20]:
                    img = cv2.circle(img, (cx, cy), radius, (223, 155, 60), -1)
        img = cv2.putText(img, handness_str, (25, 100), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
        img = cv2.putText(img, index_finger_tip_str, (25, 150), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 255), 2)
    fps = 1 / (time.time() - t0)
    img = cv2.putText(img, "FPS:" + str(int(fps)), (25, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return img


def out_video(input):
    file = input.split("/")[-1]
    output = "out-optim-" + file
    print("It will start processing video: {}".format(input))
    cap = cv2.VideoCapture(input)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frame_size = (cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    # create a VideoWriter; the fourcc code selects the video codec
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    fps = cap.get(cv2.CAP_PROP_FPS)
    out = cv2.VideoWriter(output, fourcc, fps, (int(frame_size[0]), int(frame_size[1])))
    with tqdm(range(frame_count)) as pbar:
        while cap.isOpened():
            success, frame = cap.read()
            if not success:
                break
            frame = process_frame(frame)
            out.write(frame)
            pbar.update(1)
        pbar.close()
    cv2.destroyAllWindows()
    out.release()
    cap.release()
    print("{} finished!".format(output))


if __name__ == '__main__':
    video_dirs = "7.mp4"
    out_video(video_dirs)

Run result

Gesture recognition (optimized)


To be continued...
