1. Overview
OpenPose was originally proposed by Carnegie Mellon University, and its implementation is based on the models introduced in several successively published papers:
CVPR 2016: Convolutional Pose Machines (CPM)
CVPR 2017: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
CVPR 2017: Hand Keypoint Detection in Single Images using Multiview Bootstrapping
However, the full model is computationally very heavy: it generally has to run on a GPU, and even then the frame rate is low (below 5 fps). A number of improved versions have appeared since.
The improved versions mainly modify or prune the model. On mobile (for example, the various dance-pose apps), getting OpenPose to run at all requires both changing the network structure and optimizing the algorithm itself to reduce computation, with a corresponding drop in accuracy.
2. A Simplified OpenPose Implementation
The code comes from GitHub: human-pose-estimation-opencv
The code is fairly simple. The trained model (small: 7.8 MB) is provided in graph_opt.pb, and the entire implementation is in openpose.py. The implementation code and test results follow:
# To use Inference Engine backend, specify location of plugins:
# export LD_LIBRARY_PATH=/opt/intel/deeplearning_deploymenttoolkit/deployment_tools/external/mklml_lnx/lib:$LD_LIBRARY_PATH
import cv2 as cv
import numpy as np
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--input', help='Path to image or video. Skip to capture frames from camera')
parser.add_argument('--thr', default=0.2, type=float, help='Threshold value for pose parts heat map')
parser.add_argument('--width', default=368, type=int, help='Resize input to specific width.')
parser.add_argument('--height', default=368, type=int, help='Resize input to specific height.')

args = parser.parse_args()

BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
               "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
               "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
               "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
               ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
               ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
               ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
               ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]

inWidth = args.width
inHeight = args.height

net = cv.dnn.readNetFromTensorflow("graph_opt.pb")

cap = cv.VideoCapture(args.input if args.input else 0)

while cv.waitKey(1) < 0:
    hasFrame, frame = cap.read()
    if not hasFrame:
        cv.waitKey()
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]

    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight), (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # MobileNet output [1, 57, -1, -1], we only need the first 19 elements

    assert(len(BODY_PARTS) == out.shape[1])

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice heatmap of corresponding body's part.
        heatMap = out[0, i, :, :]

        # Originally, we try to find all the local maximums. To simplify a sample
        # we just find a global one. However only a single pose at the same time
        # could be detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Add a point if its confidence is higher than threshold.
        points.append((int(x), int(y)) if conf > args.thr else None)

    for pair in POSE_PAIRS:
        partFrom = pair[0]
        partTo = pair[1]
        assert(partFrom in BODY_PARTS)
        assert(partTo in BODY_PARTS)

        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]

        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)

    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))

    cv.imshow('OpenPose using OpenCV', frame)
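The script can be run directly; with no --input argument it captures from the webcam. A typical single-image invocation (the image path here is just a placeholder) would be:

python openpose.py --input ./test.jpg --thr 0.2

--thr is the confidence threshold defined in the argument parser above: candidate keypoints scoring below it are dropped.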
The detection results are shown below:

The two images below show good detection results for a single person in a spread-out pose:

The two images below show poorer detection results for multiple people or unusual poses (wrong limb connections, missed keypoints, and so on):

The processing time shown in the top-left corner indicates that inference is slow: roughly 0.5 s per image.
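Since the frame is resized to --width × --height (368 × 368 by default) before inference, one simple way to trade accuracy for speed is to lower these values, for example:

python openpose.py --input ./test.jpg --width 256 --height 256

This only shrinks the network input; how much accuracy is lost at a given resolution has to be checked empirically.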
3. A More Complete OpenPose Implementation
The code comes from GitHub: camera-openpose-keras
This code is somewhat more complex than the previous one. The trained model (larger: 200 MB) can be downloaded separately; since the original link can be hard to reach, it is also available on Baidu Cloud: model.h5
The complete implementation is in demo_camera.py. Below, the code is changed slightly to read an image for testing:
import argparse
import cv2
import math
import time
import numpy as np
import util
from config_reader import config_reader
from scipy.ndimage.filters import gaussian_filter
from model import get_testing_model

tic = 0
# find connection in the specified sequence, center 29 is in the position 15
limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \
           [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \
           [1, 16], [16, 18], [3, 17], [6, 18]]

# the middle joints heatmap correspondence
mapIdx = [[31, 32], [39, 40], [33, 34], [35, 36], [41, 42], [43, 44], [19, 20], [21, 22], \
          [23, 24], [25, 26], [27, 28], [29, 30], [47, 48], [49, 50], [53, 54], [51, 52], \
          [55, 56], [37, 38], [45, 46]]

# visualize
colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0],
          [0, 255, 0], [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255],
          [0, 0, 255], [85, 0, 255], [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]]


def process(input_image, params, model_params):

    oriImg = cv2.imread(input_image)  # B,G,R order
    multiplier = [x * model_params['boxsize'] / oriImg.shape[0] for x in params['scale_search']]

    heatmap_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 19))
    paf_avg = np.zeros((oriImg.shape[0], oriImg.shape[1], 38))

    # for m in range(len(multiplier)):  # full multi-scale search
    for m in range(1):  # only the first scale is used here, to save time
        scale = multiplier[m]

        imageToTest = cv2.resize(oriImg, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
        imageToTest_padded, pad = util.padRightDownCorner(imageToTest, model_params['stride'],
                                                          model_params['padValue'])

        input_img = np.transpose(np.float32(imageToTest_padded[:, :, :, np.newaxis]),
                                 (3, 0, 1, 2))  # required shape (1, width, height, channels)

        output_blobs = model.predict(input_img)

        # extract outputs, resize, and remove padding
        heatmap = np.squeeze(output_blobs[1])  # output 1 is heatmaps
        heatmap = cv2.resize(heatmap, (0, 0), fx=model_params['stride'], fy=model_params['stride'],
                             interpolation=cv2.INTER_CUBIC)
        heatmap = heatmap[:imageToTest_padded.shape[0] - pad[2],
                          :imageToTest_padded.shape[1] - pad[3], :]
        heatmap = cv2.resize(heatmap, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)

        paf = np.squeeze(output_blobs[0])  # output 0 is PAFs
        paf = cv2.resize(paf, (0, 0), fx=model_params['stride'], fy=model_params['stride'],
                         interpolation=cv2.INTER_CUBIC)
        paf = paf[:imageToTest_padded.shape[0] - pad[2], :imageToTest_padded.shape[1] - pad[3], :]
        paf = cv2.resize(paf, (oriImg.shape[1], oriImg.shape[0]), interpolation=cv2.INTER_CUBIC)

        # still divided by len(multiplier), as in the original multi-scale code
        heatmap_avg = heatmap_avg + heatmap / len(multiplier)
        paf_avg = paf_avg + paf / len(multiplier)

    # find candidate keypoints as local maxima of each of the 18 part heatmaps
    all_peaks = []
    peak_counter = 0
    prinfTick(1)
    for part in range(18):
        map_ori = heatmap_avg[:, :, part]
        map = gaussian_filter(map_ori, sigma=3)

        map_left = np.zeros(map.shape)
        map_left[1:, :] = map[:-1, :]
        map_right = np.zeros(map.shape)
        map_right[:-1, :] = map[1:, :]
        map_up = np.zeros(map.shape)
        map_up[:, 1:] = map[:, :-1]
        map_down = np.zeros(map.shape)
        map_down[:, :-1] = map[:, 1:]

        # a peak must dominate its 4-neighbourhood and exceed the threshold thre1
        peaks_binary = np.logical_and.reduce(
            (map >= map_left, map >= map_right, map >= map_up, map >= map_down, map > params['thre1']))
        peaks = list(zip(np.nonzero(peaks_binary)[1], np.nonzero(peaks_binary)[0]))  # note reverse
        peaks_with_score = [x + (map_ori[x[1], x[0]],) for x in peaks]
        id = range(peak_counter, peak_counter + len(peaks))
        peaks_with_score_and_id = [peaks_with_score[i] + (id[i],) for i in range(len(id))]

        all_peaks.append(peaks_with_score_and_id)
        peak_counter += len(peaks)

    # score candidate limbs by sampling the PAF along the segment joining two peaks
    connection_all = []
    special_k = []
    mid_num = 10
    prinfTick(2)
    for k in range(len(mapIdx)):
        score_mid = paf_avg[:, :, [x - 19 for x in mapIdx[k]]]
        candA = all_peaks[limbSeq[k][0] - 1]
        candB = all_peaks[limbSeq[k][1] - 1]
        nA = len(candA)
        nB = len(candB)
        indexA, indexB = limbSeq[k]
        if nA != 0 and nB != 0:
            connection_candidate = []
            for i in range(nA):
                for j in range(nB):
                    vec = np.subtract(candB[j][:2], candA[i][:2])
                    norm = math.sqrt(vec[0] * vec[0] + vec[1] * vec[1])
                    # failure case when 2 body parts overlap
                    if norm == 0:
                        continue
                    vec = np.divide(vec, norm)

                    startend = list(zip(np.linspace(candA[i][0], candB[j][0], num=mid_num),
                                        np.linspace(candA[i][1], candB[j][1], num=mid_num)))

                    vec_x = np.array(
                        [score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 0]
                         for I in range(len(startend))])
                    vec_y = np.array(
                        [score_mid[int(round(startend[I][1])), int(round(startend[I][0])), 1]
                         for I in range(len(startend))])

                    # dot product of the sampled PAF vectors with the limb direction,
                    # plus a penalty for limbs longer than half the image height
                    score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1])
                    score_with_dist_prior = sum(score_midpts) / len(score_midpts) + min(
                        0.5 * oriImg.shape[0] / norm - 1, 0)
                    criterion1 = len(np.nonzero(score_midpts > params['thre2'])[0]) > 0.8 * len(
                        score_midpts)
                    criterion2 = score_with_dist_prior > 0
                    if criterion1 and criterion2:
                        connection_candidate.append([i, j, score_with_dist_prior,
                                                     score_with_dist_prior + candA[i][2] + candB[j][2]])

            # greedily keep the best-scoring connections, at most one per candidate part
            connection_candidate = sorted(connection_candidate, key=lambda x: x[2], reverse=True)
            connection = np.zeros((0, 5))
            for c in range(len(connection_candidate)):
                i, j, s = connection_candidate[c][0:3]
                if i not in connection[:, 3] and j not in connection[:, 4]:
                    connection = np.vstack([connection, [candA[i][3], candB[j][3], s, i, j]])
                    if len(connection) >= min(nA, nB):
                        break

            connection_all.append(connection)
        else:
            special_k.append(k)
            connection_all.append([])

    # assemble connections into per-person subsets:
    # last number in each row is the total parts number of that person,
    # the second last number in each row is the score of the overall configuration
    subset = -1 * np.ones((0, 20))
    candidate = np.array([item for sublist in all_peaks for item in sublist])
    prinfTick(3)
    for k in range(len(mapIdx)):
        if k not in special_k:
            partAs = connection_all[k][:, 0]
            partBs = connection_all[k][:, 1]
            indexA, indexB = np.array(limbSeq[k]) - 1

            for i in range(len(connection_all[k])):  # = 1:size(temp,1)
                found = 0
                subset_idx = [-1, -1]
                for j in range(len(subset)):  # 1:size(subset,1):
                    if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:
                        subset_idx[found] = j
                        found += 1

                if found == 1:
                    j = subset_idx[0]
                    if subset[j][indexB] != partBs[i]:
                        subset[j][indexB] = partBs[i]
                        subset[j][-1] += 1
                        subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]
                elif found == 2:  # if found 2 and disjoint, merge them
                    j1, j2 = subset_idx
                    membership = ((subset[j1] >= 0).astype(int) + (subset[j2] >= 0).astype(int))[:-2]
                    if len(np.nonzero(membership == 2)[0]) == 0:  # merge
                        subset[j1][:-2] += (subset[j2][:-2] + 1)
                        subset[j1][-2:] += subset[j2][-2:]
                        subset[j1][-2] += connection_all[k][i][2]
                        subset = np.delete(subset, j2, 0)
                    else:  # as like found == 1
                        subset[j1][indexB] = partBs[i]
                        subset[j1][-1] += 1
                        subset[j1][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]

                # if find no partA in the subset, create a new subset
                elif not found and k < 17:
                    row = -1 * np.ones(20)
                    row[indexA] = partAs[i]
                    row[indexB] = partBs[i]
                    row[-1] = 2
                    row[-2] = sum(candidate[connection_all[k][i, :2].astype(int), 2]) + \
                              connection_all[k][i][2]
                    subset = np.vstack([subset, row])

    # delete rows of subset in which too few parts occur or the average score is low
    deleteIdx = []
    for i in range(len(subset)):
        if subset[i][-1] < 4 or subset[i][-2] / subset[i][-1] < 0.4:
            deleteIdx.append(i)
    subset = np.delete(subset, deleteIdx, axis=0)

    # draw detected keypoints and limbs on a fresh copy of the input image
    canvas = cv2.imread(input_image)  # B,G,R order
    for i in range(18):
        for j in range(len(all_peaks[i])):
            cv2.circle(canvas, all_peaks[i][j][0:2], 4, colors[i], thickness=-1)

    stickwidth = 4
    for i in range(17):
        for n in range(len(subset)):
            index = subset[n][np.array(limbSeq[i]) - 1]
            if -1 in index:
                continue
            cur_canvas = canvas.copy()
            Y = candidate[index.astype(int), 0]
            X = candidate[index.astype(int), 1]
            mX = np.mean(X)
            mY = np.mean(Y)
            length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5
            angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1]))
            polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle),
                                       0, 360, 1)
            cv2.fillConvexPoly(cur_canvas, polygon, colors[i])
            canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0)

    return canvas


def prinfTick(i):
    toc = time.time()
    print('processing time %d is %.5f' % (i, toc - tic))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--image', type=str, default='sample_images/ski.jpg', help='input image')
    parser.add_argument('--output', type=str, default='result.png', help='output image')
    parser.add_argument('--model', type=str, default='model/keras/model.h5', help='path to the weights file')

    args = parser.parse_args()
    input_image = args.image
    output = args.output
    keras_weights_file = args.model

    tic = time.time()
    print('start processing...')

    # load model

    # authors of original model don't use
    # vgg normalization (subtracting mean) on input images
    model = get_testing_model()
    model.load_weights(keras_weights_file)

    cap = cv2.VideoCapture(0)
    vi = cap.isOpened()
    if vi == False:
        # no camera available: process the input image instead
        time.sleep(2)  # this delay is required, otherwise initialization fails
        # fr = cv2.imread('./sample_images/ski2.jpg', 1)
        tic = time.time()
        # cv2.imwrite(input_image, fr)
        params, model_params = config_reader()
        canvas = process(input_image, params, model_params)
        cv2.imshow("capture", canvas)
        # cv2.waitKey(0)  # changed so that pressing ESC exits the program
        key = cv2.waitKey(0)
        if key == 27:
            cv2.destroyAllWindows()

    if vi == True:
        cap.set(3, 160)  # frame width
        cap.set(4, 120)  # frame height
        time.sleep(2)  # this delay is required, otherwise initialization fails

        while True:
            tic = time.time()

            ret, frame = cap.read()
            cv2.imwrite(input_image, frame)
            params, model_params = config_reader()

            # generate image with body parts
            canvas = process(input_image, params, model_params)
            cv2.imshow("capture", canvas)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()
    cv2.destroyAllWindows()
The program is run as follows:

python demo_camera.py --image ./sample_images/ski.jpg
--image is followed by the path of the input image.
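Note that the script also parses an --output path but, as written, never uses it: process() returns the rendered canvas, which is only displayed via cv2.imshow. If saving the result is desired, a one-line addition after the process() call would suffice, for example:

cv2.imwrite(output, canvas)  # hypothetical addition: save the rendered result to the --output path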
The detection results are as follows:

First, the two images that the simplified version handled poorly were re-run through this model for comparison:

In addition, tests on other images show that when the person is clearly visible, the joint keypoints are detected fairly accurately.

---------------------
Author: yph001
Source: CSDN
Original: https://blog.csdn.net/yph001/article/details/83218839
Copyright notice: this is an original post by the blogger; please include a link to the original when reposting.
