3D Point Cloud Learning (4) 5: DBSCAN Python Implementation - 2 - kd-tree Acceleration

In the previous chapter, DBSCAN built a full N x N distance matrix, which is very expensive in both memory and time. Here we optimize further with a kd-tree: the neighborhoods are obtained with radius NN searches on the kd-tree, which speeds up the computation considerably.
Previous post: DBSCAN Python Implementation - 1 - Distance Matrix Method
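For intuition (this estimate is not part of the original scripts), the memory footprint of the full float64 distance matrix grows quadratically with the point count N, while the kd-tree version never materializes that matrix:

# rough sketch: memory needed by an N x N float64 distance matrix
# (the demo below uses 400 + 600 + 1000 = 2000 points)
for N in (2000, 20000, 200000):
    print("N = %6d  ->  N x N distance matrix: %10.1f MB" % (N, N * N * 8 / 1e6))
# the kd-tree path only ever stores the neighbor indices returned by each
# radius query, so its memory use stays roughly linear in N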

Final DBSCAN clustering results using the hand-written kd-tree, the scipy kd-tree, and the sklearn kd-tree

Original data:

Hand-written kd-tree radius NN

Number of clusters: 3
dbscan time: 3.233520

Using the kd-tree from scipy.spatial

The white points are noise.

Before optimization (distance matrix method):

Number of clusters: 4
dbscan time: 19.526319

After optimization with the hand-written kd-tree, the speed improves by roughly 20x:

Number of clusters: 3
dbscan time: 1.043557

With the sklearn kd-tree, it is faster still:

Number of clusters: 5
dbscan time: 0.030102

The main optimized parts, as code snippets:

Building the kd-tree:

    n = len(data)
    # build the kd-tree
    leaf_size = 4
    root = kdtree.kdtree_construction(data, leaf_size=leaf_size)
    # step 1: initialize the core-point set T, cluster count k, cluster list C, and the unvisited set
    T = set()                   # core-point set
    k = 0                       # cluster counter
    C = []                      # list of clusters
    unvisited = set(range(n))   # all points start unvisited

Step 2: find all core points with kd-tree radius NN searches

    # step 2: find all core points with kd-tree radius NN searches
    for d in range(n):
        result_set = RadiusNNResultSet(radius=eps)                # radius NN search with radius eps
        kdtree.kdtree_radius_search(root, data, result_set, data[d])
        nearest_idx = result_set.radius_nn_output_index()
        if len(nearest_idx) >= Minpts:                            # at least Minpts neighbors -> core point
            T.add(d)

Getting the neighbors of new_core:

            # kd-tree radius NN search around new_core
            result_set = RadiusNNResultSet(radius=eps)            # radius NN search with radius eps
            kdtree.kdtree_radius_search(root, data, result_set, data[new_core])
            new_core_nearest = result_set.radius_nn_output_index()   # neighbors of new_core

Complete code

#DBSCAN_fast.py
# File: DBSCAN clustering of a 2D point cloud, accelerated with a hand-written kd-tree
import numpy as np
import matplotlib.pyplot as plt
from result_set import KNNResultSet, RadiusNNResultSet
import kdtree as kdtree
import time

plt.style.use('seaborn')

# plot a point cloud with matplotlib
def Point_Show(point, color):
    x = []
    y = []
    point = np.asarray(point)
    for i in range(len(point)):
        x.append(point[i][0])
        y.append(point[i][1])
    plt.scatter(x, y, color=color)

# build the full distance matrix (kept from the previous post for comparison, not used below)
def my_distance_Marix(data):
    S = np.zeros((len(data), len(data)))            # initialize the n*n distance matrix
    for i in range(len(data)):                      # i: row
        for j in range(len(data)):                  # j: column
            S[i][j] = np.linalg.norm(data[i] - data[j])   # Euclidean distance between point i and point j
    return S

# @profile
def DBSCAN(data, eps, Minpts):
    """
    Density-based point cloud clustering.
    :param data: N x D array of points
    :param eps: neighborhood search radius
    :param Minpts: minimum number of neighbors for a core point
    :return: list of clusters (each a set of point indices) and the number of clusters
    """
    n = len(data)
    # build the kd-tree
    leaf_size = 4
    root = kdtree.kdtree_construction(data, leaf_size=leaf_size)
    # step 1: initialize the core-point set T, cluster count k, cluster list C, and the unvisited set
    T = set()                   # core-point set
    k = 0                       # cluster counter
    C = []                      # list of clusters
    unvisited = set(range(n))   # all points start unvisited
    # step 2: find all core points with kd-tree radius NN searches
    for d in range(n):
        result_set = RadiusNNResultSet(radius=eps)                # radius NN search with radius eps
        kdtree.kdtree_radius_search(root, data, result_set, data[d])
        nearest_idx = result_set.radius_nn_output_index()
        if len(nearest_idx) >= Minpts:                            # at least Minpts neighbors -> core point
            T.add(d)
    # step 3: grow clusters until every core point has been visited
    while len(T):
        unvisited_old = unvisited                          # snapshot of the unvisited set
        core = list(T)[np.random.randint(0, len(T))]       # pick a random core point
        unvisited = unvisited - set([core])                # mark it visited
        visited = []
        visited.append(core)
        while len(visited):
            new_core = visited[0]
            # kd-tree radius NN search around new_core
            result_set = RadiusNNResultSet(radius=eps)
            kdtree.kdtree_radius_search(root, data, result_set, data[new_core])
            new_core_nearest = result_set.radius_nn_output_index()   # neighbors of new_core
            if len(new_core_nearest) >= Minpts:
                S = unvisited & set(new_core_nearest)      # unvisited points reachable from this core
                visited += list(S)                         # they will be expanded in later iterations
                unvisited = unvisited - S                  # and are marked visited now
            visited.remove(new_core)                       # new_core has been expanded
        k += 1                                             # one more cluster finished
        cluster = unvisited_old - unvisited                # everything visited in this round forms the cluster
        T = T - cluster                                    # drop the core points absorbed by this cluster
        C.append(cluster)
    print("Number of clusters: %d" % k)
    return C, k

# generate synthetic data
def generate_X(true_Mu, true_Var):
    # first cluster
    num1, mu1, var1 = 400, true_Mu[0], true_Var[0]
    X1 = np.random.multivariate_normal(mu1, np.diag(var1), num1)
    # second cluster
    num2, mu2, var2 = 600, true_Mu[1], true_Var[1]
    X2 = np.random.multivariate_normal(mu2, np.diag(var2), num2)
    # third cluster
    num3, mu3, var3 = 1000, true_Mu[2], true_Var[2]
    X3 = np.random.multivariate_normal(mu3, np.diag(var3), num3)
    # stack them together
    X = np.vstack((X1, X2, X3))
    # show the raw data
    plt.figure(figsize=(10, 8))
    plt.axis([-10, 15, -5, 15])
    plt.scatter(X1[:, 0], X1[:, 1], s=5)
    plt.scatter(X2[:, 0], X2[:, 1], s=5)
    plt.scatter(X3[:, 0], X3[:, 1], s=5)
    # plt.show()
    return X

if __name__ == '__main__':
    # generate data
    true_Mu = [[0.5, 0.5], [5.5, 2.5], [1, 7]]
    true_Var = [[1, 3], [2, 2], [6, 2]]
    X = generate_X(true_Mu, true_Var)
    begin_t = time.time()
    index, k = DBSCAN(X, eps=0.5, Minpts=15)
    dbscan_time = time.time() - begin_t
    print("dbscan time: %f" % dbscan_time)
    cluster = [[] for i in range(k)]
    for i in range(k):
        cluster[i] = [X[j] for j in index[i]]
    Point_Show(cluster[0], color="red")
    Point_Show(cluster[1], color="orange")
    Point_Show(cluster[2], color="blue")
    plt.show()
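As an optional sanity check (not in the original post), the cluster count can be compared against sklearn.cluster.DBSCAN run with the same eps and min_samples; the numbers need not match exactly, since border-point assignment depends on the random visiting order above. A sketch, assuming it runs after the __main__ block where X is already generated:

# optional cross-check against sklearn's reference implementation
from sklearn.cluster import DBSCAN as SKDBSCAN

labels = SKDBSCAN(eps=0.5, min_samples=15).fit_predict(X)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # label -1 marks noise
print("sklearn.cluster.DBSCAN found %d clusters" % n_clusters)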
#kdtree.py
import random
import math
import numpy as np

from result_set import KNNResultSet, RadiusNNResultSet

class Node:
    def __init__(self, axis, value, left, right, point_indices):
        self.axis = axis
        self.value = value
        self.left = left
        self.right = right
        self.point_indices = point_indices

    def is_leaf(self):
        if self.value is None:
            return True
        else:
            return False

    def __str__(self):
        output = ''
        output += 'axis %d, ' % self.axis
        if self.value is None:
            output += 'split value: leaf, '
        else:
            output += 'split value: %.2f, ' % self.value
        output += 'point_indices: '
        output += str(self.point_indices.tolist())
        return output

def sort_key_by_vale(key, value):
    assert key.shape == value.shape      # assert raises if the condition is False
    assert len(key.shape) == 1           # both are 1-D arrays
    sorted_idx = np.argsort(value)       # ascending sort of the values
    key_sorted = key[sorted_idx]
    value_sorted = value[sorted_idx]
    return key_sorted, value_sorted

def axis_round_robin(axis, dim):         # rotate the split axis
    if axis == dim - 1:
        return 0
    else:
        return axis + 1

def kdtree_recursive_build(root, db, point_indices, axis, leaf_size):    # build the kd-tree
    """
    :param root:
    :param db: NxD
    :param point_indices: M
    :param axis: scalar
    :param leaf_size: scalar
    :return:
    """
    if root is None:
        root = Node(axis, None, None, None, point_indices)            # instantiate a Node
    # determine whether to split into left and right
    if len(point_indices) > leaf_size:                                 # split only if the node holds too many points
        # --- get the split position ---
        point_indices_sorted, _ = sort_key_by_vale(point_indices, db[point_indices, axis])  # sort the points along the split axis
        middle_left_idx = math.ceil(point_indices_sorted.shape[0] / 2) - 1     # split in half
        middle_left_point_idx = point_indices_sorted[middle_left_idx]          # last point of the left half
        middle_left_point_value = db[middle_left_point_idx, axis]
        middle_right_idx = middle_left_idx + 1
        middle_right_point_idx = point_indices_sorted[middle_right_idx]
        middle_right_point_value = db[middle_right_point_idx, axis]            # first point of the right half
        root.value = (middle_left_point_value + middle_right_point_value) * 0.5  # the split value is the midpoint
        # === get the split position ===
        root.left = kdtree_recursive_build(root.left,
                                           db,
                                           point_indices_sorted[0:middle_right_idx],
                                           axis_round_robin(axis, dim=db.shape[1]),
                                           leaf_size)
        root.right = kdtree_recursive_build(root.right,
                                            db,
                                            point_indices_sorted[middle_right_idx:],
                                            axis_round_robin(axis, dim=db.shape[1]),
                                            leaf_size)
    return root

def traverse_kdtree(root: Node, depth, max_depth):      # compute the depth of the kd-tree
    depth[0] += 1
    if max_depth[0] < depth[0]:
        max_depth[0] = depth[0]
    if root.is_leaf():                                  # print the leaf nodes
        print(root)
    else:
        traverse_kdtree(root.left, depth, max_depth)
        traverse_kdtree(root.right, depth, max_depth)
    depth[0] -= 1

def kdtree_construction(db_np, leaf_size):
    N, dim = db_np.shape[0], db_np.shape[1]
    # build the kd-tree recursively
    root = None
    root = kdtree_recursive_build(root,
                                  db_np,
                                  np.arange(N),
                                  axis=0,
                                  leaf_size=leaf_size)
    return root

def kdtree_knn_search(root: Node, db: np.ndarray, result_set: KNNResultSet, query: np.ndarray):
    if root is None:
        return False
    if root.is_leaf():
        # compare the contents of a leaf
        leaf_points = db[root.point_indices, :]
        diff = np.linalg.norm(np.expand_dims(query, 0) - leaf_points, axis=1)
        for i in range(diff.shape[0]):
            result_set.add_point(diff[i], root.point_indices[i])
        return False
    if query[root.axis] <= root.value:      # the query lies in the left partition, so search it first
        kdtree_knn_search(root.left, db, result_set, query)
        if math.fabs(query[root.axis] - root.value) < result_set.worstDist():
            # the splitting plane is closer than the current worst distance,
            # so the right side may still contain neighbors
            kdtree_knn_search(root.right, db, result_set, query)
    else:
        kdtree_knn_search(root.right, db, result_set, query)
        if math.fabs(query[root.axis] - root.value) < result_set.worstDist():
            kdtree_knn_search(root.left, db, result_set, query)
    return False

def kdtree_radius_search(root: Node, db: np.ndarray, result_set: RadiusNNResultSet, query: np.ndarray):
    if root is None:
        return False
    if root.is_leaf():
        # compare the contents of a leaf (brute force inside the leaf)
        leaf_points = db[root.point_indices, :]
        diff = np.linalg.norm(np.expand_dims(query, 0) - leaf_points, axis=1)
        for i in range(diff.shape[0]):
            result_set.add_point(diff[i], root.point_indices[i])
        return False
    if query[root.axis] <= root.value:
        kdtree_radius_search(root.left, db, result_set, query)
        if math.fabs(query[root.axis] - root.value) < result_set.worstDist():
            kdtree_radius_search(root.right, db, result_set, query)
    else:
        kdtree_radius_search(root.right, db, result_set, query)
        if math.fabs(query[root.axis] - root.value) < result_set.worstDist():
            kdtree_radius_search(root.left, db, result_set, query)
    return False

def main():
    # configuration
    db_size = 64
    dim = 3                # 3-D points
    leaf_size = 4
    k = 1                  # number of nearest neighbors
    db_np = np.random.rand(db_size, dim)

    root = kdtree_construction(db_np, leaf_size=leaf_size)

    depth = [0]
    max_depth = [0]
    traverse_kdtree(root, depth, max_depth)
    print("tree max depth: %d" % max_depth[0])

    query = np.asarray([0, 0, 0])
    result_set = KNNResultSet(capacity=k)
    kdtree_knn_search(root, db_np, result_set, query)
    print(result_set)
    print(db_np)

    # diff = np.linalg.norm(np.expand_dims(query, 0) - db_np, axis=1)
    # nn_idx = np.argsort(diff)
    # nn_dist = diff[nn_idx]
    # print(nn_idx[0:k])
    # print(nn_dist[0:k])
    #
    # print("Radius search:")
    # query = np.asarray([0, 0, 0])
    # result_set = RadiusNNResultSet(radius=0.5)
    # kdtree_radius_search(root, db_np, result_set, query)
    # print(result_set)

if __name__ == '__main__':
    main()
#result_set.py
import copy

class DistIndex:
    def __init__(self, distance, index):
        self.distance = distance
        self.index = index

    def __lt__(self, other):
        return self.distance < other.distance

class KNNResultSet:
    def __init__(self, capacity):
        self.capacity = capacity
        self.count = 0
        self.worst_dist = 1e10
        self.dist_index_list = []
        self.output_index = []
        for i in range(capacity):
            self.dist_index_list.append(DistIndex(self.worst_dist, 0))
        self.comparison_counter = 0

    def size(self):
        return self.count

    def full(self):
        return self.count == self.capacity

    def worstDist(self):
        return self.worst_dist

    def add_point(self, dist, index):
        self.comparison_counter += 1
        if dist > self.worst_dist:
            return
        if self.count < self.capacity:
            self.count += 1
        i = self.count - 1
        while i > 0:
            if self.dist_index_list[i - 1].distance > dist:
                self.dist_index_list[i] = copy.deepcopy(self.dist_index_list[i - 1])
                i -= 1
            else:
                break
        self.dist_index_list[i].distance = dist
        self.dist_index_list[i].index = index
        self.worst_dist = self.dist_index_list[self.capacity - 1].distance

    def __str__(self):
        output = ''
        for i, dist_index in enumerate(self.dist_index_list):
            output += '%d - %.2f\n' % (dist_index.index, dist_index.distance)
        output += 'In total %d comparison operations.' % self.comparison_counter
        return output

    def knn_output_index(self):
        output = ''
        for i, dist_index in enumerate(self.dist_index_list):
            output += '%d - %.2f\n' % (dist_index.index, dist_index.distance)
            self.output_index.append(dist_index.index)
        output += 'In total %d comparison operations.' % self.comparison_counter
        return self.output_index

class RadiusNNResultSet:
    def __init__(self, radius):
        self.radius = radius
        self.count = 0
        self.worst_dist = radius
        self.dist_index_list = []
        self.output_index = []
        self.comparison_counter = 0

    def size(self):
        return self.count

    def worstDist(self):
        return self.radius

    def add_point(self, dist, index):
        self.comparison_counter += 1
        if dist > self.radius:
            return
        self.count += 1
        self.dist_index_list.append(DistIndex(dist, index))

    def __str__(self):
        self.dist_index_list.sort()
        output = ''
        for i, dist_index in enumerate(self.dist_index_list):
            output += '%d - %.2f\n' % (dist_index.index, dist_index.distance)
        output += 'In total %d neighbors within %f.\nThere are %d comparison operations.' \
                  % (self.count, self.radius, self.comparison_counter)
        return output

    def radius_nn_output_index(self):
        output = ''
        for i, dist_index in enumerate(self.dist_index_list):
            output += '%d - %.2f\n' % (dist_index.index, dist_index.distance)
            self.output_index.append(dist_index.index)
        output += 'In total %d comparison operations.' % self.comparison_counter
        return self.output_index
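For reference, here is a minimal standalone usage sketch of the radius search that DBSCAN_fast.py relies on; the toy 100-point cloud is made up for illustration, everything else uses the kdtree.py and result_set.py code above:

# minimal usage sketch of the hand-written radius NN search defined above
import numpy as np
import kdtree
from result_set import RadiusNNResultSet

db = np.random.rand(100, 2)                          # toy 2D point cloud (illustration only)
root = kdtree.kdtree_construction(db, leaf_size=4)   # build the tree once
result_set = RadiusNNResultSet(radius=0.2)           # collect everything within distance 0.2
kdtree.kdtree_radius_search(root, db, result_set, db[0])
print(result_set.radius_nn_output_index())           # neighbor indices of db[0] (includes db[0] itself)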

Using sklearn's KDTree to optimize DBSCAN
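The key change relative to the hand-written version is that sklearn.neighbors.KDTree answers batch queries: a single query_radius call returns the eps-neighborhood of every point, so the per-point Python-level search loop disappears. A minimal sketch (the toy data and parameter values simply mirror the demo below):

# minimal sketch of the batch radius query that replaces the per-point search
import numpy as np
from sklearn.neighbors import KDTree

data = np.random.rand(2000, 2)                 # stand-in for the generated point cloud
tree = KDTree(data, leaf_size=4)
nearest_idx = tree.query_radius(data, r=0.05)  # one array of neighbor indices per point
core_points = {d for d in range(len(data)) if len(nearest_idx[d]) >= 15}
print("core points found:", len(core_points))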

Complete code

#DBSCAN_fast.py
# File: DBSCAN clustering accelerated with the sklearn kd-tree
import numpy as np
from numpy import *
import scipy
import pylab
import random, math
from numpy.random import rand
from numpy import square, sqrt
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from scipy.stats import multivariate_normal
from result_set import KNNResultSet, RadiusNNResultSet
from sklearn.cluster import KMeans
import  kdtree as kdtree
import  time
#from scipy.spatial import KDTree
from sklearn.neighbors import KDTree # KDTree 进行搜索
import copy

plt.style.use('seaborn')

# plot a point cloud colored by cluster index
def Point_Show(point, point_index):
    def colormap(c, num_clusters):
        if c == -1:
            color = [1] * 3          # noise points are shown in white
        else:
            color = [0] * 3
            color[c % 3] = c / num_clusters
        return color
    x = []
    y = []
    num_clusters = max(point_index) + 1
    point = np.asarray(point)
    for i in range(len(point)):
        x.append(point[i][0])
        y.append(point[i][1])
    plt.scatter(x, y, color=[colormap(c, num_clusters) for c in point_index])
    plt.show()

# build the full distance matrix (kept from the previous post for comparison, not used below)
def my_distance_Marix(data):
    S = np.zeros((len(data), len(data)))            # initialize the n*n distance matrix
    for i in range(len(data)):                      # i: row
        for j in range(len(data)):                  # j: column
            S[i][j] = np.linalg.norm(data[i] - data[j])   # Euclidean distance between point i and point j
    return S

# @profile
def DBSCAN(data, eps, Minpts):
    """
    Density-based point cloud clustering.
    :param data: N x D array of points
    :param eps: neighborhood search radius
    :param Minpts: minimum number of neighbors for a core point
    :return: per-point cluster index (noise is labeled -1)
    """
    n = len(data)
    # build the kd-tree
    leaf_size = 4
    tree = KDTree(data, leaf_size)
    # step 1: initialize the core-point set T, cluster count k, cluster labels, and the unvisited set
    T = set()                                  # core-point set
    k = 0                                      # current cluster index
    cluster_index = np.zeros(n, dtype=int)     # per-point cluster label
    unvisited = set(range(n))                  # all points start unvisited
    # step 2: a single batch radius query returns the eps-neighborhood of every point
    nearest_idx = tree.query_radius(data, eps)  # nearest_idx[d] holds the neighbor indices of point d
    for d in range(n):
        if len(nearest_idx[d]) >= Minpts:      # at least Minpts neighbors -> core point
            T.add(d)
    # step 3: grow clusters until every core point has been visited
    while len(T):
        unvisited_old = unvisited                          # snapshot of the unvisited set
        core = list(T)[np.random.randint(0, len(T))]       # pick a random core point
        unvisited = unvisited - set([core])                # mark it visited
        visited = []
        visited.append(core)
        while len(visited):
            new_core = visited[0]
            if new_core in T:                              # expand only if the current point is a core point
                S = unvisited & set(nearest_idx[new_core]) # unvisited points reachable from this core
                visited += list(S)                         # they will be expanded in later iterations
                unvisited = unvisited - S                  # and are marked visited now
            visited.remove(new_core)                       # new_core has been expanded
        cluster = unvisited_old - unvisited                # everything visited in this round forms the cluster
        T = T - cluster                                    # drop the core points absorbed by this cluster
        cluster_index[list(cluster)] = k
        k += 1                                             # one more cluster finished
    noise_cluster = unvisited
    cluster_index[list(noise_cluster)] = -1                # remaining unvisited points are noise
    print(cluster_index)
    print("Number of clusters: %d" % k)
    return cluster_index

# generate synthetic data
def generate_X(true_Mu, true_Var):
    # first cluster
    num1, mu1, var1 = 400, true_Mu[0], true_Var[0]
    X1 = np.random.multivariate_normal(mu1, np.diag(var1), num1)
    # second cluster
    num2, mu2, var2 = 600, true_Mu[1], true_Var[1]
    X2 = np.random.multivariate_normal(mu2, np.diag(var2), num2)
    # third cluster
    num3, mu3, var3 = 1000, true_Mu[2], true_Var[2]
    X3 = np.random.multivariate_normal(mu3, np.diag(var3), num3)
    # stack them together
    X = np.vstack((X1, X2, X3))
    # show the raw data
    plt.figure(figsize=(10, 8))
    plt.axis([-10, 15, -5, 15])
    plt.scatter(X1[:, 0], X1[:, 1], s=5)
    plt.scatter(X2[:, 0], X2[:, 1], s=5)
    plt.scatter(X3[:, 0], X3[:, 1], s=5)
    # plt.show()
    return X

if __name__ == '__main__':
    # generate data
    true_Mu = [[0.5, 0.5], [5.5, 2.5], [1, 7]]
    true_Var = [[1, 3], [2, 2], [6, 2]]
    X = generate_X(true_Mu, true_Var)
    cluster_index = DBSCAN(X, eps=0.5, Minpts=15)
    Point_Show(X, cluster_index)

 
