Clustering is an unsupervised classification method: the samples carry no class labels in advance, so they must be grouped automatically according to the distances or similarities between them. Clustering algorithms fall into partition-based, connectivity-based, density-based, and probability-model-based methods, among others; K-means belongs to the partition-based family.

1. Basic Principles
A partition-based clustering method divides the vector space formed by the sample set into several regions, each represented by a region center. Every sample x can then be mapped to a region center via

    q(x) = \sum_{i=1}^{k} c_i \, l(x \in R_i),

where c_i is the center of region R_i and l(\cdot) is an indicator function (1 if its argument holds, 0 otherwise).
Using the mapping q(x), each sample is assigned to its corresponding center, which yields the final partition.
Partition-based clustering algorithms differ mainly in how they construct the mapping q(x). In the classical K-means algorithm, the mapping is determined by the criterion of minimizing the sum of squared distances between the samples and their assigned centers.
Given a sample set X = \{x_1, x_2, \ldots, x_n\}, the K-means algorithm aims to partition the data into k (k < n) classes S = \{S_1, S_2, \ldots, S_k\} such that the within-class sum of squared distances is minimized:

    \arg\min_{S} \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^2,

where \mu_i is the mean (centroid) of the samples in S_i:

    \mu_i = \frac{1}{|S_i|} \sum_{x \in S_i} x.
Minimizing this objective exactly is an NP-hard problem, so a stable global optimum cannot be guaranteed. The classical algorithm therefore adopts an iterative optimization strategy that efficiently finds a local optimum of the objective.
The algorithm proceeds as follows:
Step 1: Initialize the cluster centers, e.g., by taking the first k samples of the data set or by selecting k samples at random;
Step 2: Assign each sample to the nearest cluster; at iteration t, sample x_j is assigned to S_i^{(t)} if

    \lVert x_j - \mu_i^{(t)} \rVert^2 \le \lVert x_j - \mu_p^{(t)} \rVert^2

for every p = 1, 2, \ldots, k with p \ne i.
Step 3: Using the assignments from Step 2, update each cluster center:

    \mu_i^{(t+1)} = \frac{1}{|S_i^{(t)}|} \sum_{x_j \in S_i^{(t)}} x_j
Step 4: If the maximum number of iterations has been reached, or the change between two consecutive iterations is below a preset threshold, i.e. \sum_{i=1}^{k} \lVert \mu_i^{(t+1)} - \mu_i^{(t)} \rVert < \varepsilon, the iteration terminates; otherwise, return to Step 2.
Steps 2 and 3 respectively reassign the samples and recompute the cluster centers; iterating the two steps optimizes the objective function and drives the within-class sum of squared distances to a (local) minimum.
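As a small worked example, take the one-dimensional samples \{1, 2, 9, 10\} with k = 2 and the first two samples as initial centers, \mu_1 = 1 and \mu_2 = 2. Step 2 assigns \{1\} to S_1 and \{2, 9, 10\} to S_2; Step 3 then updates the centers to \mu_1 = 1 and \mu_2 = 7. The next iteration assigns \{1, 2\} to S_1 and \{9, 10\} to S_2, giving \mu_1 = 1.5 and \mu_2 = 9.5, after which the assignments no longer change and the iteration stops.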

2. Optimizations of the K-means Algorithm
2.1 Optimizing the initialization of the cluster centers
K-means is sensitive to the initialization of the cluster centers: different initial values can produce different clustering results. This is because K-means only finds an approximate local optimum of the objective function and cannot guarantee the global optimum, so for some data distributions the clustering result can deviate substantially depending on the initialization.
Below we introduce an improved algorithm, K-means++, which generates effective initial cluster centers.
First, one sample is selected at random as the initial cluster center c_1.
Then, for each sample x, the algorithm computes D(x), the shortest distance from x to the centers already chosen, and from it the selection probability

    P(x) = \frac{D(x)^2}{\sum_{x \in X} D(x)^2}.

The sample with the largest probability value (or, in the randomized variant, a sample drawn according to P(x)) is added as the next cluster center. This is repeated until k centers have been selected.
The K-means++ seeding has computational complexity O(knd), so it adds little overhead, while provably bringing the algorithm closer to the optimal solution.
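The following is a minimal C sketch of the K-means++ seeding step, in the randomized variant described above. It is independent of the implementation in the appendix; the function and variable names (kmeanspp_init, dist_sq, D) are illustrative, not from the original post.

#include <stdlib.h>

static double dist_sq(int dim, const double *a, const double *b)
{
    double s = 0;
    for (int d = 0; d < dim; d++) {
        double t = a[d] - b[d];
        s += t * t;
    }
    return s;
}

// X: n points of dimension dim; centroid: output buffer of k*dim doubles
void kmeanspp_init(int dim, const double *X, int n, int k, double *centroid)
{
    double *D = (double *)malloc(sizeof(double) * n);

    // 1) pick the first center uniformly at random
    int first = rand() % n;
    for (int d = 0; d < dim; d++)
        centroid[d] = X[first*dim + d];

    for (int c = 1; c < k; c++)
    {
        // 2) D[i]: squared distance from each point to its nearest chosen center
        double total = 0;
        for (int i = 0; i < n; i++)
        {
            double best = dist_sq(dim, &X[i*dim], &centroid[0]);
            for (int j = 1; j < c; j++)
            {
                double dd = dist_sq(dim, &X[i*dim], &centroid[j*dim]);
                if (dd < best) best = dd;
            }
            D[i] = best;
            total += best;
        }

        // 3) sample the next center with probability D(x)^2 / sum D(x)^2
        //    (D already holds squared distances)
        double r = ((double)rand() / RAND_MAX) * total;
        int next = n - 1;
        double acc = 0;
        for (int i = 0; i < n; i++) {
            acc += D[i];
            if (acc >= r) { next = i; break; }
        }
        for (int d = 0; d < dim; d++)
            centroid[c*dim + d] = X[next*dim + d];
    }
    free(D);
}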
2.2 Adaptively determining the number of classes
In the classical K-means algorithm, the number of clusters k is fixed in advance, so the algorithm cannot adaptively choose the number of classes. Yet in clustering, the choice of the number of classes largely determines the quality of the result.
The ISODATA algorithm follows the same basic principle as K-means, clustering by minimizing the within-class sum of squared distances, but it introduces a merge-and-split mechanism for the classes during the iterations.
In each iteration, ISODATA first clusters with the number of classes held fixed, then merges clusters whose centers are closer than a preset distance threshold, and uses the covariance information of the samples within each class S_i to decide whether that class should be split.
However, ISODATA is considerably less efficient than K-means.
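As a rough illustration of the merge-and-split tests, the following C sketch shows the two decisions in their simplest form. The threshold names (theta_merge, theta_split) and the use of per-axis standard deviations in place of the full covariance matrix are simplifying assumptions, not details from a specific ISODATA implementation.

// merge two clusters if their centers are closer than theta_merge
int should_merge(int dim, const double *c1, const double *c2, double theta_merge)
{
    double dist_sq = 0;
    for (int d = 0; d < dim; d++) {
        double t = c1[d] - c2[d];
        dist_sq += t * t;
    }
    return dist_sq < theta_merge * theta_merge;   // compare squared distances
}

// split a cluster if its samples are too dispersed along any axis
int should_split(int dim, const double *axis_stddev, double theta_split)
{
    for (int d = 0; d < dim; d++)
        if (axis_stddev[d] > theta_split)
            return 1;
    return 0;
}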
Appendix: A C implementation of the K-means algorithm
#include <math.h>
#include <stdlib.h>
#include <stdio.h>

#define sqr(x) ((x)*(x))
#define MAX_CLUSTERS 16
#define MAX_ITERATIONS 100
#define BIG_double (INFINITY)

void fail(char *str)
{
    printf("%s", str);
    exit(-1);
}

double calc_distance(int dim, double *p1, double *p2)
{
    double distance_sq_sum = 0;
    for (int ii = 0; ii < dim; ii++)
        distance_sq_sum += sqr(p1[ii] - p2[ii]);
    return distance_sq_sum;
}

void calc_all_distances(int dim, int n, int k, double *X, double *centroid, double *distance_output)
{
    for (int ii = 0; ii < n; ii++)        // for each point
        for (int jj = 0; jj < k; jj++)    // for each cluster
        {
            // calculate distance between point and cluster centroid
            distance_output[ii*k + jj] = calc_distance(dim, &X[ii*dim], &centroid[jj*dim]);
        }
}

double calc_total_distance(int dim, int n, int k, double *X, double *centroids, int *cluster_assignment_index)
// NOTE: a point with cluster assignment -1 is ignored
{
    double tot_D = 0;
    // for every point
    for (int ii = 0; ii < n; ii++)
    {
        // which cluster is it in?
        int active_cluster = cluster_assignment_index[ii];
        // sum distance
        if (active_cluster != -1)
            tot_D += calc_distance(dim, &X[ii*dim], &centroids[active_cluster*dim]);
    }
    return tot_D;
}

void choose_all_clusters_from_distances(int dim, int n, int k, double *distance_array, int *cluster_assignment_index)
{
    // for each point
    for (int ii = 0; ii < n; ii++)
    {
        int best_index = -1;
        double closest_distance = BIG_double;

        // for each cluster
        for (int jj = 0; jj < k; jj++)
        {
            // distance between point and cluster centroid
            double cur_distance = distance_array[ii*k + jj];
            if (cur_distance < closest_distance)
            {
                best_index = jj;
                closest_distance = cur_distance;
            }
        }

        // record in array
        cluster_assignment_index[ii] = best_index;
    }
}

void calc_cluster_centroids(int dim, int n, int k, double *X, int *cluster_assignment_index, double *new_cluster_centroid)
{
    int cluster_member_count[MAX_CLUSTERS];

    // initialize cluster centroid coordinate sums to zero
    for (int ii = 0; ii < k; ii++)
    {
        cluster_member_count[ii] = 0;
        for (int jj = 0; jj < dim; jj++)
            new_cluster_centroid[ii*dim + jj] = 0;
    }

    // sum all points: for every point
    for (int ii = 0; ii < n; ii++)
    {
        // which cluster is it in?
        int active_cluster = cluster_assignment_index[ii];

        // update count of members in that cluster
        cluster_member_count[active_cluster]++;

        // sum point coordinates for finding centroid
        for (int jj = 0; jj < dim; jj++)
            new_cluster_centroid[active_cluster*dim + jj] += X[ii*dim + jj];
    }

    // now divide each coordinate sum by number of members to find mean/centroid
    // for each cluster
    for (int ii = 0; ii < k; ii++)
    {
        if (cluster_member_count[ii] == 0)
        {
            printf("WARNING: Empty cluster %d! \n", ii);
            continue;   // skip the division to avoid dividing by zero
        }

        // for each dimension
        for (int jj = 0; jj < dim; jj++)
            new_cluster_centroid[ii*dim + jj] /= cluster_member_count[ii];
    }
}

void get_cluster_member_count(int n, int k, int *cluster_assignment_index, int *cluster_member_count)
{
    // initialize cluster member counts
    for (int ii = 0; ii < k; ii++)
        cluster_member_count[ii] = 0;

    // count members of each cluster
    for (int ii = 0; ii < n; ii++)
        cluster_member_count[cluster_assignment_index[ii]]++;
}

void update_delta_score_table(int dim, int n, int k, double *X, int *cluster_assignment_cur, double *cluster_centroid, int *cluster_member_count, double *point_move_score_table, int cc)
{
    // for every point (both in and not in the cluster)
    for (int ii = 0; ii < n; ii++)
    {
        double dist_sum = 0;
        for (int kk = 0; kk < dim; kk++)
        {
            double axis_dist = X[ii*dim + kk] - cluster_centroid[cc*dim + kk];
            dist_sum += sqr(axis_dist);
        }

        double mult = ((double)cluster_member_count[cc] / (cluster_member_count[cc] + ((cluster_assignment_cur[ii]==cc) ? -1 : +1)));

        // the score table is n x k, so the row stride is k
        point_move_score_table[ii*k + cc] = dist_sum * mult;
    }
}

void perform_move(int dim, int n, int k, double *X, int *cluster_assignment, double *cluster_centroid, int *cluster_member_count, int move_point, int move_target_cluster)
{
    int cluster_old = cluster_assignment[move_point];
    int cluster_new = move_target_cluster;

    // update cluster assignment array
    cluster_assignment[move_point] = cluster_new;

    // update cluster count array
    cluster_member_count[cluster_old]--;
    cluster_member_count[cluster_new]++;

    if (cluster_member_count[cluster_old] <= 1)
        printf("WARNING: Can't handle single-member clusters! \n");

    // update centroid array
    for (int ii = 0; ii < dim; ii++)
    {
        cluster_centroid[cluster_old*dim + ii] -= (X[move_point*dim + ii] - cluster_centroid[cluster_old*dim + ii]) / cluster_member_count[cluster_old];
        cluster_centroid[cluster_new*dim + ii] += (X[move_point*dim + ii] - cluster_centroid[cluster_new*dim + ii]) / cluster_member_count[cluster_new];
    }
}

void cluster_diag(int dim, int n, int k, double *X, int *cluster_assignment_index, double *cluster_centroid)
{
    int cluster_member_count[MAX_CLUSTERS];

    get_cluster_member_count(n, k, cluster_assignment_index, cluster_member_count);

    printf("  Final clusters \n");
    for (int ii = 0; ii < k; ii++)
        printf("    cluster %d:     members: %8d, centroid (%.1f %.1f) \n", ii, cluster_member_count[ii], cluster_centroid[ii*dim + 0], cluster_centroid[ii*dim + 1]);
}

void copy_assignment_array(int n, int *src, int *tgt)
{
    for (int ii = 0; ii < n; ii++)
        tgt[ii] = src[ii];
}

int assignment_change_count(int n, int a[], int b[])
{
    int change_count = 0;
    for (int ii = 0; ii < n; ii++)
        if (a[ii] != b[ii])
            change_count++;
    return change_count;
}

void kmeans(int    dim,                       // dimension of data
            double *X,                        // pointer to data
            int    n,                         // number of elements
            int    k,                         // number of clusters
            double *cluster_centroid,         // initial cluster centroids
            int    *cluster_assignment_final) // output
{
    double *dist                    = (double *)malloc(sizeof(double) * n * k);
    int    *cluster_assignment_cur  = (int *)malloc(sizeof(int) * n);
    int    *cluster_assignment_prev = (int *)malloc(sizeof(int) * n);
    double *point_move_score        = (double *)malloc(sizeof(double) * n * k);

    if (!dist || !cluster_assignment_cur || !cluster_assignment_prev || !point_move_score)
        fail("Error allocating dist arrays\n");

    // initial setup
    calc_all_distances(dim, n, k, X, cluster_centroid, dist);
    choose_all_clusters_from_distances(dim, n, k, dist, cluster_assignment_cur);
    copy_assignment_array(n, cluster_assignment_cur, cluster_assignment_prev);

    // BATCH UPDATE
    double prev_totD = BIG_double;
    int batch_iteration = 0;
    while (batch_iteration < MAX_ITERATIONS)
    {
        // printf("batch iteration %d \n", batch_iteration);
        // cluster_diag(dim, n, k, X, cluster_assignment_cur, cluster_centroid);

        // update cluster centroids (empty clusters are skipped there)
        calc_cluster_centroids(dim, n, k, X, cluster_assignment_cur, cluster_centroid);

        // see if we've failed to improve
        double totD = calc_total_distance(dim, n, k, X, cluster_centroid, cluster_assignment_cur);
        if (totD > prev_totD)
        {
            // failed to improve - current solution worse than previous:
            // restore old assignments
            copy_assignment_array(n, cluster_assignment_prev, cluster_assignment_cur);

            // recalc centroids
            calc_cluster_centroids(dim, n, k, X, cluster_assignment_cur, cluster_centroid);

            printf("  negative progress made on this step - iteration completed (%.2f) \n", totD - prev_totD);

            // done with this phase
            break;
        }

        // save previous step
        copy_assignment_array(n, cluster_assignment_cur, cluster_assignment_prev);

        // move all points to nearest cluster
        calc_all_distances(dim, n, k, X, cluster_centroid, dist);
        choose_all_clusters_from_distances(dim, n, k, dist, cluster_assignment_cur);

        int change_count = assignment_change_count(n, cluster_assignment_cur, cluster_assignment_prev);
        printf("%3d   %u   %9d  %16.2f %17.2f\n", batch_iteration, 1, change_count, totD, totD - prev_totD);
        fflush(stdout);

        // done with this phase if nothing has changed
        if (change_count == 0)
        {
            printf("  no change made on this step - iteration completed \n");
            break;
        }

        prev_totD = totD;
        batch_iteration++;
    }

    // write to output array
    copy_assignment_array(n, cluster_assignment_cur, cluster_assignment_final);

    free(dist);
    free(cluster_assignment_cur);
    free(cluster_assignment_prev);
    free(point_move_score);
}
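As a usage sketch (not part of the original listing), the following main() clusters a handful of two-dimensional toy points into k = 2 groups with the kmeans() function above; it assumes it is compiled together with the listing.

// Minimal driver: 6 points in 2-D, clustered into k = 2 groups.
int main(void)
{
    double X[] = { 1.0, 1.0,   1.5, 2.0,   9.0, 9.0,
                   8.5, 9.5,   1.2, 0.8,   9.2, 8.8 };
    int n = 6, dim = 2, k = 2;

    // initialize centroids with the first k samples (Step 1 above)
    double centroid[4] = { 1.0, 1.0,   1.5, 2.0 };
    int assignment[6];

    kmeans(dim, X, n, k, centroid, assignment);

    for (int i = 0; i < n; i++)
        printf("point %d -> cluster %d\n", i, assignment[i]);
    return 0;
}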

2017.11.17
