Advanced clustering methods (CURE, Chameleon, ROCK, Jarvis-Patrick)
Reprinted from: http://blog.csdn.net/lycorislqy/article/details/23595723
First, some background on Hierarchical Clustering:
A hierarchical method creates a hierarchical decomposition of the given set of data objects. Hierarchical methods suffer from the fact that once a step (merge or split) is done, it can never be undone.
CURE
Material on CURE is easy to find; the data mining textbook 《数据挖掘导论》 (Introduction to Data Mining) also covers it in detail. See http://wiki.madio.net/index.php?doc-view-996 and this paper: Sudipto Guha, Rajeev Rastogi, Kyuseok Shim, "CURE: An Efficient Clustering Algorithm for Large Databases".
CURE is a clustering algorithm that combines a variety of techniques into one method able to handle large datasets, outliers, and clusters with non-spherical shapes and non-uniform sizes.
Shrinking the representative points toward the center helps avoid problems with noise and outliers.
The CURE algorithm proceeds as follows (output: the set of clusters):
1. Draw a random sample from the data set.
The CURE paper is notable for explicitly deriving a formula for how large this sample must be in order to guarantee, with high probability, that all clusters are represented by a minimum number of points.
2. Partition the sample into p equal-sized partitions.
3. Cluster the points in each partition into m/(pq) clusters using CURE's hierarchical clustering algorithm, to obtain a total of m/q clusters (its core representative-point step is sketched after this list).
4. Use CURE's hierarchical clustering algorithm to cluster the m/q clusters found in the previous step until only k clusters remain.
5. Eliminate outliers. This is the second phase of outlier elimination (the first occurs during clustering, when very slowly growing clusters are discarded as outliers).
6. Assign all remaining data points to the nearest cluster to obtain a complete clustering.
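As referenced in step 3, here is a minimal Python sketch (not the paper's code; the function name and the defaults n_rep=5 and alpha=0.2 are illustrative assumptions) of CURE's core operation: choosing well-scattered representative points for a cluster and shrinking them toward the centroid.

import numpy as np

def shrink_representatives(points, n_rep=5, alpha=0.2):
    """Pick up to n_rep well-scattered points from one cluster (an
    (n, d) array) and shrink them toward the cluster centroid by a
    factor alpha -- a sketch of CURE's representative-point idea."""
    centroid = points.mean(axis=0)
    # Start with the point farthest from the centroid, then greedily
    # add the point farthest from the representatives chosen so far.
    reps = [points[np.argmax(np.linalg.norm(points - centroid, axis=1))]]
    while len(reps) < min(n_rep, len(points)):
        dist_to_reps = np.min(
            [np.linalg.norm(points - r, axis=1) for r in reps], axis=0)
        reps.append(points[np.argmax(dist_to_reps)])
    reps = np.array(reps)
    # Moving the representatives toward the center dampens the effect
    # of noise and outliers on the cluster boundary.
    return reps + alpha * (centroid - reps)

Clusters are then merged based on the minimum distance between their shrunken representative points.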
From the experiments, one can see that CURE is better able to handle clusters of arbitrary shapes and sizes.
But CURE cannot handle clusters of differing densities.
ROCK
ROCK (RObust Clustering using linKs) measures similarity by links. In other words, inter-cluster similarity is based on the number of points from different clusters that share common neighbors. A pair of points is defined to be neighbors if their similarity is greater than some threshold. ROCK uses a hierarchical clustering scheme to cluster the data:
1. Obtain a sample of points from the data set.
2. Compute the link value for each pair of points, i.e., transform the original similarities (computed by the Jaccard coefficient) into similarities that reflect the number of shared neighbors between points (see the sketch after this list).
3. Perform an agglomerative hierarchical clustering on the data, using the "number of shared neighbors" as the similarity measure and maximizing the shared-neighbors objective function.
4. Assign the remaining points to the clusters that have been found.
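As referenced in step 2, here is a minimal sketch of the link computation, assuming 0/1 attribute data and treating two points as neighbors when their Jaccard similarity exceeds a threshold theta (the function name and the theta default are illustrative, not from the paper):

import numpy as np

def rock_links(binary_data, theta=0.5):
    """Link matrix for ROCK-style clustering: link(x, y) = number of
    common neighbors, where points are neighbors if their Jaccard
    similarity exceeds theta.  binary_data: (m, d) array of 0/1 rows.
    Brute force, so only suitable for small m."""
    X = binary_data.astype(bool)
    inter = (X[:, None, :] & X[None, :, :]).sum(axis=-1)  # |x AND y|
    union = (X[:, None, :] | X[None, :, :]).sum(axis=-1)  # |x OR y|
    jaccard = inter / np.maximum(union, 1)
    neighbors = (jaccard > theta).astype(int)
    np.fill_diagonal(neighbors, 0)   # do not count a point as its own neighbor
    return neighbors @ neighbors     # entry (x, y) = number of shared neighbors

The agglomerative step then repeatedly merges the pair of clusters that maximizes a goodness measure based on these link counts.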
Graph-Based Clustering
- Start with the proximity matrix
- Consider each point as a node in a graph
- Each edge between two nodes has a weight which is the proximity between the two points
- Initially the proximity graph is fully connected
- MIN (single-link) and MAX (complete-link) can be viewed as starting with this graph
- In the simplest case, clusters are connected components in the graph
Chameleon
- Main property is the relative closeness and relative interconnectivity of the clusters
- Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters
- The merging scheme preserves self-similarity
- Build a k-nearest neighbor graph
- Partition the graph using a multilevel graph partitioning algorithm
- Repeat
- Merge the clusters that best preserve the cluster self-similarity with respect to relative interconnectivity and relative closeness
- Until no more clusters can be merged
Chameleon was motivated by the observed weaknesses of the two hierarchical clustering algorithms CURE and ROCK: CURE and related schemes ignore information about the interconnectivity of objects in two different clusters, while ROCK and related schemes emphasize interconnectivity between objects but ignore information about the closeness of objects.
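Chameleon therefore scores a candidate merge using both quantities. Here is a minimal sketch following the relative interconnectivity (RI) and relative closeness (RC) definitions from the Chameleon paper, where ec_ij is the total weight of the edges connecting clusters i and j, ec_i is the weight of the edges cut when bisecting cluster i, the sbar_* values are the corresponding average edge weights, and n_i, n_j are the cluster sizes (all variable names are mine):

def chameleon_merge_score(ec_ij, ec_i, ec_j,
                          sbar_ij, sbar_i, sbar_j,
                          n_i, n_j, alpha=1.0):
    """Score a candidate merge of clusters i and j; Chameleon merges
    the pair that maximizes RI * RC**alpha."""
    # Relative interconnectivity: cross-edge weight normalized by the
    # internal interconnectivity of the two clusters.
    ri = ec_ij / ((ec_i + ec_j) / 2.0)
    # Relative closeness: average cross-edge weight normalized by a
    # size-weighted average of the clusters' internal closeness.
    rc = sbar_ij / ((n_i / (n_i + n_j)) * sbar_i +
                    (n_j / (n_i + n_j)) * sbar_j)
    return ri * rc ** alpha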
Shared Nearest Neighbor (SNN) Similarity
If two points are similar to many of the same points, then they are similar to one another, even if a direct measurement of similarity does not indicate this.
Find the k-nearest neighbors of all points.
if two points, x and y, are not among the k-nearest neighbors of each other then
    similarity(x, y) <- 0
else
    similarity(x, y) <- number of shared neighbors
end if
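A runnable rendering of this pseudocode in Python (a brute-force O(m²) sketch, assuming Euclidean distance; the function name is mine):

import numpy as np

def snn_similarity(X, k=5):
    """Shared-nearest-neighbor similarity for an (m, d) array X:
    similarity(x, y) is the number of shared k-nearest neighbors if
    x and y appear in each other's k-NN lists, and 0 otherwise."""
    m = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)          # a point is not its own neighbor
    knn = np.argsort(dist, axis=1)[:, :k]   # each point's k nearest neighbors
    is_nn = np.zeros((m, m), dtype=bool)
    np.put_along_axis(is_nn, knn, True, axis=1)
    shared = is_nn.astype(int) @ is_nn.astype(int).T  # shared-neighbor counts
    mutual = is_nn & is_nn.T                # in each other's k-NN lists
    return np.where(mutual, shared, 0)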
Jarvis-Patrick clustering
First, the k-nearest neighbors of all points are found.
In graph terms this can be regarded as breaking all but the k strongest links from a point to other points in the proximity graph.
A pair of points is put in the same cluster if:
- the two points share more than a threshold number of neighbors, and
- the two points are in each other's k-nearest-neighbor lists.
For instance, we might choose a nearest-neighbor list of size 20 and put points in the same cluster if they share more than 10 near neighbors (see the sketch below).
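Building on the snn_similarity sketch above, Jarvis-Patrick clustering then reduces to thresholding the SNN similarity and taking connected components of the resulting graph (k and T mirror the parameters in the text):

import numpy as np

def jarvis_patrick(X, k=20, T=10):
    """Link two points if they are in each other's k-NN lists and share
    more than T neighbors; return connected-component labels."""
    sim = snn_similarity(X, k)   # from the SNN sketch above
    adj = sim > T                # the "same cluster" condition
    labels = -np.ones(len(X), dtype=int)
    cid = 0
    for i in range(len(X)):
        if labels[i] >= 0:
            continue
        stack = [i]              # depth-first search over the graph
        while stack:
            p = stack.pop()
            if labels[p] >= 0:
                continue
            labels[p] = cid
            stack.extend(np.flatnonzero(adj[p] & (labels < 0)))
        cid += 1
    return labels

Points that end up in singleton components are effectively unclustered, which is the brittleness noted below.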
Strengths and limitations of JP clustering:
- It works well for high-dimensional data and is particularly good at finding tight clusters of strongly related objects.
- Not all objects are clustered.
- It is hard to choose the best values for its parameters.
Storage requirements of the JP clustering algorithm are only O(km), where m is the number of points.
The basic time complexity of JP clustering is O(m²), since constructing the k-nearest-neighbor lists can require computing all pairwise proximities.
For low-dimensional Euclidean data, more efficient techniques can be used to find the k-nearest neighbors, reducing the time complexity from O(m²) to O(m log m).
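For instance, SciPy's k-d tree finds the k-nearest neighbors without computing all pairwise proximities (a sketch; effective only in low dimensions):

from scipy.spatial import cKDTree

def knn_indices(X, k=20):
    """k-nearest-neighbor indices via a k-d tree: roughly O(m log m)
    for low-dimensional Euclidean data instead of O(m^2) brute force."""
    tree = cKDTree(X)
    _, idx = tree.query(X, k=k + 1)   # k+1 because each point is its own nearest neighbor
    return idx[:, 1:]                 # drop the self-match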