Title: On the k-Medoids Model for Semi-supervised Clustering
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11328 LNCS
Pages: 13–27
Conference: 6th International Conference on Variable Neighborhood Search, ICVNS 2018; Sithonia, Greece; 4–7 October 2018
Issue Date: 1 Jan 2019
Rank: M33
ISBN: 978-3-030-15842-2
ISSN: 0302-9743
DOI: 10.1007/978-3-030-15843-9_2
Abstract:
Clustering is an automated and powerful technique for data analysis. It aims to divide a given set of data points into clusters that are homogeneous and/or well separated. A major challenge in clustering is defining an appropriate clustering criterion that expresses a good separation of the data into homogeneous groups, so that the resulting solution is meaningful and useful to the user. To circumvent this issue, a domain expert can provide background information about the dataset, which a clustering algorithm can incorporate to improve the solution. Performing clustering under this assumption is known as semi-supervised clustering. This work explores semi-supervised clustering through the k-medoids model. Results obtained by a Variable Neighborhood Search (VNS) heuristic show that the k-medoids model achieves classification accuracy comparable to the traditional k-means approach. Furthermore, the model demonstrates high flexibility and performance by combining kernel projections with pairwise constraints.
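To make the idea concrete, the following is a minimal illustrative sketch of k-medoids clustering with pairwise (must-link/cannot-link) constraints handled as soft penalties. This is not the authors' implementation: the alternating assignment/update scheme, the penalty-based constraint handling, and all function and parameter names are assumptions made for illustration only.

```python
import numpy as np

def k_medoids(X, k, must_link=(), cannot_link=(), n_iter=20, seed=0):
    """Naive alternating k-medoids with penalized pairwise constraints.

    must_link / cannot_link are iterables of index pairs (a, b); violating
    a constraint adds a large penalty to the assignment cost (an assumed
    constraint-handling scheme, not necessarily the paper's).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    # Full pairwise Euclidean distance matrix.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    medoids = rng.choice(n, size=k, replace=False)
    penalty = D.max() * n  # cost of violating one constraint
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assignment step: pick the cluster minimizing distance plus
        # penalties for constraints violated against current labels.
        for i in range(n):
            costs = D[i, medoids].copy()
            for a, b in must_link:
                j = b if i == a else a if i == b else None
                if j is not None:
                    costs += penalty * (np.arange(k) != labels[j])
            for a, b in cannot_link:
                j = b if i == a else a if i == b else None
                if j is not None:
                    costs += penalty * (np.arange(k) == labels[j])
            labels[i] = costs.argmin()
        # Update step: each medoid becomes the cluster member with the
        # smallest total distance to the other members.
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                sub = D[np.ix_(members, members)]
                medoids[c] = members[sub.sum(axis=1).argmin()]
    return labels, medoids
```

Because the cluster representatives are actual data points, this model only needs a distance (or kernel-induced) matrix between points, which is what makes combining it with kernel projections straightforward.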
Keywords: k-medoids; Semi-supervised clustering; Variable Neighborhood Search
Publisher: Springer Link