Yes
Give the clusters to a domain expert, and ask them to analyze whether the structure the algorithm found is reasonable. Not so much whether it is new, but whether it is reasonable.
... and no:
There is no automatic evaluation that is fair, in the sense that it takes into account the goal of unsupervised clustering: knowledge discovery, i.e. learning something new about your data.
There are two common ways to evaluate clusterings:
internal cohesion. That is, you optimize some property, such as within-cluster variance compared to between-cluster variance. The problem is that it is usually fairly easy to cheat, i.e. to construct a trivial solution that scores very well. So this method must not be used to compare methods based on different assumptions; you cannot even compare different linkage types for hierarchical clustering this way.
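To see how easy the cheating is, here is a minimal sketch (pure Python, with made-up toy data) of one such internal measure, within-cluster sum of squares, and a trivial solution that games it: putting every point in its own cluster yields a "perfect" score of zero.

```python
# Sketch of an internal cohesion score: within-cluster sum of squared
# distances to each cluster's centroid (lower = "better").
# The data and clusterings below are made up for illustration.

def wcss(points, labels):
    """Within-cluster sum of squares for 2-d points."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    total = 0.0
    for members in clusters.values():
        cx = sum(x for x, _ in members) / len(members)
        cy = sum(y for _, y in members) / len(members)
        total += sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in members)
    return total

points = [(0, 0), (0, 1), (5, 5), (5, 6)]
sensible = [0, 0, 1, 1]   # the two natural groups
cheat    = [0, 1, 2, 3]   # one singleton cluster per point

print(wcss(points, sensible))  # 1.0
print(wcss(points, cheat))     # 0.0 -- trivially optimal, but tells you nothing
```

The cheat is not a bug in this particular score: most internal criteria have some degenerate partition (all singletons, or one big cluster) that optimizes them, which is why they only make sense for comparing solutions under the same assumptions.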
external evaluation. You use a labeled dataset and score the algorithms by how well they rediscover the existing knowledge. Sometimes this works quite well, so it is an accepted state of the art for evaluation. However, any supervised or semi-supervised method will of course score much better at this. As such, it is A) biased towards supervised methods, and B) actually completely opposed to the idea of knowledge discovery: finding what you did not yet know.
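As a sketch of what external evaluation looks like, here is a pure-Python Rand index: the fraction of point pairs on which the predicted clustering and the ground-truth labels agree (same cluster vs. different cluster). The labelings are invented toy data; real evaluations usually use the adjusted (chance-corrected) variant.

```python
# Sketch of an external evaluation measure: the (unadjusted) Rand index.
# It compares a predicted clustering against known ground-truth labels.
from itertools import combinations

def rand_index(truth, pred):
    """Fraction of point pairs on which the two labelings agree
    about 'same cluster' vs. 'different cluster'."""
    pairs = list(combinations(range(len(truth)), 2))
    agree = sum(
        (truth[i] == truth[j]) == (pred[i] == pred[j])
        for i, j in pairs
    )
    return agree / len(pairs)

truth = [0, 0, 0, 1, 1, 1]   # ground-truth classes (toy data)
good  = [1, 1, 1, 0, 0, 0]   # same partition, labels permuted
bad   = [0, 1, 0, 1, 0, 1]   # alternating assignment

print(rand_index(truth, good))  # 1.0 -- cluster IDs don't matter, only the partition
print(rand_index(truth, bad))   # well below 1.0
```

Note the built-in bias the answer describes: this score can never exceed what a supervised classifier trained on those same labels would achieve, and a clustering that finds a genuinely new, valid structure orthogonal to the labels is punished for it.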
If you really want to use clustering - i.e. actually learn something about your data - you will at some point have to inspect the clusters, preferably with a completely independent method such as a domain expert. If the expert can tell you that, say, the user group identified by the clustering is a non-trivial group that has not been investigated closely before, then you are a winner.
However, most people want a one-click (and one-score) evaluation, unfortunately.
Oh, and “clustering” is not really a machine learning task. There actually is no learning going on. To the machine learning community, clustering is the ugly duckling that nobody cares about.