Interpretability of Machine Learning Models Created by Clustering Algorithms

Interpretability is a key characteristic of a machine learning model if our goal is to persuade experts from the target domain, for whom we propose the model, to accept it and use it. The better we can explain the behaviour of our model, the greater the chance it will be accepted.
That is why it is important to strive not only for better model results but also for interpretability; one without the other is of limited value. A model with somewhat worse results may even stand a better chance of being accepted if we can explain it in a way that people find comprehensible.
We focus on clustering and on models created by clustering algorithms. Our goal is to use the feature importance of these models to improve their interpretability.
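As an illustration of this idea, the following is a minimal sketch, not the specific method proposed here: one common way to estimate feature importance for a clustering model is to fit a surrogate classifier that predicts the cluster labels and then inspect its importance scores. The dataset, the number of clusters, and the choice of a random forest surrogate below are assumptions made purely for the example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Example data; any numeric feature matrix works here.
data = load_iris()
X = StandardScaler().fit_transform(data.data)

# Cluster the data (3 clusters chosen arbitrarily for this illustration).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Surrogate model: predict cluster membership from the original features.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

# Features with high importance are the ones that drive the cluster structure,
# which gives a human-readable explanation of what the clustering "looks at".
for name, imp in zip(data.feature_names, surrogate.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Such importance scores can then be presented to domain experts alongside the clusters themselves, so that the grouping is explained in terms of the original features rather than as an opaque partition.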