Evaluating and comparing modelling languages is a prerequisite for progress in the field of conceptual modelling. However, little research has been dedicated to appropriate evaluation methods. It is common practice that mainly those who design a modelling language decide on the relevance of its particular features. In this paper, we argue that such an approach does not satisfy common academic standards. The quality of a modelling language depends on a variety of tasks and potential users, some of which lie beyond the scope of language designers. Therefore, the evaluation of modelling languages requires a cross-disciplinary approach. Furthermore, it has to be taken into account that the analysis of a language poses severe epistemological problems. Against this background, we introduce a meta-framework for the evaluation of modelling languages. It includes a conceptual model to guide and structure multi-perspective evaluations.