Higher education rankings are a specific but controversial instrument for quality assurance in higher education. With its ranking of higher education institutions by gender aspects, CEWS has since 2003 regularly provided a tool for comparing the position of higher education institutions with regard to gender relations at the national level. Other institutions have since developed rankings with gender equality indicators, and we present their objectives and approaches. Finally, this section gives a brief introduction to the general debate on rankings.
Current ranking: CEWS ranking of higher education institutions 2021 (in German)
Dr. Andrea Löther presented the CEWS ranking in an online lecture on March 10, 2022 (in German). The video of the lecture "Ranking of higher education institutions by gender aspects: Data basis, methodology, use and limitations" and the slides are available on the GESIS series pages "Meet the Experts".
The CEWS ranking of higher education institutions has been published every two years since 2003. Since its first publication, it has established itself as a component of quality assurance for gender equality at higher education institutions, complementing instruments like evaluations or benchmarking. The CEWS ranking aims to present quantitative gender (in)equalities at higher education institutions in a nationwide comparison. It relates to the institutions' gender equality mandate: universities are obliged to ensure the equal participation of men and women in studies, academic qualification and the personnel of the higher education institutions. For this reason, the ranking addresses decision-makers in higher education institutions, such as administrations and management, equal opportunities officers, as well as federal and state ministries, research organizations and politics.
Methodology And Indicators
To assess the performance of higher education institutions, including universities of applied sciences and colleges of art and music, the ranking's indicators cover students, academic qualifications, personnel, and changes over time. The indicators follow the logic of the cascade model (see the subpage Gender Monitoring in Practice). The reference value is the proportion of women students or the proportion of women doctorates. The ranking includes the following indicators:
- Proportion of women doctorates relative to women students
- Academic qualification after the doctorate: women's share of habilitations and junior professorships relative to women doctorates
- Academic staff below the lifetime professorship level: proportion of women among academic staff relative to women students
- Proportion of women professors relative to women doctorates
- Change in the proportion of women among academic staff below the lifetime professorship level: difference in the proportion of women between the reference year and five years earlier
- Change in the proportion of women in professorships: difference in the proportion of women between the reference year and five years earlier
- Proportion of women students in disciplines in which the nationwide proportion of women students is below 40 percent, relative to the nationwide proportion of women students
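Most of these indicators are cascade-model ratios: the share of women at one career stage is related to the share at the preceding reference stage. A minimal sketch in Python, with hypothetical figures (not actual CEWS data):

```python
def cascade_indicator(share_women_stage: float, share_women_reference: float) -> float:
    """Relate the proportion of women at a career stage (e.g. doctorates)
    to the proportion at the reference stage (e.g. students)."""
    return share_women_stage / share_women_reference

# Hypothetical example: 42 % women among doctorates, 51 % among students.
ratio = cascade_indicator(0.42, 0.51)
print(round(ratio, 2))  # a value below 1 indicates a drop along the cascade
```

A ratio near 1 means women's participation is carried over from one stage to the next; values well below 1 mark the stages at which women drop out of the cascade.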
The CEWS ranking does not show individual positions but calculates three ranking groups: top group, middle group and bottom group. The assignment to the ranking groups relies on quartiles for most indicators: the top group includes the best 25 percent, the bottom group the quarter of higher education institutions with the worst scores. Thresholds are set for the trend indicators.
The overall ranking results from summing the points for the individual indicators. The indicators are not weighted. The student indicator is not included in the overall ranking because not all higher education institutions offer the disciplines for which it is calculated.
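The grouping and aggregation logic described above can be sketched as follows (a simplified illustration; the point values and the quartile computation are assumptions, not the published CEWS scoring scheme):

```python
def quartile_group(value: float, all_values: list[float]) -> str:
    """Assign an institution's indicator value to a ranking group by
    quartiles (simplified quartile computation for illustration)."""
    ranked = sorted(all_values)
    n = len(ranked)
    q1, q3 = ranked[n // 4], ranked[(3 * n) // 4]
    if value >= q3:
        return "top"
    if value <= q1:
        return "bottom"
    return "middle"

# Overall ranking: unweighted sum of points over all indicators
# (hypothetical points per group: top = 2, middle = 1, bottom = 0).
points = {"top": 2, "middle": 1, "bottom": 0}
groups = ["top", "middle", "middle", "bottom"]  # one group per indicator
overall = sum(points[g] for g in groups)
print(overall)  # 4
```

Because the indicators enter the sum unweighted, no single indicator can dominate the overall result; the trade-off is that very different equality profiles can produce the same total.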
The CEWS ranking uses data from the Federal Statistical Office; no separate data collection takes place. The ranking includes all higher education institutions that are members of the German Rectors' Conference (German abbreviation: HRK) and have at least ten professorships. Higher education institutions that are not members of the HRK are included if they have at least 30 professorships. The ranking differentiates between the three types of higher education institutions (universities incl. universities of education and theological colleges; universities of applied sciences and administrative colleges; colleges of art and music).
The CEWS ranking has followed the methodology described above since 2015. After a discussion with invited experts, CEWS modified the original methods without, however, abandoning the basic logic. This basic logic consists of attention to the discipline profile of the higher education institutions, formation of ranking groups instead of exact positions, an overall indicator composed of a limited number of individual indicators, and the exclusive use of quantitative data from the Federal Statistical Office. The changes concerned:
- Application of the cascade model (referencing the proportion of women in doctoral degrees)
- Stronger differentiation between professorships and academic staff below the professorship level
- Inclusion of junior professorships in the "academic qualification after doctorate" indicator
- Limiting the student indicator to the underrepresentation of women students and exclusion from the calculation of the overall indicator
Higher education institutions can use the methodology of the CEWS ranking, adapted if necessary, for an internal comparison (departments, faculties or institutes).
CEWS uses the instrument of a ranking for a nationwide comparison of higher education institutions' achievements in gender equality. The tool creates attention for gender equality and generates pressure for gender equality policy measures at higher education institutions and in the federal states. At the same time, however, the CEWS ranking thereby follows the logic of rankings: it submits to the logic of competition and the associated governance of higher education. Because ranking groups are formed, some higher education institutions will always end up in the bottom group, even if they too have undertaken gender equality policy efforts. Furthermore, there is a danger of reducing gender equality to women's shares. The clarity of the ranking, namely the exclusive use of quantitative data from the Federal Statistical Office, is also its weakness: data on gender equality policies and measures cannot be integrated. Please note that a placement in the CEWS ranking cannot be read directly as the effect of an institution's gender equality policy initiatives. Contextual factors such as a federal state's policies, staff fluctuations due to various causes, or the competitive nature of the ranking also influence placement. When using the instrument of a ranking, you should therefore consider its limits and its interplay with other quality assurance instruments, such as evaluations and internal monitoring.
Rankings And Changes In Higher Education Policy
There are over 150 national and specialized higher education rankings and twenty global rankings (see Hazelkorn 2017: 6). In Germany, “Spiegel” published the first higher education ranking in 1989, followed by others in 1993 and 1999. Unlike “Spiegel”, the rankings of the Centre for Higher Education (CHE), published since 1998, initially in media partnership with the magazine Stern and later with the newspaper ZEIT, do not determine "the best university". The CHE rankings aim at "a comparative and evaluative description of various performance dimensions (...) and characteristics of the study program, the department, the higher education institution and the location" (Hornbostel 2001: 85). Of the global rankings, the Shanghai Academic Ranking of World Universities (ARWU, since 2003), the ranking of the British publication Times Higher Education (THE, since 2004) and the Quacquarelli Symonds World University Ranking (QS, originally produced together with THE) are the most relevant. In contrast to the methodology of these global rankings, a consortium of university and higher education research centers developed U-Multirank on behalf of the European Commission, a multidimensional international university ranking oriented toward its users, primarily students.
Rankings reflect developments in higher education policy. In the 1980s, the US rankings (the National Academy of Science's Research Achievement Rankings, since 1982, and the ranking of undergraduate programs, U.S. News and World Report College Rankings, since 1983) were a response to "growing massification, student mobility and the 'glorification of markets'" (Hazelkorn 2017: 7). In the market-oriented U.S. higher education system, rankings serve a high information need created by the many organizations delivering higher education and the differentiation of profiles and quality of their offerings (see CEWS 2003; Pechar 1997). The global rankings developed since 2003 should be seen in the context of intensified globalization and global competition. Rankings are associated with changes in higher education governance, especially increased accountability, competitive logic and a changed relationship between higher education and the state. They are an expression of (global) competition between higher education institutions and, at the same time, a medium of this competition (Federkeil 2013: 36). Or, as Ellen Hazelkorn puts it, “Rankings reflect and map this changing dynamic.” (Hazelkorn 2017: 21).
Methodology And Indicators
In addition to the role of rankings in global competition and changing higher education governance, academic studies are concerned with rankings' methodology. Most rankings follow a common basic methodological approach (see Federkeil 2013):
- Comparisons at the level of entire universities, without differentiation by individual disciplines, which provides only limited information for students, for example
- Calculation of overall values from weighted individual indicators, so that a single figure measures the complex system of a higher education institution; the weighting has no theoretical basis and is not robust
- Assignment of exact ranking positions, suggesting that any difference in rank is also a difference in performance
The Shanghai Ranking (ARWU) indicators relate only to research achievements (bibliometric data, Nobel Prizes, and Fields Medals). The THE and the QS ranking assess research, teaching, international orientation, and reputation. The indicators and databases lead to systematic biases (cf. Federkeil 2013): the indicators of the Shanghai Ranking favor science-oriented research universities. In the THE and QS rankings, the high weighting and the measurement of the indicator “reputation” are especially problematic (sample and method of survey, measurement of reputation rather than actual quality). Gero Federkeil states that the rankings' biggest higher education policy problem lies in the “focus on research excellence on an international scale” resulting from the methodology (Federkeil 2013: 44). The rankings have definitional power over “world class universities” and threaten the diversity of higher education institutions. Global rankings include 500-600 higher education institutions and thus only about 3.5 % of all higher education institutions.
Higher education institutions and, above all, higher education administrations use rankings as a source of information for strategic decisions or to strengthen their reputation. However, studies also show that higher education institutions use rankings only as one tool among others for strategic planning and, precisely because of the methodological weaknesses, deal with rankings in a thoroughly reflective manner (cf. Hazelkorn et al. 2014; Leiber 2017).
Federkeil, Gero (2013): Internationale Hochschulrankings – Eine kritische Bestandsaufnahme. In: Beiträge zur Hochschulforschung 35, pp. 34–48. (URL: https://www.bzh.bayern.de/fileadmin/news_import/2-2013-Federkeil.pdf).
Hazelkorn, Ellen (2017): Rankings and higher education. Reframing relationships within and between states. Published by Centre for Global Higher Education: London (Centre for Global Higher Education working paper series, no. 19). (URL: https://www.researchcghe.org/perch/resources/publications/wp19.pdf).
Hazelkorn, Ellen; Loukkola, Tia; Zhang, Thérèse (2014): Rankings in institutional strategies and processes: impact or illusion? (URL: https://eua.eu/component/attachments/attachments.html?id=415).
Hornbostel, Stefan (2001): Der Studienführer des CHE: ein multidimensionales Ranking. In: Engel, Uwe (Ed.): Hochschul-Ranking. Zur Qualitätsbewertung von Studium und Lehre. Frankfurt/Main: Campus-Verlag, pp. 83–120.
Kompetenzzentrum Frauen in Wissenschaft und Forschung (CEWS) (2003): Hochschulranking nach Gleichstellungsaspekten. Unter Mitarbeit von Andrea Löther. Bonn (cews.publik, 5). (URL: http://www.gesis.org/fileadmin/cews/www/download/cews-publik5.pdf).
Leiber, Theodor (2017): University governance and rankings. The ambivalent role of rankings for autonomy, accountability and competition. In: Beiträge zur Hochschulforschung 39 (3/4), pp. 30–51. (URL: https://www.bzh.bayern.de/fileadmin/news_import/3-4-2017-Leiber.pdf).
Pechar, Hans (1997): Leistungstransparenz oder Wünschelrute? Über das Ranking von Hochschulen in den USA und im deutschsprachigen Raum. In: Altrichter, Herbert; Schratz, Michael; Pechar, Hans (Ed.): Hochschulen auf dem Prüfstand. Was bringt Evaluation für die Entwicklung von Universitäten und Fachhochschulen? Innsbruck: Studienverlag, pp. 157–178.
In addition to the CEWS ranking of higher education institutions by gender aspects, several other higher education rankings include gender equality indicators or focus on gender equality.
Times Higher Education has been producing rankings on the UN’s Sustainable Development Goals (SDG) since 2019. The ranking for SDG 5 (Gender Equality) “measures universities’ research on the study of gender, their policies on gender equality and their commitment to recruiting and promoting women.”
The ranking calculates exact ranking positions and an overall indicator composed of six weighted individual indicators with a total of 18 sub-indicators:
- Research (27 %)
- Proportion of first-generation female students (15.4 %)
- Student access measures (15.4 %)
- Proportion of senior female academics (professorships, deanships, and senior university leaders) (15.4 %)
- Proportion of women receiving degrees (BA) (11.5 %)
- Women’s progress measures (15.3 %)
The indicators include bibliometric data (research indicator), quantitative data (students, degrees, staff), and qualitative data (existence of gender equality measures such as mentoring, anti-discrimination policies, childcare, and the like). Except for the bibliometric data, all data are based on self-reporting by the universities. For the gender equality measures, higher education institutions must provide evidence to support their claims. The exact calculation, especially the linking of quantitative and qualitative data, remains unclear.
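The published weights imply an overall score of the usual weighted-sum form. A hypothetical sketch, assuming each sub-score is normalized to a 0-100 scale (THE does not disclose the exact calculation):

```python
# Weights as published for the SDG 5 ranking (they sum to 100 %).
WEIGHTS = {
    "research": 0.27,
    "first_generation_female_students": 0.154,
    "student_access_measures": 0.154,
    "senior_female_academics": 0.154,
    "women_receiving_degrees": 0.115,
    "womens_progress_measures": 0.153,
}

def overall_score(sub_scores: dict[str, float]) -> float:
    """Weighted sum of sub-scores, each assumed to lie on a 0-100 scale."""
    return sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)

# Hypothetical institution scoring 70 on every sub-indicator:
scores = {k: 70.0 for k in WEIGHTS}
print(round(overall_score(scores), 1))  # 70.0, since the weights sum to 1
```

The sketch makes the methodological critique concrete: the choice of weights directly determines the overall score, yet the weighting itself is not justified theoretically or empirically.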
A higher education institution is included in the ranking only if it provides its data to THE. Requirements for inclusion are BA (undergraduate) degree programs and official accreditation. The 2020 ranking covers fewer than 500 higher education institutions worldwide. The University of Hamburg (rank 31) and the University of Passau (rank 61) are the only German universities; three Swiss universities (the Universities of Geneva, Lausanne and Lucerne) and one private Austrian university are also included. The top ten includes four European universities (Trinity College Dublin, University of Bologna, University of Worcester, University of East London).
THE presents an ambitious and differentiated ranking on gender equality that integrates intersectional perspectives through “first-generation students”. The linking of quantitative output indicators and qualitative indicators on gender equality measures is also an interesting approach. However, the ranking shares the general weaknesses of most global rankings: the calculation of exact positions suggests exact differences in performance; the large number of indicators makes it difficult to say which issue affects a university's position; and the indicators' weighting is supported neither theoretically nor empirically. In addition, the ranking does not disclose the information on the individual indicators for the evaluated higher education institutions.
The ranking is updated regularly. Currently, data for 2019 and 2020 are available.
Ranking of WBS Group (link in German)
In 2019, WBS Group, a private education provider, published three rankings on gender equality at higher education institutions (women professors, women deans, women rectors).
Methodology And Indicators
Each ranking is based on a single indicator: the percentage of women professors, women deans, or women rectors, respectively. WBS ranked the institutions by the level of the respective share of women. The data are self-reported by the higher education institutions in response to a survey by WBS Group.
The ranking contains data on the 37 largest higher education institutions in Germany (mainly universities). The universities in Munich, TU Berlin, and the universities of Heidelberg and Bielefeld, for example, are missing.
The ranking attracted considerable publicity through a publication in Spiegel (link in German) and a striking public relations campaign (“Gender debate in higher education: Most female professors work at these universities”). Its weaknesses include the reduction of gender equality to a single indicator, the formation of rankings without considering an institution’s discipline profile, and the incomplete coverage of universities.
The ranking is not updated.
U-Multirank is a multidimensional, user-driven approach to the international ranking of higher education institutions; it addresses predominantly students. A consortium led by the Centre for Higher Education (CHE), the Center for Higher Education Policy Studies (CHEPS, University of Twente), the Centre for Science and Technology Studies (CWTS, Leiden University) and Fundación CYD (Spain) developed this ranking.
Methodology And Indicators
U-Multirank compares the performances of higher education institutions in five dimensions (teaching and learning, research, knowledge transfer, international orientation and regional engagement), each for individual disciplines. Users can select particular areas and indicators according to their needs. The ranking does not show exact ranking positions but rather ranking groups.
The ranking integrates various indicators that reflect a higher education institution’s gender relations:
- Proportion of female students
- Proportion of women among academic personnel
- Probability of male and female students completing a doctorate (“gender balance”)
The data source is a survey of departments and higher education institutions.
Users can integrate data on “Gender Balance” into the ranking of individual disciplines by changing the indicators via “Change measures” after creating a ranking. The individual values become visible on mouseover.
With the 2021 publication, the “Gender Balance” indicator is also made available for rankings of entire higher education institutions. The consortium is considering adding indicators on social inclusion (student groups that are typically underrepresented in the university context).
U-Multirank’s methodology (evaluation by discipline, ranking groups, individual selection and adjustment) clearly distinguishes it from many other rankings. Because they refer to individual disciplines, the gender balance indicators are more meaningful and avoid distortions caused by universities’ discipline profiles. The “gender balance” indicator enables a more differentiated assessment than the mere share of women in doctorates. Users can link it to other indicators without a combined value being calculated. A critical note is that users can integrate the “gender balance” indicator only at a late stage of creating the ranking, so the indicator is not very visible.
The ranking is updated regularly.
GEW Code Check (link in German)
In 2017, Germany’s Education and Science Workers’ Union (German abbreviation: GEW) published the “Code Check”, which ranks state universities in Germany on working and employment conditions. The ranking is based on the ten criteria of the Herrschingen Code “Good Work in Higher Education” (link in German), which include “Family-friendly design of career paths” and “Equal opportunities for women and men”.
Methodology And Indicators (link in German)
Data from the study “Employment conditions and academic staff policies” conducted by the Humboldt University of Berlin is the basis of the Code Check. The data on family-friendliness and equal opportunities come from the universities’ websites (as of February 2015 and March 2018) and the Federal Statistical Office (2016). The Code Check provides information on 88 universities (including universities of education).
The Code Check collects the following information for the criterion of family-friendliness:
- Concept for family-friendliness
- Audit family-friendly university
- Member of the Best Practice Club
Users can create a ranking (instrument present or not) for each of these instruments. The Code Check does not calculate an overall ranking. Via “Detailed values”, users can access the design of the concept for family-friendliness.
The criterion equal opportunities comprises the following information:
- Proportion of women among full-time academic staff with differentiation of temporary and part-time positions
- Existence of an equal opportunities concept
Users can create a ranking for the individual data; for quantitative data, the rank results from the level of the proportion of women (the higher the proportion, the higher the rank). Via “Detailed values”, further information is accessible for the individual universities, including the design of the gender equality concept, the position in the CEWS ranking 2017, and the proportions of women and men by career stage.
The Code Check places equal opportunities and family-friendliness in the larger context of working and employment conditions. The integration of conceptual measures and of data on part-time and fixed-term employment is also positive. No overall value is calculated from the quantitative and qualitative data; instead, the data serve primarily as information for users. One weakness is that the ranking of the quantitative data is calculated from the level of the proportion of women, without considering horizontal segregation, and thus follows the logic of “the higher, the better”. The highest-ranked universities, with shares of women above 60 %, are colleges of education and veterinary schools.
It remains unclear whether the organization plans a future update of the ranking.