Center of Excellence Women and Science

Data And Information On Gender Monitoring

Higher Education Rankings And Gender Equality

Higher education rankings are a specific but controversial instrument for quality assurance in higher education. With its ranking of higher education institutions by gender aspects, CEWS has since 2003 regularly provided a tool for comparing the position of higher education institutions with regard to gender relations at the national level. In the meantime, other institutions have developed rankings with gender equality indicators, and we present their objectives and approaches. Finally, this section briefly introduces the general discussion on rankings.

Current ranking: CEWS ranking of higher education institutions 2023 (in German) | Further issues under CEWSpublik

Andrea Löther offers a comprehensive presentation and contextualisation of the CEWS university ranking in her German-language talk in the GESIS series “Meet the Experts” in March 2022 (presentation and slides).

Objective

The CEWS ranking of higher education institutions has been published every two years since 2003. Since its first publication, it has established itself as a quality assurance component for gender equality at higher education institutions, complementing instruments such as evaluations or internal monitoring. The CEWS ranking aims to present quantitative gender (in)equalities at higher education institutions in a nationwide comparison. It relates to the institutions' gender equality mandate: universities are obliged to ensure the equal participation of men and women in studies, academic qualification and personnel. For this reason, the ranking addresses decision-makers in higher education institutions, such as administrations and management, equal opportunities officers, federal and state ministries, research organisations and policymakers.

Methodology And Indicators

To assess the performance of higher education institutions, including universities of applied sciences and colleges of art and music, the ranking uses indicators on students, academic qualifications, personnel and changes over time. The indicators follow the logic of the cascade model (see the subpage Gender Monitoring in Practice): the reference value is the proportion of women among students or among doctorates. The ranking includes the following indicators (a minimal code sketch of the cascade calculation follows the list):

  • Doctorates
    Women's share of doctorates relative to the women's share of students
  • Academic qualification after the doctorate
    Women's share of habilitations and junior professorships relative to the women's share of doctorates
  • Academic staff below the lifetime professorship level
    Women's share of academic staff relative to the women's share of students
  • Professorships
    Women's share of professorships relative to the women's share of doctorates
  • Change in the proportion of women among academic staff below the lifetime professorship level
    Difference in the proportion of women between the reference year and five years earlier
  • Change in the proportion of women in professorships
    Difference in the proportion of women between the reference year and five years earlier
  • Students
    Proportion of women students in disciplines in which the nationwide proportion of women students is below 40 per cent, relative to the nationwide proportion
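
To illustrate the cascade logic, here is a minimal sketch in Python (the figures and function names are hypothetical, not part of the CEWS publication): a cascade indicator relates the women's share at one qualification stage to the women's share at the preceding reference stage.

```python
def womens_share(women: int, total: int) -> float:
    """Women's share of a group, e.g. students or doctorates."""
    return women / total

def cascade_indicator(share_stage: float, share_reference: float) -> float:
    """Cascade indicator: women's share at a qualification stage relative
    to the women's share at the preceding reference stage. A value of 1.0
    means proportional advancement; values below 1.0 indicate a 'leak'."""
    return share_stage / share_reference

# Hypothetical example: 45 % women among students, 38 % among doctorates
indicator = cascade_indicator(
    share_stage=womens_share(380, 1000),        # doctorates
    share_reference=womens_share(4500, 10000),  # students
)
print(f"Doctorate indicator: {indicator:.2f}")  # 0.84
```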

The CEWS ranking does not show individual positions but calculates three ranking groups: top group, middle group and bottom group. The assignment to the ranking groups relies on quartiles for most indicators: the top group comprises the best 25 per cent of higher education institutions, the bottom group the quarter with the worst scores. For the trend indicators, fixed thresholds are set instead.

The overall ranking results from summing the points for the individual indicators; the indicators are not weighted. The student indicator is not included in the overall ranking because not all higher education institutions have any of the disciplines for which it is calculated.
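
As a rough illustration of this grouping and scoring logic, the following sketch assigns institutions to ranking groups by quartiles of an indicator and sums unweighted points across indicators. The point values and function names are our own assumptions; the published CEWS methodology defines the exact scoring.

```python
import statistics

def quartile_groups(values: dict[str, float]) -> dict[str, str]:
    """Assign each institution to a ranking group by quartiles:
    the best 25 per cent form the top group, the worst 25 per cent
    the bottom group, the rest the middle group."""
    q1, _, q3 = statistics.quantiles(values.values(), n=4)
    return {name: "top" if v >= q3 else "bottom" if v <= q1 else "middle"
            for name, v in values.items()}

# Hypothetical point scheme: top = 2 points, middle = 1, bottom = 0
POINTS = {"top": 2, "middle": 1, "bottom": 0}

def overall_scores(groups_per_indicator: list[dict[str, str]]) -> dict[str, int]:
    """Overall ranking score: sum of unweighted points over all indicators."""
    scores: dict[str, int] = {}
    for groups in groups_per_indicator:
        for name, group in groups.items():
            scores[name] = scores.get(name, 0) + POINTS[group]
    return scores
```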

The CEWS ranking uses data from the Federal Statistical Office; no separate data collection takes place. The ranking includes all higher education institutions that are members of the German Rectors' Conference (German abbreviation: HRK) and have at least ten professorships. Higher education institutions that are not members of the HRK are included if they have at least 30 professorships. The ranking differentiates between three types of higher education institutions (universities, including universities of education and theological colleges; universities of applied sciences and administrative colleges; colleges of art and music).
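
Expressed as a simple predicate (a sketch; the function and parameter names are ours, the thresholds are those stated above):

```python
def included_in_cews_ranking(is_hrk_member: bool, professorships: int) -> bool:
    """Inclusion rule: HRK members need at least 10 professorships,
    non-members at least 30."""
    return professorships >= (10 if is_hrk_member else 30)
```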

The CEWS ranking has followed the methodology described above since 2015. After a discussion with invited experts, CEWS changed the original methods without abandoning their basic logic: attention to the discipline profiles of the higher education institutions, formation of ranking groups instead of exact positions, an overall indicator built from a limited number of individual indicators, and the exclusive use of quantitative data from the Federal Statistical Office. The changes concerned:

  • Application of the cascade model (referencing the proportion of women in doctoral degrees)
  • Stronger differentiation between professorships and academic staff below the professorship level
  • Inclusion of junior professorships in the "academic qualification after doctorate" indicator
  • Limiting the student indicator to the underrepresentation of women students and exclusion from the calculation of the overall indicator

Higher education institutions can use the methodology of the CEWS ranking, adapted if necessary, for an internal comparison (departments, faculties or institutes).

Assessment

CEWS uses the instrument "ranking" for a nationwide comparison of higher education institutions' achievements in gender equality. The tool creates attention for gender equality and generates pressure for gender equality policy measures at higher education institutions and in the federal states. At the same time, the CEWS ranking thereby follows the logic of rankings: it submits to the logic of competition and the associated governance of higher education. Because ranking groups are formed, some higher education institutions will always end up in the bottom group, even if they too have undertaken gender equality policy efforts. Furthermore, there is a danger of reducing gender equality to women's shares. The clarity of the ranking – the exclusive use of quantitative data from the Federal Statistical Office – is also its weakness: data on gender equality policies cannot be integrated. Please note that a placement in the CEWS ranking cannot be read directly as the effect of an institution's gender equality policy initiatives; contextual factors such as a federal state's policies, staff fluctuations, or the competitive nature of the ranking also influence the placement. Therefore, anyone using the instrument "ranking" should consider its limits and its interplay with other quality assurance instruments, such as evaluations and internal monitoring. Finally, because the ranking relies on data from the Federal Statistical Office, only binary-coded data on gender are currently available.

Rankings And Changes In Higher Education Policy

There are over 150 national and specialised higher education rankings and twenty global rankings (see Hazelkorn 2017: 6). In Germany, “Spiegel” published the first higher education ranking in 1989, followed by others in 1993 and 1999. Unlike “Spiegel”, the rankings of the Centre for Higher Education (CHE), published since 1998, initially in media partnership with the magazine Stern and later with the newspaper ZEIT, do not determine "the best university". Instead, the CHE rankings aim at "a comparative and evaluative description of various performance dimensions (...) and characteristics of the study program, the department, the higher education institution and the location" (Hornbostel 2001: 85). Of the global rankings, the Shanghai Academic Ranking of World Universities (ARWU, since 2003), the ranking of the British magazine Times Higher Education (THE, since 2004) and the Quacquarelli Symonds World University Rankings (QS, originally produced together with THE) are the most relevant. In contrast to the methodology of these global rankings, a consortium of university and higher education research centres developed U-Multirank on behalf of the European Commission: a multidimensional international university ranking oriented toward its users – primarily students.

Rankings reflect developments in higher education policy. In the 1980s, the US rankings (the National Academy of Science's Research Achievement Rankings, since 1982, and the ranking of undergraduate programs, the U.S. News and World Report College Rankings, since 1983) were a response to "growing massification, student mobility and the 'glorification of markets'” (Hazelkorn 2017: 7). In the market-oriented US higher education system, rankings meet a high demand for information, created by the large number of organisations delivering higher education and by the differentiation of profiles and quality (see CEWS 2003; Pechar 1997). The global rankings developed since 2003 should be seen in the context of intensified globalisation and global competition. Rankings are associated with changes in higher education governance, especially increased accountability, a competitive logic and a changed relationship between higher education and the state. They are an expression of (global) competition between higher education institutions and, at the same time, a medium of this competition (Federkeil 2013: 36). Or, as Ellen Hazelkorn puts it: “Rankings reflect and map this changing dynamic.” (Hazelkorn 2017: 21).

Methodology And Indicators

In addition to the role of rankings in global competition and changing higher education governance, academic studies are concerned with rankings' methodology. Most rankings follow a common basic methodological approach (see Federkeil 2013):

  • Comparison at the level of entire universities, without differentiation by individual disciplines, and thus with limited information value, for example for students
  • Calculation of overall values from weighted individual indicators, so that a single figure measures a complex higher education system; the weighting has no theoretical basis and is not robust
  • Assignment of exact ranking positions, suggesting that any difference in rank is also a difference in performance

The indicators of the Shanghai Ranking (ARWU) relate only to research achievements (bibliometric data, Nobel Prizes, and Fields Medals). The THE and QS rankings assess research, teaching, international orientation, and reputation. The indicators and databases lead to systematic biases (cf. Federkeil 2013): the indicators of the Shanghai Ranking favour science-oriented research universities, while in the THE and QS rankings the high weighting and the measurement of the indicator “reputation” are particularly problematic (sample and survey method; measurement of reputation rather than actual quality). Gero Federkeil states that the rankings' biggest higher education policy problem lies in the “focus on research excellence on an international scale” resulting from the methodology (Federkeil 2013: 44). The rankings have definitional power over “world-class universities” and threaten the diversity of higher education institutions. Global rankings include 500-600 higher education institutions, and thus only about 3.5 % of all higher education institutions.

Higher education institutions, and especially higher education administrations, use rankings as a source of information for strategic decisions or to strengthen their reputation. However, studies also show that higher education institutions use rankings only as one tool among others for strategic planning and, precisely because of the methodological weaknesses, deal with rankings in a thoroughly reflective manner (cf. Hazelkorn et al. 2014; Leiber 2017).

Sources:

Federkeil, Gero (2013): Internationale Hochschulrankings – Eine kritische Bestandsaufnahme. In: Beiträge zur Hochschulforschung 35, pp. 34–48. (URL: https://www.bzh.bayern.de/fileadmin/news_import/2-2013-Federkeil.pdf).

Hazelkorn, Ellen (2017): Rankings and higher education. Reframing relationships within and between states. Published by Centre for Global Higher Education: London (Centre for Global Higher Education working paper series, no. 19). (URL: https://www.researchcghe.org/perch/resources/publications/wp19.pdf).

Hazelkorn, Ellen; Loukkola, Tia; Zhang, Thérèse (2014): Rankings in institutional strategies and processes: impact or illusion? (URL: https://eua.eu/component/attachments/attachments.html?id=415).

Hornbostel, Stefan (2001): Der Studienführer des CHE: ein multidimensionales Ranking. In: Engel, Uwe (Ed.): Hochschul-Ranking. Zur Qualitätsbewertung von Studium und Lehre. Frankfurt/Main: Campus-Verlag, pp. 83–120.

Kompetenzzentrum Frauen in Wissenschaft und Forschung (CEWS) (2003): Hochschulranking nach Gleichstellungsaspekten. Unter Mitarbeit von Andrea Löther. Bonn (cews.publik, 5). (URL: http://www.gesis.org/fileadmin/cews/www/download/cews-publik5.pdf).

Leiber, Theodor (2017): University governance and rankings. The ambivalent role of rankings for autonomy, accountability and competition. In: Beiträge zur Hochschulforschung 39 (3/4), pp. 30–51. (URL: https://www.bzh.bayern.de/fileadmin/news_import/3-4-2017-Leiber.pdf).

Pechar, Hans (1997): Leistungstransparenz oder Wünschelrute? Über das Ranking von Hochschulen in den USA und im deutschsprachigen Raum. In: Altrichter, Herbert; Schratz, Michael; Pechar, Hans (Ed.): Hochschulen auf dem Prüfstand. Was bringt Evaluation für die Entwicklung von Universitäten und Fachhochschulen? Innsbruck: Studienverlag, pp. 157–178.

In addition to the CEWS ranking of higher education institutions by gender aspects, several other higher education rankings include gender equality indicators or focus on gender equality.

THE – Impact Rankings by SDG: gender equality

Times Higher Education has been producing rankings on the UN’s Sustainable Development Goals (SDG) since 2019. The ranking for SDG 5 (Gender Equality) “measures universities’ research on the study of gender, their policies on gender equality and their commitment to recruiting and promoting women.”

Methodology and Indicators

The ranking calculates exact ranking positions and an overall indicator composed of six weighted individual indicators with a total of 18 sub-indicators:

  • Research (27 %)
  • Proportion of first-generation female students (15.4 %)
  • Student access measures (15.4 %)
  • Proportion of senior female academics (professorships, deanships, and senior university leaders) (15.4 %)
  • Proportion of women receiving degrees (BA) (11.5 %)
  • Women’s progress measures (15.3 %)

The indicators include bibliometric data (research indicator), quantitative data (students, degrees, staff), and qualitative data (existence of gender equality measures such as mentoring, anti-discrimination policies, childcare, and the like). Except for the bibliometric data, all data are based on self-reporting by the universities. For the gender equality measures, higher education institutions should provide evidence to support their claims. The exact calculation, especially the linking of quantitative and qualitative data, remains unclear.
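
Since the overall indicator is a weighted composite, a minimal sketch of the calculation could look as follows. The weights are those listed above; the assumption that each indicator score is already normalised to 0-100 is ours, as THE does not fully disclose the calculation.

```python
# Weights from the THE SDG 5 methodology, in per cent
WEIGHTS = {
    "research": 27.0,
    "first_generation_female_students": 15.4,
    "student_access_measures": 15.4,
    "senior_female_academics": 15.4,
    "women_receiving_degrees": 11.5,
    "womens_progress_measures": 15.3,
}

def overall_indicator(scores: dict[str, float]) -> float:
    """Weighted composite of the six indicators, assuming each score
    is already normalised to a 0-100 scale."""
    return sum(WEIGHTS[key] * scores[key] for key in WEIGHTS) / 100.0
```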

A higher education institution is included in the ranking if it provides its data to THE. Requirements for inclusion are BA degree programs (undergraduate level) and official accreditation. The 2023 edition ranks about 1,100 higher education institutions worldwide. Freie Universität Berlin (ranked 76) is the highest-placed of the ten German universities. Four Swiss universities and one private Austrian university are also included. The top ten includes three European universities (University of Bologna, Dublin City University, National and Kapodistrian University of Athens).

Assessment

THE presents an ambitious and differentiated ranking on gender equality that integrates an intersectional perspective via "first-generation students". The linking of quantitative output indicators and qualitative indicators on gender equality measures is also an interesting approach. However, the ranking shares the general weaknesses of most global rankings: the calculation of exact positions suggests exact differences in performance; the large number of indicators makes it difficult to say which issue affects a university's position; and the weighting of the indicators is supported neither theoretically nor empirically. In addition, the ranking does not disclose the values of the individual indicators for the evaluated higher education institutions.

The ranking is updated regularly. Currently, data for 2019-2023 are available.

 

U-Multirank

U-Multirank is a multidimensional, user-driven approach to the international ranking of higher education institutions; it addresses predominantly students. A consortium led by the Centre for Higher Education (CHE), the Center for Higher Education Policy Studies (CHEPS, University of Twente), the Centre for Science and Technology Studies (CWTS, Leiden University) and Fundación CYD (Spain) developed the ranking.

Methodology and Indicators

U-Multirank compares the performances of higher education institutions in five dimensions (teaching and learning, research, knowledge transfer, international orientation and regional engagement), each for individual disciplines. Users can select particular areas and indicators according to their needs. The ranking does not show exact ranking positions but rather ranking groups.

The ranking integrates the following indicators that reflect a higher education institution's gender relations:

  • Proportion of female students
  • Proportion of women among academic personnel
  • Probability of male and female students completing a doctorate ("gender balance")
  • Percentage of female authors among the institution's publications

The data source is a survey of departments and higher education institutions.
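
As a sketch of how the "gender balance" indicator listed above might be computed from cohort figures, consider the following; the figures and the operationalisation as a ratio of completion probabilities are our own assumptions, since U-Multirank does not spell out the formula here.

```python
def completion_probability(completed: int, enrolled: int) -> float:
    """Probability of completing a doctorate within a cohort."""
    return completed / enrolled

# Hypothetical cohort figures for one discipline
p_women = completion_probability(completed=55, enrolled=80)
p_men = completion_probability(completed=60, enrolled=90)

# Gender balance as the ratio of women's to men's completion probability;
# 1.0 would indicate parity (assumed operationalisation)
print(f"Gender balance: {p_women / p_men:.2f}")  # 1.03
```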

Users can integrate the "Gender Balance" data into the ranking of individual disciplines and of universities as a whole by changing the indicators via "Change measures" after creating a ranking. The individual values become visible on mouseover.

Since 2022, the ranking of individual disciplines has contained an indicator on social inclusion, which represents the share of selected traditionally underrepresented groups among all new bachelor entrants: mature students, students with disabilities, and bachelor and master students from non-academic family backgrounds (first-generation students).

U-Multirank also summarizes the data in a Gender Monitor (see section Access to statistical data).

Assessment

U-Multirank's methodology (evaluation at the level of disciplines, ranking groups, individual selection and adjustment) clearly distinguishes it from many other rankings. Because they refer to individual disciplines, the gender balance indicators are more meaningful and avoid distortions caused by universities' discipline profiles. The "gender balance" indicator enables a more differentiated assessment than the mere share of women among doctorates, and users can link it to other indicators without a combined value being calculated. A critical note: users can integrate the "gender balance" indicator only at a late stage of creating the ranking, so the indicator is not very visible.

The ranking is updated regularly.

 

vsvbb ranking: Women's quota at German universities and universities of applied sciences

In 2022 and 2023, the “Verbraucherschutzverein Berlin/Brandenburg e.V.” (vsvbb) published a data analysis on the proportion of women at 40 and 50 universities and universities of applied sciences. The analysis includes data on professors, university and faculty management and students.

Methodology and Indicators

The analysis comprises three tables: professors and junior professors; faculty management and rectors/presidents; and students. Each ranking is based on the proportion of women among professors, faculty heads and students, respectively. The vsvbb obtained the data through a survey among the universities, which were selected according to their size.

The 2023 ranking contains data on 42 universities.

Assessment

Weaknesses of the ranking are the reduction of gender equality to a single indicator, the formation of rankings without taking a university's subject profile into account, and the limitation to a small number of universities.

 

Ranking of WBS Group

In 2019, WBS Group, a private education provider, published three rankings on gender equality at higher education institutions (women professors, women deans, women rectors).

Methodology and Indicators

Each ranking was based on a single indicator: the percentage of women professors, women deans, or women rectors. WBS ranked the institutions by the level of the respective share of women. The higher education institutions self-reported the data in response to a survey by WBS Group.

The ranking contained data on Germany's 37 largest higher education institutions (mainly universities). However, the universities in Munich, TU Berlin, and the universities of Heidelberg and Bielefeld, for example, were missing.

Assessment

The ranking attracted considerable publicity through a publication in Spiegel (link in German) and a striking public relations campaign ("Gender debate in higher education: Most female professors work at these universities"). The reduction of gender equality to a single indicator, the formation of rankings without considering an institution's discipline profile, and the insufficient coverage of universities can be seen as weaknesses.

The ranking is not updated.

 

GEW Code Check

In 2017, Germany's Education and Science Workers' Union (German abbreviation: GEW) published the "Code Check", which ranked state universities in Germany by working and employment conditions. The ranking was based on the ten criteria of the Herrschinger Kodex "Good Work in Higher Education" (link in German), which include "family-friendly design of career paths" and "equal opportunities for women and men".

Methodology and Indicators

The basis of the Code Check was data from the study "Employment conditions and academic staff policies" conducted by the Humboldt University of Berlin. The data on family-friendliness and equal opportunities came from the universities' websites (as of February 2015 and March 2018) and from the Federal Statistical Office (2016). The Code Check provided information on 88 universities (including universities of education).

The Code Check collected the following information for the criterion family-friendliness:

  • Concept for family-friendliness
  • Audit family-friendly university
  • Member of the Best Practice Club

Users could create a ranking for each individual instrument (available or not available). The Code Check did not calculate an overall ranking. Via "Detailed values", users had access to the design of the family-friendliness concept.

The following information was part of the criterion equal opportunities:

  • Proportion of women among full-time academic staff with differentiation of temporary and part-time positions
  • Existence of an equal opportunities concept

Users could create a ranking for the individual data, which, in the case of the quantitative data, resulted from the level of the proportion of women (the higher the proportion of women, the higher the rank). Via "Detailed values", further information was accessible for the individual universities, including the design of the gender equality concept, the position in the CEWS ranking 2017, and the proportions of women and men by career stage.
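
The ranking logic for the quantitative data ("the higher the proportion of women, the higher the rank") amounts to a simple descending sort, as in this sketch with hypothetical figures:

```python
# Hypothetical shares of women among full-time academic staff
shares = {"University A": 0.62, "University B": 0.41, "University C": 0.55}

# "The higher, the better": sort institutions by descending share
for rank, (name, share) in enumerate(
        sorted(shares.items(), key=lambda item: item[1], reverse=True), start=1):
    print(f"{rank}. {name}: {share:.0%}")
```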

Assessment

The Code Check placed equal opportunities and family-friendliness in the larger context of working and employment conditions. The integration of conceptual measures and of data on part-time and fixed-term employment was also positive. No overall value was calculated from the quantitative and qualitative data; instead, the data served primarily as information for users. One weakness was that the ranking of the quantitative data was calculated from the level of the proportion of women, without considering horizontal segregation, following the logic of "the higher, the better": the highest-ranked universities, with female percentages above 60 %, were colleges of education and veterinary schools.

The Code Check is no longer available and is not updated.