The ESC working group on “Quantitative Methods in Criminology” (EQMC) and the ESA Research Network on “Quantitative Methods” (ESA RN21) welcome abstract submissions for presentations at the conference
Modes, Measurement, Modelling: Achieving Equivalence in Quantitative Research
in Mannheim, Germany, 24–25 October 2014, hosted by GESIS – Leibniz Institute for the Social Sciences.
Local Organizing Committee: Tobias Gummer and Henning Best
The conference fee is 30 € for non-members and 20 € for ESA RN21/ EQMC members.
We especially encourage submissions covering topics of the sessions listed below.
We also invite abstracts dealing with other aspects of quantitative methodology in the social sciences, such as comparability and equivalence across survey modes, measurement techniques, cultures, and time, as well as modelling approaches, research designs, and data-analysis techniques. Submissions on recent developments in data collection are likewise encouraged.
Please email your abstracts to the corresponding session organizers, with a copy (cc) to henning.best(at)gesis(dot)org, by
15 August 2014
at the latest. Proposals should contain a title and an abstract of up to 200 words. You will be informed about the acceptance of your paper by 31 August.
When analyzing change, we commonly distinguish between social and individual change. The research question we focus on determines the longitudinal research design we employ: for analyzing social change in a population, repeated cross-sectional designs are considered ideal. However, repeated cross-sections cannot track the same individuals over time, so panel designs are preferred for measuring individual change. Looking at the current state of research, we nevertheless find combinations of both designs (e.g. rotating panels, repeated panels, split panels). These hybrids combine repeated cross-sections and panels in different ways, each chosen to tackle specific weaknesses of either design.
Due to budget restrictions and other constraints, it is not always possible to adopt an ideal research design. It is therefore crucial for practitioners to be able to assess the limitations, applicability, and analytical potential of different longitudinal research designs in order to make informed decisions when determining a suitable design. This session seeks to explore these issues further.
It welcomes – but is not limited to – contributions on:
- substantive research on individual and/or social change using a longitudinal research design.
- the conception and implementation of longitudinal research designs.
- limitations and advantages of one or multiple longitudinal research designs.
- comparisons of different longitudinal research designs and their potential.
Chair: Tobias Gummer, Mail: Tobias.Gummer(at)gesis(dot)org
International comparative survey research that assesses the values, attitudes, and behaviors of certain groups of individuals, whether once or repeatedly, is often plagued by noninvariance of the measurement models of the latent factors under study across cultural groups or over time. It is therefore necessary to test for measurement invariance (MI) across groups (e.g., countries, cultures, or time points) before they are compared. Numerous methodological studies have shown that, in order to make valid comparisons across groups (or within a given group across time), high levels of measurement invariance need to be supported by the data. A high level of MI is typically scalar invariance, which assumes identical factor loadings and indicator intercepts across the groups under study and which allows comparisons of factor means. In practice, survey data often do not exhibit such strict levels of MI for certain latent factors across some or all groups under study. Because of these frequent violations of measurement assumptions, researchers have proposed exploring the possibility of relaxing strict MI assumptions. Partial invariance was proposed more than two decades ago, whereas approaches such as Bayesian structural equation modeling (BSEM), exploratory structural equation modeling (ESEM), and alignment have emerged more recently. The session aims to present developments in this literature and applications of traditional and newer methods of assessing MI, for example to large-scale survey data, and to inform participants about contemporary challenges in MI testing and adequate ways of dealing with them.
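As a sketch (in standard multi-group CFA notation, which is assumed here rather than taken from the call), the invariance levels just described can be written for an indicator $x_{ig}$ of person $i$ in group $g$:

```latex
\text{configural: } x_{ig} = \tau_g + \lambda_g\,\xi_{ig} + \varepsilon_{ig}
\qquad
\text{metric: } \lambda_g = \lambda \ \text{for all } g
\qquad
\text{scalar: } \lambda_g = \lambda,\ \tau_g = \tau \ \text{for all } g
```

Only under scalar invariance (equal loadings $\lambda$ and intercepts $\tau$) can latent factor means be compared across groups; partial invariance relaxes these equality constraints for a subset of indicators.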
Keywords: measurement equivalence / measurement invariance; Bayesian structural equation modeling (BSEM); alignment; exploratory structural equation modeling (ESEM)
Chairs: Eldad Davidov & Peter Schmidt, Mail: davidov(at)soziologie.uzh(dot)ch
Latent variable approaches are spreading rapidly, driven by the implementation of a wide range of statistical methods, varied methodological innovations, diverse areas of application, and a growing circle of users. The session pursues three interrelated objectives: (1) to provide a forum for methodological discussion of the potential applications and limitations of current developments in latent variable models, e.g. structural equation, latent class, or multilevel approaches, with a special focus on comparative research; (2) to compare different latent variable specifications that aim to test the same theoretical assumptions; and (3) to present exemplary substantive applications of innovative practices.
Possible topics include, but are not limited to, the following examples:
- Multilevel structural equation modeling for investigating hierarchical/non-hierarchical nested data structures, cross-level effects, and mixed populations (e.g. multilevel latent class models).
- Applications of Bayesian statistics in the context of multilevel and structural equation modeling.
- Latent Class models for investigating latent categorical constructs.
- Special specification problems or data situations, e.g. categorical data or non-linear effects.
Chairs: Jochen Mayerl & Elmar Schlüter, E-Mail: Jochen.Mayerl(at)sowi.uni-kl(dot)de
Matching techniques are most often used for creating quasi-experimental designs. With these techniques, scholars create groups with (nearly) identical characteristics, allowing them to study the impact of a treatment relative to a control group. In recent years there has been growing interest in applying and advancing matching methods. The session aims to discuss these new applications and advancements. One special focus is the application of matching for non-quasi-experimental purposes. Most scholars concentrate on causal inference using these methods, but matching could be suitable for a broader range of questions. For instance, in a panel study one could ask whether the matched group grows in size or changes its composition over time, highlighting differences in self-selection. Such analyses may be important for questions typically asked by social scientists. An application of matching along these lines could allow social scientists to study differentials in an outcome (such as wages, happiness, health, or gambling) between and within matched and non-matched groups.
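As an illustration of the basic logic (a minimal sketch with simulated data and hypothetical variable names, not code from the call), one-to-one nearest-neighbour matching on observed covariates can be written in a few lines:

```python
import numpy as np

# Simulated data: treatment is self-selected on the first covariate,
# so a naive treated-vs-control comparison of y would be biased.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=(n, 2))                          # observed covariates
t = (x[:, 0] + rng.normal(size=n) > 0).astype(int)   # treatment indicator
y = 2.0 * t + x[:, 0] + rng.normal(size=n)           # outcome; simulated effect = 2

treated = np.flatnonzero(t == 1)
control = np.flatnonzero(t == 0)

# Nearest-neighbour matching (with replacement): for each treated unit,
# pick the control unit closest in covariate (Euclidean) distance.
dist = np.linalg.norm(x[treated][:, None, :] - x[control][None, :, :], axis=2)
matches = control[dist.argmin(axis=1)]

# Matched estimate of the average treatment effect on the treated (ATT),
# expected to lie close to the simulated effect of 2.
att = float((y[treated] - y[matches]).mean())
print(f"matched ATT estimate: {att:.2f}")
```

Dedicated tools typically match on an estimated propensity score rather than raw covariates, but the matching step itself follows this pattern, and comparing the matched and non-matched groups over time would follow the session's non-causal use case.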
The group calls for papers addressing new applications and statistical advancements of matching methods, or discussing the applicability of matching methods beyond quasi-experimental designs.
Chair: Andreas Haupt, Mail: andreas.haupt(at)kit(dot)edu
The main research aim of cross-national survey programmes (such as the European Social Survey, the ISSP, or the World Values Survey) is to achieve comparable results, and the key concept for reaching this goal is “equivalence”, more precisely “functional equivalence”. In certain respects (e.g. sampling, survey modes, translation), considerable progress has been made in recent years, but in some central areas there is still considerable need for further research.
One particular challenge is to achieve construct equivalence and to guarantee the content validity of the major research themes in cross-cultural research. In many applied projects we can still observe the problematic strategy of uncritically applying Western-based approaches that claim universality. On the other hand, several researchers have already started to apply rather strict tests of equivalence (e.g. multi-group confirmatory factor analysis, MGCFA) before conducting a cross-national analysis of survey data.
In this session we aim to search for adequate methods of testing for construct equivalence, to look for strategies to enhance the content validity of certain (culturally sensitive) constructs, and to discuss alternative and innovative approaches to equivalence testing in various research projects. Quantitative researchers active in these fields are warmly invited to present methodological groundwork on the proposed issues or their own strategies for dealing with construct equivalence in ongoing research projects.
Chairs: Wolfgang Aschauer & Martin Weichbold, Mail: Wolfgang.Aschauer(at)sbg.ac(dot)at
The growing availability of secondary data offers the possibility of tackling the same research question with different studies (Eurobarometer, ESS, EVS, EU-SILC, ISSP, etc.) that share similar operationalizations and sample designs. Pooling different studies makes it possible to increase the number of cases and to expand the period of analysis while controlling for data quality. Nonetheless, the pooling procedure introduces potential biases. One of the main challenges in pooling datasets from different studies lies in differences in question wording, coding procedures, question ordering and, more generally, the topics covered by the questionnaire (questionnaire context effects). Harmonization therefore cannot be applied mechanically; harmonization procedures require both technical awareness and profound knowledge of the specific field of study to achieve valid and reliable measurement in comparative research. In the literature on substantive topics (e.g. education, religion, voting behavior), different strategies for dealing with this problem have been proposed, but a systematic reflection is still lacking, notwithstanding the relevance of the problem.
The session welcomes papers that address substantive research questions using pooled survey data from different studies, with a focus on the strategies adopted to harmonize either single variables or multi-item measurement instruments.
Chairs: Ferruccio Biolcati-Rinaldi, Markus Quandt & Cristiano Vezzoni, Mail: ferruccio.biolcati(at)unimi(dot)it
Multilevel models have become predominant in analyses of comparative survey datasets, where respondents are clustered in higher-level units like countries or regions. Such models have also long been fitted to data clustered within units, i.e. repeated observations on individuals or countries. Increasingly, however, researchers are fitting multilevel models to data that are clustered both ways, such as multiple waves of surveys whose respondents are nested in countries or regions, each observed multiple times. (Such datasets may be traditional panels, where each respondent is observed more than once, or they may draw new samples each time.) These comparative longitudinal survey datasets should be useful resources for studies of social change in the broadest sense, and for testing inferences previously based on cross-sectional analyses only. This session welcomes papers grappling with the challenges of analysing such datasets, whether using multilevel modelling or related techniques with different capabilities and advantages. Papers might address recent methodological advances; present illuminating or innovative applications in some field of the social sciences; and/or discuss limitations and challenges that remain.
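As a sketch (the notation is assumed here, not specified in the call), such a two-way clustered model for respondent $i$ in country $c$ at time $t$ can be written as:

```latex
y_{ict} = \beta_0 + \beta_1 x_{ict} + u_c + w_{ct} + e_{ict},
\qquad
u_c \sim N(0,\sigma_u^2),\quad
w_{ct} \sim N(0,\sigma_w^2),\quad
e_{ict} \sim N(0,\sigma_e^2)
```

Here $u_c$ captures stable between-country differences and $w_{ct}$ country-specific fluctuations over time; omitting the country-year term $w_{ct}$ risks overstating the precision of estimated effects of time-varying country-level predictors.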
Chairs: Alexander Schmidt-Catran & Malcolm Fairbrother, Mail: alexander.schmidt(at)wiso.uni-koeln(dot)de
Mixed-mode surveys combine survey modes at the data-collection stage of the survey process. The major motivation for mixed-mode designs is a reduction of selection error, as indicated by the increased response and coverage rates reported in many mixed-mode surveys. However, mixed-mode surveys involve the problem of mode effects, also called measurement effects. Measurement effects imply that the same respondents, observed under different modes, provide varying extents of random and systematic measurement error when answering the same question. Such a survey question is then called ‘measurement non-equivalent’ across modes.
Since a reduction in selection error may be outweighed by an increase in measurement error in a mixed-mode design, researchers are interested in evaluating measurement equivalence either at the design or at the analysis stage of a mixed-mode survey. Moreover, when measurement non-equivalence is detected in mixed-mode data, survey practitioners need tools for adjusting the measurement difference. An important problem in diagnosis and adjustment is the confounding of measurement and selection effects, because a mixed-mode design may reach different types of respondents in different modes. Careful modeling of effects is therefore essential for valid conclusions about measurement equivalence.
This session invites papers that address one or more of the following aspects:
- Statistical models or other approaches for assessing measurement equivalence of modes
- Approaches that control for selection effects when estimating measurement effects in mixed-mode surveys
- Methods for adjusting survey estimates in case of measurement non-equivalence in mixed-mode data
Chair: Thomas Klausch, Mail: l.t.klausch(at)uu(dot)nl
Experimental designs for the study of causal mechanisms have been applied increasingly in the social sciences in recent years. The range of applications includes natural, field, and laboratory experiments, as well as techniques that combine experimental strategies with survey research methods. Typical applications are found in research on norms and deviant behavior, on problems of trustworthiness, on environmental behavior, and in research on methodology itself.
In this session, we encourage an intensive exchange and a critical review of the applicability of experimental techniques in the social sciences. We want to bring together expertise on implementing experimental strategies in field, laboratory, and survey contexts, with particular attention to methodological challenges. What have we learned? Which problems have become apparent? How might they be resolved?
We specifically invite contributions on one or several of the following aspects:
- Applications in the laboratory, in the field, in surveys, and in natural experiments
- Comparisons of experimental data from different types and sources: field, laboratory, survey
- Survey experiments in particular (split-ballot, factorial surveys): analyses of context effects, question-order effects, presentation mode (visual, verbal, open, closed), response-format effects, and survey-mode effects (face-to-face, PAPI, CATI, CAWI)
- Contributions of experimental methods in the analysis of sensitive topics (criminal or deviant behavior)
Chairs: Stefanie Eifler & Knut Petzold, Mail: knut.petzold(at)ku(dot)de
The session aims to present current debates and advancements in the assessment of different types of equivalence and the varying meanings of comparability in quantitative sociological research conducted online. It offers the opportunity to discuss current approaches to investigating the comparability and equivalence of different online measurement techniques, sampling methods, and online surveys completed on different devices (PCs, mobile phones, and tablets). Besides exploring the fast-growing evidence for and against the comparability and equivalence of online and offline modes of data collection in surveys and experiments, we will discuss methodological issues of equivalence and comparability for other types of online and offline data, such as personality inventories and time-budget/travel diaries. Contributions from researchers using novel methods of comparative quantitative research, such as gathering sociologically relevant non-reactive Web data and digital footprints (including automatically captured Web-traffic data and online social-media data), that focus on methodological problems of data collection, archiving, and analysis are also encouraged.
Chairs: Inna F. Deviatko & Aigul Mavletova, Mail: deviatko(at)aha(dot)ru
Underreporting of deviant behaviour is an old and well-known problem of many crime statistics. For obvious reasons, administrative data published by police or courts are in most cases rather incomplete. Standardised interviews focusing on the victimisation reported by affected citizens, or on the crimes disclosed by self-reporting offenders, tend to have the same drawback, mainly due to offenders' fear of punishment and victims' shame. Hence this call solicits methodological and empirical contributions, including international comparisons and case studies,
a) which compare, for a given type of crime, the quantitative bias of different methods for estimating deviant behaviour;
b) which discuss the pitfalls of different methods of crime estimation as well as possibilities to avoid these pitfalls;
c) which search for the unbiased truth with regard to the quantitative prevalence or incidence of crime in a given context.
Other types of contributions to the methodological problem of under- or mis-reporting of crime – e.g. by expert interviews or the observation of anonymous physical traces – are also very welcome.
Chair: Georg P. Mueller, Mail: Georg.Mueller(at)Unifr(dot)ch
This session welcomes presentations on topics of the conference that are not explicitly covered by the sessions listed above. If you would like to present a paper on quantitative methods at this conference that does not fit into the other sessions, please submit your paper to this open session.
Chair: Henning Best, Mail: Henning.Best(at)gesis(dot)org
This session welcomes presentations on topics of the conference that are not explicitly covered by the sessions listed above. If you would like to present a paper on quantitative criminological research at this conference that does not fit into the other sessions, please submit your paper to this open session.
Chairs: Heinz Leitgöb & Daniel Seddig, Mail: heinz.leitgoeb(at)jku(dot)at