Declining response rates have been an ongoing source of concern for decades, and publications documenting international nonresponse date back to the turn of this century (e.g., De Heer, 1999). The first studies tried to describe and explain nonresponse trends, distinguishing between noncontact and refusal rates. Coping strategies included refusal conversion and basic adjustment. In the 21st century the focus shifted to the assessment and reduction of nonresponse bias and representativity, tailored survey designs, mixed-mode strategies, and combining data from different sources (De Leeuw, Luiten, & Stoop, 2020).

Central to both reducing and adjusting for nonresponse is knowledge of the factors influencing the propensity to respond. These include factors under the direct control of the researcher (e.g., mode choices, survey design, communication) and factors outside the researcher's direct control (e.g., attributes of potential respondents, the topic, including the difficulty of the response task, and the social environment, including the survey climate). Although many factors are not under the direct influence of the researcher, their negative influence on response propensity may be reduced by clever design (e.g., adaptive designs, special modes for special groups, structure of the request, adaptation of the questionnaire, selective incentives).

Despite much research effort into response-enhancing methods, trend studies over the years have shown that response rates are declining. However, although almost all countries show a clear decreasing trend, trends do differ between countries, and some countries show a steeper decline than others. These differences in nonresponse trends across countries can be explained only partially by differences in survey design and field methods.

In the scientific literature on survey nonresponse, general attitudes towards surveys and survey climate are often named as important theoretical concepts for explaining nonresponse (e.g., Loosveldt & Joye, 2016). However, trend analyses of attitudes towards surveys are scant, calling for an international survey climate barometer. The relative lack of empirical data on survey climate and its contextual effect on nonresponse rates is mainly due to the absence of a brief and reliable measurement instrument. To fill this gap, the Survey Attitude Scale was developed (de Leeuw, Hox, Silber, Struminskaya, & Vis, 2019). This scale has been used in Germany and the Netherlands, and is now implemented in the first and last waves of the CRONOS-2 panel, a probability-based online panel fielded in 12 European countries.

In the first part of this presentation I will briefly review studies into nonresponse trends and discuss known determinants of survey nonresponse. In the second part, I will describe the development of the Survey Attitude Scale. I will end with a description of the CRONOS-2 implementation.

Recent research suggests that the public is interested in poll results for a number of reasons, but public evaluations of poll results are not based on knowledge of their underlying methodology or the source of reported results. An important element of perceptions of their accuracy and trustworthiness is whether or not the results conform to an individual's own views on the subject. This view is supported by survey experiments utilizing both policy and horse race polls in different countries. The consequences of this understanding for expectations of the role of polls in a democratic society and how poll results might affect policy making will be discussed.


Corinna König

IAB - Institut für Arbeitsmarkt- und Berufsforschung, Germany


Surveys are generally of great importance in research and policy analysis, yet the survey climate does not always reflect this importance as voluntary surveys are facing declining response rates. This article analyzes response rate trends in the IAB Establishment Panel, one of the largest and longest-running establishment panel surveys in Germany, over a period of 17 years. It is already known that many cross-sectional surveys have experienced decreasing response rates and increasing risk of nonresponse bias in recent decades, but response rate trends in establishment panel surveys are largely unknown. We show that wave-to-wave response rates were very stable over this period with no unusual patterns of nonresponse bias, but that establishment- and interviewer-level influences are associated with subsequent participation. In contrast, cumulative response rates have been decreasing and nonresponse biases increasing over time. Here, establishment-level characteristics and response behavior are associated with continued participation in the panel. We conclude the paper by making the case that voluntary establishment panel surveys should put more effort into motivating certain types of establishments that seem to be more difficult to recruit and retain.

Anna-Carolina Haensch, Bernd Weiß, Patricia Steins, Priscilla Chyrva, Katja Bitz

Ludwig Maximilian University, Germany


In this study, we demonstrate how supervised learning can extract interpretable survey motivation measurements from numerous responses to an open-ended question. We manually coded a subsample of 5,000 responses to an open-ended question on survey motivation from the GESIS Panel (25,000 responses in total); we utilized supervised machine learning to classify the remaining responses. The responses on survey motivation in the GESIS Panel are particularly well suited for automated classification, since they are mostly one-dimensional. The evaluation of the test set also indicates excellent overall performance. We present the pre-processing steps and methods we used for our data, and by discussing other popular options that might be more suitable in other cases, we also generalize beyond our use case. We also discuss various minor problems, such as the need for spelling correction. Finally, we showcase the analytic potential of the resulting categorization of panelists' motivation through an event history analysis of panel dropout. The analytical results allow a close look at respondents' motivations: they span a wide range, from the urge to help, to interest in the questions, to the incentive, to the wish to influence those in power through participation.
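The workflow described above (hand-coding a subsample, training a classifier on it, then labeling the remaining open-ended answers) can be sketched roughly as follows. The category labels, example answers, and model choice (TF-IDF features with logistic regression) are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: train a text classifier on manually coded answers, then
# classify the uncoded remainder. All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Manually coded subsample: (open answer, motivation category)
coded = [
    ("I like to help with research", "helping"),
    ("happy to support science", "helping"),
    ("the questions are interesting", "interest"),
    ("I find the topics interesting", "interest"),
    ("I do it for the incentive", "incentive"),
    ("the money you pay me", "incentive"),
]
texts, labels = zip(*coded)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Classify the (much larger) uncoded remainder
uncoded = ["interesting questions", "I want to help"]
predictions = model.predict(uncoded)
```

In practice one would evaluate on a held-out test set, as the authors do, before trusting the automated labels.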

Imke Herold, Michael Bergmann, Arne Bethman

Munich Center for the Economics of Aging (MEA), Max-Planck-Institute for Social Law and Social Policy, Germany; Chair for the Economics of Aging, Technical University of Munich, Germany


Like most surveys, SHARE suffers from a considerable amount of item non-response, particularly regarding income questions, as well as from respondents' reluctance to consent to record linkage. Both aspects have a potentially severe impact on data quality, leading to less precise as well as biased estimates, and thus might hamper substantive analyses. While SHARE tries to alleviate these problems, e.g. through income imputations, it seems especially important to understand the underlying issues that keep respondents from reporting their income or consenting to record linkage. In order to add to the available body of research on the workshop's subject from the SHARE perspective, with its specific sample and study focus, we used the opportunity to include an additional paper-and-pencil drop-off questionnaire with the German sub-study, focusing on issues like trust in surveys and organizations, data privacy concerns, and attitudes towards income questions. Using data from the German SHARE Wave 8 drop-off questionnaire, we want to contribute to the question of whether several aspects of trust are interrelated with income non-response on the one hand and denial of linkage consent on the other. By choosing income questions (a classic case of item non-response) and consent to record linkage (a more innovative survey tool), we try to cover a broad range of aspects that can be relevant for respondents' trust. Due to the pandemic-induced stop of fieldwork in early 2020, our data so far only comprise the cases collected until that point in time. For a subsequent publication, we plan to include cases collected during the currently ongoing resumption of fieldwork, allowing for a precise comparison of the survey climate before and after the pandemic.

Henning Silber1, Sven Stadtmüller1, Alexandru Cernat2 

1GESIS – Leibniz Institute for the Social Sciences, Germany 2University of Manchester, United Kingdom 


In times of declining response rates and over-surveying, it is more important than ever to improve our understanding of why people participate in surveys. Brüggen et al. (2011) showed that participants of a Belgian online panel have intrinsic (e.g., topic interest, helping) and extrinsic (e.g., incentives) participation reasons. Our study expands previous research by implementing an experiment using two common forms of survey measurement simultaneously: ranking and rating. The experiment was fielded in a professional respondents’ sample from a German online access panel (n~400) and in an address-based sample (mail and online) of German nonprofessional respondents (n~1200). Besides extrinsic and intrinsic motivations, the experiment included various study design features (i.e., mode, length, data security) and the mood at the time of contact as possible reasons for participation. Preliminary results, which include latent class models, confirm previous findings regarding the motivations of online panelists but also show important differences between professional and nonprofessional respondents. Specifically, the main participation reasons of professionals are topic interest (intrinsic) and incentives (extrinsic), while nonprofessionals are primarily motivated by intrinsic reasons (topic interest and purpose of the study). The differences between the two samples highlight that online panel members have different motivation structures than participants in general population surveys, which may undermine generalizability. With respect to the survey climate, the study shows how important it is to contribute to a survey-taking climate in which people see good reasons for their participation in survey research.

Isabelle Fiedler1, Ulrike Schwabe1, Swetlana Sudheimer, Thorsten Euler1, Niklas Jungermann2, Nadin Kastirk, Gritt Fehring

1German Centre for Higher Education Research and Science Studies (DZHW) Hannover, Germany 2University of Kassel, Germany


Among other factors, general attitudes towards surveys are part of respondents’ motivation for survey participation. There is empirical evidence that these attitudes predict participants’ willingness to perform supportively during (online) surveys (de Leeuw et al., 2017; Jungermann et al., 2019). Therefore, several attempts have already been made to measure generalized attitudes towards surveys (e.g. Rogelberg et al., 2001). Most recently, the Survey Attitude Scale (SAS) was proposed by de Leeuw et al. (2010, 2019): a short nine-item instrument designed to be operational internationally and independent of the survey mode. Theoretically and empirically, it differentiates between three dimensions: (i) survey enjoyment, (ii) survey value, and (iii) survey burden. To the best of our knowledge, there is no empirical study to date that has validated the scale with samples of highly qualified populations. Building on the work of de Leeuw and colleagues (2019), we therefore investigate whether the SAS measures can be compared across different online survey samples of highly qualified respondents from German higher education, and whether these measures match those of the general population. To validate the SAS instrument, we implemented its nine-item short form, adopted from the GESIS Panel (Struminskaya et al., 2015), in four different online surveys for German students, graduates, and PhD students:

  • (1) the HISBUS Online Access Panel (winter 2017/2018: n=4,895),
  • (2) the seventh online survey of the National Educational Panel Study (NEPS) - Starting Cohort “First-Year Students” (winter 2018: n=5,109),
  • (3) the third survey wave of the DZHW graduate panel 2009 (2019, n=664) and
  • (4) a quantitative pre-test among PhD students within the National Academics Panel Study (Nacaps; spring 2018: n=2,424).

Additionally, data from the GESIS Online Panel serve as reference data for benchmarking (GESIS 2022). In a first step, we use confirmatory factor analysis (CFA) to validate the SAS. In a second step, we perform multi-group CFA on an integrated dataset to test for measurement invariance. We evaluate measurement invariance hierarchically on four levels (Chen 2007; Ender 2013). First, the CFA results indicate that the latent structure of the SAS is reproducible in all four samples. Factor loadings as well as reliability scores support the theoretical structure adequately. Compared to our reference data, however, the models for the group of highly qualified respondents fit slightly worse. We also find differences within our highly educated sample: the instrument performs worst in the samples of PhD students and graduates. Second, we test measurement invariance using the multi-group CFA. Our empirical findings support configural and metric invariance among the four samples; however, scalar and strict invariance are not supported. Since de Leeuw and colleagues’ (2019) analyses are based on general population surveys, we extend the picture by focusing on young and highly educated respondents. This is relevant for two reasons: first, higher education research also suffers from declining response rates, so knowledge about respondents’ survey attitudes might inform survey designs that soften this trend. Second, it is important to test whether measurement instruments developed for the general population also work for this specific group of highly qualified respondents.
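The four hierarchical invariance levels can be summarized in standard multi-group CFA notation. This is a generic sketch with generic symbols, not the authors' exact model specification.

```latex
% Measurement model for group g:
x^{(g)} = \tau^{(g)} + \Lambda^{(g)} \xi^{(g)} + \varepsilon^{(g)}
% Configural invariance: same pattern of loadings in all groups (parameters free)
% Metric invariance:     \Lambda^{(1)} = \dots = \Lambda^{(G)}
% Scalar invariance:     additionally \tau^{(1)} = \dots = \tau^{(G)}
% Strict invariance:     additionally \Theta^{(1)} = \dots = \Theta^{(G)},
%                        where \Theta^{(g)} = \operatorname{Cov}\bigl(\varepsilon^{(g)}\bigr)
```

Each level is nested in the previous one, which is why the models are compared hierarchically via fit differences (cf. Chen 2007).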

Don A. Dillman1, Abbey Hammell2, Katie Dentzman3, Kenneth Wallen4

1Washington State University, USA 2University of Minnesota, USA 3Iowa State University, USA 4University of Idaho, USA


Conflicting perspectives are emerging on the continued importance of surveys as a source of knowledge on societal issues. One perspective is that their usefulness is declining due to the incredible increase in the use of surveys, accompanied by a similarly large decline in response rates fueled by lack of trust and even anger over how results will be used. Another perspective is that more and more surveys are being conducted because of the greater importance of the information collected by them for real-time decisions, the direct impact of which may result in an increase in people’s willingness to participate in surveys. This paper explores these contradictory possibilities.

There is no doubt that the number of surveys people are being asked to complete has increased dramatically in the 21st century. The availability of individual email addresses and do-it-yourself surveying software have resulted in more people being able to do surveys as well as diversifying the types of surveys that get done. Surveys are now used to schedule meetings, request answers to one or two questions likely to affect the survey sponsor’s decisions, and get quick readings on client satisfaction with products and services. In addition, survey results are quickly combined with previously collected organizational records to interpret people’s answers. Institutionally, the survey method is expanding from an expensive undertaking, which took weeks or months to implement, and for which cost considerations made sampling essential, to a tool that can potentially provide feedback from entire survey populations, in a few hours, for extremely specific purposes.

The notion of what constitutes a survey and how to evaluate the results is also in the process of changing. In the 20th century, surveys were expensive, and their accuracy and worth for making decisions were typically assessed by applying scientifically based criteria for reducing survey error, i.e., coverage, sampling, measurement, and nonresponse. Such surveys still exist and are needed. But the term survey is now being applied to data collection that might more generally be thought of as “feedback” that can have quick and significant effects on corporate and societal decisions. Increasingly, “survey” has become a general label for asking large numbers of people for feedback in a variety of ways. Arguably, some of these new forms of surveying have more influence on government and corporate decisions than traditional surveys that were expensive, slow to design and implement, and often took years to produce actionable results.

Some of these “new” surveys have an aura of legitimacy and use, which past surveys done by traditional methods seem not to have had. For example, an airline reservation system that had changed procedures for identifying flight possibilities interrupted the process of making a reservation with the simple question, “We have recently changed how we present flight alternatives. Is the current way better or worse than the one you used previously?” A customer’s immediate response may be more accurate and useful than if customers were asked a few weeks later in a general satisfaction survey in which they may remember little about the specific process of making a reservation. This is only one tiny example of what is new about surveying in the 21st century.

The changing nature of who does surveys and how they get done may be having conflicting impacts. On the one hand, it has been suggested that we are experiencing a “tragedy of the survey commons,” whereby many surveyors are pursuing the same people for responses to their surveys, and not doing so in a respectful or understandable way. Consequently, the response rates to all surveys are declining, as is the willingness of people to provide thoughtful answers. On the other hand, the increase in the volume of surveys, and a tendency for people to see different surveys in diverse ways, may portend a long-term trend towards more precise and shorter surveys that produce practical results, so that responding begins to be seen as a routine and useful part of life. It also seems plausible that potential respondents may come to see and appreciate that certain surveys provide better results for decision making in organizations and society than when decision makers would just talk to their colleagues to get their opinions, as often happened in the past.

Our purpose in this paper is to describe and categorize emerging methods of surveying by purpose and effects on respondents. We also discuss how these diverse types of surveys may be affecting people’s response behaviors. Our goal is to develop a long-term agenda for assessing how the changing nature of surveying is affecting the potential cooperation of respondents, whose responses are critical for surveys to have value.

Timothy P. Johnson1, Henning Silber2 and Jill Darling3

1University of Illinois at Chicago, USA 2GESIS – Leibniz Institute for the Social Sciences, Germany 3University of Southern California, USA


Anecdotal evidence suggests that the term “pollster” may have in recent years become more stigmatized in the U.S., potentially caused by an increasingly hostile survey climate. This question is explored using split ballot experiments administered to national probability-based online samples after the 2016 and 2020 elections in the U.S. (studies 1 & 2, respectively) using the University of Southern California’s probability-based Understanding America Study (UAS) Panel. Respondents were randomly asked to rate the “honesty and ethical standards” of “pollsters” vs. “survey researchers” (Study 1, n=2,462) and “public opinion researchers” (Study 2, n=6,885). In each study, pollsters obtained significantly more negative ratings, suggesting that the general public views pollsters, who are more likely to be associated with elections and political polling, as being less honest and/or ethical than survey researchers and/or public opinion researchers, who may be perceived as being more scientific and/or objective. Interaction models revealed that those with the most favorable ratings of pollster critic Donald Trump had the most unfavorable ratings of the honesty/ethical standards of pollsters and public opinion researchers, in contrast to survey researchers.

A second experiment embedded in Study 2 investigated whether the negative perceptions of pollsters affect the perceived trustworthiness of survey results. In the 3x2 vignette experiment, two variables were manipulated in a report of findings from a public opinion poll concerned with gun control laws: the source of the information provided (pollster vs. public opinion researcher vs. survey researcher) and the amount of methodological information provided (mode, sample type, sample size, eligibility, margin of error, response rate, and specific wording vs. no details). Regression analyses revealed no impact of the information source. Consistent with motivated reasoning expectations, however, persons who more strongly supported stricter gun control laws found the vignette poll findings, which reported that a majority of the public also supported strict gun control measures, to be more trustworthy. Counter-intuitively, providing more methodological information was significantly associated with less trust in the survey result. We conclude that while perceptions of the honesty of pollsters appear to be lower in the U.S., particularly among supporters of Donald Trump, those perceptions do not directly translate into less trust in the findings of public opinion surveys.

Sven Stadtmüller1, Henning Silber1, Jessica Daikeler1 and Florian Keusch2

1GESIS – Leibniz Institute for the Social Sciences, Germany 2University of Mannheim, Germany & University of Maryland, United States


Today, survey results on almost every political event of significance are reported promptly in the media. But how do people process such results, and on what do they base their trust in them? So far, research has found two main answers to these questions: first, people are more likely to believe survey results that are aligned with their own attitudes (motivated reasoning). Second, people rely on intuitive indicators of survey quality when assessing credibility if such information is available. With this study, we want to shed more light on the interplay of motivated reasoning and the use of survey quality information. Based on the concept of heuristics and the role of cognitive capabilities in heuristic processing, we derive hypotheses on when pre-existing attitudes and when survey quality indicators dominate the processing of survey results. To test our hypotheses, we rely on a 4x2x3x3 vignette experiment implemented in a probabilistic internet panel from Germany (n~4,000). Each respondent received four randomly selected vignettes describing surveys on attitudes toward a further enlargement of the European Union (EU). The survey descriptions differed in four dimensions: the result (41%, 45%, 55%, or 59% of respondents are in favor of EU enlargement); the description of the survey as representative or not; the net sample size (500, 1,000, or 5,000); and the response rate (10%, 30%, or 50%). In addition, prior to the experiment, respondents were asked to state their attitude toward EU enlargement, their attitude certainty, and their personal interest in this matter. The results show clear indications of motivated reasoning and a positive effect of survey quality on the perceived credibility of the result. Most notable, however, is a substantial and significant interaction between the effects of pre-existing attitudes and survey quality on perceived credibility.
Moreover, cognitive abilities not only moderate the effect of survey quality indicators but also the degree to which respondents make use of their pre-existing attitudes to evaluate the credibility of the survey result. In the discussion, we focus on the implications of our study for the communication of survey results.
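As a rough illustration of the 4x2x3x3 design described in this abstract, the full vignette universe and a per-respondent draw can be generated as follows. The dimension levels are taken from the abstract; the simple random draw of four vignettes per respondent is a simplifying assumption about the allocation scheme.

```python
from itertools import product
import random

# Experimental dimensions as described in the abstract
results = ["41%", "45%", "55%", "59%"]               # share favoring EU enlargement
representativeness = ["representative", "not representative"]
sample_sizes = [500, 1000, 5000]                     # net sample size
response_rates = ["10%", "30%", "50%"]

# Full factorial vignette universe: 4 x 2 x 3 x 3 = 72 survey descriptions
universe = list(product(results, representativeness, sample_sizes, response_rates))

# Each respondent rates four randomly selected (distinct) vignettes
vignettes_for_respondent = random.sample(universe, 4)
```

With n~4,000 respondents rating four vignettes each, every cell of the 72-vignette universe is observed many times, which is what permits the interaction analysis described above.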

Ruoh-rong Yu, Su-hao Tu

Academia Sinica


Surveys on surveys have been widely applied by survey researchers to investigate public attitudes toward surveys. Among the relevant studies, many researchers have stressed the importance of survey experience for respondents’ attitudes toward surveys. For example, Goyder (1986) suggested that more experienced respondents are more likely to show positive attitudes toward surveys in a survey on surveys. However, existing studies on the association between respondents’ survey experience and survey attitudes have focused on their linear (or monotonic) relationship and ignored the possibility that respondents with different levels of survey experience might diverge in their attitudes toward surveys.

This study aims to examine whether respondents with low-, mid-, and high-frequency survey attendance differ in their demographic traits, and to identify the similarities and differences in attitudes toward surveys and data sharing among these three types of respondents. The data used in this study are from an online survey based on a probability-based online panel maintained by the Center for Survey Research (CSR), Academia Sinica, Taiwan. The members of the online panel were recruited from the in-person surveys, telephone surveys, online surveys, and SMS surveys conducted by the CSR. Our online survey was conducted between April 19 and May 3, 2021, with its theme being public attitudes toward surveys and data. At most three reminders were sent to those who did not fill in the questionnaire. Among the 7,400 invitation emails or short messages sent successfully, 3,770 respondents completed the questionnaire, a completion rate of 50.95%. Each individual who finished the questionnaire received a convenience store e-voucher worth 50 Taiwan dollars (about 1.6 euros).

In the questionnaire, the question on respondents’ frequency of attending online surveys is: “Besides the online surveys implemented by our center (CSR), in the past three months, how often have you participated in online surveys conducted by other organizations?” The options for this question are: (1) almost every day, (2) about one to three times a week, (3) about one to three times a month, (4) about once every two to three months, and (5) none. Considering that there are only a few cases in options (1) and (2), we regrouped the first three options (options (1)-(3)) into the “high-frequency” type and kept the last two options (options (4) and (5)) intact (the “mid-frequency” and “low-frequency” types, respectively).
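The regrouping described above amounts to a simple recode of the five answer options into three frequency types. A minimal sketch, with the option wordings paraphrased in comments:

```python
# Recode the five answer options into three frequency-of-attendance types
recode = {
    1: "high",  # almost every day
    2: "high",  # about one to three times a week
    3: "high",  # about one to three times a month
    4: "mid",   # about once every two to three months
    5: "low",   # none
}

answers = [5, 3, 4, 1]                     # example raw responses
types = [recode[a] for a in answers]       # → ['low', 'high', 'mid', 'high']
```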

Our analysis of the demographic traits of these three types of respondents shows that highly educated and high-income respondents are more likely to be of the “mid-frequency” type, whereas the high- and low-frequency types are more similar in their education and income levels. Relative to females, males are more likely to attend online surveys frequently. The findings also reveal that respondents who reside in urban areas are more likely to belong to the “low-frequency” type. These findings indicate that the three types of respondents are heterogeneous in their demographic characteristics.

Controlling for relevant demographic traits (e.g., sex, education, work status, income, and residential area) that might affect respondents’ attitudes toward surveys and data, we adopted ordered probit models to analyze the effects of respondent type (low-, mid-, or high-frequency) on attitudes regarding survey participation, survey organizations, and data sharing. The main findings are briefly discussed in the following. First, regarding respondents’ perceived importance of different factors in deciding whether to take a survey, the findings show that, relative to the other types of respondents, the mid-frequency type regards whether the survey topic is related to public policy and the length of the survey as important factors. The high-frequency type, on the other hand, attaches more importance to whether there is a reward for the survey.
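For readers unfamiliar with the model, an ordered probit assigns each ordinal response category a probability given by differences of standard normal CDFs evaluated at estimated cutpoints around a linear predictor. A minimal stdlib sketch of those category probabilities; the cutpoints and the linear predictor value are invented for illustration, not estimates from this study:

```python
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def category_probs(xb: float, cuts: list) -> list:
    """P(y = k | xb) for an ordered probit with ascending cutpoints `cuts`:
    P(y = k) = Phi(c_k - xb) - Phi(c_{k-1} - xb)."""
    bounds = [float("-inf")] + list(cuts) + [float("inf")]
    return [norm_cdf(hi - xb) - norm_cdf(lo - xb)
            for lo, hi in zip(bounds, bounds[1:])]

# Four attitude categories require three cutpoints (illustrative values only)
probs = category_probs(xb=0.3, cuts=[-1.0, 0.0, 1.0])
```

In the actual analysis, xb would contain the respondent-type dummies and demographic controls, and the cutpoints would be estimated by maximum likelihood.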

Second, as to trust in surveys and surveyors, our findings indicate that, compared to the other types, the low-frequency type is less confident that surveyors will not fake their survey results. In addition, the low-frequency type tends to worry more that the media will distort survey results and that many companies will use the pretext of conducting a survey to sell products.

Third, regarding data sharing, the high-frequency type is found to be more open to personal data usage, in the sense that they agree more that anonymized personal information held by the government can be sold to private companies. In addition, the high-frequency type is more willing to share their own Facebook, GPS, and health check data for academic use. Even regarding the unique personal identification number that can be used to link to other personal information, the high-frequency type is less sensitive than the other two types.

The results listed above indicate that the three types of respondents are not only heterogeneous in their demographic characteristics, but also divergent in their motives for attending a survey, their degrees of trust in surveys and surveyors, and their attitudes toward data sharing. These findings suggest that the association between survey experience and attitudes toward surveys and/or data deserves more attention in future studies.

Julia Kleinewiese

Mannheim Center for European Social Research


Lack of trust in surveys and issues with survey climate are an increasing problem in survey research. This threatens the quality of survey data (e.g., its validity). Moreover, it poses challenges in gathering data, particularly among “hard-to-reach” populations. Finally, these societal attitudes restrict the use of scientific results in practice – by policy-makers and other relevant stakeholders.

While in line with these general trends, research on deviance and crime faces particularly acute challenges with regard to trust and social desirability bias. Many people fear that they, their (work)group, or their organization may fall into disrepute if negative tendencies are brought to light by research. Moreover, people may feel pressured to respond according to social norms and standards (social desirability), biasing the results. They may feel insulted (“are you suggesting that we/I would do that?”), leading to several possible outcomes, such as refusal to participate in the survey, biased responses, or general anger towards researchers. In the long run, this results in generally low response rates and distrust toward research results. Additionally, many organizations refuse to participate in research on deviance altogether. This collective unit non-response is due to a “code of silence” within the organization (people maintaining silence about possible wrongdoings within the organization).

These issues are particularly acute for items directly inquiring about deviance or crime. Therefore, to increase trust in surveys and reduce social desirability bias in this field of research, it is expedient to apply survey methodologies that rely less on respondents’ own experiences; for instance, survey experiments describing fictitious situations. These allow for some distancing, for example by describing the situation from a third-person perspective. Previous research suggests that survey experiments can help reduce social desirability bias (Auspurg & Hinz, 2015; Mutz, 2011). Survey experiments are also likely to increase respondents’ trust in (and lower their skepticism towards) the survey and scientists. While this approach has been applied in a number of studies, it is still met with skepticism by many researchers. Existing publications using factorial survey experiments in empirical research, however, show that this approach is an asset in research on deviance and crime (e.g., Dickel & Graeff, 2018). This presentation elaborates on why and how this is the case, and why establishing this approach as a scientific standard is important for the research field of deviance and crime, in order to increase participation and honest answering behavior as well as to improve data quality.

Lukas Griessl

University of Essex


Since the mid-20th century, probability sampling has been seen as the gold standard among pollsters and survey researchers. While the statistical principles behind probability sampling have remained valid and stable throughout this period, developments such as increasing nonresponse rates, rising costs, time considerations, and the rise of new sources of online data gathering have made non-probability sampling widespread. This has led to a heated and charged controversy, which is unlikely to have escaped anyone in the survey community. The importance of this debate lies not only in the way it is expressed, but also in the fact that it touches the core of statistical inference: how to bridge the gap between a part and the whole. Furthermore, this controversy has also been problematised in the media and the public sphere, which further endangers public trust in, and support for, scientific surveys and polls.

As part of my doctoral research, I conducted 25 semi-structured interviews with statisticians, pollsters, and survey researchers from Germany, the Netherlands, and the US to explore the emotional, historical, and social dimensions of this controversy. From the analysis of these interview materials, several themes emerged, such as ‘war’, ‘fraud’, ‘charlatanry’, ‘magic’, or ‘counter-enlightenment’, which serve as a background against which the controversy can be mapped. These themes, which have been articulated by leading participants in the controversy, point to a deep divide within the survey landscape that is reminiscent of previous episodes in the history of sampling. In this paper, I will present some of these themes, combined with an analysis of what they mean for the current survey climate. Furthermore, drawing on the history and sociology of (social) science, I will elaborate on the role of these themes in the history of survey sampling and the development of the field.

Blanka Szeitl, Zita Fellner

Eötvös Loránd University (ELTE)


The quality of survey research is under permanent pressure, as quantitative data analysis is required more and more often in the social sciences, in university degree work, and in the scientific literature. Meanwhile, with broader access to survey research through online platforms, the quality of online data collection is increasingly becoming the focus of attention.

The authors conducted a qualitative study that aimed to assess the prevalence of quota sampling and online surveys in Hungary. For this purpose, public opinion researchers were asked about the surveys they performed in the last three years. Papers published between 2019 and 2021 in the most important Hungarian journals in the fields of sociology and economics were also assessed for the adequacy of the survey methodology used. The results show that self-conducted data collections rarely met the criteria of high-quality survey research; nonetheless, this is not an insurmountable obstacle to publication.

In this paper, the authors also present a simulation study showing that, assuming a difference in the preferences of the internet-user and non-user populations, the bias in the estimate for the total population is so large that, in most of the cases considered, the estimate based on internet users alone falls outside the 95 per cent confidence interval of the estimate based on the total population. These results suggest that the validity of online surveys depends strongly on the homogeneity of preferences among internet users and non-users, which has to be confirmed in each research field before online surveys can be considered scientifically acceptable.
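The kind of comparison the abstract describes can be sketched in a few lines. The population shares, preference means, and sample sizes below are illustrative assumptions, not the parameters of the authors’ actual study.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 70% internet users, 30% non-users, whose mean
# "preference" differs (all parameters here are illustrative assumptions).
N = 100_000
users = [random.gauss(0.55, 0.2) for _ in range(int(N * 0.7))]
non_users = [random.gauss(0.35, 0.2) for _ in range(int(N * 0.3))]
pop = users + non_users

n = 1000      # sample size per simulated survey
reps = 500    # number of simulation runs
outside = 0
for _ in range(reps):
    full = random.sample(pop, n)       # probability sample of the whole population
    online = random.sample(users, n)   # "online survey": internet users only
    mean = statistics.mean(full)
    se = statistics.stdev(full) / n ** 0.5
    lo, hi = mean - 1.96 * se, mean + 1.96 * se
    # Does the online-only estimate fall outside the 95% CI of the full sample?
    if not (lo <= statistics.mean(online) <= hi):
        outside += 1

print(f"online estimate outside the 95% CI in {outside / reps:.0%} of runs")
```

With a preference gap of this size, the online-only estimate falls outside the full-population confidence interval in essentially every run, which is the pattern of results the authors report; shrinking the user/non-user gap makes the two estimates increasingly compatible.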