
Internet-based sampling has become commonplace in trauma research. The potential of internet-based studies was highlighted during COVID-19, when a multitude of online surveys were launched across the world. Given pandemic-related restrictions and the immediate need to understand how the pandemic was affecting mental health, internet-based studies provided an important way to reach people widely and conduct rapid research, including the recruitment of harder-to-reach or vulnerable populations. However, there are some aspects of internet-based sampling that should be considered as we examine the body of literature that has emerged from this research and attempt to draw conclusions.

We need to consider the representativeness of these samples. Online survey research tends to be skewed towards younger participants, perhaps due to higher levels of online engagement in general, and women are also more likely to participate in these kinds of studies than men (Sanchez et al., 2020). Furthermore, if the study was promoted using social media or snowballing methods, how wide was the reach? There is also a question about whether these samples have the same levels of trauma exposure or distress as the population we are interested in (van Stolk‐Cooke et al., 2018). It is particularly important to examine the reasons why people choose to participate in these kinds of studies. Is it because they were particularly impacted by the pandemic and therefore felt drawn to participate? Or, alternatively, is it because they are not too distressed and therefore feel able to cope with participating in a study?

We need to think about the quality of the data. This is true for any kind of study, but there is growing evidence that recruiting from online research platforms such as MTurk, Prolific, and CloudResearch can be particularly problematic (Kim and Hodgins, 2020). Some of these problems include inattentive participants, intentionally dishonest respondents, and the increasing presence of bots (computer programs designed to mimic workers) and farmers (individuals who use server farms to hide their location or identity, mostly for the purpose of receiving payment for participation).

There are some ways to address these issues. In terms of representativeness, quota sampling can help, as can targeted adverts for harder-to-reach groups. Once data are collected, we can use statistical methods to control for disproportionate representation. Quality control procedures can be integrated into the surveys—for example, using simple arithmetic questions and CAPTCHAs, screening for inconsistent responding or statistically improbable responses, and identifying unusual responses to open-ended questions (Chmielewski & Kucker, 2020; Mellis & Bickel, 2020). Agley et al. (2021) recently published a study in which they found that employing differing quality control approaches significantly affected scores relating to alcohol use (USAUDIT), depression (PHQ-9) and anxiety (GAD-7), further highlighting this need.
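The quality-control screens described above can be automated once the data are in. The sketch below is a minimal illustration in Python/pandas, assuming hypothetical column names (attention_check, response_time_s, phq9_*) and arbitrary thresholds; any real study would calibrate these to its own instruments.

```python
import pandas as pd

def flag_low_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Flag survey rows that fail common data-quality screens.

    Columns and thresholds here are illustrative placeholders.
    """
    flags = pd.DataFrame(index=df.index)
    # 1. Failed a simple arithmetic attention check (e.g. "2 + 3 = ?").
    flags["failed_attention"] = df["attention_check"] != 5
    # 2. Implausibly fast completion, suggesting inattentive responding.
    flags["too_fast"] = df["response_time_s"] < 60
    # 3. Zero variance across scale items (straight-lining).
    item_cols = [c for c in df.columns if c.startswith("phq9_")]
    flags["straight_lined"] = df[item_cols].nunique(axis=1) == 1
    # Exclude a respondent if any screen is triggered.
    flags["exclude"] = flags.any(axis=1)
    return flags
```

Flagged rows would then be inspected or excluded before analysis, and the exclusion rate itself reported, since (as Agley et al. note) the choice of screens can shift substantive results.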
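One common statistical control for disproportionate representation is post-stratification weighting, which up-weights under-represented strata (e.g. older men) so the weighted sample matches known population proportions. A minimal sketch, assuming a single hypothetical "stratum" column and invented population proportions:

```python
import pandas as pd

def poststratify(df: pd.DataFrame, pop_props: dict) -> pd.Series:
    """Return a weight per row so that weighted stratum shares
    match the supplied population proportions.

    Stratum labels and population proportions are illustrative.
    """
    # Observed share of each stratum in the sample.
    sample_props = df["stratum"].value_counts(normalize=True)
    # Weight = population share / sample share for the row's stratum.
    return df["stratum"].map(lambda s: pop_props[s] / sample_props[s])
```

In practice, strata are built from joint demographic cells (age by gender), and weights are trimmed to avoid a few rows dominating the analysis.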

Furthermore, it is critical that we conduct meta-research (research about research) in this area. What is the experience of participants in these studies? Why do they participate? Critically—how does it make them feel? Do they know how to get help if they need it, and do they actually get help when their participation has caused distress?

Internet-based studies are increasingly becoming an integral part of our “empirical knowledge.” Therefore, it is critical to continue to develop methods to improve the quality, reliability, and representativeness of online research.

About the Author

Talya Greene, PhD, is Associate Professor and Head of the Department of Community Mental Health at the University of Haifa. Her research focuses on investigating the dynamics of traumatic stress symptoms in daily life using ecological momentary assessment, and on psychopathological symptom networks. She is Chair of the Research Methodology SIG in ISTSS.


Agley, J., Xiao, Y., Nolan, R., & Golzarri-Arroyo, L. (2021). Quality control questions on Amazon’s Mechanical Turk (MTurk): A randomized trial of impact on the USAUDIT, PHQ-9, and GAD-7. Behavior Research Methods, 1-13.

Chmielewski, M., & Kucker, S. C. (2020). An MTurk crisis? Shifts in data quality and the impact on study results. Social Psychological and Personality Science, 11(4), 464-473.

Mellis, A. M., & Bickel, W. K. (2020). Mechanical Turk data collection in addiction research: Utility, concerns and best practices. Addiction, 115(10), 1960-1968.

Sanchez, C., Grzenda, A., Varias, A., Widge, A. S., Carpenter, L. L., McDonald, W. M., ... & Rodriguez, C. I. (2020). Social media recruitment for mental health research: A systematic review. Comprehensive Psychiatry, 103, 152197.

van Stolk‐Cooke, K., Brown, A., Maheux, A., Parent, J., Forehand, R., & Price, M. (2018). Crowdsourcing trauma: Psychopathology in a trauma‐exposed sample recruited via Mechanical Turk. Journal of Traumatic Stress, 31(4), 549-557.