

In response to the COVID-19 pandemic, scientists, including trauma scholars, have made herculean efforts to develop successful treatments, vaccines, and effective mitigation strategies and to understand the pandemic's impact on public mental health. The need for social distancing made survey research a popular tool for some of this work: a Web of Science search for "COVID-19" and "survey" returned 23,174 articles as of March 16, 2022. Translating these findings into effective public health policy, however, has faltered as the public has struggled to cope with conflicting messages, misinformation, and disinformation (Dhawan et al., 2021). The result has been shifting guidelines, public confusion, and declining trust in public health agencies such as the U.S. Centers for Disease Control and Prevention (CDC; RAND Corporation, 2021). Inconsistent messaging has also fueled the sense of uncertainty created by the pandemic, which could further undermine public trust in the guidelines recommended to mitigate COVID-19 risk. To remedy this situation, we need to recognize how science itself may have contributed to the problem.

Meaningful public health policy depends on policymakers having accurate, unbiased, population-based data from which to draw conclusions. During the pandemic there was a surge of social science research addressing the mental health consequences of COVID-19, most of it conducted with online convenience samples. Data from nonprobability samples, such as "snowball" and big data samples (e.g., Google Trends), carry significant biases: people self-select into these samples, and participation requires internet access (Bradley et al., 2021; Pierce et al., 2020). Many people from at-risk populations do not have internet access, skewing such samples toward more privileged groups. When opt-in, nonrepresentative online panels (MTurk, Prolific, etc.) are used, the self-selection biases that led individuals to join the panel in the first place may be further exacerbated when the data are weighted back to population estimates. These biases then undermine the utility of the data for policymaking (Bradley et al., 2021; Pierce et al., 2020). A white paper from the COVID States Project (2021), a study using a nonprobability sample from the online survey sampling company PureSpectrum, shows that their sample significantly underrepresented less-educated, male, Hispanic, and Indigenous populations. Moreover, many other serious selection biases go unexamined when they compare their sample to the U.S. population. For example, how well are they capturing serious mental or physical illness? What other unmeasured characteristics of the population might be skewing the sample?
Weighting such a large, biased sample back to population estimates not only skews those estimates, it also inflates the statistical confidence around the biased values, making valid estimates of population uncertainty impossible (Bradley et al., 2021). To the extent that these biases lead to ill-informed policies, they can undermine the public's trust in the very institutions people rely on for accurate information. There is also a real risk of harm when biased data produces misinformation that spreads rapidly among the public via the media. The problem is then compounded when limited resources are allocated in ways that do not benefit the public. Indeed, some have argued that it is worse to act on misleading information than to have no information at all (Bradley et al., 2021).
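This "big sample, wrong answer" problem can be illustrated with a small simulation. The numbers below are purely illustrative (a hypothetical 60% true population rate and a 75% rate within a self-selected panel), not figures from the cited studies; the point is that a huge opt-in sample yields a razor-thin confidence interval around the wrong value, while a modest random sample lands near the truth with honest uncertainty.

```python
import random

random.seed(42)

# Illustrative assumption: 60% of the population holds some attribute
# (e.g., has been vaccinated), but opt-in panel members skew to 75%.
POP_RATE = 0.60
PANEL_RATE = 0.75

def sample_rate(rate, n):
    """Draw n Bernoulli(rate) responses and return the sample proportion."""
    return sum(random.random() < rate for _ in range(n)) / n

def ci_halfwidth(p, n):
    """Naive 95% confidence half-width for a proportion (ignores selection bias)."""
    return 1.96 * (p * (1 - p) / n) ** 0.5

# A huge opt-in panel: tiny interval, centered on the biased value.
big_biased = sample_rate(PANEL_RATE, 250_000)
big_hw = ci_halfwidth(big_biased, 250_000)

# A modest probability sample: wider interval, centered near the truth.
small_random = sample_rate(POP_RATE, 1_000)
small_hw = ci_halfwidth(small_random, 1_000)

print(f"Opt-in panel (n=250,000):    {big_biased:.3f} +/- {big_hw:.3f}")
print(f"Random sample (n=1,000):     {small_random:.3f} +/- {small_hw:.3f}")
```

The large biased sample's interval confidently excludes the true rate of 0.60, while the small random sample's wider interval contains it: precision is no substitute for representativeness.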

Obtaining data that can accurately inform public policy depends on probability-based sampling methods that limit biases and make it possible to quantify sources of non-response. Only truly random sampling can provide confidence that these sources of bias are not unduly affecting our findings. It is critical for samples to include people from the most vulnerable demographic groups, who are at great risk but often unable to access the internet. It is equally critical for researchers to recognize the limitations of their samples and not draw unwarranted conclusions (e.g., prevalence rates) from data that is not truly representative. Without such rigorous social and behavioral science research, public resources could be expended on ineffective strategies that fail to motivate protective health behaviors, undermine public trust, and fuel the spread of misinformation. Rigorous studies that draw on probability-based representative samples can provide policymakers, service providers, members of the media, and educators with the information needed to design risk communications and interventions that are evidence-based, cost-effective, and sensitive to the needs of the populace.

About the Author 

E. Alison Holman, PhD, FNP: Dr. Holman's work examines early trauma-related processes (e.g., acute stress, media use, distorted time perception) that help explain how psychological trauma affects subsequent mental and physical health. She has been principal investigator (PI) or co-PI on several community-based studies of coping with trauma (e.g., firestorms, terrorism) funded by the National Science Foundation, the Josiah Macy Jr. Foundation, and the Robert Wood Johnson Foundation. She has helped pioneer new approaches, including rapid entry into the field to assess acute stress in real time and internet-based methods that provide representative samples of the U.S. population while allowing participants anonymity in responding to sensitive questions. After the September 11th terrorist attacks, she and her colleagues conducted a three-year prospective, longitudinal study of coping in a nationally representative sample. Their findings were published in high-impact journals such as JAMA, Archives of General Psychiatry, and Psychological Science, with coverage in the New York Times. She examined an innovative reconceptualization of trauma exposure as an early predictor of well-being after the Boston Marathon bombings. Her findings on the link between media exposure and acute stress symptoms were published in the Proceedings of the National Academy of Sciences and received international acclaim. She and her collaborators are currently conducting a large national study of coping with the COVID-19 pandemic and the war in Ukraine in a nationally representative sample. She has also studied historical collective trauma and its implications for both mental and physical health disparities across generations of Black and Indigenous Americans.


References

Bradley, V. C., Kuriwaki, S., Isakov, M., Sejdinovic, D., Meng, X. L., & Flaxman, S. (2021). Unrepresentative big surveys significantly overestimated US vaccine uptake. Nature, 600(7890), 695-700.

COVID States Project (2021). Validating the COVID States method: A comparison of non-probability and probability-based survey methods. https://osf.io/qxez5/

Dhawan, D., Bekalu, M., Pinnamaneni, R., McCloud, R., & Viswanath, K. (2021). COVID-19 news and misinformation: Do they matter for public health prevention? Journal of Health Communication, 26(11), 799-808.

Pierce, M., McManus, S., Jessop, C., John, A., Hotopf, M., Ford, T., Hatch, S., Wessely, S., & Abel, K. M. (2020). Says who? The significance of sampling in mental health surveys during COVID-19. The Lancet Psychiatry, 7(7), 567-568. 

RAND Corporation (2021). Decline in trust in the Centers for Disease Control and Prevention during the COVID-19 pandemic. https://www.rand.org/pubs/research_reports/RRA308-12.html