Introduction: Paula Schnurr, PhD, will present a poster, "Functional and Behavioral Sexual Outcomes in Women Treated for PTSD," at the ISTSS Annual Meeting, November 15-17 in Baltimore. Fran Norris, PhD, will present in the symposium "Optimizing Prevention in Trauma-Focused Research: Social and Clinical Epidemiologic Approaches."
ISTSS issued a press release announcing the August issue of the Journal of Traumatic Stress, which featured coverage of highlights from the 2006 Annual Meeting. In particular, the issue focused on an ongoing controversy among leaders in the field regarding the results of the 1988 National Vietnam Veterans Readjustment Study (NVVRS). The story was picked up by the national news service HealthDay and received coverage in Forbes, The Washington Post, and U.S. News & World Report, among other outlets.
The Journal of Traumatic Stress (JTS), the official journal of the International Society for Traumatic Stress Studies, uses a two-pronged system for review of submitted manuscripts. Each paper, of course, is reviewed for its content: Is its purpose compelling? Is it clearly written and well organized? Does it cover relevant research yet remain succinct? Does it make a significant contribution to knowledge or practice? Many papers, in addition, are reviewed for the quality of their statistics and methods. Statistical reviewers look for several key attributes that are discussed below (see Schnurr, 1998, for greater detail). Making sure that one’s paper has addressed these various points can hasten the review process and increase the likelihood of a favorable result.
- PURPOSE: Are the hypotheses and objectives clearly stated?
- DESIGN: Is the design appropriate for testing the hypotheses or meeting the objectives of the study?
- SAMPLE: Is the sample adequately described?
- RECRUITMENT/PARTICIPATION: Is there adequate information about how participants were recruited? Is there adequate information about nonparticipants? Are recruitment procedures adequate?
- POWER: Is the sample size adequate for testing the proposed hypotheses or meeting the proposed objectives? Is the discussion of power issues adequate? (See the power-calculation sketch after this list.)
- MEASUREMENT: Is there adequate information about measurement? Are reliability and validity issues adequately handled?
- DATA ANALYSIS: Are the statistical tests appropriate for testing the proposed hypotheses or meeting the proposed objectives? Are the data analyses well done? Is all conventionally reported information included?
- MISSING DATA: Are missing data handled appropriately? Is variation in sample size from analysis to analysis understandable and appropriate?
- INFERENCES: Are the conclusions supported by the data? Are cautions noted when appropriate?
- TABLES AND FIGURES: Are the tables and figures necessary? Are they well done?
- TERMINOLOGY: Are technical terms used correctly? Are statistical and methodological abbreviations correct?
- FORMAT: Has JTS/APA format been followed in the reporting of all statistical information and in the construction of tables?
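To make the POWER item concrete, here is a minimal sketch of an a priori power calculation for a two-group comparison, written in Python with the statsmodels package. The effect size, alpha, and target power are illustrative assumptions, not values drawn from this article.

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis for an independent-samples t-test.
# Assumed inputs (illustrative only): a medium effect (Cohen's d = 0.5),
# two-tailed alpha of .05, and a target power of .80.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Required n per group: {n_per_group:.0f}")  # roughly 64 per group
```

Reporting the assumed effect size and the resulting target n lets reviewers judge whether a null finding is interpretable.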
Problems arise in every conceivable combination. Issues of design, recruitment/participation, and power are probably, on average, most likely to lead reviewers to believe that problems are "unfixable." If a sample is one of convenience, there is little point in writing a paper on prevalence. If the sample size is small, a lack of differences between groups is not likely to be interpretable, and the use of modeling approaches that require large Ns is inappropriate. If the design is naturalistic or non-experimental, one should be cautious about comparing groups (e.g., PTSD vs. non-PTSD) by using simple statistics like t-tests or ANOVA.
Problems in data analysis tend to be most frequent. If the design, measures, and power are sound, reviewers often describe these problems as "fixable" and may even suggest better ways to analyze the data. The problems we see at JTS are too varied to discuss in full here, but very common errors include:
- using inappropriate methods for testing interactions or determining their form;
- conducting an abundance of post hoc tests (comparing everything to everything) rather than using planned contrasts that match the stated hypotheses;
- using empirically rather than theoretically driven regression approaches;
- using too many predictors in a multiple regression (especially in logistic regression);
- failing to account for clustering (e.g., within families or therapy groups); and
- confusing standardized and unstandardized results.
Handling missing data appropriately can challenge even very experienced researchers, and methods for doing so have become more complex than they once were, especially in clinical studies. An author writing up the results of an intervention would do well to read prominent clinical trials closely to see how the investigators handled missing data. Probably the best advice we can offer authors is to involve a statistician or a trusted colleague in the research if they are uncertain about the best approach for analyzing the data at hand (or not at hand).
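As one illustration of the planned-contrast point, the sketch below (Python with statsmodels; the data are simulated and the group labels are hypothetical) tests a single hypothesis-driven contrast rather than running every pairwise comparison.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated symptom scores for three groups (hypothetical labels).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["control", "treatment_a", "treatment_b"], 40),
    "score": np.concatenate([
        rng.normal(50, 10, 40),  # control
        rng.normal(44, 10, 40),  # treatment_a
        rng.normal(43, 10, 40),  # treatment_b
    ]),
})

model = smf.ols("score ~ C(group)", data=df).fit()

# Planned contrast matching the hypothesis "the two treatments, on
# average, differ from control." The design columns are:
# [Intercept, C(group)[T.treatment_a], C(group)[T.treatment_b]]
print(model.t_test([0, 0.5, 0.5]))
```

A single targeted test of this kind maps directly onto the stated hypothesis and avoids the multiple-comparison penalty of testing everything against everything.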
Statistical reviewers frequently conclude that authors' inferences go too far beyond the data. For example, it is now common to analyze cross-sectional, correlational data by testing mediation, i.e., by conducting path analyses showing that the effect of "A" on "C" is mediated or partially mediated by "B." Such analyses can be helpful for interpreting correlational data, but they do not solve the basic issue of cause and effect. Rarely, it seems, do authors test alternative models that might explain the data equally well. Likewise, interpretations of retrospectively collected data must be cautious because of recall and other reporting biases. Most of us understandably hope that our results match our expectations. We have read discussion sections that could have been written without the results; unanticipated findings are ignored or explained away, and anticipated findings are given much attention, even if weak. Authors should never say, for example, "The groups were different, although not significantly so." What's the solution? One good step is to ask a colleague who has no vested interest in the results to read the paper before submission.
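For readers less familiar with this kind of analysis, here is a minimal regression-based mediation sketch in Python (statsmodels, simulated data; the variable names A, B, and C mirror the example above and are purely illustrative). Note the closing comment: the arithmetic is easy, the causal claim is not.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated cross-sectional data; variable names are purely illustrative.
rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=n)                      # e.g., exposure
B = 0.5 * A + rng.normal(size=n)            # candidate mediator
C = 0.4 * B + 0.2 * A + rng.normal(size=n)  # outcome
df = pd.DataFrame({"A": A, "B": B, "C": C})

# Path a: A -> B.  Paths b and c': B -> C and A -> C, jointly estimated.
path_a = smf.ols("B ~ A", data=df).fit()
path_bc = smf.ols("C ~ A + B", data=df).fit()

indirect = path_a.params["A"] * path_bc.params["B"]  # a * b (mediated effect)
direct = path_bc.params["A"]                         # c' (direct effect)
print(f"Indirect effect: {indirect:.3f}")
print(f"Direct effect:   {direct:.3f}")
# Caution: with cross-sectional data, rival orderings of A, B, and C can
# fit equally well; these regressions alone cannot settle causal direction.
```

A bootstrap confidence interval for the indirect effect, and an explicit test of at least one alternative ordering, would strengthen any such report.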
A carefully planned and well-executed table can greatly enhance the quality of a manuscript. There is an art to good tables: they present data beyond what an author can convey in the text, yet are not cluttered with unnecessary detail. The fifth edition of the Publication Manual of the American Psychological Association (2001) has many examples of good tables. Skimming recent issues of JTS is another way to find examples of tables similar to those needed for a paper in preparation.
What happens after statistical review? Most papers are revised at least once before being accepted. At JTS, our statistical review assistant, Laurie Slone, evaluates how well a revised paper has complied with the recommendations of the original statistical review. Authors are not required to do everything a reviewer suggests, but they should aim to address each point, even if only in the letter that accompanies the re-submission.
After a paper has been accepted by a JTS associate editor, it is edited for correct use of statistics and formatting of tables. Most manuscripts must be returned to the authors once more after this step because they contain too many errors to forward to the publisher. Papers whose statistics and tables are formatted properly from the start thus skip an entire round of the process and are published more quickly.
JTS has been using statistical review for almost 10 years now. We rely on, and greatly appreciate, a crew of experts in different statistical methods. Statistical review has enhanced the quality of many good articles. If statistical review seems like one more hurdle, authors should be assured that the process also protects them from receiving incorrect statistical advice and from making mistakes that they might later regret.
References
American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.
Schnurr, P. (1998). Statistical review: An approach to common methodological and statistical problems. Journal of Traumatic Stress, 11, 405-412.
Paula Schnurr and Fran Norris are editor-in-chief and deputy editor, respectively, of the Journal of Traumatic Stress.