

EDITORIAL 

Year : 2022 | Volume : 6 | Issue : 2 | Page : 33-38

Statistical inference through estimation: Recommendations from the International Society of Physiotherapy Journal Editors
Mark R Elkins^{1}, Rafael Zambelli Pinto^{2}, Arianne Verhagen^{1}, Monika Grygorowicz^{3}, Anne Söderlund^{4}, Matthieu Guemann^{5}, Antonia Gómez-Conesa^{6}, Sarah Blanton^{7}, Jean-Michel Brismée^{8}, Shabnam Agrawal^{9}, Alan Jette^{10}, Sven Karstens^{11}, Michele Harms^{12}, Geert Verheyden^{13}, Umer Sheikh^{14}
^{1} International Society of Physiotherapy Journal Editors; Journal of Physiotherapy ^{2} International Society of Physiotherapy Journal Editors; Brazilian Journal of Physical Therapy/Revista Brasileira de Fisioterapia ^{3} BMC Sports Science, Medicine and Rehabilitation ^{4} European Journal of Physiotherapy ^{5} European Rehabilitation Journal ^{6} Fisioterapia ^{7} Journal of Humanities in Rehabilitation ^{8} Journal of Manual & Manipulative Therapy ^{9} Journal of Society of Indian Physiotherapists ^{10} Physical Therapy ^{11} Physioscience ^{12} Physiotherapy ^{13} Physiotherapy Research International ^{14} The Journal of Physiotherapy & Sports Medicine
Date of Submission: 21-Dec-2021
Date of Acceptance: 11-Jan-2022
Date of Web Publication: 02-Jan-2023
Correspondence Address: Mark R Elkins Centre for Education and Workforce Development, Sydney Local Health District, Sydney, Australia
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/jsip.jsip_18_21
How to cite this article: Elkins MR, Pinto RZ, Verhagen A, Grygorowicz M, Söderlund A, Guemann M, Gómez-Conesa A, Blanton S, Brismée JM, Agrawal S, Jette A, Karstens S, Harms M, Verheyden G, Sheikh U. Statistical inference through estimation: Recommendations from the International Society of Physiotherapy Journal Editors. J Soc Indian Physiother 2022;6:33-8.
Null hypothesis statistical tests are often conducted in health care research,^{[1]} including in the physiotherapy field.^{[2]} Despite their widespread use, null hypothesis statistical tests have important limitations. This co-published editorial explains statistical inference using null hypothesis statistical tests and the problems inherent to this approach; examines an alternative approach for statistical inference (known as estimation); and encourages readers of physiotherapy research to become familiar with estimation methods and how the results are interpreted. It also advises researchers that some physiotherapy journals that are members of the International Society of Physiotherapy Journal Editors (ISPJE) will be expecting manuscripts to use estimation methods instead of null hypothesis statistical tests.
What Is Statistical Inference?   
Statistical inference is the process of making inferences about populations using data from samples.^{[1]} Imagine, for example, that some researchers want to investigate something (perhaps the effect of an intervention, the prevalence of a comorbidity, or the usefulness of a prognostic model) in people after stroke. It is not feasible for the researchers to test all stroke survivors in the world; instead, the researchers can only recruit a sample of stroke survivors and conduct their study with that sample. Typically, such a sample makes up a minuscule fraction of the population, so the result from the sample is likely to differ from the result in the population.^{[3]} Researchers must therefore use their statistical analysis of the data from the sample to infer what the result is likely to be in the population.
What Are Null Hypothesis Statistical Tests?   
Traditionally, statistical inference has relied on null hypothesis statistical tests. Such tests involve positing a null hypothesis (e.g., that there is no effect of an intervention on an outcome, that there is no effect of exposure on risk, or that there is no relationship between two variables). Such tests also involve calculating a p-value, which quantifies the probability (if the study were to be repeated many times) of observing an effect or relationship at least as large as the one that was observed in the study sample, if the null hypothesis is true. Note that the null hypothesis refers to the population, not the study sample.
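The frequentist logic described above can be illustrated with a small simulation. This is a hedged sketch with hypothetical data (the group sizes, means, and the permutation approach are all assumptions for illustration, not part of the editorial): permuting the group labels mimics the "imagined repeats of the study" under a true null hypothesis, and the p-value is the proportion of those repeats yielding a difference at least as large as the observed one.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical outcome scores for a treatment and a control group
control = rng.normal(loc=50, scale=10, size=30)
treatment = rng.normal(loc=55, scale=10, size=30)
observed_diff = treatment.mean() - control.mean()

# Imagined repeats of the study in which the null hypothesis is true:
# shuffling group labels breaks any real treatment-outcome link.
pooled = np.concatenate([treatment, control])
n_repeats = 10_000
diffs = np.empty(n_repeats)
for i in range(n_repeats):
    perm = rng.permutation(pooled)
    diffs[i] = perm[:30].mean() - perm[30:].mean()

# Two-sided p-value: proportion of null repeats with a between-group
# difference at least as large (in magnitude) as the observed one.
p_value = np.mean(np.abs(diffs) >= abs(observed_diff))
print(f"observed difference = {observed_diff:.2f}, p = {p_value:.4f}")
```

Note that the p-value says nothing about the size of the difference, only about its compatibility with the null hypothesis under repeated sampling.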
Because the reasoning behind these tests is linked to imagined repetition of the study, they are said to be conducted within a “frequentist” framework. In this framework, the focus is on how much a statistical result (e.g., a mean difference, a proportion, or a correlation) would vary among the repeats of the study. If the data obtained from the study sample indicate that the result is likely to be similar among the imagined repeats of the study, this is interpreted as an indication that the result is in some way more credible.
One type of null hypothesis statistical test is significance testing, developed by Fisher.^{[4],[5],[6]} In significance testing, if a result at least as large as the result observed in the study would be unlikely to occur in the imagined repeats of the study if the null hypothesis is true (as reflected by p < 0.05), then this is interpreted as evidence that the null hypothesis is false. Another type of null hypothesis statistical test is hypothesis testing, developed by Neyman and Pearson.^{[4],[5],[6]} Here, two hypotheses are posited: the null hypothesis (i.e., that there is no difference in the population) and the alternative hypothesis (i.e., that there is a difference in the population). The p-value tells the researchers which hypothesis to accept: if p ≥ 0.05, retain the null hypothesis; if p < 0.05, reject the null hypothesis and accept the alternative. Although these two approaches are mathematically similar, they differ substantially in how they should be interpreted and reported. Despite this, many researchers do not recognize the distinction and analyze their data using an unreasoned hybrid of the two methods.
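The Neyman-Pearson decision rule described above is purely mechanical, which is part of the editorial's criticism of it. A minimal sketch (the function name and return strings are illustrative assumptions):

```python
def hypothesis_test_decision(p_value: float, alpha: float = 0.05) -> str:
    """Neyman-Pearson hypothesis-testing rule: a dichotomous decision
    based solely on whether p falls below the chosen alpha level."""
    if p_value < alpha:
        return "reject null; accept alternative"
    return "retain null"

# The rule ignores effect size entirely: p = 0.049 and p = 0.051 lead
# to opposite conclusions despite nearly identical evidence.
print(hypothesis_test_decision(0.049))  # reject null; accept alternative
print(hypothesis_test_decision(0.051))  # retain null
```

This all-or-nothing behaviour at an arbitrary threshold is one of the problems that motivates the estimation approach discussed below.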
Problems with Null Hypothesis Statistical Tests   
Regardless of whether significance testing or hypothesis testing (or a hybrid) is considered, null hypothesis statistical tests have numerous problems.^{[4],[5],[7]} Five crucial problems are explained in [Table 1]. Each of these problems is fundamental enough to make null hypothesis statistical tests unfit for use in research. This may surprise many readers, given how widely such tests are used in published research.^{[1],[2]}
Table 1: Problems with null hypothesis statistical tests: Modified from Herbert^{[26]}
It is also surprising that the widespread use of null hypothesis statistical tests has persisted for so long, given that the problems in [Table 1] have been repeatedly raised in health care journals for decades,^{[8],[9]} including physiotherapy journals.^{[10],[11]} There has been some movement away from null hypothesis statistical tests, but the use of alternative methods of statistical inference has increased slowly over decades, as seen in analyses of health care research, including physiotherapy trials.^{[2],[12]} This is despite the availability of alternative methods of statistical inference and promotion of those methods in statistical, medical, and physiotherapy journals.^{[10],[13],[14],[15],[16]}
Estimation as an Alternative Approach for Statistical Inference   
Although there are multiple alternative approaches to statistical inference,^{[13]} the simplest is estimation.^{[17]} Estimation is based on a frequentist framework but, unlike null hypothesis statistical tests, its aim is to estimate parameters of populations using data collected from the study sample. The uncertainty or imprecision of those estimates is communicated with confidence intervals.^{[10],[14]}
A confidence interval can be calculated from the observed study data, the size of the sample, the variability in the sample, and the confidence level. The confidence level is chosen by the researcher, conventionally at 95%. This means that if hypothetically the study were to be repeated many times, 95% of the confidence intervals would contain the true population parameter. Roughly speaking, a 95% confidence interval is the range of values within which we can be 95% certain that the true parameter in the population actually lies.
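The ingredients listed above (sample data, sample size, variability, and confidence level) can be seen in a minimal sketch of a confidence interval for a mean. This is an illustration with simulated data and a large-sample normal approximation (the critical value 1.96 is the conventional choice for a 95% level; exact small-sample intervals would use the t distribution instead):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical outcome scores from a sample of 100 participants
sample = rng.normal(loc=62.0, scale=8.0, size=100)

mean = sample.mean()
# The standard error shrinks as sample size grows and variability falls,
# so larger, less variable samples give narrower (more precise) intervals.
se = sample.std(ddof=1) / np.sqrt(len(sample))
z = 1.96  # approximate critical value for a 95% confidence level
lower, upper = mean - z * se, mean + z * se
print(f"mean = {mean:.1f}, 95% CI {lower:.1f} to {upper:.1f}")
```

Quadrupling the sample size would roughly halve the width of the interval, which is why precision, not significance, is the natural focus of the estimation approach.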
Confidence intervals are often discussed in relation to treatment effects in clinical trials,^{[18],[19]} but it is possible to put a confidence interval around any statistic, regardless of its use, including mean difference, risk, odds, relative risk, odds ratio, hazard ratio, correlation, proportion, absolute risk reduction, relative risk reduction, number needed to treat, sensitivity, specificity, likelihood ratios, diagnostic odds ratios, and difference in medians.
Interpretation of the Results of the Estimation Approach   
To use the estimation approach well, it is not sufficient simply to report confidence intervals. Researchers must also interpret the relevance of the information portrayed by the confidence intervals and consider the implications arising from that information. The path of migration of researchers from statistical significance and p-values to estimation methods is littered with examples of researchers calculating confidence intervals at the behest of editors, but then ignoring the confidence intervals and instead interpreting their study's result dichotomously as statistically significant or nonsignificant depending on the p-value.^{[20]} Interpretation is crucial.
Some authors have proposed a ban on terms related to interpretation of null hypothesis statistical testing. One prominent example is an editorial published in The American Statistician,^{[13]} which introduced a special issue on statistical inference. It states:
The American Statistical Association Statement on PValues and Statistical Significance stopped just short of recommending that declarations of “statistical significance” be abandoned. We take that step here. We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term “statistically significant” entirely. Nor should variants such as “significantly different,” “p < 0.05,” and “nonsignificant” survive, whether expressed in words, by asterisks in a table, or in some other way.
This may seem radical and unworkable to researchers with a long history of null hypothesis statistical testing, but many concerns can be allayed. First, such a ban would not discard decades of existing research reported with null hypothesis statistical tests; the data generated in such studies maintain their validity and will often be reported in sufficient detail for confidence intervals to be calculated. Second, reframing the study’s aim involves a simple shift in focus from whether the result is statistically significant to gauging how large and how precise the study’s estimate of the population parameter is. (For example, instead of aiming to determine whether a treatment has an effect in stroke survivors, the aim is to estimate the size of the average effect. Instead of aiming to determine whether a prognostic model is predictive, the aim is to estimate how well the model predicts.) Third, the statistical imprecision of those estimates can be calculated readily. Existing statistical software packages already calculate confidence intervals, including free software such as R.^{[21],[22]} Lastly, learning to interpret confidence intervals is relatively straightforward.
Many researchers and readers initially come to understand how to interpret confidence intervals around estimates of the effect of a treatment. In a study comparing a treatment versus control with a continuous outcome measure, the study's best estimate of the effect of the treatment is usually the average between-group difference in outcome. To account for the fact that estimates based on a sample may differ by chance from the true value in the population, the confidence interval provides an indication of the range of values above and below that estimate where the true average effect in the relevant clinical population may lie. The estimate and its confidence interval should be compared against the "smallest worthwhile effect" of the intervention on that outcome in that population.^{[23]} The smallest worthwhile effect is the smallest benefit from an intervention that patients feel outweighs its costs, risks, and other inconveniences.^{[23]} If the estimate and the ends of its confidence interval are all more favorable than the smallest worthwhile effect, then the treatment effect can be interpreted as typically considered worthwhile by patients in that clinical population. If the effect and its confidence interval are less favorable than the smallest worthwhile effect, then the treatment effect can be interpreted as typically considered trivial by patients in that clinical population. Results with confidence intervals that span the smallest worthwhile effect indicate a benefit with uncertainty about whether it is worthwhile. Results with a narrow confidence interval that spans no effect indicate that the treatment's effects are negligible, whereas results with a wide confidence interval that spans no effect indicate that the treatment's effects are uncertain. For readers unfamiliar with this sort of interpretation, some clear and nontechnical papers with clinical physiotherapy examples are available.^{[10],[14],[18],[19]}
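The interpretive logic in the paragraph above can be summarized as a small decision function. This is a sketch, not a prescribed algorithm: it assumes higher values are more favorable, and it uses an arbitrary width threshold (interval width less than the smallest worthwhile effect) to distinguish "narrow" from "wide" intervals, which is an assumption made purely for illustration.

```python
def interpret_effect_ci(lower: float, upper: float, swe: float) -> str:
    """Classify a treatment-effect confidence interval against the
    smallest worthwhile effect (SWE), following the editorial's logic.
    Assumes higher values = more favorable to treatment."""
    if lower > swe:
        return "worthwhile"                 # whole CI beyond the SWE
    if upper <= 0:
        return "no benefit (possibly harmful)"
    if 0 < lower and upper < swe:
        return "trivial benefit"            # real benefit, smaller than SWE
    if lower <= 0 <= upper:
        # CI spans no effect: negligible if narrow, uncertain if wide
        # ("narrow" = width below the SWE; an illustrative threshold)
        return "negligible" if (upper - lower) < swe else "uncertain"
    return "benefit, uncertain whether worthwhile"  # CI spans the SWE

# Example: SWE of 5 points on some outcome scale (hypothetical)
print(interpret_effect_ci(6.0, 12.0, swe=5.0))  # worthwhile
print(interpret_effect_ci(2.0, 8.0, swe=5.0))   # benefit, uncertain whether worthwhile
```

The point of such a scheme is that every branch depends on the whole interval and a clinically anchored threshold; the p-value appears nowhere.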
Interpretation of estimates of treatment effects and their confidence intervals relies on knowing the smallest worthwhile effect (sometimes called the minimum clinically important difference).^{[23]} For some research questions, such a threshold has not been established or has been established with inadequate methods. In such cases, researchers should consider conducting a study to establish the threshold or at least to nominate the threshold prospectively.
Readers who understand the interpretation of confidence intervals around treatment effect estimates will find interpretation of confidence intervals around many other types of estimates quite familiar. Roughly speaking, the confidence interval indicates the range of values around the study’s main estimate where the true population result probably lies. To interpret a confidence interval, we simply describe the practical implications of all values inside the confidence interval.^{[24]} For example, in a diagnostic test accuracy study, the positive likelihood ratio tells us how much more likely a positive test finding is in people who have the condition than it is in people who do not have the condition. A diagnostic test with a positive likelihood ratio greater than about 3 is typically useful and greater than about 10 is very useful.^{[25]} Therefore, if a diagnostic test had a positive likelihood ratio of 4.8 with a 95% confidence interval of 4.1–5.6, we could anticipate that the true positive likelihood ratio in the population is both useful and similar to the study’s main estimate. Conversely, if a study estimated the prevalence of depression in people after anterior cruciate ligament rupture at 40% with a confidence interval from 5% to 75%, we may conclude that the main estimate is suggestive of a high prevalence but too imprecise to conclude that confidently.
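A positive likelihood ratio and its confidence interval can be computed directly from a 2x2 diagnostic accuracy table. The counts below are hypothetical (chosen so that the point estimate matches the 4.8 used in the example above; the interval differs because the data are invented), and the interval uses the standard log-transformation method with a normal approximation:

```python
import math

# Hypothetical 2x2 counts (assumed for illustration):
#                  condition present   condition absent
# test positive          a = 80              b = 20
# test negative          c = 20              d = 100
a, b, c, d = 80, 20, 20, 100

sensitivity = a / (a + c)          # proportion of cases testing positive
specificity = d / (b + d)          # proportion of non-cases testing negative
lr_pos = sensitivity / (1 - specificity)

# 95% CI via the log method: LR+ is approximately log-normal, so build
# the interval on the log scale and exponentiate back.
se_log = math.sqrt(1/a - 1/(a + c) + 1/b - 1/(b + d))
lo = math.exp(math.log(lr_pos) - 1.96 * se_log)
hi = math.exp(math.log(lr_pos) + 1.96 * se_log)
print(f"LR+ = {lr_pos:.1f}, 95% CI {lo:.1f} to {hi:.1f}")
```

Interpretation then follows the estimation approach: ask whether all values inside the interval exceed the usefulness thresholds (about 3, or about 10 for a very useful test), not whether some test statistic is significant.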
ISPJE Member Journals’ Policy Regarding the Estimation Approach   
The executive of the ISPJE strongly recommends that member journals seek to foster use of the estimation approach in the papers they publish. In line with that recommendation, the editors who have coauthored this editorial advise researchers that their journals will expect manuscripts to use estimation methods instead of null hypothesis statistical tests. We acknowledge that it will take time to make this transition, so editors will give authors the opportunity to revise manuscripts to incorporate estimation methods if the manuscript seems otherwise potentially viable for publication. Editors may assist authors with those revisions where required.
Readers who require more detailed information to address questions about the topics raised in this editorial are referred to the resources in [Table 2], such as the Research Note on the problems of significance and hypothesis testing^{[25]} and an excellent textbook that addresses confidence intervals and the application of estimation methods in various research study designs with clinical physiotherapy examples.^{[26]} Both are readily accessible to researchers and clinicians without any prior understanding of the issues.
Table 2: Resources that provide additional information to respond to questions about the transition from null hypothesis statistical tests to estimation methods
Quantitative research studies in physiotherapy that are analyzed and interpreted using confidence intervals will provide more valid and relevant information than those analyzed and interpreted using null hypothesis statistical tests. The estimation approach is therefore of great potential value to the researchers, clinicians, and consumers who rely upon physiotherapy research, and that is why ISPJE is recommending that member journals foster the use of estimation in the articles they publish.
Acknowledgement
We thank Professor Rob Herbert from Neuroscience Research Australia (NeuRA) for his presentation to the ISPJE on this topic and for comments on a draft of this editorial.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References   
1. Nickerson RS. Null hypothesis significance testing: A review of an old and continuing controversy. Psychol Methods 2000;5:241-301.
2. Freire APCF, Elkins MR, Ramos EMC, Moseley AM. Use of 95% confidence intervals in the reporting of between-group differences in randomized controlled trials: Analysis of a representative sample of 200 physical therapy trials. Braz J Phys Ther 2019;23:302-10.
3. Altman DG, Bland JM. Uncertainty and sampling error. BMJ 2014;349:g7064.
4. Barnett V. Comparative Statistical Inference. London, New York: Wiley; 1973.
5. Royall RM. Statistical Evidence: A Likelihood Paradigm. 1st ed. London, New York: Chapman & Hall; 1997.
6. Gigerenzer G. The Empire of Chance: How Probability Changed Science and Everyday Life. Cambridge, England: Cambridge University Press; 1989.
7. Goodman SN, Royall R. Evidence and scientific research. Am J Public Health 1988;78:1568-74.
8. Ziliak S, McCloskey D. The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor, USA: University of Michigan Press; 2008.
9. Hubbard R. Corrupt Research: The Case for Reconceptualizing Empirical Management and Social Science. Thousand Oaks, USA: Sage; 2016.
10. Herbert RD. How to estimate treatment effects from reports of clinical trials. I: Continuous outcomes. Aust J Physiother 2000;46:229-35.
11. Maher CG, Sherrington C, Elkins M, Herbert RD, Moseley AM. Challenges for evidence-based physical therapy: Accessing and interpreting high-quality evidence on therapy. Phys Ther 2004;84:644-54.
12. Yi D, Ma D, Li G, Zhou L, Xiao Q, Zhang Y, et al. Statistical use in clinical studies: Is there evidence of a methodological shift? PLoS One 2015;10:e0140159.
13. Wasserstein RL, Schirm AL, Lazar NA. Moving to a world beyond "p < 0.05". Am Stat 2019;73(Suppl. 1):1-19. https://doi.org/10.1080/00031305.2019.1583913
14. Herbert RD. How to estimate treatment effects from reports of clinical trials. II: Dichotomous outcomes. Aust J Physiother 2000;46:309-13.
15. Sim J, Reid N. Statistical inference by confidence intervals: Issues of interpretation and utilization. Phys Ther 1999;79:186-95.
16. Rothman KJ. Disengaging from statistical significance. Eur J Epidemiol 2016;31:443-4.
17. Cumming G. Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Multivariate Applications Series. New York: Routledge; 2012.
18. Kamper SJ. Showing confidence (intervals). Braz J Phys Ther 2019;23:277-8.
19. Kamper SJ. Confidence intervals: Linking evidence to practice. J Orthop Sports Phys Ther 2019;49:763-4.
20. Fidler F, Thomason N, Cumming G, Finch S, Leeman J. Editors can lead researchers to confidence intervals, but can't make them think: Statistical reform lessons from medicine. Psychol Sci 2004;15:119-26.
21. R Core Team (2020). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
22. RStudio Team (2019). RStudio: Integrated Development for R. RStudio, Inc., Boston, USA. http://www.rstudio.com/
23. Ferreira M. Research Note: The smallest worthwhile effect of a health intervention. J Physiother 2018;64:272-4.
24. Amrhein V, Greenland S, McShane B. Scientists rise up against statistical significance. Nature 2019;567:305-7.
25. Herbert R. Research Note: Significance testing and hypothesis testing: Meaningless, misleading and mostly unnecessary. J Physiother 2019;65:178-81. https://doi.org/10.1016/j.jphys.2019.05.001
26. Herbert RD, Jamtvedt G, Mead J, Hagen KB. Practical Evidence-Based Physiotherapy. 2nd ed. Oxford: Elsevier; 2011.
27. Boos DD, Stefanski LA. P-value precision and reproducibility. Am Stat 2011;65:213-21.
28. Wasserstein R, Lazar N. The ASA's statement on p-values: Context, process, and purpose. Am Stat 2016;70:129-33. https://doi.org/10.1080/00031305.2016.1154108
29. International Committee of Medical Journal Editors. ICMJE Recommendations for the Conduct, Reporting, Editing and Publication of Scholarly Work in Medical Journals. 2013. http://www.icmje.org/icmje-recommendations.pdf
30. McGough JJ, Faraone SV. Estimating the size of treatment effects: Moving beyond p values. Psychiatry (Edgmont) 2009;6:21-9.
31. Hayat MJ, Chandrasekhar R, Dietrich MS, Gifford RH, Golub JS, Holder JT, et al. Moving otology beyond p < 0.05. Otol Neurotol 2020;41:578-9.
32. Hayat MJ, Staggs VS, Schwartz TA, Higgins M, Azuero A, Budhathoki C, et al. Moving nursing beyond p < 0.05. Res Nurs Health 2019;42:244-5.
33. Cumming G, Fidler F, Kalinowski P, Lai J. The statistical recommendations of the American Psychological Association Publication Manual: Effect sizes, confidence intervals, and meta-analysis. Aust J Psychol 2012;64:138-46.
34. Calin-Jageman RJ, Cumming G. Estimation for better inference in neuroscience. eNeuro 2019;6:ENEURO.0205-19.2019.
35. Schreiber JB. New paradigms for considering statistical significance: A way forward for health services research journals, their authors, and their readership. Res Social Adm Pharm 2020;16:591-4.
36. Erickson RA, Rattner BA. Moving beyond p < 0.05 in ecotoxicology: A guide for practitioners. Environ Toxicol Chem 2020;39:1657-69.
37. Smith RJ. P > .05: The incorrect interpretation of "not significant" results is a significant problem. Am J Phys Anthropol 2020;172:521-7.
38. Percie du Sert N, Ahluwalia A, Alam S, Avey MT, Baker M, Browne WJ, et al. Reporting animal research: Explanation and elaboration for the ARRIVE guidelines 2.0. PLoS Biol 2020;18:e3000411.
