Science Source
Quantifying statistical uncertainty in the attribution of human influence on severe weather
- States that a common approach to event attribution uses climate model output under factual (real-world) and counterfactual (a world without anthropogenic greenhouse gas emissions) scenarios to estimate the probabilities of the event of interest under the two scenarios
- States that event attribution is then quantified by the ratio of the two probabilities
- States that while this approach has been applied many times in the last 15 years, the statistical techniques used to estimate the risk ratio from climate model ensembles have not drawn on the full range of methods available in the statistical literature, and have in some cases used and interpreted the bootstrap in non-standard ways
- Presents a precise frequentist statistical framework for quantifying the effect of sampling uncertainty on estimation of the risk ratio
- Proposes the use of statistical methods that are new to event attribution
- Evaluates a variety of methods using statistical simulations
- Concludes that existing statistical methods not yet in use for event attribution have several advantages over the widely used bootstrap, including better statistical performance in repeated samples and robustness to small estimated probabilities
- Uses the 2011 Texas heat wave as a case study
- Finds that, for events defined by anomaly values larger than one, the probability of an extreme heatwave is much greater with anthropogenic influence than without
- States that while the results vary somewhat with the event definition, the lower bound of the confidence interval provides evidence for a robust attribution statement in light of uncertainty
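The factual/counterfactual setup and the bootstrap approach the paper critiques can be sketched in a few lines. Below is a minimal illustration, not the paper's implementation: the ensembles are synthetic normal draws, the threshold and ensemble sizes are arbitrary, and the interval shown is the plain percentile bootstrap whose weaknesses for rare events the paper discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_ratio(factual, counterfactual, threshold):
    """Estimate RR = P(exceed | factual) / P(exceed | counterfactual)
    from two ensembles of a climate index (e.g., a temperature anomaly)."""
    p1 = np.mean(factual > threshold)
    p0 = np.mean(counterfactual > threshold)
    return p1 / p0  # undefined when p0 == 0, a known weakness for rare events

def bootstrap_ci(factual, counterfactual, threshold, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the risk ratio: resample each ensemble
    with replacement and take empirical quantiles of the RR estimates.
    Replicates with zero counterfactual exceedances are dropped, which is
    one of the ad hoc fixes this kind of interval can require."""
    rrs = []
    for _ in range(n_boot):
        f = rng.choice(factual, size=factual.size, replace=True)
        c = rng.choice(counterfactual, size=counterfactual.size, replace=True)
        p1 = np.mean(f > threshold)
        p0 = np.mean(c > threshold)
        if p0 > 0:
            rrs.append(p1 / p0)
    return np.quantile(rrs, [alpha / 2, 1 - alpha / 2])

# Synthetic illustration: the factual ensemble is shifted warm by 0.5
factual = rng.normal(0.5, 1.0, size=400)
counterfactual = rng.normal(0.0, 1.0, size=400)
print(risk_ratio(factual, counterfactual, threshold=1.0))
print(bootstrap_ci(factual, counterfactual, threshold=1.0))
```

With a warm shift in the factual ensemble, the estimated risk ratio exceeds one, and the width of the bootstrap interval conveys the sampling uncertainty the paper sets out to quantify.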
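As an example of the kind of established method from the statistical literature the paper advocates, one classical alternative to the bootstrap is the Katz log-scale normal-approximation interval for a ratio of two binomial proportions. This is a standard textbook interval, shown here as an illustration of the genre rather than as the paper's specific proposal; it works directly with exceedance counts and remains well behaved when probabilities are small but nonzero.

```python
import math

def katz_log_interval(x1, n1, x0, n0, alpha=0.05):
    """Katz CI for RR = (x1/n1) / (x0/n0), where x1 and x0 are exceedance
    counts in ensembles of size n1 and n0. The interval is symmetric on the
    log scale, using the delta-method variance of log(RR):
        var(log RR) ~= 1/x1 - 1/n1 + 1/x0 - 1/n0
    Requires x1 > 0 and x0 > 0."""
    rr = (x1 / n1) / (x0 / n0)
    se = math.sqrt(1 / x1 - 1 / n1 + 1 / x0 - 1 / n0)
    z = 1.959963984540054  # 97.5% standard-normal quantile for alpha = 0.05
    return rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts: 40/400 factual vs 10/400 counterfactual exceedances
print(katz_log_interval(40, 400, 10, 400))
```

A lower confidence bound above one from an interval like this is the kind of "lower bound of the confidence interval" evidence the case-study bullet refers to.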