
Quantifying uncertainty in mean earthquake interevent times for a finite sample
Author(s) - Naylor M., Main I. G., Touati S.
Publication year - 2009
Publication title - Journal of Geophysical Research: Solid Earth
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.67
H-Index - 298
eISSN - 2156-2202
pISSN - 0148-0227
DOI - 10.1029/2008JB005870
Subject(s) - sample size determination , autocorrelation , statistics , sample , convergence , spatial analysis , Gaussian , mathematics , geology , physics
Seismic activity is routinely quantified using means in event rate or interevent time. Standard estimates of the error on such mean values implicitly assume that the events used to calculate the mean are independent. However, earthquakes can be triggered by other events and are thus not necessarily independent. As a result, the errors on mean earthquake interevent times do not exhibit Gaussian convergence with increasing sample size according to the central limit theorem. In this paper we investigate how the errors decay with sample size in real earthquake catalogues and how the nature of this convergence varies with the spatial extent of the region under investigation. We demonstrate that the errors in mean interevent times, as a function of sample size, are well estimated by defining an effective sample size, using the autocorrelation function to estimate the number of pieces of independent data in samples of different length. This allows us to accurately project error estimates from finite natural earthquake catalogues into the future, and it motivates a definition of stability wherein the autocorrelation function does not vary in time. The technique is easy to apply, and we suggest that it be routinely applied to define errors on mean interevent times as part of seismic hazard assessment studies. This is particularly important for studies that utilize small catalogue subsets (fewer than ∼1000 events) in time‐dependent or high spatial resolution (e.g., for catastrophe modeling) hazard assessment.
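The effective-sample-size idea described in the abstract can be sketched in a few lines: estimate the sample autocorrelation function of the interevent times, use it to form an integrated autocorrelation time, and divide the nominal sample size by that factor before computing the standard error of the mean. The following is a minimal illustrative sketch, not the authors' actual implementation; the function name, the truncation of the autocorrelation sum at the first non-positive lag, and the choice of maximum lag are all assumptions made for this example.

```python
import numpy as np

def mean_interevent_error(interevent_times, max_lag=None):
    """Mean interevent time and its standard error, corrected for
    autocorrelation via an effective sample size (illustrative sketch)."""
    x = np.asarray(interevent_times, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = n // 4  # assumed cutoff; not from the paper
    xc = x - x.mean()
    var = xc.var()
    # Sample autocorrelation function at lags 1..max_lag.
    acf = np.array([np.dot(xc[:n - k], xc[k:]) / (n * var)
                    for k in range(1, max_lag + 1)])
    # Truncate the sum at the first non-positive lag to limit noise.
    cut = int(np.argmax(acf <= 0)) if np.any(acf <= 0) else max_lag
    tau = 1.0 + 2.0 * acf[:cut].sum()   # integrated autocorrelation time
    n_eff = n / tau                     # effective number of independent events
    sem = x.std(ddof=1) / np.sqrt(n_eff)
    return x.mean(), sem, n_eff
```

For independent data tau is close to 1 and the result reduces to the usual standard error; for a catalogue with triggered (clustered) events tau exceeds 1, n_eff falls below the raw event count, and the quoted error grows accordingly, which is the slower-than-Gaussian convergence the paper describes.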