Random variability explains apparent global clustering of large earthquakes
Author(s) - Andrew J. Michael
Publication year - 2011
Publication title - Geophysical Research Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.007
H-Index - 273
eISSN - 1944-8007
pISSN - 0094-8276
DOI - 10.1029/2011GL049443
Subject(s) - aftershock, cluster analysis, seismology, null hypothesis, geology, statistical hypothesis testing, statistics, mathematics
The occurrence of five Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether we are in a global cluster of large earthquakes, temporarily raising risks above long‐term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well‐described by a random process plus localized aftershocks, and the apparent clustering is due to random variability. Therefore, the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
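The abstract does not specify the three classes of tests, but the general approach it describes, checking whether event counts are consistent with a constant-rate random (Poisson) process, can be illustrated with a simple Monte Carlo dispersion test. The sketch below is an assumption-laden illustration, not the paper's actual method: the annual counts are synthetic stand-ins, and the rate, time span, and variable names are hypothetical. In practice one would use a declustered catalog of M ≥ 7 events since 1900.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual counts of M >= 7 earthquakes (synthetic placeholder
# data, NOT the paper's catalog); ~15 events/yr is an assumed global rate.
counts = rng.poisson(15, size=112)  # 112 years, e.g. 1900-2011

n_years = counts.size
rate = counts.mean()

# Test statistic: dispersion index (variance/mean). It is ~1 for a Poisson
# process; values well above 1 suggest clustering beyond random variability.
obs_disp = counts.var(ddof=1) / counts.mean()

# Monte Carlo null distribution: simulate many constant-rate Poisson
# catalogs with the same mean rate and compute the same statistic.
n_sims = 10_000
sims = rng.poisson(rate, size=(n_sims, n_years))
sim_disp = sims.var(axis=1, ddof=1) / sims.mean(axis=1)

# One-sided p-value: fraction of random catalogs at least as dispersed
# as the observed record. A large p-value fails to reject the null.
p_value = (sim_disp >= obs_disp).mean()
print(f"dispersion index = {obs_disp:.2f}, p = {p_value:.3f}")
```

Under this framing, failing to reject the null (a large p-value) corresponds to the paper's conclusion: the observed temporal pattern is consistent with random variability around a constant rate, once localized aftershock sequences are accounted for.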
