Discrete geologic faults produce the largest earthquakes in the shallow crust. Here we describe the important characteristics of faults, and how we build fault sources for OpenQuake.
Please note that many hazard models developed outside of GEM use methods different from those described here; the following describes the practices we at GEM use to develop our own models.
Fault geometry and mapping
Fault geometry in map view is constrained through geologic mapping, while the geometry in cross-section view is estimated from geologic cross-section construction or based on the fault kinematics and local focal mechanisms.
In seismic hazard work, almost all faults are represented by the geographic coordinates of the fault trace plus an average dip, which together are used to build a three-dimensional representation of the fault surface.
Mapping faults for hazard work is a complicated endeavor; a more in-depth description of this process can be found at the GEM Hazard Blog.
Assessing fault activity
Fault activity is assessed through a variety of criteria. The first is instrumental, historical, or paleoseismological evidence for earthquakes along the fault; the second is strain accumulation that is rapid and localized enough to be measurable through geodetic techniques (GPS, InSAR, optical geodesy); and the third is Quaternary geomorphic evidence such as fault scarps, offset streams, and so forth. If the evidence strongly favors activity, or a fault is thought to pose a great societal risk, the fault is included in the fault source model (with its appropriate uncertainty). If a fault does not display convincing evidence for activity by these criteria, it is omitted from the fault source model.
The kinematics of faults, if not already known from earlier studies, are inferred from the topographic and geomorphic expression of the fault, from local focal mechanisms, and from regional geodetic strain information. There is rarely much ambiguity among normal, strike-slip, and reverse faults, since these all have very distinct geomorphic expressions; the more confusing cases tend to arise when oblique slip may be present, or when fault kinematics have changed over the millions of years of fault activity and topography from the previous tectonic regime is still present. It is more challenging to distinguish left-slip from right-slip strike-slip faults when no focal mechanisms or GPS data are available, but it is still generally possible (particularly by examining bends or stepovers in the fault and the kinematics of faults in those regions).
Fault slip rates are generally assessed through formal neotectonic and paleoseismic studies of individual faults, or through geodetic studies of faults and fault networks.
These are complicated and time-intensive investigations, and we at GEM do not generally do this work. Instead, we gather and evaluate the existing literature on faults in a region. There are always many more faults in an area than those that have had formal study, so we use the rates given in the literature for the faults that have information, and then generalize that information in the context of geodetic strain rate data to infer what the slip rates may be for other structures. For example, faults or fault segments that lie along strike of faults with known slip rates are likely to have similar rates. The regional geodetic strain field provides an overall budget for slip rates within the region: if an area has 6 mm/yr of dextral shear, and the major fault in the area has a known slip rate of 3 mm/yr, then the other faults in the area cannot have dextral slip rates that add up to more than 3 mm/yr. The summed slip rate on faults may be less than the overall geodetic strain, though: some strain may be accommodated by smaller, unmapped structures or by continuous, plastic deformation of the crust, rather than being localized on the major faults in a dataset.
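The budgeting logic above can be sketched in a few lines of Python, using the hypothetical numbers from the example (6 mm/yr of regional dextral shear, one fault with a known 3 mm/yr rate); all values are illustrative:

```python
# Sketch of the slip-rate budgeting described above, using the
# hypothetical numbers from the text; all values are illustrative.

geodetic_dextral_rate = 6.0               # mm/yr of regional dextral shear
known_fault_rates = {"major_fault": 3.0}  # mm/yr, from published studies

# Budget left over for the remaining mapped faults in the region
residual = geodetic_dextral_rate - sum(known_fault_rates.values())
print(residual)  # 3.0

# The summed fault rates may also fall short of the geodetic total,
# since some strain can be taken up by unmapped structures or by
# distributed plastic deformation.
```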
The seismogenic thickness of a fault is the total vertical distance between the upper and lower edges of the fault that rupture in a full-length earthquake. It is thought to be a consequence of the frictional stability of the fault materials (and the encompassing crust) at the varying temperatures, pressures, and fluid contents through the crust. The upper limit of fault slip, the upper seismogenic depth, is usually taken to be the surface of the earth, though in some instances (such as subduction zone interfaces) it may be deeper. The lower limit varies with the tectonic environment and the frictional characteristics of the fault materials.
To paint in broad brush strokes, within the continents, normal faults occupy hotter areas of the crust and rupture from (near) the surface to 10-15 km depth; the crust in reverse faulting environments is often colder and the faults will rupture from 15-25 km depth to the surface. Strike-slip faults occupy all environments, so rupture can be from the surface to 10-25 km depth.
Oceanic faults show more variability. Subduction zone interfaces, which are very cold, can rupture to near 50 km depth. Intraplate strike-slip faults can also rupture to >30 km depth, which is well into the mantle in oceanic lithosphere; Hill et al. (2015) report that the 2012 Wharton Basin earthquake, in the Indian Ocean west of Sumatra, may have ruptured to 50 km. Oceanic spreading ridges, by contrast, are very hot: normal faulting there does not produce large earthquakes, and the lower seismogenic depth is probably ~5 km. The associated transform faults are slightly cooler, and faulting extends somewhat deeper.
The soundest way to assess seismogenic thickness is to examine finite fault inversions for the largest earthquakes in a region, where these exist. Lacking this, geodetic techniques may sometimes indicate the lower limit of fault locking, although the uncertainties are usually quite large (and underestimated). Similarly, small-magnitude seismicity and microseismicity in a region can provide some constraints, but be aware that small earthquakes can occur much deeper in the crust than large ones, because they can occur in unfaulted rock that exhibits stick-slip frictional behavior and brittle failure to a greater depth than mature faults with well-developed fault gouge zones and circulating fluids.
Building Fault Source Models
Fault source models are built by constructing three-dimensional fault surfaces and specifying the styles, magnitudes, and frequencies of the earthquakes that may occur on them.
Fault geometries are generally created as extrusions of the fault trace (or simplified trace) at a constant dip down to some limit, usually the lower boundary of the seismogenic thickness. Within OpenQuake, these are referred to as 'simple faults'.
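As a rough geometric sketch of this extrusion (not the actual OpenQuake implementation), the bottom edge of a simple fault surface can be found by projecting the trace down-dip; the trace coordinates, dip, and seismogenic depths below are illustrative:

```python
import math

# Sketch of a 'simple fault' extrusion (illustrative, not the OpenQuake
# implementation): project a surface trace down-dip at constant dip to
# the base of the seismogenic zone.

trace = [(-120.00, 36.00), (-120.00, 36.50)]  # (lon, lat) along the trace
dip_deg = 45.0           # average dip; fault assumed to dip due east
upper_depth_km = 0.0     # upper seismogenic depth (the surface)
lower_depth_km = 15.0    # lower seismogenic depth

# Horizontal distance from the trace to the bottom edge of the fault
horiz_km = (lower_depth_km - upper_depth_km) / math.tan(math.radians(dip_deg))

# Convert that distance to degrees of longitude at the trace's mean latitude
mean_lat = sum(lat for _, lat in trace) / len(trace)
dlon = horiz_km / (111.32 * math.cos(math.radians(mean_lat)))
bottom_edge = [(lon + dlon, lat) for lon, lat in trace]

# Down-dip width of the rupture surface (used later for fault area)
width_km = (lower_depth_km - upper_depth_km) / math.sin(math.radians(dip_deg))
print(round(horiz_km, 1), round(width_km, 1))  # 15.0 21.2
```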
In some instances, the geometry of a fault may change sufficiently down-dip that a more complicated representation is warranted. These are known as 'complex faults' in OpenQuake; they are represented by sets of lines of equal depth. OpenQuake then interpolates between these lines to make the fault surface. At GEM, we primarily use complex faults for subduction interfaces.
The occurrence of earthquakes on a fault is parameterized through magnitude-frequency distributions (MFDs). These give the magnitudes of all the earthquakes on a fault that are to be modeled, and the frequency (or annual probability of occurrence) of earthquakes of the corresponding magnitudes.
The two most common types of MFDs are truncated Gutenberg-Richter distributions, and characteristic distributions. Other MFDs exist that may be hybrids or based on other statistical models, but these are less commonly implemented in seismic hazard analysis. At GEM, we typically use the truncated Gutenberg-Richter distribution, but many other institutions use characteristic fault sources as well. It is still scientifically unknown what the 'true' distribution is and to what degree this changes for different faults, so the choice may come down to pragmatism, familiarity, preference and tradition.
Truncated Gutenberg-Richter distributions are typical Gutenberg-Richter Distributions that are bounded (truncated) by minimum and maximum magnitudes for earthquakes, Mmin and Mmax. Within those bounds, they are parameterized by the a and b values.
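A truncated Gutenberg-Richter MFD can be sketched by evaluating incremental occurrence rates in discrete magnitude bins between Mmin and Mmax; the a, b, Mmin, and Mmax values below are illustrative, not recommendations:

```python
import math

# Sketch of a truncated Gutenberg-Richter MFD in discrete magnitude
# bins; the a, b, Mmin, and Mmax values are illustrative.

a, b = 3.0, 1.0
m_min, m_max, bin_width = 6.0, 7.5, 0.1

def cumulative_rate(m):
    """Annual rate of earthquakes with magnitude >= m (Gutenberg-Richter)."""
    return 10 ** (a - b * m)

n_bins = round((m_max - m_min) / bin_width)
bins = []
for i in range(n_bins):
    lo = m_min + i * bin_width
    # Incremental rate of events in [lo, lo + bin_width)
    rate = cumulative_rate(lo) - cumulative_rate(lo + bin_width)
    bins.append((round(lo + bin_width / 2, 2), rate))

# The bin rates sum to the total rate between Mmin and Mmax
total_rate = sum(rate for _, rate in bins)
print(round(total_rate, 6))
```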
Mmin and Mmax must be chosen by the fault modeler. Mmin is usually chosen as the smallest earthquake worth modeling in a given model; lowering this value increases the computation time but may not increase the accuracy of the hazard calculations, and lower values are more common in smaller-scale studies. Mmax is not so easily determined. The common practice at GEM is to choose it from the area of the fault surface using an empirical magnitude-area scaling relationship such as that of Wells and Coppersmith (1994) or the more recent Leonard (2012). Mmax then represents a typical full-fault rupture. However, these scaling relationships are statistically derived and carry a good amount of variation. If there is convincing evidence that Mmax on a given fault is larger than the scaling relationship predicts, one should of course choose that larger value.
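As a hedged sketch of the Mmax calculation, the coefficients below are the Wells and Coppersmith (1994) all-slip-types regression M = 4.07 + 0.98 log10(A), with A in km²; the fault dimensions are illustrative:

```python
import math

# Sketch of an Mmax estimate from a magnitude-area scaling relationship.
# Coefficients are the Wells & Coppersmith (1994) all-slip-types
# regression M = 4.07 + 0.98 * log10(A), with A in km^2; the fault
# dimensions are illustrative.

length_km = 80.0   # along-strike length of the fault trace
width_km = 21.2    # down-dip width from seismogenic thickness and dip
area_km2 = length_km * width_km

m_max = 4.07 + 0.98 * math.log10(area_km2)
print(round(m_max, 2))  # 7.23
```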
The a and b values also need to be determined for each fault. Common practice is to take the b-value derived from the instrumental seismic catalog for a broader tectonic region that encompasses the fault, and apply that b-value to every fault within the region. There are theoretical reasons why this cannot be strictly correct: primarily, the sum of multiple truncated Gutenberg-Richter distributions will not itself be a Gutenberg-Richter distribution (in mathematical terminology, the truncated GR distribution is not Lévy stable). However, empirical constraints on b-values for individual faults are exceedingly rare, so this is a pragmatic compromise.
The a-value is chosen so that the total moment release rate matches the seismic moment accumulation rate. To make this calculation, the total moment accumulation rate is computed as the product of the fault area, the shear modulus of the rock encasing the fault, and the fault slip rate. Then the fraction of this moment accumulation rate that is not released through earthquakes, given by the 'aseismic coefficient', is subtracted (note that in the case of creeping faults this moment may never physically be stored in the crust as elastic strain; the calculation is nevertheless the same). Finally, the a-value is chosen so that the total seismic moment released annually (on average) by all of the earthquakes on the fault equals the annual moment accumulation.
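The moment-balancing calculation can be sketched as follows; the slip rate, fault dimensions, aseismic coefficient, and MFD bounds are illustrative, and the magnitude-to-moment conversion uses the standard Hanks and Kanamori (1979) relation:

```python
import math

# Sketch of the moment-balancing calculation: pick the a-value so the
# MFD's annual moment release equals the geologic moment accumulation
# rate. All parameter values are illustrative.

shear_modulus = 3.0e10       # Pa, a typical crustal value
area_m2 = 80e3 * 21.2e3      # fault area (length x down-dip width), m^2
slip_rate = 3.0e-3           # 3 mm/yr, expressed in m/yr
aseismic_coeff = 0.1         # fraction of moment released aseismically

# Annual seismic moment budget (N*m per year)
moment_rate = shear_modulus * area_m2 * slip_rate * (1.0 - aseismic_coeff)

b, m_min, m_max, dm = 1.0, 6.0, 7.5, 0.1

def moment_nm(m):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori)."""
    return 10 ** (1.5 * m + 9.05)

# Moment rate of the MFD with a = 0; rates scale as 10**a, so the
# balanced a-value follows from a single division.
unit_moment = 0.0
n_bins = round((m_max - m_min) / dm)
for i in range(n_bins):
    lo = m_min + i * dm
    rate = 10 ** (-b * lo) - 10 ** (-b * (lo + dm))  # incremental rate at a = 0
    unit_moment += rate * moment_nm(lo + dm / 2)

a = math.log10(moment_rate / unit_moment)
print(round(a, 2))
```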
Characteristic distributions are narrow distributions that typically represent full-length rupture of a given fault. The Mmax values are chosen through fault scaling relationships or inferences from paleoseismic data. These ruptures may also occur quasi-periodically (as opposed to uniformly randomly) though this sort of time-dependence is not often used at GEM.
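A minimal characteristic-model sketch, assuming all accumulated moment is released in full-length ruptures of a single characteristic magnitude (the moment rate and magnitude are illustrative):

```python
# Minimal characteristic-model sketch: all accumulated moment is
# released in full-length ruptures of one characteristic magnitude.
# The moment rate and magnitude are illustrative.

moment_rate = 1.8e17   # N*m/yr, annual moment accumulation on the fault
m_char = 7.2           # characteristic magnitude from scaling or paleoseismology

m0_char = 10 ** (1.5 * m_char + 9.05)  # moment of one event, N*m (Hanks & Kanamori)
annual_rate = moment_rate / m0_char    # events per year
recurrence_yr = 1.0 / annual_rate      # mean recurrence interval
print(round(recurrence_yr))  # 393
```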
Hill, Emma M., et al. "The 2012 Mw 8.6 Wharton Basin sequence: A cascade of great earthquakes generated by near‑orthogonal, young, oceanic mantle faults." Journal of Geophysical Research: Solid Earth 120.5 (2015): 3723-3747. https://doi.org/10.1002/2014JB011703
Leonard, Mark. "Earthquake fault scaling: Self‑consistent relating of rupture length, width, average displacement, and moment release." Bulletin of the Seismological Society of America 102.6 (2012): 2797-2797. https://doi.org/10.1785/0120120249
Wells, Donald L., and Kevin J. Coppersmith. "New empirical relationships among magnitude, rupture length, rupture width, rupture area, and surface displacement." Bulletin of the Seismological Society of America 84.4 (1994): 974-1002.