Feature: Earthquake

1906 Revisited
a new model for an old quake
by Ryan Propper

In the early morning hours of April 18, 1906, disaster struck the San Francisco Bay Area. At 5:12 AM, the sleepy dawn was shattered as a powerful earthquake ripped through the San Andreas fault. The temblor, which registered 7.8 on the Richter scale – the energy equivalent of a million atomic bombs – was one of the worst natural disasters ever to strike a major U.S. city. It left more than 3,000 dead and 200,000 homeless, and caused a staggering $400 million in damage. Scientists have long sought to develop accurate models of this destructive event, hoping to gain both historical perspective and a pragmatic understanding of the region’s altered geophysical state. Now, almost 100 years after “The Big One,” Stanford University professors Gregory Beroza and Paul Segall in the Department of Geophysics, along with graduate student Seok Goo Song, are rethinking the traditional interpretations of this catastrophic quake.
Yesterday’s Models on Shaky Ground

Beroza explains that there have been two key models of the 1906 San Francisco earthquake. The first, proposed by Dr. Wayne Thatcher and his colleagues at the U.S. Geological Survey, was based on triangulation data, “the 19th-century equivalent of GPS,” Beroza says. Triangulation involves the precise measurement of angles between geodetic monuments – specific points on the Earth’s surface used for surveying and navigation. This process allows surveyors to determine distances with a high degree of accuracy; a small sketch of the idea follows below. “The San Andreas fault follows the coast fairly closely,” Beroza explains. “When the earthquake happened, it changed those angles. By looking at the difference in angles before and after the earthquake, Thatcher and his colleagues were able to turn that into a map of the slip distribution on the fault.”
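To make the geometry concrete, here is a minimal, purely illustrative sketch of classical triangulation – not the USGS analysis, and every name and number in it is invented. Given a baseline of known length between two monuments and the angles measured from each end of the baseline to a third monument, the law of sines pins down the remaining distances:

import math

def triangulate(baseline_km, angle_a_deg, angle_b_deg):
    """Find the distances to a third monument C from a known baseline A-B.

    angle_a_deg: angle at monument A between the baseline and the sight line to C
    angle_b_deg: angle at monument B between the baseline and the sight line to C
    """
    angle_a = math.radians(angle_a_deg)
    angle_b = math.radians(angle_b_deg)
    angle_c = math.pi - angle_a - angle_b      # angles of a triangle sum to 180 degrees
    # Law of sines: each side divided by the sine of its opposite angle is constant.
    scale = baseline_km / math.sin(angle_c)
    dist_ac = scale * math.sin(angle_b)        # side A-C is opposite the angle at B
    dist_bc = scale * math.sin(angle_a)        # side B-C is opposite the angle at A
    return dist_ac, dist_bc

# Hypothetical survey: a 10 km baseline, with angles of 60 and 50 degrees.
ac, bc = triangulate(10.0, 60.0, 50.0)
print(f"A-C: {ac:.2f} km, B-C: {bc:.2f} km")   # A-C: 8.15 km, B-C: 9.22 km

Repeated before and after an earthquake, measurements like these reveal how far each monument moved – which is what let Thatcher’s team convert changed angles into a map of fault slip.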
[Photo: The statue of the famous geologist Louis Agassiz fell head first into the Quad during the 1906 quake. Remarkably, Agassiz suffered damage only to his nose; the statue is now firmly attached to its ledge in front of Jordan Hall. Photo courtesy of Stanford University Archives.]
The second model of the 1906 quake was developed by Professor David Wald, a Visiting Associate in Geophysics at the California Institute of Technology. Wald’s analysis was based not on positional data but on recordings of the seismic waves generated by the temblor, taken at research stations around the globe, including Japan, Europe, and Puerto Rico. These seismic readings provided another framework for understanding the earthquake: Wald’s team used them to reconstruct the quake’s rupture as a function of space and time.

Unfortunately, the geodetic and seismic models differed in two important respects. First, though both interpretations agreed that the rupture extended from its epicenter near Daly City south to San Juan Bautista, Wald’s model showed its northward extent reaching only to Point Arena, for a total fault length of 300 kilometers. Thatcher’s model, on the other hand, placed the northernmost reach of the rupture much farther up the coast, near Cape Mendocino, for a total fault length of nearly 500 kilometers. Second, whereas Wald’s model put the quake’s magnitude at 7.7 on the Richter scale, Thatcher’s suggested a value of 7.9 – twice as strong when gauged by the amount of energy the temblor released (see the calculation below). Beroza notes that these inconsistencies were significant even given the primitive data sources behind both models. “The two models of the earthquake were discrepant in a way that superseded how old and spotty the data was,” he says. In other words, even allowing for the inaccuracy of the underlying data, the seismic and geodetic models gave results so different that the disagreement could not be blamed on the data alone – at least one of the models had to be wrong.
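The “twice as strong” comparison follows from the standard Gutenberg-Richter relation between magnitude and radiated seismic energy, under which each full unit of magnitude corresponds to roughly a 32-fold jump in energy. In LaTeX form:

% Energy-magnitude scaling: \log_{10} E = 1.5\,M + \text{const.}
\frac{E_{7.9}}{E_{7.7}} = 10^{1.5\,(7.9 - 7.7)} = 10^{0.3} \approx 2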
Supershear Seismology

Beroza and his colleagues reexamined both data sets under the lens of a new hypothesis: that the earthquake ruptured the Earth’s crust much more quickly than previously assumed. “In an elastic medium like the Earth,” Beroza explains, “there are two fundamental wave speeds. One is the compressional wave or P-wave speed; the other is the shear wave or S-wave speed.” The S-wave speed, about 3 kilometers per second in the crust along the San Andreas fault, was traditionally regarded as an upper bound on the rupture velocity of the 1906 quake. However, rupture speeds up to the P-wave velocity, about 5 kilometers per second, are physically possible, though seldom observed. By positing that the San Francisco earthquake was supershear – that is, that its rupture velocity exceeded the S-wave speed – Beroza’s team fit both the geodetic and seismic data sets to a single model.

The notion of postulating supershear rupture for “The Big One” might have seemed outlandish until very recently. “Ten years ago, you’d have had a hard time convincing our colleagues that this old earthquake with this relatively bad data was supershear,” Beroza says. “But things changed.” The catalyst for change was a string of recent earthquakes – Turkey in 1999, Tibet in 2001, and Alaska in 2002 – that exhibited markedly similar geophysical behavior to the 1906 San Francisco quake. All three measured 7.5 to 8 on the Richter scale and occurred on strike-slip faults, faults along which the two sides slip horizontally past each other – a category that includes the San Andreas. In each case, modern data confirmed that supershear rupture was responsible for the temblors’ power and devastation. “It certainly gave us the license to propose it for 1906, so we went ahead and did that,” Beroza concludes. “And sure enough, we can fit the data acceptably well with supershear rupture faults.”
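The wave speeds quoted above make the regimes easy to delineate. The following snippet is a toy illustration using the article’s round numbers, not measured values:

# Illustrative only: classify a rupture speed against the two fundamental wave speeds.
S_WAVE_KM_S = 3.0   # approximate S-wave speed in the crust along the San Andreas fault
P_WAVE_KM_S = 5.0   # approximate P-wave speed, the physical ceiling on rupture velocity

def classify_rupture(v_km_s):
    """Label a rupture velocity as sub-shear, supershear, or unphysical."""
    if v_km_s <= S_WAVE_KM_S:
        return "sub-shear (the traditionally assumed regime)"
    if v_km_s <= P_WAVE_KM_S:
        return "supershear (faster than S-waves; rare but physically possible)"
    return "unphysical (exceeds the P-wave speed)"

for v in (2.5, 4.0, 6.0):
    print(f"{v} km/s -> {classify_rupture(v)}")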
1906 by the numbers
Strength: 7.8
Dead: 3,000+
Homeless: 200,000
Cost: $400 million
Newly Discovered Rupture Velocity: 3-5 km/sec
The Big Picture

The Stanford team views their new model of the 1906 San Francisco earthquake as more than the fruits of an academic exercise. “You might say, ‘This earthquake happened a hundred years ago. Who really cares?’ It’s more than that,” Beroza claims. He explains that our modern understanding of seismic hazard in northern California depends, to a large extent, on a thorough comprehension of the causes and effects of “The Big One.” After 1906, there were very few damaging earthquakes along the San Andreas fault; other than the 1989 Loma Prieta temblor and some moderate quakes near Monterey Bay in 1926, seismic activity has been relatively quiet. Beroza and his colleagues think this period may be “the calm before the storm,” and that we are due for another powerful earthquake sometime soon. “It’s strongly time-dependent,” he notes. “Understanding exactly, or even approximately, when we might enter a more active phase depends on knowing how much slip, and where it was, in the 1906 earthquake.” By looking at the past in a new light, Stanford researchers have moved one step closer toward understanding the forces at work deep within the Earth and predicting the future of seismic activity in California.
layout design: Marisa Dowling
Ryan Propper is a junior majoring in Computer Science at Stanford University. He has been affiliated with the Stanford Scientific Review for three years, serving in various capacities as a writer, editor, and head of distribution. Originally from Colorado, Ryan enjoys programming, tennis, and following Colorado Avalanche hockey.