Seismic tomography

Seismic tomography or seismotomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. P-, S-, and surface waves can be used for tomographic models of different resolutions based on seismic wavelength, wave source distance, and the seismograph array coverage.[1] The data received at seismometers are used to solve an inverse problem, wherein the locations of reflection and refraction of the wave paths are determined. This solution can be used to create 3D images of velocity anomalies which may be interpreted as structural, thermal, or compositional variations. Geoscientists use these images to better understand core, mantle, and plate tectonic processes.

Theory

Tomography is solved as an inverse problem. Seismic travel time data are compared to an initial Earth model, and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth were of uniform composition, but compositional layering, tectonic structure, and thermal variations reflect and refract seismic waves. The location and magnitude of these variations can be calculated by the inversion process, although solutions to tomographic inversions are non-unique.
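
As a concrete illustration, the sketch below sets up a single linearized step of such an inversion in Python/NumPy: travel-time residuals d are related to slowness perturbations m through a matrix G whose entries are ray lengths in each model cell, and a damped least-squares solution is computed. The grid, ray geometry, noise level, and damping value are arbitrary assumptions made for this example, not values from any particular study.

    # Minimal sketch of one linearized travel-time inversion step
    # (damped least squares) with a hypothetical, randomly generated
    # ray geometry; real studies build G from traced ray paths.
    import numpy as np

    rng = np.random.default_rng(0)
    n_cells = 100        # unknowns: slowness perturbation per model cell
    n_rays = 400         # observations: one travel-time residual per ray

    # G[i, j] = length of ray i inside cell j (random and sparse here,
    # purely for illustration)
    G = rng.random((n_rays, n_cells)) * (rng.random((n_rays, n_cells)) > 0.8)

    # "True" slowness perturbations and the residuals they would predict
    m_true = rng.normal(0.0, 0.05, n_cells)
    d_obs = G @ m_true + rng.normal(0.0, 0.01, n_rays)   # add noise

    # Damped least squares: minimize ||G m - d||^2 + eps^2 ||m||^2
    eps = 0.1
    m_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_cells), G.T @ d_obs)

    print("model recovery error:", np.linalg.norm(m_est - m_true))
    print("remaining data misfit:", np.linalg.norm(G @ m_est - d_obs))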

Seismic tomography is similar to medical x-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of traveltime difference. Seismic tomography has to deal with the analysis of curved ray paths which are reflected and refracted within the Earth, and potential uncertainty in the location of the earthquake hypocenter. CT scans use linear x-rays and a known source.[2]

History

Seismic tomography requires large datasets of seismograms and well-located earthquake or explosion sources. These became more widely available in the 1960s with the expansion of global seismic networks, and in the 1970s when digital seismograph data archives were established. These developments occurred concurrently with advancements in computing power that were required to solve inverse problems and generate theoretical seismograms for model testing.[3]

In 1977, P-wave delay times were used to create the first seismic array-scale 2D map of seismic velocity.[4] In the same year, P-wave data were used to determine 150 spherical harmonic coefficients for velocity anomalies in the mantle.[1] The first model using iterative techniques, required when there is a large number of unknowns, was produced in 1984. This built upon the first radially anisotropic model of the Earth, which provided the initial reference frame needed for comparing tomographic models during iteration.[5] Initial models had resolution of ~3000 to 5000 km, as compared to the few hundred kilometer resolution of current models.[6]

Seismic tomographic models improve with advancements in computing and expansion of seismic networks. Recent models of global body waves used over 10⁷ traveltimes to model 10⁵ to 10⁶ unknowns.[7]

Process

Seismic tomography uses seismic records to create 2D and 3D images of subsurface anomalies by solving large inverse problems that generate models consistent with the observed data. Various methods are used to resolve anomalies in the crust and lithosphere, shallow mantle, whole mantle, and core, based on the availability of data and the types of seismic waves that penetrate the region at a wavelength suitable for feature resolution. The accuracy of the model is limited by the availability and accuracy of seismic data, the wave type utilized, and the assumptions made in the model.

P-wave data are used in most local models and in global models in areas with sufficient earthquake and seismograph density. S- and surface wave data are used in global models when this coverage is not sufficient, such as in ocean basins and away from subduction zones. First-arrival times are the most widely used, but reflected and refracted phases are incorporated in more complex models, such as those imaging the core. Differential traveltimes between wave phases or types are also used, as illustrated below.
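
The value of differential times can be seen in a small numerical sketch (Python, with hypothetical picks): an error in the assumed earthquake origin time shifts every absolute arrival recorded at a station by the same amount, but cancels when two phases at the same station are differenced.

    # Hypothetical P and S arrival picks at three stations, all shifted by
    # the same unknown origin-time error.
    import numpy as np

    origin_time_error = 1.7                  # seconds, unknown in practice
    p_picks = np.array([12.3, 18.9, 25.1]) + origin_time_error
    s_picks = np.array([21.5, 33.0, 44.0]) + origin_time_error

    # S-P differential times are unaffected by the origin-time error and can
    # therefore constrain structure more robustly than absolute times.
    print(s_picks - p_picks)    # identical to the error-free differences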

Local tomography

Local tomographic models are often based on a temporary seismic array targeting specific areas, unless in a seismically active region with extensive permanent network coverage. These allow for the imaging of the crust and upper mantle.

  • Diffraction and wave equation tomography use the full waveform, rather than just the first arrival times. The inversion of the amplitudes and phases of all arrivals provides more detailed density information than transmission traveltime alone. Despite the theoretical appeal, these methods are not widely employed because of the computing expense and difficult inversions.
  • Reflection tomography originated with exploration geophysics. It uses an artificial source to resolve small-scale features at crustal depths.[8] Wide-angle tomography is similar, but with a wide source to receiver offset. This allows for the detection of seismic waves refracted from sub-crustal depths and can determine continental architecture and details of plate margins. These two methods are often used together.
  • Local earthquake tomography is used in seismically active regions with sufficient seismometer coverage. Given the proximity between sources and receivers, precise earthquake focus locations must be known. This requires the simultaneous iteration of both structure and focus locations in model calculations (a simplified location step is sketched after this list).[7]
  • Teleseismic tomography uses waves from distant earthquakes that deflect upwards to a local seismic array. The models can reach depths similar to the array aperture, typically deep enough to image the crust and lithosphere (a few hundred kilometers). The waves arrive near 30° from vertical, which creates a vertical distortion of compact features.[9]
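
The sketch below (referenced from the local earthquake tomography item above) shows a highly simplified version of the location step: a grid search for the epicenter assuming a uniform velocity and straight ray paths. In an actual local tomography this step alternates with updates to the 3D velocity structure; all station positions and values here are invented for illustration.

    # Grid-search epicenter location under simplifying assumptions
    # (homogeneous velocity, straight rays, 2D geometry).
    import numpy as np

    v_p = 6.0                                     # assumed P velocity, km/s
    stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [45.0, 35.0]])

    true_focus = np.array([22.0, 18.0])           # synthetic "unknown" source
    origin_time = 3.0
    t_obs = origin_time + np.linalg.norm(stations - true_focus, axis=1) / v_p

    # Search candidate epicenters; the best origin time for each candidate is
    # the mean residual, so it is removed analytically.
    best = (np.inf, None)
    for x in np.linspace(0.0, 50.0, 201):
        for y in np.linspace(0.0, 50.0, 201):
            t_pred = np.linalg.norm(stations - np.array([x, y]), axis=1) / v_p
            shift = np.mean(t_obs - t_pred)       # optimal origin time
            misfit = np.sum((t_obs - t_pred - shift) ** 2)
            if misfit < best[0]:
                best = (misfit, (float(x), float(y)))

    print("recovered epicenter:", best[1])        # close to (22.0, 18.0)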

Regional or global tomography

Simplified and interpreted P- and S-wave velocity variations in the mantle across southern North America showing the subducted Farallon Plate.

Regional to global scale tomographic models are generally based on long wavelengths. These models agree better with each other than local models do, because of the large feature sizes they image, such as subducted slabs and superplumes. The trade-off for whole mantle to whole Earth coverage is coarse resolution (hundreds of kilometers) and difficulty imaging small features (e.g. narrow plumes). Although often used to image different parts of the subsurface, P- and S-wave derived models broadly agree where there is image overlap. These models use data from both permanent seismic stations and supplementary temporary arrays.

  • First-arrival traveltime P-wave data are used to generate the highest resolution tomographic images of the mantle. These models are limited to regions with sufficient seismograph coverage and earthquake density, and therefore cannot be used for areas such as inactive plate interiors and ocean basins without seismic networks. Other phases of P-waves are used to image the deeper mantle and core.
  • In areas with limited seismograph or earthquake coverage, multiple phases of S-waves can be used for tomographic models. These are of lower resolution than P-wave models, due to the distances involved and fewer bounce-phase data available. S-waves can also be used in conjunction with P-waves for differential arrival time models.
  • Surface waves can be used for tomography of the crust and upper mantle where no body wave (P and S) data are available. Both Rayleigh and Love waves can be used. The low-frequency waves lead to low-resolution models, so these models have difficulty resolving crustal structure. Free oscillations, or normal mode seismology, are the long wavelength, low frequency movements of the surface of the Earth which can be thought of as a type of surface wave. The frequencies of these oscillations can be obtained through Fourier transformation of seismic data. The models based on this method are of broad scale, but have the advantage of relatively uniform data coverage as compared to data sourced directly from earthquakes.
  • Attenuation tomography attempts to extract the anelastic signal from the elastic-dominated waveform of seismic waves. The advantage of this method is its sensitivity to temperature, thus ability to image thermal features such as mantle plumes and subduction zones. Both surface and body waves have been used in this approach.
  • Ambient noise tomography cross-correlates waveforms from random wavefields generated by oceanic and atmospheric disturbances (see the sketch after this list). A major advantage of this method is that, unlike other methods, it does not require an earthquake or other event to occur in order to produce results.[10] A disadvantage is that it requires long recording times, usually a minimum of one year, and several years of data collection are common. This method has produced high-resolution images and is an area of active research.
  • Waveforms are modeled as rays in seismic analysis, but all waves are affected by the material near the ray path. The finite-frequency effect is the influence that the surrounding medium has on a seismic record. Finite-frequency tomography accounts for this in determining both traveltime and amplitude anomalies, increasing image resolution. This has the ability to resolve much larger variations (i.e. 10–30%) in material properties.
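
As a toy illustration of the cross-correlation step mentioned in the ambient noise item above, the Python sketch below records a common random wavefield at two hypothetical stations with a known delay and recovers the inter-station travel time from the correlation peak. A short synthetic record stands in for the months to years of preprocessed noise stacked in real studies.

    # Cross-correlation of synthetic "ambient noise" at two stations.
    import numpy as np

    fs = 5.0                               # sampling rate, Hz
    n = int(600 * fs)                      # 10 minutes of synthetic noise
    rng = np.random.default_rng(1)

    lag_s = 12.0                           # true inter-station travel time, s
    lag = int(lag_s * fs)

    wavefield = rng.normal(size=n + lag)   # common random wavefield
    sta_a = wavefield[lag:] + 0.5 * rng.normal(size=n)   # wave passes here first
    sta_b = wavefield[:n] + 0.5 * rng.normal(size=n)     # arrives lag_s later

    # The peak of the cross-correlation approximates the travel time of the
    # dominant (surface) waves between the two stations.
    xcorr = np.correlate(sta_b, sta_a, mode="full")
    lags = np.arange(-n + 1, n) / fs
    print("estimated travel time: %.1f s" % lags[np.argmax(xcorr)])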

Applications

Seismic tomography can resolve anisotropy, anelasticity, density, and bulk sound velocity.[6] Variations in these parameters may be a result of thermal or chemical differences, which are attributed to processes such as mantle plumes, subducting slabs, and mineral phase changes. Larger scale features that can be imaged with tomography include the high velocities beneath continental shields and low velocities under ocean spreading centers.[4]

Hotspots

The African large low-shear-velocity province (superplume)

The mantle plume hypothesis proposes that areas of volcanism not readily explained by plate tectonics, called hotspots, are a result of thermal upwelling from as deep as the core-mantle boundary that become diapirs in the crust. This is an actively contested theory,[9] although tomographic images suggest there are anomalies beneath some hotspots. The best imaged of these are large low-shear-velocity provinces, or superplumes, visible on S-wave models of the lower mantle and believed to reflect both thermal and compositional differences.

The Yellowstone hotspot is responsible for volcanism at the Yellowstone Caldera and a series of extinct calderas along the Snake River Plain. The Yellowstone Geodynamic Project sought to image the plume beneath the hotspot.[11] They found a strong low-velocity body from ~30 to 250 km depth beneath Yellowstone, and a weaker anomaly from 250 to 650 km depth which dipped 60° west-northwest. The authors attribute these features to the mantle plume beneath the hotspot being deflected eastward by flow in the upper mantle seen in S-wave models.

The Hawaii hotspot produced the Hawaiian–Emperor seamount chain. Tomographic images show it to be 500 to 600 km wide and up to 2,000 km deep.

Subduction zones

Subducting plates are colder than the mantle into which they are moving. This creates a fast anomaly that is visible in tomographic images. Both the Farallon plate that subducted beneath the west coast of North America[12] and the northern portion of the Indian plate that has subducted beneath Asia[13] have been imaged with tomography.

Limitations

Global seismic networks have expanded steadily since the 1960s, but are still concentrated on continents and in seismically active regions. Oceans, particularly in the southern hemisphere, are under-covered.[9] Tomographic models in these areas will improve when more data becomes available. The uneven distribution of earthquakes naturally biases models to better resolution in seismically active regions.

The type of wave used in a model limits the resolution it can achieve. Longer wavelengths are able to penetrate deeper into the Earth, but can only be used to resolve large features. Finer resolution can be achieved with surface waves, with the trade-off that they cannot be used in models of the deep mantle. The disparity between wavelength and feature scale causes anomalies to appear reduced in magnitude and size in images. P- and S-wave models respond differently to the types of anomalies depending on the driving material property. First-arrival-time based models naturally prefer faster pathways, causing models based on these data to have lower resolution of slow (often hot) features.[7] Shallow models must also consider the significant lateral velocity variations in continental crust.

Seismic tomography provides only the current velocity anomalies. Any prior structures are unknown and the slow rates of movement in the subsurface (mm to cm per year) prohibit resolution of changes over modern timescales.[14]

Tomographic solutions are non-unique. Although statistical methods can be used to analyze the validity of a model, unresolvable uncertainty remains.[7] This contributes to difficulty comparing the validity of different model results.

Computing power limits the amount of seismic data, number of unknowns, mesh size, and iterations in tomographic models. This is of particular importance in ocean basins, which due to limited network coverage and earthquake density require more complex processing of distant data. Shallow oceanic models also require smaller model mesh size due to the thinner crust.[5]

Tomographic images are typically presented with a color ramp representing the strength of the anomalies. This has the consequence of making equal changes appear to differ in magnitude because of how colors are perceived: the change from orange to red, for example, is more subtle than that from blue to yellow. The degree of color saturation can also visually skew interpretations. These factors should be considered when analyzing images.[2]
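
One common mitigation is to force the color scale to be symmetric about zero with a diverging colormap, so that fast and slow anomalies of equal magnitude receive equally saturated colors. The Python/matplotlib sketch below shows this for a synthetic anomaly field; the colormap choice and data are illustrative assumptions rather than a convention prescribed by the tomography literature.

    # Plot a synthetic velocity-anomaly slice with symmetric color limits.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    anomaly = rng.normal(0.0, 1.0, (50, 50))     # % velocity perturbation

    vmax = np.abs(anomaly).max()                 # symmetric color limits
    # "seismic" is a blue-white-red diverging map; many tomography plots show
    # slow (negative) anomalies in warm colors, which "seismic_r" would give.
    plt.imshow(anomaly, cmap="seismic", vmin=-vmax, vmax=vmax)
    plt.colorbar(label="velocity anomaly (%)")
    plt.title("Symmetric color scaling for velocity anomalies")
    plt.show()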

References

  1. 1.0 1.1 Nolet, G. (1987-01-01). "Seismic wave propagation and seismic tomography". in Nolet, Guust (in en). Seismic Tomography. Seismology and Exploration Geophysics. Springer Netherlands. pp. 1–23. doi:10.1007/978-94-009-3899-1_1. ISBN 9789027725837. 
  2. 2.0 2.1 "Seismic Tomography—Using earthquakes to image Earth's interior". Incorporated Research Institutions for Seismology (IRIS). http://www.iris.edu/hq/inclass/downloads/optional/269. 
  3. "A Brief History of Seismology". United States Geologic Survey (USGS). https://earthquake.usgs.gov/hazards/about/workshops/thailand/downloads/CSMpp1_History.pdf. 
  4. 4.0 4.1 Kearey, Philip; Klepeis, Keith A.; Vine, Frederick J. (2013-05-28) (in en). Global Tectonics. John Wiley & Sons. ISBN 978-1118688083. https://books.google.com/books?id=JBF8UGc_M-sC&q=global%2520tectonics%2520kearey%2520and%2520vine&pg=PT9. 
  5. 5.0 5.1 Liu, Q.; Gu, Y. J. (2012-09-16). "Seismic imaging: From classical to adjoint tomography". Tectonophysics 566–567: 31–66. doi:10.1016/j.tecto.2012.07.006. Bibcode: 2012Tectp.566...31L.
  6. 6.0 6.1 Romanowicz, Barbara (2003-01-01). "Global Mantle Tomography: Progress Status in the Past 10 Years". Annual Review of Earth and Planetary Sciences 31 (1): 303–328. doi:10.1146/annurev.earth.31.091602.113555. Bibcode: 2003AREPS..31..303R.
  7. 7.0 7.1 7.2 7.3 Rawlinson, N.; Pozgay, S.; Fishwick, S. (2010-02-01). "Seismic tomography: A window into deep Earth". Physics of the Earth and Planetary Interiors 178 (3–4): 101–135. doi:10.1016/j.pepi.2009.10.002. Bibcode: 2010PEPI..178..101R.
  8. Brzostowski, Matthew; McMechan, George (1992). "3-D tomographic imaging of near-surface seismic velocity and attenuation". Society of Exploration Geophysicists. https://pubs.geoscienceworld.org/geophysics/article-abstract/57/3/396/72696/3-D-tomographic-imaging-of-near-surface-seismic. 
  9. 9.0 9.1 9.2 Julian, Bruce (2006). "Seismology: The Hunt for Plumes". mantleplumes.org. http://www.mantleplumes.org/WebpagePDFs/Seismology.pdf. 
  10. Shapiro, N. M. (11 March 2005). "High-Resolution Surface-Wave Tomography from Ambient Seismic Noise". Science 307 (5715): 1615–1618. doi:10.1126/science.1108339. PMID 15761151. Bibcode: 2005Sci...307.1615S.
  11. Smith, Robert B.; Jordan, Michael; Steinberger, Bernhard; Puskas, Christine M.; Farrell, Jamie; Waite, Gregory P.; Husen, Stephan; Chang, Wu-Lung et al. (2009-11-20). "Geodynamics of the Yellowstone hotspot and mantle plume: Seismic and GPS imaging, kinematics, and mantle flow". Journal of Volcanology and Geothermal Research. The Track of the Yellowstone Hotspot: What do Neotectonics, Climate Indicators, Volcanism, and Petrogenesis Reveal about Subsurface Processes? 188 (1–3): 26–56. doi:10.1016/j.jvolgeores.2009.08.020. Bibcode: 2009JVGR..188...26S. http://gfzpublic.gfz-potsdam.de/pubman/item/escidoc:239822.
  12. "Seismic Tomography". Incorporated Research Institutions for Seismology (IRIS). http://www.iris.edu/hq/files/programs/education_and_outreach/lessons_and_resources/docs/es_tomography.pdf. 
  13. Replumaz, Anne; Negredo, Ana M.; Guillot, Stéphane; Villaseñor, Antonio (2010-03-01). "Multiple episodes of continental subduction during India/Asia convergence: Insight from seismic tomography and tectonic reconstruction". Tectonophysics. Convergent plate margin dynamics: New perspectives from structural geology, geophysics and geodynamic modelling 483 (1–2): 125–134. doi:10.1016/j.tecto.2009.10.007. Bibcode: 2010Tectp.483..125R.
  14. Dziewonski, Adam. "Global Seismic Tomography: What we really can say and what we make up". mantleplumes.org. http://www.mantleplumes.org/Penrose/PenPDFAbstracts/Dziewonski_Adam_abs.pdf. 
