Note: If running on Colab, uncomment and run the pip install cell below.
Lab 7c: Earthquake Magnitudes#
Learning objectives#
By the end of this lab, students should be able to:
Explain saturation using the omega-square source spectrum — why short-period magnitudes flatten for large events.
Demonstrate how calibration choices (site terms, distance corrections) shift amplitude-based magnitude estimates.
Simulate a Wood-Anderson seismograph response and measure \(M_L\) from a real earthquake.
Pick the maximum P-wave \(A/T\) and compute \(m_b\) using the Gutenberg-Richter formula.
Measure a Rayleigh-wave amplitude at ~20 s period and compute \(M_s\) using the Prague formula.
Compare \(M_L\), \(m_b\), \(M_s\), and catalog \(M_w\) for the same event, and relate discrepancies to source spectrum and bandpass.
Key concepts#
| Magnitude | What is measured | Typical band | Saturates at |
|---|---|---|---|
| \(M_L\) | Peak displacement on Wood-Anderson horizontal | 1–10 Hz | ~\(M_w\) 6.5 |
| \(m_b\) | Max \((A/T)\) of P on short-period vertical | 0.5–2 Hz | ~\(M_w\) 6.5 |
| \(M_s\) | Max \((A/T)\) of Rayleigh wave at ~20 s | 0.04–0.065 Hz | ~\(M_w\) 8 |
| \(M_w\) | Seismic moment \(M_0\) (long-period) | DC–broadband | Does not saturate |
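The \(M_w\) entry in the table above is tied to seismic moment by the Hanks & Kanamori (1979) relation used throughout this lab; a one-line helper (a sketch, using the N·m convention) makes the conversion concrete:

```python
import numpy as np

def mw_from_m0(M0_Nm):
    """Hanks & Kanamori (1979): Mw = (2/3) * (log10(M0) - 9.05), M0 in N·m."""
    return (2.0 / 3.0) * (np.log10(M0_Nm) - 9.05)

print(f'M0 = 1.1e18 N·m → Mw {mw_from_m0(1.1e18):.1f}')  # moderate event, Mw ≈ 6.0
print(f'M0 = 1.1e21 N·m → Mw {mw_from_m0(1.1e21):.1f}')  # great earthquake, Mw ≈ 8.0
```

Note the 2/3 factor: a factor-of-10 change in amplitude corresponds to only ~0.67 units of \(M_w\), which matters when comparing moment to amplitude-based magnitudes.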
Prerequisites#
Lecture 7c (Earthquake Magnitudes) — magnitude definitions, saturation, omega-square spectrum
Basic familiarity with ObsPy (`Stream`, `Trace`, instrument response removal)
Estimated time: 3–4 hours#
Notebook outline#
| Part | Section | Approach |
|---|---|---|
| I. Theory | 1. Source spectrum & saturation | Synthetic |
| | 2. Toy magnitude sensitivity | Synthetic |
| II. Observations | 3. Event & data download | Real data |
| | 4. Measure \(M_L\) | Real data |
| | 5. Measure \(m_b\) | Real data |
| | 6. Measure \(M_s\) | Real data |
| | 7. Compare magnitudes | Real data |
| III. Catalogs | 8. Multi-event catalog exploration | Real data |
| | 9. Wrap-up | Questions |
📖 Shearer (2009), §9.7: magnitude scales, saturation, and the omega-square model.
# Install dependencies (for Google Colab or missing packages)
# import subprocess, sys
# for pkg in ['numpy', 'matplotlib', 'obspy', 'pandas']:
# subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-q', pkg])
import numpy as np
import matplotlib.pyplot as plt
from obspy import UTCDateTime
from obspy.clients.fdsn import Client
from obspy.geodetics import gps2dist_azimuth
from obspy.taup import TauPyModel
import pandas as pd
# Plotting defaults
plt.rcParams['figure.figsize'] = (10, 4)
plt.rcParams['font.size'] = 11
plt.rcParams['axes.grid'] = True
# FDSN client and travel-time model
client = Client('IRIS')
model = TauPyModel(model='iasp91')
print('Setup complete \u2713')
Part I — Synthetic: Source Spectrum and Saturation#
Before touching real data, we build physical intuition for why different magnitude scales give different answers — and why they saturate.
1. The omega-square spectrum and magnitude saturation#
The Brune (1970) omega-square model describes the far-field displacement spectrum:

\[A(f) = \frac{\Omega_0}{1 + (f/f_c)^2}, \qquad \Omega_0 \propto M_0,\]

where \(f_c\) is the corner frequency.
Key features:
At low frequencies (\(f \ll f_c\)): spectral amplitude \(\propto M_0\) — the flat plateau.
At high frequencies (\(f \gg f_c\)): spectrum decays as \(f^{-2}\).
As event size increases, \(M_0\) grows and \(f_c\) decreases (larger ruptures = longer durations).
A magnitude measured at a fixed frequency \(f_{\text{obs}}\) (e.g., 1 Hz for \(m_b\), 0.05 Hz for \(M_s\)) samples the spectrum at that point. For small events (\(f_c > f_{\text{obs}}\)), the measurement sits on the flat plateau and scales linearly with \(M_0\). For large events (\(f_c < f_{\text{obs}}\)), it falls on the \(f^{-2}\) decay and grows much more slowly — this is saturation.
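A quick numerical check of this argument, using the same toy \(M_0\) and \(f_c\) scalings as the cells below: from \(M_w\) 5 to \(M_w\) 8 the moment grows by a factor of \(10^{4.5} \approx 30{,}000\), but the spectral amplitude at 1 Hz grows by only a few tens.

```python
import numpy as np

def brune(f, M0, fc):
    """Omega-square displacement spectrum (amplitude)."""
    return M0 / (1.0 + (f / fc)**2)

# Toy scalings used throughout this lab:
M0 = lambda Mw: 10**(1.5 * Mw + 9.05)   # Hanks & Kanamori (1979), N·m
fc = lambda Mw: 10**(1.9 - 0.5 * Mw)    # rough self-similar corner frequency

A5 = brune(1.0, M0(5.0), fc(5.0))       # 1 Hz amplitude for Mw 5
A8 = brune(1.0, M0(8.0), fc(8.0))       # 1 Hz amplitude for Mw 8
print(f'M0 ratio (Mw 8 / Mw 5): {M0(8.0) / M0(5.0):.0f}')
print(f'A(1 Hz) ratio:          {A8 / A5:.0f}')
```

The 1 Hz measurement sees only a tiny fraction of the moment increase: that gap is saturation.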
Predict#
As event size increases and corner frequency decreases:
What happens to the spectral amplitude at 1 Hz (short period)?
What happens to the spectral amplitude at 0.02 Hz (long period)?
Write your prediction in 1–2 sentences, then run the next two cells.
def omega_square_spectrum(f, M0, fc):
"""Brune omega-square displacement spectrum (amplitude).
Parameters
----------
f : array-like
Frequency (Hz).
M0 : float
Seismic moment (N·m).
fc : float
Corner frequency (Hz).
Returns
-------
Spectral amplitude (proportional to displacement).
"""
return M0 / (1.0 + (f / fc)**2)
# ---- Build a grid of event sizes ----
Mw_vals = np.linspace(3.0, 9.0, 31)
# Mw → M0 (Hanks & Kanamori 1979): log10(M0) = 1.5*Mw + 9.05
M0_vals = 10**(1.5 * Mw_vals + 9.05)
# Corner frequency decreases with Mw (self-similar scaling, constant stress drop)
# fc ∝ M0^(-1/3) → log10(fc) ≈ 1.9 - 0.5*Mw (rough approximation)
fc_vals = 10**(1.9 - 0.5 * Mw_vals)
# ---- Sample the spectrum at fixed frequencies ----
f_1Hz = 1.0 # short-period proxy (mb / ML band)
f_20s = 0.05 # ~20 s period (Ms band)
f_long = 0.005 # 200 s (moment proxy)
A_1Hz = omega_square_spectrum(f_1Hz, M0_vals, fc_vals)
A_20s = omega_square_spectrum(f_20s, M0_vals, fc_vals)
A_long = omega_square_spectrum(f_long, M0_vals, fc_vals)
# ---- Plot: spectral amplitude at each band vs Mw ----
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(Mw_vals, np.log10(A_1Hz), 'o-', ms=3, label='1 Hz ($m_b$ / $M_L$ band)')
ax.plot(Mw_vals, np.log10(A_20s), 's-', ms=3, label='0.05 Hz ($M_s$ band, ~20 s)')
ax.plot(Mw_vals, np.log10(A_long), '^-', ms=3, label='0.005 Hz (moment proxy, ~200 s)')
ax.set_xlabel('$M_w$')
ax.set_ylabel('$\\log_{10}$ spectral amplitude')
ax.set_title('Omega-square spectrum sampled at different frequencies')
ax.legend()
plt.tight_layout()
plt.show()
# ---- Convert spectral proxies into toy magnitudes ----
# Anchor each so that it equals Mw at small magnitudes
def anchor_magnitude(log_A, Mw_vals, anchor_range=(3.0, 4.5)):
    """Convert log spectral amplitude to a toy magnitude matched to Mw.

    The 2/3 factor mirrors the moment-magnitude definition
    (Mw = (2/3) log10(M0) + const), so a plateau measurement tracks
    Mw with slope 1; the offset then matches Mw in the anchor range.
    """
    mag = (2.0 / 3.0) * log_A
    mask = (Mw_vals >= anchor_range[0]) & (Mw_vals <= anchor_range[1])
    offset = np.mean(Mw_vals[mask] - mag[mask])
    return mag + offset
M_1Hz = anchor_magnitude(np.log10(A_1Hz), Mw_vals)
M_20s = anchor_magnitude(np.log10(A_20s), Mw_vals)
M_long = anchor_magnitude(np.log10(A_long), Mw_vals)
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(Mw_vals, M_1Hz, 'o-', ms=3, label='$m_b$-like (1 Hz)', color='#5485b0')
ax.plot(Mw_vals, M_20s, 's-', ms=3, label='$M_s$-like (20 s)', color='#6ab06a')
ax.plot(Mw_vals, M_long, '^-', ms=3, label='$M_w$-like (moment)', color='gray')
ax.plot(Mw_vals, Mw_vals, 'k--', lw=1, label='1:1 line')
ax.set_xlabel('True $M_w$')
ax.set_ylabel('Measured magnitude (toy)')
ax.set_title('Saturation: band-limited magnitudes flatten for large events')
ax.legend()
ax.set_xlim(3, 9)
ax.set_ylim(3, 9)
plt.tight_layout()
plt.show()
print('At Mw=5: mb-like ≈ {:.1f}, Ms-like ≈ {:.1f}'.format(
np.interp(5.0, Mw_vals, M_1Hz), np.interp(5.0, Mw_vals, M_20s)))
print('At Mw=8: mb-like ≈ {:.1f}, Ms-like ≈ {:.1f}'.format(
np.interp(8.0, Mw_vals, M_1Hz), np.interp(8.0, Mw_vals, M_20s)))
Explain#
At what \(M_w\) does the short-period (1 Hz) proxy begin to deviate from the 1:1 line? Why?
Does the 20 s proxy saturate at the same point or later? Connect to the corner-frequency argument.
Why does the moment proxy never saturate?
Check#
The 1 Hz proxy departs from the 1:1 line once \(f_c\) falls below 1 Hz (well before \(M_w\) 6 with the toy scaling used here); observed short-period \(m_b\) saturates near \(M_w \sim 6\)–6.5.
The 20 s proxy flattens later, once \(f_c\) falls below 0.05 Hz, which happens only for the largest events; observed \(M_s\) saturates near \(M_w \sim 8\).
The moment proxy tracks \(M_0\) at DC — always on the flat plateau.
2. Toy magnitude sensitivity: calibration knobs#
Every amplitude-based magnitude has the generic form:

\[M = \log_{10}(A) + F(\Delta, h) + S + C,\]
where \(F\) is a distance/depth correction, \(S\) is a station/site term, and \(C\) is a constant. Different networks choose different values for these coefficients.
Here we use a toy ML-like formula with controllable knobs to see how calibration affects the result.
Predict#
If you double the site amplification (\(S \times 2\)), by how much does \(\log_{10}(A)\) (and thus magnitude) change?
If you increase the coefficient governing the distance correction, does the inferred magnitude at large distance go up or down?
def toy_ml_like(A_meters, R_km, site_factor=1.0, a=1.11, b=0.00189, c=0.0):
"""Toy ML-like mapping (NOT a network-standard implementation).
Parameters
----------
A_meters : float
Peak displacement amplitude (m).
R_km : float
Hypocentral distance (km).
site_factor : float
Multiplicative site amplification.
a, b, c : float
Regression coefficients for distance correction.
"""
A_mm = np.abs(A_meters) * site_factor * 1e3 # m → mm
if A_mm <= 0:
return np.nan
return np.log10(A_mm) + a * np.log10(max(R_km, 1.0)) + b * R_km + c
def synthetic_seismogram(duration_s=60.0, fs=50.0, fc_hz=1.0, noise=0.0):
"""Damped-oscillation synthetic waveform for toy exercises."""
t = np.arange(0, duration_s, 1.0 / fs)
env = np.exp(-t / (duration_s / 3))
x = env * np.sin(2 * np.pi * fc_hz * t)
if noise > 0:
x += noise * np.random.randn(len(t))
return t, x
# Generate a synthetic waveform
t_syn, x_syn = synthetic_seismogram(duration_s=120, fs=50, fc_hz=1.0, noise=0.02)
# Measure peak amplitude in the first 60 s
i_end = int(60 * 50)
A_peak = np.max(np.abs(x_syn[:i_end]))
R_km_toy = 150.0 # toy distance
# Baseline magnitude
M_base = toy_ml_like(A_peak, R_km_toy, site_factor=1.0)
M_site2 = toy_ml_like(A_peak, R_km_toy, site_factor=2.0)
M_alt_a = toy_ml_like(A_peak, R_km_toy, site_factor=1.0, a=1.30)
print(f'Peak amplitude: {A_peak:.4f} (arbitrary units, treated as meters)')
print(f'Toy ML (site=1.0): {M_base:.2f}')
print(f'Toy ML (site=2.0): {M_site2:.2f} → Δ = {M_site2 - M_base:+.3f}')
print(f'Toy ML (a=1.30): {M_alt_a:.2f} → Δ = {M_alt_a - M_base:+.3f}')
print(f'\nExpected Δ for 2× site: log10(2) ≈ {np.log10(2):.3f}')
# ---- Sensitivity sweep: magnitude vs distance for two calibrations ----
R_range = np.linspace(10, 600, 50)
M_calib1 = [toy_ml_like(A_peak, r, a=1.11) for r in R_range]
M_calib2 = [toy_ml_like(A_peak, r, a=1.30) for r in R_range]
fig, ax = plt.subplots()
ax.plot(R_range, M_calib1, label='Calibration A (a=1.11)')
ax.plot(R_range, M_calib2, label='Calibration B (a=1.30)')
ax.set_xlabel('Distance (km)')
ax.set_ylabel('Toy $M_L$')
ax.set_title('Same amplitude, different distance corrections → different magnitudes')
ax.legend()
plt.tight_layout()
plt.show()
print('At R=400 km, calibration difference:',
f'{toy_ml_like(A_peak, 400, a=1.30) - toy_ml_like(A_peak, 400, a=1.11):.2f}')
Explain#
Did the 2× site amplification produce the shift you predicted? Why is this a problem for real networks?
Two networks using different distance correction coefficients will report systematically different \(M_L\) values. Is this an error, or an intrinsic feature of empirical calibration?
Check#
Factor 2 in amplitude → \(\Delta M = \log_{10}(2) \approx 0.30\).
Different \(a\) coefficients shift magnitudes at large \(R\). This is a primary source of inter-network discrepancy.
Part II — Observations: Measuring \(M_L\), \(m_b\), and \(M_s\) from Real Data#
We now apply the magnitude formulas to a real earthquake recorded at teleseismic distance (30°–80°). This distance range gives us body waves for \(m_b\), surface waves for \(M_s\), and we can also attempt \(M_L\) (though it will be out of its calibration range).
3. Event and data download#
We search for a moderate, shallow earthquake (\(M_w\) 6–7, depth < 70 km) from the past year, find a teleseismic broadband station, and download three-component waveforms with instrument response metadata.
# ---- Event search ----
t_end = UTCDateTime() # now
t_start = t_end - 365 * 86400 # past year
cat = client.get_events(starttime=t_start, endtime=t_end,
minmagnitude=6.0, maxmagnitude=7.0,
maxdepth=70, # shallow → good surface waves
orderby='magnitude',
limit=5)
print(f'Events found: {len(cat)}')
# Pick the first event
event = cat[0]
origin = event.preferred_origin() or event.origins[0]
ev_mag = event.preferred_magnitude() or event.magnitudes[0]
ev_lat, ev_lon, ev_dep_km = origin.latitude, origin.longitude, origin.depth / 1e3
ev_time = origin.time
print(f'Event: {ev_time} lat={ev_lat:.2f} lon={ev_lon:.2f} depth={ev_dep_km:.1f} km')
print(f'Catalog magnitude: {ev_mag.magnitude_type} {ev_mag.mag:.1f}')
# ---- Station search (teleseismic, broadband) ----
inv = client.get_stations(
starttime=ev_time, endtime=ev_time + 3600,
latitude=ev_lat, longitude=ev_lon,
minradius=30, maxradius=80, # teleseismic
channel='BH?', level='response',
matchtimeseries=True
)
# Pick first available station
net = inv[0].code
sta = inv[0][0].code
loc = inv[0][0][0].location_code
st_lat = float(inv[0][0].latitude)
st_lon = float(inv[0][0].longitude)
dist_m, az, baz = gps2dist_azimuth(ev_lat, ev_lon, st_lat, st_lon)
dist_km = dist_m / 1e3
dist_deg = dist_km / 111.19
print(f'Station: {net}.{sta} lat={st_lat:.2f} lon={st_lon:.2f}')
print(f'Distance: {dist_km:.0f} km ({dist_deg:.1f}°)')
# ---- Download waveforms (5 min before to 60 min after origin) ----
st_raw = client.get_waveforms(net, sta, loc, 'BH?',
ev_time - 300, ev_time + 3600)
st_raw.merge(fill_value='interpolate')
print(st_raw)
# Remove instrument response → displacement (m)
st_disp = st_raw.copy()
st_disp.detrend('linear')
st_disp.taper(0.02)
pre_filt = (0.004, 0.007, 25.0, 30.0) # conservative pre-filter (Hz)
st_disp.remove_response(inventory=inv, output='DISP',
pre_filt=pre_filt, water_level=60)
print('Instrument response removed → displacement (m)')
# ---- Predicted phase arrivals (for windowing) ----
arrivals = model.get_travel_times(source_depth_in_km=ev_dep_km,
distance_in_degree=dist_deg,
phase_list=['P', 'S'])
t_P = None
for arr in arrivals:
if arr.name == 'P':
t_P = arr.time
break
if t_P is None:
t_P = dist_deg * 8.0 # fallback: ~8 s/deg for teleseismic P
# Rayleigh group velocity window (~3.0–4.2 km/s)
t_R_early = dist_km / 4.2
t_R_late = dist_km / 3.0
print(f'Predicted P arrival: {t_P:.1f} s after origin')
print(f'Rayleigh window: {t_R_early:.0f}–{t_R_late:.0f} s after origin')
4. Measure \(M_L\) — Wood-Anderson simulation#
Richter’s \(M_L\) is defined as the peak trace amplitude (in mm) on a Wood-Anderson torsion seismograph, corrected for distance:

\[M_L = \log_{10} A_{\text{WA}} - \log_{10} A_0(\Delta),\]

where \(A_{\text{WA}}\) is the half peak-to-peak displacement (mm) on the horizontal components and \(-\log_{10} A_0(\Delta)\) is the Hutton & Boore (1987) distance correction.
We approximate the WA response by bandpass-filtering displacement to 1–10 Hz (the WA natural frequency is ~1.25 Hz).
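As a cross-check on the bandpass shortcut, one can evaluate the Wood-Anderson transfer function directly from a pole-zero model. The parameters below are the commonly quoted WA values (natural frequency ~1.25 Hz, damping ~0.7), written here for displacement input with two zeros at the origin; treat the exact gain as an assumption, since the static magnification is variously given as 2080 or 2800:

```python
import numpy as np

# Commonly quoted Wood-Anderson pole-zero model (angular frequencies, rad/s).
paz_wa = {
    'poles': [-6.2832 - 4.7124j, -6.2832 + 4.7124j],  # f0 ≈ 1.25 Hz, h ≈ 0.7
    'zeros': [0j, 0j],                                # displacement input
    'gain': 2080.0,                                   # static magnification (assumed)
}

def wa_amplitude_response(f_hz, paz):
    """|H(f)| of a pole-zero response at frequency f_hz."""
    s = 2j * np.pi * f_hz
    num = np.prod([s - z for z in paz['zeros']])
    den = np.prod([s - p for p in paz['poles']])
    return abs(paz['gain'] * num / den)

for f in (0.1, 0.5, 1.25, 5.0, 10.0):
    print(f'{f:5.2f} Hz: |H| ≈ {wa_amplitude_response(f, paz_wa):7.1f}')
# The response is roughly flat (≈ gain) above ~1.25 Hz and rolls off as f²
# below it — this is what the 1–10 Hz bandpass is approximating.
```

ObsPy can also apply such a pole-zero simulation to a trace directly; the hand-rolled version above just makes the shape of the response explicit.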
Predict#
At teleseismic distance (\(\Delta > 30°\)), \(M_L\) was not designed to work — it was calibrated for \(\Delta \lesssim 700\) km. Do you expect our \(M_L\) to be reliable? What kind of bias would you predict?
# ---- Approximate Wood-Anderson by bandpass filtering displacement ----
# Select horizontal components
st_h = st_disp.copy()
st_h.traces = [tr for tr in st_h if tr.stats.channel[-1] in 'NE12']
st_h.filter('bandpass', freqmin=1.0, freqmax=10.0, corners=4, zerophase=True)
# Measure peak half-amplitude on whichever horizontal is largest
peak_wa_m = 0.0
for tr in st_h:
half_p2p = (tr.data.max() - tr.data.min()) / 2.0
if half_p2p > peak_wa_m:
peak_wa_m = half_p2p
peak_wa_mm = peak_wa_m * 1e3 # m → mm
print(f'Peak WA half-amplitude: {peak_wa_m:.2e} m = {peak_wa_mm:.4f} mm')
# ---- Plot WA-simulated horizontal seismogram ----
tr_plot = st_h[0]
times_wa = tr_plot.times(reftime=ev_time)
fig, ax = plt.subplots()
ax.plot(times_wa, tr_plot.data * 1e3, 'k', lw=0.5)
idx_max = np.argmax(np.abs(tr_plot.data))
ax.plot(times_wa[idx_max], tr_plot.data[idx_max] * 1e3, 'rv', ms=10, label='peak')
ax.set_xlabel('Time after origin (s)')
ax.set_ylabel('Displacement (mm, WA-simulated)')
ax.set_title(f'Wood-Anderson simulated horizontal — {net}.{sta}')
ax.legend()
plt.tight_layout()
plt.show()
# ---- Compute ML (Hutton & Boore 1987 distance correction) ----
# -log10(A0) = 1.110 * log10(r/100) + 0.00189*(r - 100) + 3.0
# Calibrated for r < ~700 km (Southern California).
r = dist_km
neg_logA0 = 1.110 * np.log10(r / 100.0) + 0.00189 * (r - 100.0) + 3.0
ML = np.log10(peak_wa_mm) + neg_logA0 if peak_wa_mm > 0 else np.nan
print(f'Distance correction -log10(A0): {neg_logA0:.2f}')
print(f'Measured ML: {ML:.2f}')
print(f'Catalog magnitude: {ev_mag.magnitude_type} {ev_mag.mag:.1f}')
print(f'\nNote: ML calibration valid for r < ~700 km. At {dist_km:.0f} km this is an extrapolation.')
Explain#
Is your \(M_L\) estimate close to the catalog magnitude? Why or why not?
The Wood-Anderson instrument peaks near 1.25 Hz. For a large event, does this band capture the dominant energy? Relate to the saturation plot from Section 1.
5. Measure \(m_b\) — body-wave magnitude from the P wave#
The body-wave magnitude uses the maximum amplitude-to-period ratio of the P wave on the vertical component in a short-period band (~1 Hz):

\[m_b = \log_{10}\!\left(\frac{A}{T}\right) + Q(\Delta, h),\]

where \(A\) is ground displacement (μm), \(T\) is the dominant period (s), and \(Q(\Delta, h)\) is the Gutenberg-Richter distance-depth correction.
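A quick plausibility check before measuring anything (the amplitude and period below are illustrative assumptions, not measurements; the linearized \(Q\) is the same approximation used later in this section):

```python
import numpy as np

A_um, T_s = 0.5, 1.0      # illustrative teleseismic P amplitude (μm) and period (s)
Q = 0.01 * 70 + 5.9       # linearized Q(Δ, h) at Δ = 70°, shallow source
mb = np.log10(A_um / T_s) + Q
print(f'Q = {Q:.2f}, mb = {mb:.2f}')   # mb ≈ 6.3
```

Sub-micron P amplitudes at teleseismic range already imply magnitude ~6: the \(Q\) correction carries most of the number, and the measurement only nudges it.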
Predict#
At what magnitude does \(m_b\) begin to saturate? (Hint: think about corner frequency vs. the 1 Hz measurement band.)
Should \(m_b\) be measured on the first few cycles of P, or anywhere in the P coda?
# ---- Filter vertical component for mb (0.5–2 Hz) ----
tr_z = st_disp.copy().select(component='Z')[0]
tr_mb = tr_z.copy()
tr_mb.filter('bandpass', freqmin=0.5, freqmax=2.0, corners=4, zerophase=True)
# Window around P: P-2s to P+30s (first few cycles)
t_P_abs = ev_time + t_P
tr_mb_win = tr_mb.copy().trim(t_P_abs - 2, t_P_abs + 30)
times_mb = tr_mb_win.times(reftime=ev_time)
data_mb = tr_mb_win.data
# Measure max |A| in the P window
idx_peak = np.argmax(np.abs(data_mb))
A_mb_m = np.abs(data_mb[idx_peak]) # displacement in m
A_mb_um = A_mb_m * 1e6 # displacement in μm
# Estimate dominant period from zero-crossings bracketing the peak
sign_changes = np.where(np.diff(np.sign(data_mb)))[0]
if len(sign_changes) >= 2:
zc_before = sign_changes[sign_changes < idx_peak]
zc_after = sign_changes[sign_changes > idx_peak]
if len(zc_before) > 0 and len(zc_after) > 0:
T_dom = 2.0 * (times_mb[zc_after[0]] - times_mb[zc_before[-1]])
else:
T_dom = 1.0 # fallback
else:
T_dom = 1.0
print(f'P-wave peak displacement: {A_mb_m:.2e} m = {A_mb_um:.2f} μm')
print(f'Dominant period: {T_dom:.2f} s')
print(f'A/T = {A_mb_um / T_dom:.2f} μm/s')
# ---- Plot filtered P waveform with measurement window ----
tr_mb_full = tr_z.copy()
tr_mb_full.filter('bandpass', freqmin=0.5, freqmax=2.0, corners=4, zerophase=True)
times_full = tr_mb_full.times(reftime=ev_time)
fig, ax = plt.subplots()
ax.plot(times_full, tr_mb_full.data * 1e6, 'k', lw=0.5)
ax.axvline(t_P, color='blue', ls='--', label=f'P arrival ({t_P:.0f} s)')
ax.axvspan(t_P - 2, t_P + 30, alpha=0.15, color='orange', label='measurement window')
ax.plot(times_mb[idx_peak], data_mb[idx_peak] * 1e6, 'rv', ms=10, label='peak $A$')
ax.set_xlabel('Time after origin (s)')
ax.set_ylabel('Displacement (μm)')
ax.set_title(f'Vertical BHZ, 0.5–2 Hz — P window for $m_b$')
ax.set_xlim(t_P - 50, t_P + 100)
ax.legend()
plt.tight_layout()
plt.show()
# ---- Compute mb ----
# Simplified Q(Δ,h) from Veith & Clawson (1972) — linear approximation
# Q ≈ 0.01Δ + 5.9 for 30° < Δ < 90° and shallow sources (h < 70 km)
Q_mb = 0.01 * dist_deg + 5.9
mb = np.log10(A_mb_um / T_dom) + Q_mb
print(f'Q(Δ={dist_deg:.1f}°, h={ev_dep_km:.0f} km) ≈ {Q_mb:.2f}')
print(f'Measured mb: {mb:.2f}')
print(f'Catalog magnitude: {ev_mag.magnitude_type} {ev_mag.mag:.1f}')
Explain#
How does your \(m_b\) compare to the catalog? Is it higher, lower, or similar?
If this event were \(M_w\) 8.0 instead of ~6.5, would \(m_b\) scale proportionally? Why not?
6. Measure \(M_s\) — surface-wave magnitude from Rayleigh waves#
The Prague formula (Vaněk et al., 1962):

\[M_s = \log_{10}\!\left(\frac{A}{T}\right) + 1.66\,\log_{10}(\Delta) + 3.3,\]

where \(A\) is displacement amplitude (μm), \(T \approx 20\) s, and \(\Delta\) is epicentral distance in degrees.
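To get a feel for the distance term before measuring: over the formula's nominal validity range (roughly 20°–160°) the correction grows slowly, so an \(A/T\) of 1 μm/s maps to \(M_s\) between about 5.5 and 7.0 anywhere teleseismic:

```python
import numpy as np

for delta_deg in (20, 50, 100, 160):
    term = 1.66 * np.log10(delta_deg) + 3.3
    # With A/T = 1 μm/s, log10(A/T) = 0, so Ms equals the distance term.
    print(f'Δ = {delta_deg:3d}° → distance term (= Ms for A/T = 1 μm/s): {term:.2f}')
```

Compare this ~1.5-unit spread over the whole globe with the 4.5-unit spread between an \(M_w\) 5 and an \(M_w\) 8 event: the amplitude, not the distance correction, does the work.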
Predict#
Why does \(M_s\) require \(\Delta > 20°\) (so that higher modes have decayed)?
For a deep event (> 300 km), would you expect \(M_s\) to work well? Why?
# ---- Filter for Rayleigh waves (~20 s) ----
tr_R = tr_z.copy()
tr_R.filter('bandpass', freqmin=0.04, freqmax=0.065, corners=4, zerophase=True)
# Window: Rayleigh group velocity 3.0–4.2 km/s
t_R_abs_early = ev_time + t_R_early
t_R_abs_late = ev_time + t_R_late
tr_R_win = tr_R.copy().trim(t_R_abs_early, t_R_abs_late)
times_R = tr_R_win.times(reftime=ev_time)
data_R = tr_R_win.data
# Measure peak amplitude
idx_R_peak = np.argmax(np.abs(data_R))
A_R_m = np.abs(data_R[idx_R_peak])
A_R_um = A_R_m * 1e6
# Dominant period (zero-crossing method)
sign_R = np.where(np.diff(np.sign(data_R)))[0]
if len(sign_R) >= 2:
zc_b = sign_R[sign_R < idx_R_peak]
zc_a = sign_R[sign_R > idx_R_peak]
if len(zc_b) > 0 and len(zc_a) > 0:
T_R = 2.0 * (times_R[zc_a[0]] - times_R[zc_b[-1]])
else:
T_R = 20.0 # fallback
else:
T_R = 20.0
print(f'Rayleigh peak displacement: {A_R_m:.2e} m = {A_R_um:.2f} μm')
print(f'Dominant period: {T_R:.1f} s')
print(f'A/T = {A_R_um / T_R:.2f} μm/s')
# ---- Plot filtered Rayleigh waveform ----
times_R_full = tr_R.times(reftime=ev_time)
fig, ax = plt.subplots()
ax.plot(times_R_full, tr_R.data * 1e6, 'k', lw=0.5)
ax.axvspan(t_R_early, t_R_late, alpha=0.15, color='green',
label=f'Rayleigh window ({t_R_early:.0f}–{t_R_late:.0f} s)')
ax.plot(times_R[idx_R_peak], data_R[idx_R_peak] * 1e6, 'rv', ms=10, label='peak $A$')
ax.set_xlabel('Time after origin (s)')
ax.set_ylabel('Displacement (μm)')
ax.set_title(f'Vertical BHZ, 15–25 s period — Rayleigh window for $M_s$')
ax.set_xlim(t_R_early - 100, t_R_late + 200)
ax.legend()
plt.tight_layout()
plt.show()
# ---- Compute Ms (Prague formula) ----
Ms = np.log10(A_R_um / T_R) + 1.66 * np.log10(dist_deg) + 3.3
print(f'Distance term: 1.66 * log10({dist_deg:.1f}°) + 3.3 = {1.66 * np.log10(dist_deg) + 3.3:.2f}')
print(f'Measured Ms: {Ms:.2f}')
print(f'Catalog magnitude: {ev_mag.magnitude_type} {ev_mag.mag:.1f}')
Explain#
How does \(M_s\) compare to \(M_L\) and \(m_b\) for this event?
\(M_s\) measures at ~20 s period. For an event with \(f_c \approx 0.05\) Hz, is the amplitude still on the low-frequency plateau of the source spectrum?
7. Compare all three magnitudes#
We now compare \(M_L\), \(m_b\), and \(M_s\) to each other and to the catalog value.
# ---- Summary table ----
results = pd.DataFrame({
'Magnitude type': ['$M_L$', '$m_b$', '$M_s$', f'Catalog ({ev_mag.magnitude_type})'],
'Value': [ML, mb, Ms, ev_mag.mag],
'Band / method': [
'1–10 Hz, WA horizontal',
'0.5–2 Hz, P vertical',
'15–25 s, Rayleigh vertical',
'Agency-reported'
]
})
results['Value'] = results['Value'].round(2)
display(results)
# ---- Bar chart comparison ----
fig, ax = plt.subplots(figsize=(6, 4))
labels = ['$M_L$', '$m_b$', '$M_s$']
values = [ML, mb, Ms]
colors = ['#e07b54', '#5485b0', '#6ab06a']
bars = ax.bar(labels, values, color=colors, width=0.5, edgecolor='k')
# Reference line: catalog magnitude
ax.axhline(ev_mag.mag, color='k', ls='--', lw=1.5,
label=f'Catalog {ev_mag.magnitude_type} = {ev_mag.mag:.1f}')
for bar, val in zip(bars, values):
ax.text(bar.get_x() + bar.get_width() / 2, bar.get_height() + 0.05,
f'{val:.2f}', ha='center', va='bottom', fontsize=11)
ax.set_ylabel('Magnitude')
ax.set_title('Measured magnitudes vs catalog')
ax.legend()
ax.set_ylim(min(values) - 1, max(values) + 1)
plt.tight_layout()
plt.show()
Explain#
Which magnitude is closest to the catalog value? Is that expected given the event size?
\(M_L\) at teleseismic distance — why is this a poor choice? What would need to change?
Look back at the saturation plot from Section 1. Where does this event sit on those curves?
Part III — Catalogs: Multi-Event Magnitude Comparison#
Individual measurements are instructive, but patterns emerge when we compare multiple magnitude types across many events. This is how we observe saturation “in the wild.”
8. Catalog exploration#
We pull a set of moderate-to-large events from the FDSN catalog and extract every magnitude estimate that each agency has computed for each event. Then we compare pairs like \(m_b\) vs \(M_w\) and \(M_s\) vs \(M_w\).
Predict#
Do you expect every event to have \(M_w\) listed? Why or why not?
If you plot \(m_b\) vs \(M_w\), do you expect a slope of 1 across the full range?
# ---- Fetch events with a range of magnitudes ----
t0_cat = UTCDateTime() - 365 * 86400
t1_cat = UTCDateTime()
cat2 = client.get_events(starttime=t0_cat, endtime=t1_cat,
minmagnitude=5.5, limit=30)
print(f'Events retrieved: {len(cat2)}')
# Build a tidy DataFrame of all magnitude entries
rows = []
for i, ev in enumerate(cat2):
origin = ev.preferred_origin() or ev.origins[0]
for mag in ev.magnitudes:
rows.append({
'event_index': i,
'origin_time': origin.time.datetime,
'mag_type': mag.magnitude_type,
'mag': mag.mag,
'author': getattr(mag.creation_info, 'author', None) if mag.creation_info else None,
'agency': getattr(mag.creation_info, 'agency_id', None) if mag.creation_info else None
})
df = pd.DataFrame(rows)
print(f'Total magnitude entries: {len(df)}')
print(f'Magnitude types present: {sorted(df["mag_type"].unique())}')
df.head(10)
# ---- Compact table per event ----
for idx in sorted(df['event_index'].unique())[:5]: # show first 5
sub = df[df['event_index'] == idx]
print(f'\n=== Event {idx} — {sub["origin_time"].iloc[0]} ===')
display(sub[['mag_type', 'mag', 'agency', 'author']].sort_values('mag_type'))
# ---- Pivot: one row per event, columns for each magnitude type ----
df_pivot = (df.sort_values(['event_index', 'mag_type'])
.groupby(['event_index', 'mag_type'])
.first()
.reset_index())
wide = df_pivot.pivot(index='event_index', columns='mag_type', values='mag')
wide['origin_time'] = df_pivot.groupby('event_index')['origin_time'].first()
wide.head()
# ---- Scatter plots: magnitude pairs ----
def scatter_mag_pair(wide, xcol, ycol):
"""Scatter one magnitude type vs another, with 1:1 reference line."""
if xcol not in wide.columns or ycol not in wide.columns:
print(f'{xcol} or {ycol} not present in this catalog sample.')
return
sub = wide[[xcol, ycol]].dropna()
if len(sub) < 3:
print(f'Not enough data points for {xcol} vs {ycol} (need ≥3).')
return
fig, ax = plt.subplots(figsize=(5, 5))
ax.scatter(sub[xcol], sub[ycol], s=40, edgecolors='k', linewidths=0.5)
lims = [min(sub[xcol].min(), sub[ycol].min()) - 0.3,
max(sub[xcol].max(), sub[ycol].max()) + 0.3]
ax.plot(lims, lims, 'k--', lw=0.8, label='1:1')
ax.set_xlabel(xcol)
ax.set_ylabel(ycol)
ax.set_title(f'{ycol} vs {xcol} (n={len(sub)})')
ax.set_aspect('equal')
ax.legend()
plt.tight_layout()
plt.show()
scatter_mag_pair(wide, 'Mw', 'mb')
scatter_mag_pair(wide, 'Mw', 'Ms')
scatter_mag_pair(wide, 'Mw', 'ML')
Explain#
Which magnitude types appear most consistently in the catalog?
Do you see any flattening in the \(m_b\) vs \(M_w\) scatter? At what magnitude does it begin?
If an agency reports multiple magnitudes for the same event, what does that tell you about the measurement process?
Check#
Saturation appears as a flattening: \(m_b\) stops increasing even as \(M_w\) grows.
\(M_s\) should track \(M_w\) longer before flattening, because it samples at 20 s period.
9. Wrap-up questions#
Answer in 2–3 sentences each:
Preferred magnitude for hazard: Why is \(M_w\) typically preferred over \(M_L\) or \(m_b\) in seismic hazard models?
Small-event monitoring: For rapid monitoring of small events in a dense local network, why might \(M_d / M_c\) (coda duration) be attractive?
Inter-network disagreement: If two agencies systematically disagree on \(M_L\) for the same events, list two plausible calibration-related reasons.
Magnitude differences as diagnostics: Give one scenario where the difference between magnitude types (\(m_b - M_w\), or \(M_s - m_b\)) carries useful information about the source.
Saturation and warning: If an earthquake early-warning system reports \(m_b = 6.5\) within 20 seconds, why should you treat this as a lower bound on the actual event size?
Check (hints)#
\(M_L\) saturates around \(M_w \sim 6.5\) (the ~1 Hz WA passband lies above the corner frequency of larger events).
\(m_b\) saturates around \(M_w \sim 6.5\) (same reason: short-period measurement).
\(M_s\) saturates around \(M_w \sim 8\) (20 s period eventually above corner frequency of great earthquakes).
Only \(M_w\) (moment magnitude) does not saturate — it measures the DC spectral level (\(M_0\)).
\(m_b - M_w > 0\) can indicate high stress drop; \(m_b \gg M_s\) can discriminate explosions from earthquakes.
📖 Shearer (2009), §9.7 — magnitude scales, saturation, and the omega-square model.