Quantification of Uncertainty in Probabilistic Storm Surge Models
Norberto C. Nadal-Caraballo, Ph.D.
Team: Victor Gonzalez, P.E.; Jeffrey A. Melby, Ph.D.; Amanda B. Lewis; Efrain Ramos-Santiago
Coastal and Hydraulics Laboratory, US Army Engineer R&D Center
2nd Annual NRC PFHA Research Program Workshop, North Bethesda, MD – January 23-25, 2017
BUILDING STRONG®
Quantification of Uncertainty in Probabilistic Storm Surge Models
Background
► The present study is part of the U.S. NRC's Probabilistic Flood Hazard Assessment (PFHA) research plan.
► Support risk-informed licensing and oversight guidance and tools for assessment of flooding hazards at nuclear power plants.
► Evaluate uncertainty associated with the data, models, and methods used in probabilistic storm surge models for coastal flood hazard assessment.
► Storm surge hazard is expressed as a family of hazard curves representing epistemic uncertainty.
► Annual exceedance probabilities (AEPs) of interest for nuclear power plants, including AEPs beyond the state of practice for flood mapping (e.g., 10⁻⁴ to 10⁻⁶).
Quantification of Uncertainty in Probabilistic Storm Surge Models
Computation of Storm Surge Hazard
► The estimation of storm surge hazard using historical observations in hurricane-prone areas is limited by a lack of adequate data.
► This has led to the development of methods that rely on the statistical characterization of the tropical cyclone (TC) forcing and subsequent modeling of the storm surge response.
► These methods have evolved into sophisticated joint probability approaches that allow for comprehensive quantification of uncertainty.
► The joint probability method with optimal sampling (JPM-OS) has become the standard of practice for quantifying storm surge hazard in coastal areas affected by TCs.
► Other methods include global climate models (GCM) with downscaling, and Monte Carlo simulation (MCS) methods.
General Overview and Logic Tree Approach

Probabilistic Storm Surge Model (logic tree branches):
► JPM
  • Reference
  • Optimal Sampling: Hybrid/GPM, Response Surface, Bayesian Quadrature
  • Stochastic Track Method
► GCM Downscaling
  • Parametric
  • Non-Parametric
► Monte Carlo Simulation
  • MCLC: Parametric, Non-Parametric
  • MC Integration
JPM Integral:

λ_{r(x̄)>r} = λ ∬ P[r(x̄) + ε > r | x̄, ε] f_x̄(x̄) f_ε(ε) dx̄ dε ≈ Σᵢⁿ λᵢ P[r(x̄ᵢ) + ε > r | x̄ᵢ, ε]

where:
λ_{r(x̄)>r} = AEP of TC response r due to forcing vector x̄
x̄ = f(x_o, θ, Δp, R_max, V_t)
λ = SRR (storms/yr/km)
λᵢ = probability mass (storms/yr), or λ·pᵢ, with pᵢ = product of discrete probability and TC track spacing (km)
P[r(x̄ᵢ) + ε > r | x̄ᵢ, ε] = conditional probability that storm i with parameters x̄ᵢ generates a response larger than r
ε = unbiased error of r
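The discrete JPM sum above can be sketched in a few lines. The storm rates, modeled responses, and error term below are assumed toy values for illustration, not study inputs:

```python
import math
import random

# Sketch of the discrete JPM sum: the AEP of exceeding surge level r is the
# sum over synthetic storms of the probability mass lambda_i times the
# probability that the modeled response plus a Gaussian error exceeds r.
random.seed(0)
n = 1000
lam_i = [1e-4] * n                                       # probability masses (storms/yr), assumed
r_i = [random.gammavariate(2.0, 1.2) for _ in range(n)]  # modeled peak surge (m), assumed
sigma = 0.61                                             # std. dev. of error term (m), assumed

def aep(r, lam_i, r_i, sigma):
    """lambda_{r(x)>r} ~= sum_i lambda_i * P[r(x_i) + eps > r], eps ~ N(0, sigma^2)."""
    total = 0.0
    for lam, ri in zip(lam_i, r_i):
        p_exceed = 0.5 * math.erfc((r - ri) / (sigma * math.sqrt(2.0)))
        total += lam * p_exceed
    return total

# Hazard curve: AEP evaluated at a set of surge levels (m)
hazard_curve = [(r, aep(r, lam_i, r_i, sigma)) for r in (1, 2, 4, 6, 8)]
```

Because the error term is integrated analytically per storm, the sum stays monotone in r and bounded above by the total storm rate.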
Probabilistic Storm Surge Model Description Example: JPM
The application of each surge model typically involves several types of analyses, and for each type of analysis several approaches may be available. Using JPM as an example:
Model: JPM
Method (Task 4): JPM-Reference; JPM-OS (RS, BQ, Hybrid); JPM-STM
Data (All tasks): Observed (HURDAT); Reanalysis; Synthetic (GCM, STM)
SRR (Task 2): Models (GKF, UKF, EKF); Screening (period of record, intensity)
Marginal/Conditional (Task 3): Parameterization; Dependencies; Statistical approach (parametric vs. non-parametric)
Integration (Task 5): Distribution of error; JPM integral (standard discretization, random sampling, Gaussian redistribution)
Project Tasks
Task 1: Literature Review
Task 2: Investigation of Epistemic Uncertainties in Storm Recurrence Rate Models
Task 3: Explore Technically Defensible Data, Models, and Methods for Defining Joint Probability of Storm Parameters
Task 4: Explore Technically Defensible Models and Methods for Generating Synthetic Storm Simulation Sets
Task 5: Investigate Approaches for Probabilistic Modeling of Numerical Surge Simulation Errors
Task 6: Synthesis
Task 7: Transfer of Knowledge
Task 8: Final Report Preparation
Classification of Uncertainty
Two classifications of uncertainty are typically recognized:
► Aleatory – natural randomness of a process; not reducible.
► Epistemic – lack of knowledge about the validity of models and data for the representation of the real system; can be reduced.

The classification scheme can be subjective. Traditional uncertainty classification in JPM-OS models:
► The epistemic uncertainty is related to the specific methods and models used in each study.
► Limited to the inclusion of uncertainty as an error term in the JPM integral.
  • e.g., meteorological modeling, hydrodynamic modeling, idealized storm track variation, and limited variation in wind and pressure profiles.
Classification of Uncertainty (cont.)
Treatment of uncertainty in the present study:
► Follows probabilistic seismic hazard analysis (PSHA) practice.
► Differences between a given numerical model and the natural phenomenon are prevalent (error term): aleatory.
► It is in the selection and application of alternative data, methods, and models that the uncertainty can be reduced: epistemic.
► Epistemic uncertainty is quantified and propagated through a logic tree approach.
► General study objectives regarding uncertainty:
  • Identification of technically defensible data sources, models, and methods.
  • Assess whether estimates derived from different data, models, and methods need to be carried forward for evaluation of epistemic uncertainty, discarding those not considered technically defensible.
Task 2: Epistemic Uncertainty in SRR Models
Task Description
► Data sources and methods used for the computation of site-specific storm recurrence rate (SRR) models.
► Topics:
  • Technically defensible data sources for use in site-specific studies (e.g., NOAA's HURDAT).
  • Appropriate models for estimation of SRR, e.g., Gaussian kernel function (GKF), uniform kernel function (UKF), and Epanechnikov kernel function (EKF).
  • Methods for screening historical data and assessing geographic variation in support of site-specific estimation of SRR (e.g., selection of historical period of record and TC binning by intensity).
  • Investigate SRR aleatory uncertainty and whether SRR estimates derived from multiple datasets or methods need to be propagated.
What is a Storm Recurrence Rate?
SRR Definition► Measure of the frequency with which a particular location is expected to
be affected by TCs.► Typically expressed in units of storms per year per unit distance along
the shoreline (e.g., storms/yr/km).► Can also be stated as number of storms per year passing within a
radius of x km, e.g., SRR200 km (storms/yr).
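A site-specific SRR estimate of the kernel type discussed in the following slides can be sketched in the spirit of Chouinard et al. (1997). The track distances, record length, and kernel size below are hypothetical:

```python
import math

# Sketch of a Gaussian-kernel SRR estimate: each historical TC track is
# weighted by a Gaussian kernel of its perpendicular distance (km) from the
# site, and the weighted count is divided by the period of record (yr).
def srr_gaussian(distances_km, record_years, kernel_size_km):
    """Site-specific SRR in storms/yr/km using a Gaussian distance kernel."""
    h = kernel_size_km
    weight = sum(math.exp(-0.5 * (d / h) ** 2) / (h * math.sqrt(2.0 * math.pi))
                 for d in distances_km)
    return weight / record_years

# Hypothetical example: 25 track distances over a 100-yr record, 200-km kernel
dists = [5, 40, 80, 120, 160, 200, 240, 280] * 3 + [10]
srr = srr_gaussian(dists, 100.0, 200.0)  # storms/yr/km
```

Nearby tracks contribute most of the weight, so the estimate varies smoothly with kernel size, which is the sensitivity examined in the experiments below.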
Task 2: Epistemic Uncertainty in SRR Models
Numerical Experiments
► Uncertainty comparison and quantification of three models for the estimation of SRR:
  • Uniform Kernel Function (capture zone approach)
  • Gaussian Kernel Function (Chouinard et al. 1997)
  • Epanechnikov Kernel Function
► SRR variability related to selection of the optimal kernel size.
► SRR variability arising from selection of the period of record.
► SRR variability through the analysis of subsets of data via bootstrap resampling.
► Observation or measurement uncertainty in TC data.
► Effect of data partition (by TC intensity) on SRR uncertainty.
Models for Estimation of SRR
Comparison of Kernel Functions
► UKF estimates tend to be unstable and highly sensitive to data clusters.
► GKF exhibits the highest smoothing, while the EKF curve is closer to the UKF curve.
► GKF considers storms beyond the kernel size distance. The optimal kernel size should be large enough to maximize use of the data while avoiding sampling from multiple TC populations.
Task 2: Epistemic Uncertainty in SRR Models
Findings
► The GKF is a better method for estimating SRR than the UKF (capture zone).
  • GKF can consider a larger number of storms than the UKF model.
  • For the same ranges of optimal uniform and Gaussian kernel sizes, GKF estimates exhibited a reduced coefficient of variation (CV) compared to UKF.
► The lowest SRR uncertainties were observed in North Carolina, Florida, Mississippi, and Louisiana, while the U.S. coast north of Virginia exhibited the largest uncertainties.
► Typically, larger samples result in a reduction of uncertainty and therefore in reduced sensitivity to model decisions.
Task 2: Epistemic Uncertainty in SRR Models
Findings (cont.)
► SRR200 km for low, high, and critical intensity TCs.
  • Low intensity (28-48 hPa), high intensity (48-68 hPa), critical intensity (≥68 hPa).
Task 2: Epistemic Uncertainty in SRR Models
Findings (cont.)► Variation of SRR200 km with record length.
Task 2: Epistemic Uncertainty in SRR Models
Findings (cont.)
► Total uncertainty was calculated as the combination of the individual contributions from sampling, period of record, kernel size, and observational data.
► In general, sampling uncertainty was the main contributor to total uncertainty, followed by period of record selection; observational uncertainty was the lowest contributor.
Percent of Total Uncertainty

Type of Uncertainty      Δp ≥ 28 hPa   28 ≤ Δp < 48 hPa   48 ≤ Δp < 68 hPa   Δp ≥ 68 hPa
Sampling uncertainty         65               62                 71               75
Period of record             19               12                 12                7
Gaussian kernel size         15               14                  3                4
Observational data            1               12                 14               14
Task 3: Data, Models and Methods for Defining Joint Probability of Storm Parameters
Task Description
► Identification of technically defensible TC parameter data sources, screening methods, and parameterization schemes for development of probability distributions.
► Topics:
  • Technically defensible data sources for use in site-specific studies, including observational, reanalysis, and synthetic data sources.
  • Data screening methods for development of probability distributions, and evaluation criteria for selecting TCs from historical records or synthetic datasets.
  • Selection of probability distributions, associated uncertainties, parameter correlations, and adequacy of forcing parameters.
  • Identification of alternate data and methods, and evaluation of which of these need to be considered to account for epistemic uncertainty.
Task 3: Data, Models and Methods for Defining Joint Probability of Storm Parameters
Example of Logic Tree Approach
Task 3: Data, Models and Methods for Defining Joint Probability of Storm Parameters
Numerical Experiments
► Basic unit of analysis: fitting of univariate distributions to TC parameters θ, Δp, Rmax, Vt.
► The basic analysis (3,500+ fits) was performed to evaluate different methods, models, and data represented in the logic tree branches:
  • Parameterization: standard (Δp) or alternate (Wmax)
  • Data source: HURDAT2, GCM synthetics, EBTRK reanalysis
  • Landfalling or bypassing
  • Statistical models: parametric, non-parametric
► Analysis by intensity: all TCs, high intensity, and low intensity.
► Assessment of fits: goodness-of-fit tests, RMSD, magnitude of sampling uncertainty, visual inspection of plots.
Task 3: Data, Models and Methods for Defining Joint Probability of Storm Parameters
Parametric Distributions► TC parameters were fit using Generalized Extreme Value,
Generalized Pareto, Gumbel, Normal, Lognormal, Weibull, Gamma, and Exponential distributions; Δp example:
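Comparing candidate fits can be sketched with closed-form maximum-likelihood estimates, keeping the example dependency-free. The small Δp sample below is assumed for illustration (the study's actual fits covered eight distributions and 3,500+ cases):

```python
import math

# Sketch of comparing two candidate parametric fits for Δp (hPa) by maximum
# likelihood; the larger log-likelihood suggests the better-fitting family
# (with AIC or similar adjusting for parameter counts in practice).
dp = [30, 33, 35, 38, 41, 45, 48, 52, 57, 63, 70, 79, 91]  # assumed Δp sample

def loglik_exponential(x):
    rate = len(x) / sum(x)  # closed-form MLE for the exponential rate
    return sum(math.log(rate) - rate * v for v in x)

def loglik_lognormal(x):
    logs = [math.log(v) for v in x]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)  # MLE variance of log-data
    return sum(-math.log(v * math.sqrt(2.0 * math.pi * var))
               - (math.log(v) - mu) ** 2 / (2.0 * var) for v in x)

better_fit = "lognormal" if loglik_lognormal(dp) > loglik_exponential(dp) else "exponential"
```

In the actual analysis, goodness-of-fit tests, RMSD, and visual inspection of the low-frequency tails would accompany this kind of likelihood comparison.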
Task 3: Data, Models and Methods for Defining Joint Probability of Storm Parameters
Comparison of Rmax probability distributions
Vickery and Wadhera (2008) Stochastic Model
EBTRK (Demuth et al. 2006)
GCM Downscaling(Lin et al. 2012)
Similar curves resulting from the Vickery model and EBTRK reanalysis.
GCM plot suggests that extratropical transition of TCs is not being adequately represented.
Task 3: Data, Models and Methods for Defining Joint Probability of Storm Parameters
Non-Parametric Distributions► TC Parameters were fit using the following kernel functions:
• Normal• Epanechnikov• Uniform• Triangular
Rmax data from EBTRK Reanalysis(Demuth et al. 2006)
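The kernel choices above can be sketched as interchangeable functions in a simple kernel density estimate. The Rmax sample and bandwidth below are hypothetical:

```python
import math

# Sketch of a kernel density estimate for Rmax with interchangeable kernels,
# mirroring the non-parametric fits; each kernel integrates to 1.
KERNELS = {
    "normal": lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi),
    "epanechnikov": lambda u: 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0,
    "uniform": lambda u: 0.5 if abs(u) < 1.0 else 0.0,
    "triangular": lambda u: (1.0 - abs(u)) if abs(u) < 1.0 else 0.0,
}

def kde(x, data, bandwidth, kernel="normal"):
    """Non-parametric density estimate at x from the sample `data`."""
    k = KERNELS[kernel]
    return sum(k((x - d) / bandwidth) for d in data) / (len(data) * bandwidth)

rmax_km = [20, 25, 28, 33, 37, 42, 48, 55, 65, 80]  # hypothetical Rmax sample (km)
density_at_40 = kde(40.0, rmax_km, bandwidth=10.0, kernel="epanechnikov")
```

Swapping the kernel name reproduces the four variants compared in the slide; the resulting densities all integrate to one but differ in smoothness near data clusters.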
Task 3: Data, Models and Methods for Defining Joint Probability of Storm Parameters
General Findings
► The most relevant factor in choosing a distribution type was how well it described the low-frequency tails.
► More than one statistical distribution could be valid for a given TC parameter.
► When more than one distribution was viable, a comparison of the fits usually did not reveal significant differences.
► The sampling technique used for the generation of synthetic TCs may lessen the significance of selecting a given probability distribution for large discretization intervals.
► The judgment of whether to carry forward a given dataset, method, or model was found to be highly dependent on the location.
Task 3: Data, Models and Methods for Defining Joint Probability of Storm Parameters
General Findings (cont.)
► The main contributor to uncertainty associated with the probability distributions was the quantity of data.
  • When TCs were classified as high and low intensity, uncertainty for high-intensity TCs increased with latitude.

High-intensity lognormal distribution fit of Rmax EBTRK data for various Atlantic coast locations.
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Task Description► Capture full range of technically defensible data and methods for
generating synthetic storm sets required to fully characterize and propagate uncertainties in storm surge estimates.
► Topics:• Evaluation of discretization methods used to generate synthetic
storms for numerical storm surge modeling. • Effect of the refinement of the discretization of the parameter space
on uncertainty.• Analyze the applicability of each method over a wide range of
conditions and evaluate whether criteria can be established to assess situations where one method is superior to others.
• Evaluate the merits of studied approaches and analyze whether estimates derived from different methods need to be considered in the quantification of the epistemic uncertainty.
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Numerical Experiments► Use results from North Atlantic Coast Comprehensive Study
(Nadal-Caraballo et al. 2015) where hybrid JPM-OS was used.• Compare to JPM-OS-RS and JPM-OS-BQ
► Generate JPM “Reference” set or “Gold Standard” as basis for comparison with other methods and TC sets.
► Perform various Monte Carlo simulations for development of storm surge hazard curves:
• Monte Carlo Life-Cycle (MCLC)• Monte Carlo Integration (MCI)
► Develop storm surge hazard curve using TC parameter set from existing GCM downscaling study performed by Lin et al. 2012 for The Battery, NY.
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Reference Set
► A large set of synthetic storms was generated to represent the traditional JPM approach.
► A Gaussian process metamodel (GPM) (Jia et al. 2016) was used to develop tens of thousands of TCs.
  • The GPM is conceptually similar to the RS approach, where the initial discretization of the joint probability distribution is refined by regression or interpolation of storm surge from additional TC parameter combinations.
  • The GPM used in this study was trained using the 1,050 synthetic TCs developed as part of the NACCS (Nadal-Caraballo et al. 2015).
► A total of 74,430 TCs were generated based on refined discretization and using unique parameter combinations.
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Reference Set – Hazard Curve
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Hybrid JPM-OS Methodology (NACCS)
► Uniform discretization: Δp and θ
► Bayesian Quadrature (BQ): Rmax and Vt
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Monte Carlo Life-Cycle► Univariate distributions of TC parameters were sampled for a
1,000,000-yr period, which resulted in 200,000+ TCs. ► No probability masses are required for the TCs since they are
sampled based on their likelihood of occurrence and the joint probability of their parameters.
► Responses were evaluated through the GPM previously developed for the JPM Reference set.
► The storm surge hazard curve consists of the resulting empirical distribution (Weibull plotting position).
► A bootstrap resampling procedure using replicated storm surge values with added discretized uncertainty was used to calculate mean hazard curve and to account for uncertainty.
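The life-cycle sampling steps above can be sketched as follows. The SRR, simulated period, and surge response model are assumed toy values, not those of the study (which sampled the full TC parameter vector and evaluated the trained GPM over 1,000,000 years):

```python
import numpy as np

# Hypothetical MCLC sketch: simulate storm occurrences over a long period,
# evaluate a stand-in surge response, and form the empirical hazard curve
# with the Weibull plotting position on annual maxima.
rng = np.random.default_rng(1)
years = 100_000                               # simulated period (assumed, shortened)
counts = rng.poisson(0.2, size=years)         # storms per year; SRR assumed
surges = 8.0 * rng.random(counts.sum()) ** 3  # toy stand-in for the GPM response (m)

# Annual maxima of the simulated surges
annual_max = np.zeros(years)
idx = np.repeat(np.arange(years), counts)     # year index of each storm
np.maximum.at(annual_max, idx, surges)

# Empirical hazard curve via the Weibull plotting position
sorted_max = np.sort(annual_max)[::-1]
aep = np.arange(1, years + 1) / (years + 1)   # AEP of each ranked annual maximum
hazard_curve = list(zip(sorted_max, aep))
```

No probability masses appear anywhere: each storm's likelihood is carried implicitly by how often its parameters are sampled, which is the point made in the slide.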
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
MCLC – Hazard Curve
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Monte Carlo Integration (MCI) (Wyncoll and Gouldby 2015)
► Probabilities are calculated as the fraction of TCs with response greater than each of a set of surge elevation bins. No probability masses are used.
► P(C > c) ≈ (L_c / L) · λ, where L_c is the number of Monte Carlo realizations that exceed c, L is the total number of Monte Carlo realizations, and λ is the sample intensity (storms/yr).
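The MCI estimator can be written directly from that formula. The surge realizations and sample intensity below are assumed toy values, not study outputs:

```python
import numpy as np

# Sketch of Monte Carlo integration: the exceedance rate at each surge bin c
# is the fraction of realizations above c, scaled by the sample intensity.
rng = np.random.default_rng(0)
surges = rng.gamma(2.0, 1.2, size=200_000)  # Monte Carlo surge realizations (m), assumed
lam = 0.2                                   # sample intensity, storms/yr (assumed)
bins = np.linspace(0.0, 10.0, 41)           # surge elevation bins (m)

# rate(c) = lam * L_c / L for each bin c
rates = np.array([(surges > c).mean() * lam for c in bins])
```

With enough realizations the curve is smooth at common AEPs; at the rarest levels only a handful of realizations exceed the bin, which is why the MCI tail in the comparison table diverges from the other methods.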
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
MCI – Hazard Curve
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
GCM Downscaling (Lin et al. 2012)
► Storm surges are determined using a GCM-driven statistical/deterministic hurricane model with hydrodynamic surge models.
► Synthetic TC tracks are generated according to large-scale atmospheric and ocean environments rather than historical TCs.
► The data set consists of 1,470 tracks out of an original 5,000 modeled tracks covering the period 1970-2010.
► The storm surge responses were simulated from the forcing parameters of the GCM tracks and used as input to the GPM previously trained with NACCS results.
► A stochastic simulation technique (SST) consisting of combined empirical and GPD fits was applied to the storm surge values.
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Comparison of Hazard Curves► GCM Downscaling results published by Lin et al. 2012► TCs simulated from GCM forcing, using NACCS-trained GPM.
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
GCM Downscaling – Hazard Curve
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Comparison of Results

Storm Surge (m) at The Battery, NY

                                 Annual Exceedance Probability (AEP)
Method                           10⁻²    10⁻³    10⁻⁴    10⁻⁵    10⁻⁶
JPM-Reference (74,430 TCs)        3.7     5.4     6.4     7.0     7.5
JPM-OS (1,050 TCs)                3.0     4.7     6.2     6.9     7.5
MCLC (211,997 TCs)                3.3     4.8     5.9     6.7     7.5
MCI (211,997 TCs)                 3.3     4.8     5.9     6.7     8.3
GCM Downscaling (1,470 TCs)       1.8     3.1     4.1     4.9     5.5

All surge only, with uncertainty = max(20%, 0.61m)
Task 4: Data, Models, and Methods for Generating Synthetic Storm Simulation Sets
Comparison of Results

Percentage Difference in Storm Surge at The Battery, NY

                                 Annual Exceedance Probability (AEP)
Method                           10⁻²    10⁻³    10⁻⁴    10⁻⁵    10⁻⁶
JPM-Reference (74,430 TCs)         -       -       -       -       -
JPM-OS (1,050 TCs)               -18     -13      -3      -2       0
MCLC (211,997 TCs)                -9     -10      -8      -4       0
MCI (211,997 TCs)                 -9     -10      -8      -5      10
GCM Downscaling (1,470 TCs)      -50     -42     -35     -30     -26

All surge only, with uncertainty = max(20%, 0.61m)
Task 5: Probabilistic Modeling of Numerical Surge Simulation Errors
Task Description (in progress)
► Investigate approaches for the probabilistic modeling of numerical storm surge simulation errors and for the treatment of the resulting uncertainty in storm surge hazard estimates.
► Topics:
  • Evaluation of methods for distributing uncertainty.
  • Effect of neglecting to include the error term.
  • The error associated with exclusion, or simplified inclusion, of terms from the JPM-OS integral to reduce dimensionality.
  • Errors due to the lack of skill of numerical meteorological and storm surge modeling.
  • Evaluation of alternate methods for distributing the uncertainty in the joint probability integral.
Task 5: Probabilistic Modeling of Numerical Surge Simulation Errors
Numerical Experiments
► Compare different approaches to the characterization of uncertainty:
  • Constant uncertainty (e.g., 0.61m)
  • Proportional uncertainty (e.g., 20%)
  • Combined constant and proportional uncertainty [e.g., max(20%, 0.61m)]
► Two basic discretization approaches for the uncertainty will be tested for JPM and for Monte Carlo simulation methods:
  • Representation of the Gaussian distribution using X discrete values and replicating storm surges X times, prior to performing JPM integration.
  • Randomly sampling X values from the Gaussian distribution and repeating the process stated above.
► The significance of the number of discrete values (or random samples) from the Gaussian distribution will be evaluated by comparing results using 30, 100, 300, 1,000, and 3,000 values.
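The discrete-replication approach above can be sketched as follows. The storm surges, probability masses, and number of discrete values are assumed for illustration:

```python
from statistics import NormalDist

# Sketch of uncertainty integration by discretizing the Gaussian error term
# into n equal-probability values and replicating each storm across them,
# splitting its probability mass, prior to the JPM summation.
def discrete_gaussian(n, sigma):
    """n equal-probability discrete values of N(0, sigma^2) (midpoint quantiles)."""
    nd = NormalDist(0.0, sigma)
    return [nd.inv_cdf((i + 0.5) / n) for i in range(n)]

def replicate_with_error(surges, rates, eps_values):
    """Replicate each (surge, rate) pair across the error values, splitting the rate."""
    return [(s + e, lam / len(eps_values))
            for s, lam in zip(surges, rates) for e in eps_values]

# Hypothetical three-storm example with a 30-value discretization
surges = [2.5, 4.0, 5.5]      # modeled peak surge (m), assumed
rates = [1e-2, 1e-3, 1e-4]    # probability masses (storms/yr), assumed
eps = discrete_gaussian(30, sigma=0.61)
jpm_set = replicate_with_error(surges, rates, eps)

# AEP of exceeding 5.0 m from the replicated set
aep_5m = sum(lam for s, lam in jpm_set if s > 5.0)
```

The random-sampling variant in the second bullet replaces `discrete_gaussian` with draws from the same normal distribution; the comparison tables that follow measure how few values either variant can use before the rare-AEP tail degrades.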
Task 5: Probabilistic Modeling of Numerical Surge Simulation Errors
Results – Characterization of Uncertainty
Storm Surge (m) at The Battery, NY

                                 Annual Exceedance Probability (AEP)
JPM-OS Uncertainty (Gaussian)    10⁻²    10⁻³    10⁻⁴    10⁻⁵    10⁻⁶
max(20%, 0.61m)                   3.0     4.7     6.2     6.9     7.5
Constant (0.61m)                  3.1     4.7     6.2     6.9     7.5
Proportional (20%)                3.0     4.7     6.7     8.0     9.0

Percentage Difference in Storm Surge at The Battery, NY

                                 Annual Exceedance Probability (AEP)
JPM-OS Uncertainty (Gaussian)    10⁻²    10⁻³    10⁻⁴    10⁻⁵    10⁻⁶
max(20%, 0.61m)                    -       -       -       -       -
Constant (0.61m)                   4       0       0       0       0
Proportional (20%)                 0       1       8      15      20
Task 5: Probabilistic Modeling of Numerical Surge Simulation Errors
Results – Characterization of Uncertainty
Combined Uncertainty = max(20%, 0.61m)
Constant Uncertainty = 0.61 m
Proportional Uncertainty = 20%
Task 5: Probabilistic Modeling of Numerical Surge Simulation Errors
Results – Integration of Uncertainty
Storm Surge (m) at The Battery, NY

                                   Annual Exceedance Probability (AEP)
JPM-OS Uncertainty (Gaussian)      10⁻²    10⁻³    10⁻⁴    10⁻⁵    10⁻⁶
Discrete (444 values)               3.0     4.7     6.2     6.9     7.5
Discrete (30 values)                3.0     4.7     6.2     7.0     NaN
Discrete (3,000 values)             3.0     4.7     6.2     6.9     7.4
Random (444 values)                 3.1     4.8     6.3     7.1     7.7
Random (30 values)                  2.9     4.7     6.1     7.5     NaN
Random (3,000 values)               3.0     4.7     6.1     6.9     7.4
SurgeStat "Redistribution" (FEMA)   3.0     4.7     6.2     6.9     7.5
Task 5: Probabilistic Modeling of Numerical Surge Simulation Errors
Results – Integration of Uncertainty
Percentage Difference in Storm Surge at The Battery, NY

                                   Annual Exceedance Probability (AEP)
JPM-OS Uncertainty (Gaussian)      10⁻²    10⁻³    10⁻⁴    10⁻⁵    10⁻⁶
Discrete (444 values)                -       -       -       -       -
Discrete (30 values)                 0       0       0       1      NaN
Discrete (3,000 values)              0       0       0       0      -1
Random (444 values)                  2       2       1       2       2
Random (30 values)                  -3       0      -1       8      NaN
Random (3,000 values)               -1      -1       0      -1      -1
SurgeStat "Redistribution" (FEMA)    0       0       0       0      -1
References

Chouinard, L., C. Liu, and C. Cooper. 1997. Model for Severity of Hurricanes in Gulf of Mexico. Journal of Waterway, Port, Coastal, and Ocean Engineering 123(3): 120-129.

Demuth, J., M. DeMaria, and J.A. Knaff. 2006. Improvement of Advanced Microwave Sounding Unit Tropical Cyclone Intensity and Size Estimation Algorithms. Journal of Applied Meteorology and Climatology 45: 1573-1581.

Jia, G., A.A. Taflanidis, N.C. Nadal-Caraballo, J.A. Melby, A.B. Kennedy, and J.M. Smith. 2016. Surrogate Modeling for Peak or Time-Dependent Storm Surge Prediction over an Extended Coastal Region Using an Existing Database of Synthetic Storms. Natural Hazards 81(2): 909-938.

Lin, N., K. Emanuel, M. Oppenheimer, and E. Vanmarcke. 2012. Physically Based Assessment of Hurricane Surge Threat under Climate Change. Nature Climate Change 2(6): 462-467.

Nadal-Caraballo, N.C., J.A. Melby, V.M. Gonzalez, and A.T. Cox. 2015. North Atlantic Coast Comprehensive Study – Coastal Storm Hazards from Virginia to Maine. ERDC/CHL TR-15-5. Vicksburg, MS: U.S. Army Engineer Research and Development Center.

Vickery, P.J., and D. Wadhera. 2008. Statistical Models of Holland Pressure Profile Parameter and Radius to Maximum Winds of Hurricanes from Flight-Level Pressure and H*Wind Data. Journal of Applied Meteorology and Climatology 47(10): 2497-2517.

Wyncoll, D., and B. Gouldby. 2015. Integrating a Multivariate Extreme Value Method within a System Flood Risk Analysis Model. Journal of Flood Risk Management 8(2): 145-160.
Contact Information
U.S. Army Engineer R&D Center
Coastal and Hydraulics Laboratory
Norberto C. Nadal-Caraballo, Ph.D.
Phone: (601) 634-2008
Email: [email protected]

U.S. Nuclear Regulatory Commission
Joseph F. Kanney, Ph.D.
Phone: (301) 980-8039
Email: [email protected]