JOURNAL CLUB: M. Pei et al., Shanghai Key Lab of MRI, East China Normal University and Weill Cornell Medical College “Algorithm for Fast Monoexponential Fitting Based on Auto-Regression on Linear Operations (ARLO) of Data.” Aug 18, 2014 Jason Su



TRANSCRIPT

Page 1:

JOURNAL CLUB: M. Pei et al., Shanghai Key Lab of MRI, East China Normal University and Weill Cornell Medical College

“Algorithm for Fast Monoexponential Fitting Based on Auto-Regression on Linear Operations (ARLO) of Data.”

Aug 18, 2014, Jason Su

Page 2:

Motivation

• Traditional fitting methods for exponentials have pros and cons
  – Nonlinear LS (Levenberg-Marquardt): slow, may converge to a local minimum
  – Log-Linear: fast but sensitive to noise
• Can we improve upon them?
  – Surprisingly, yes!

Page 3:

Background: Numerical Integration

• Approximating the value of a definite integral
• Trapezoidal Rule: the area under a 2-pt. linear interpolation of the interval
• Simpson’s Rule: the area under a 3-pt. quadratic interpolation of the interval
• Newton-Cotes formulas:
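The formulas themselves did not survive the transcript; the two lowest-order closed Newton-Cotes rules referenced on this slide are the standard ones:

```latex
\text{Trapezoidal: } \int_a^b f(x)\,dx \approx \frac{b-a}{2}\bigl[f(a)+f(b)\bigr]
\qquad
\text{Simpson's: } \int_a^b f(x)\,dx \approx \frac{b-a}{6}\Bigl[f(a)+4f\bigl(\tfrac{a+b}{2}\bigr)+f(b)\Bigr]
```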

Page 4:

Theory

• Log-Linear: linearize the signal equation with a nonlinear (log) transformation to fit a line
• ARLO: integrate the signal equation to fit a linear approximation (Simpson’s rule)
• Assumes the decay curve is sampled at uniform intervals
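Concretely, for a monoexponential $s(t) = s_0 e^{-t/T_2^*}$ sampled at spacing $\Delta t$, the exact integral over a 3-point window equals $T_2^*$ times a difference of samples, while Simpson's rule gives a weighted sum of samples; equating the two is linear in $T_2^*$ (a reconstruction of the slide's idea, not the paper's exact notation):

```latex
\int_{t_i}^{t_i + 2\Delta t} s(t)\,dt \;=\; T_2^*\,(s_i - s_{i+2})
\;\approx\; \frac{\Delta t}{3}\,\bigl(s_i + 4s_{i+1} + s_{i+2}\bigr)
```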

Page 5:

Theory

• An auto-regressive time-series
• Find T2* to minimize the error between model and data
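The minimization above reduces to a one-variable linear regression. A minimal NumPy sketch, assuming uniform echo spacing Δt and the simple least-squares form T2* = Σαβ / Σβ² (the paper's published estimator may include additional terms; this illustrates the idea only):

```python
import numpy as np

def arlo_t2(signal, dt):
    """Estimate a monoexponential decay constant (e.g. T2*) by regressing
    Simpson's-rule window integrals (alpha) against sample differences (beta).
    Simplified sketch of the ARLO idea; requires >= 3 uniformly spaced samples.
    """
    s = np.asarray(signal, dtype=float)
    # Simpson's rule over each 3-point window [t_i, t_{i+2}]
    alpha = (dt / 3.0) * (s[:-2] + 4.0 * s[1:-1] + s[2:])
    # Analytic integral of s0*exp(-t/T2) over the same window: T2 * (s_i - s_{i+2})
    beta = s[:-2] - s[2:]
    # Least-squares T2 minimizing sum((alpha - T2*beta)^2)
    return np.sum(alpha * beta) / np.sum(beta * beta)

# Noiseless check: 16 echoes 2 ms apart, true T2* = 20 ms
t = np.arange(16) * 2.0
s = 100.0 * np.exp(-t / 20.0)
t2_hat = arlo_t2(s, dt=2.0)  # recovers ~20 ms on noiseless data
```

Because Simpson's rule is nearly exact for a slowly decaying exponential, the noiseless estimate lands very close to the true value.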

Page 6:

Methods

• Rician noise compensation
  – Data truncation: only keep points with high SNR
    • Values > μ + 2σ_noise in background
  – Apply a bias correction based on a Bayesian model table look-up, depending on the number of coils
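The truncation step could be sketched as below; the function name, the background-region input, and the k=2 default are illustrative, with only the μ + 2σ_noise threshold taken from the slide:

```python
import numpy as np

def truncate_low_snr(echoes, background, k=2.0):
    """Drop late, low-SNR echoes: keep only the leading run of samples whose
    magnitude exceeds mean + k*std of a noise-only background region.
    Truncating at the first failing echo keeps the remaining samples
    uniformly spaced, as uniform spacing is assumed by the fit.
    """
    echoes = np.asarray(echoes, dtype=float)
    thresh = background.mean() + k * background.std()
    keep = echoes > thresh
    # index of first echo below threshold (or keep all if none fail)
    n_keep = int(np.argmin(keep)) if not keep.all() else len(echoes)
    return echoes[:n_keep]

# Synthetic example: background mean 2, std 1 -> threshold 4
echoes = np.array([100.0, 50.0, 10.0, 3.0, 1.0])
background = np.array([1.0, 3.0, 1.0, 3.0])
kept = truncate_low_snr(echoes, background)
```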

Page 7:

Methods

• Simulation to assess bias and variance
  – Fitting method vs. T2* range, # channels, SNR
  – 10,000 trials with Rician noise
• In vivo
  – 1.5T, 8ch, 15 patients, 2D GRE, TR=27.4ms, α=20°, TE=1.3-23.3ms (16 linearly sampled), liver
  – 3T, 8ch?, 2 volunteers, 3D GRE, α=20°, 7/12 echoes with 6.5/4.1ms spacing, brain
  – 1.5T, 2D GRE, TR=19ms, α=35°, TE=2.8-16.8ms (8 echoes), heart with iron overload
  – Manual segmentation of liver and brain structures
• Statistical
  – Linear regression, Bland-Altman, and t-tests

Page 8:

Results: Simulation

• LM and ARLO are effectively equivalent

• ARLO is generally equivalent to LM except at T2*=1.5ms

• Log-linear is sensitive to T2*, SNR, and channels

Page 9:

Results: In Vivo, Liver ROI

• Computation time per voxel
  – 8.81 ± 1.00 ms for LM
  – 0.57 ± 0.04 ms for LL
  – 0.07 ± 0.02 ms for ARLO

Page 10:

Results: In Vivo, Whole Liver

Page 11:

Results: In Vivo, Whole Liver

Page 12:

Results: In Vivo, Brain

Page 13:

Results: In Vivo, Brain

Page 14:

Results: In Vivo, Heart

Page 15:

Discussion

• ARLO is more robust to noise than LL, with accuracy as good as LM, at 10x the speed of LL
  – Noise is amplified by the log transform
  – ARLO is a single-variable linear regression, O(N)
  – LL is a two-variable linear regression, O(6N)
  – LM is nonlinear LS, O(N³)
• ARLO provides an effective linearization of the nonlinear estimation problem
  – Does not require an initial guess; immune to convergence issues like in LM
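For comparison with the single-variable ARLO regression above, the two-variable log-linear fit can be sketched as a straight-line fit to the log-signal (illustrative, not the paper's implementation):

```python
import numpy as np

def loglinear_t2(signal, dt):
    """Two-variable log-linear fit: ln s(t) = ln s0 - t/T2.
    Fast straight-line regression, but noise in small late-echo samples
    is amplified by the log transform.
    """
    t = np.arange(len(signal)) * dt
    slope, intercept = np.polyfit(t, np.log(signal), 1)
    return -1.0 / slope, np.exp(intercept)  # (T2 estimate, s0 estimate)

# Noiseless check: 8 echoes 2 ms apart, true T2 = 12 ms, s0 = 50
t = np.arange(8) * 2.0
s = 50.0 * np.exp(-t / 12.0)
t2_hat, s0_hat = loglinear_t2(s, dt=2.0)
```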

Page 16:

Discussion

• Simpson’s rule is a much better approximation than the Trapezoidal rule
  – Higher-order rules gave little improvement
• Differentiation could also be used, but it is not as good as integration at low SNR and needs finer sampling
• Other applications:
  – Other exponential decay models like diffusion, T2, off-resonance, and T2*
  – T1 recovery “from data measured at various timing parameters such as TR or TI”
• Can also be adapted to multi-exponential fitting

Page 17:

Discussion

• Limitations
  – Requires at least 3 data points vs. 2 for LM and LL
  – Requires linear sampling of echo times
  – Results in a minimum measurable T2* of 1.5ms for ARLO
    • Probably due to poor protocol

Page 18:

Thoughts

• Nonlinear sampling
  – Generally, linear sampling is not ideal for experimental design; are there approximations that don’t require it?
  – “Gaussian quadrature and Clenshaw–Curtis quadrature with unequally spaced points (clustered at the endpoints of the integration interval) are stable and much more accurate”
• For protocols varying multiple parameters, would we integrate over multiple dimensions?
  – Higher-dimensional integral approximations?
  – Simpson’s rule in each dimension would require a lot of sample points

Page 19:

Thoughts

• Seems important to have an operation that is equivalent to a linear combination of the acquired data
  – e.g. the integral of an exponential is a difference of exponentials
• Consider SPGR:
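The equation on this slide did not survive the transcript; for reference, the standard SPGR (spoiled gradient echo) steady-state signal equation, with $E_1 = e^{-TR/T_1}$, is:

```latex
S \;=\; M_0 \sin\alpha \,\frac{1 - E_1}{1 - E_1\cos\alpha},
\qquad E_1 = e^{-TR/T_1}
```

Unlike the monoexponential decay, this is a ratio of exponentials in TR, so it is not obviously reducible to a linear combination of acquired samples by integration alone.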