Using the wild bootstrap to quantify uncertainty in diffusion tensor imaging

Hum Brain Mapp. 2008 Mar;29(3):346-62. doi: 10.1002/hbm.20395.

Abstract

Estimation of noise-induced variability in diffusion tensor imaging (DTI) is needed to objectively follow disease progression in therapeutic monitoring and to provide consistent readouts of pathophysiology. The noise variability of nonlinear quantities of the diffusion tensor (e.g., fractional anisotropy and fiber orientation) has been quantified using the bootstrap, in which the data are resampled from the experimental averages, yet this approach is only applicable to DTI scans that contain multiple averages for each sampling direction. It has been shown that DTI acquisitions with a modest to large number of directions, in which each direction is sampled only once, outperform the multiple-averages approach. Such acquisitions, however, are not amenable to the traditional (regular) bootstrap analysis. In contrast to the regular bootstrap, the wild bootstrap method can be applied to such protocols, in which there is only one observation per direction. Here, we compare and contrast the wild bootstrap with the regular bootstrap using Monte Carlo numerical simulations for a number of diffusion scenarios. The regular and wild bootstrap methods are applied to human DTI data, and empirical distributions are obtained for fractional anisotropy and the diffusion tensor eigensystem. Spatial maps of the estimated variability in the diffusion tensor principal eigenvector are provided. The wild bootstrap method can provide empirical distributions for tensor-derived quantities, such as fractional anisotropy and principal eigenvector direction, even when the exact distributions are not easily derived.

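To make the resampling step concrete, the sketch below (not the authors' implementation) applies a wild bootstrap to a single simulated voxel: the diffusion tensor is fit by ordinary least squares on the log signal, the fitted residuals are multiplied by random +/-1 (Rademacher) factors to generate bootstrap data sets, and each refit yields a fractional anisotropy replicate. The gradient scheme, b-value, noise level, and tensor values are illustrative assumptions, not values from the paper.

    # Minimal wild-bootstrap sketch for a single-voxel DTI fit (assumed setup).
    import numpy as np

    rng = np.random.default_rng(0)

    def design_matrix(bvecs, bval):
        """DTI design matrix for log(S) = X @ beta,
        beta = [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz, log S0]."""
        gx, gy, gz = bvecs.T
        return np.column_stack([
            -bval * gx**2, -bval * gy**2, -bval * gz**2,
            -2 * bval * gx * gy, -2 * bval * gx * gz, -2 * bval * gy * gz,
            np.ones(len(bvecs)),
        ])

    def tensor_from_beta(beta):
        Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = beta[:6]
        return np.array([[Dxx, Dxy, Dxz],
                         [Dxy, Dyy, Dyz],
                         [Dxz, Dyz, Dzz]])

    def fractional_anisotropy(D):
        ev = np.linalg.eigvalsh(D)
        md = ev.mean()
        den = np.sqrt(np.sum(ev**2))
        return np.sqrt(1.5 * np.sum((ev - md) ** 2)) / den if den > 0 else 0.0

    # Illustrative data: one measurement per direction (assumed values).
    n_dirs, bval, S0 = 30, 1000.0, 1000.0
    bvecs = rng.normal(size=(n_dirs, 3))
    bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
    D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])          # prolate tensor (assumed)
    signal = S0 * np.exp(-bval * np.einsum("ij,jk,ik->i", bvecs, D_true, bvecs))
    signal += rng.normal(scale=15.0, size=n_dirs)        # additive noise for illustration

    # Ordinary least-squares fit on the log signal.
    X = design_matrix(bvecs, bval)
    y = np.log(signal)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    fa_hat = fractional_anisotropy(tensor_from_beta(beta_hat))

    # Wild bootstrap: flip residual signs, refit, collect FA replicates.
    n_boot = 1000
    fa_boot = np.empty(n_boot)
    for b in range(n_boot):
        flips = rng.choice([-1.0, 1.0], size=n_dirs)     # Rademacher multipliers
        y_star = X @ beta_hat + resid * flips
        beta_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        fa_boot[b] = fractional_anisotropy(tensor_from_beta(beta_star))

    lo, hi = np.percentile(fa_boot, [2.5, 97.5])
    print(f"FA = {fa_hat:.3f}, 95% wild-bootstrap interval [{lo:.3f}, {hi:.3f}]")

The same loop can be applied to other tensor-derived quantities (e.g., the principal eigenvector direction) by replacing the FA summary; the key point is that only one observation per direction is required, since variability is generated from sign-flipped residuals rather than from resampling repeated measurements.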
Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Brain / physiology*
  • Brain Mapping*
  • Computer Simulation
  • Diffusion Magnetic Resonance Imaging*
  • Humans
  • Image Processing, Computer-Assisted*
  • Male
  • Models, Neurological*