I work on contemporary, computer-intensive statistical methodology, with particular interests in the bootstrap and other resampling methods, parametric likelihood-based inference, approximation methods, spatial statistics, and predictive inference.
According to legend, Baron Munchausen saved himself from drowning in quicksand by pulling himself up using only his bootstraps. The statistical bootstrap, which uses resampling from a given set of data to mimic the variability that produced the data in the first place, has a rather more dependable theoretical basis, and can be a highly effective procedure for estimation of error quantities in statistical problems. But when does it work? When is it needed? What are its properties? How can it be applied to complex data structures? Can we usefully and reliably bootstrap the bootstrap itself to provide accurate inference when confronted with small data samples?
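The core resampling idea can be sketched in a few lines: draw repeated samples of the same size from the data, with replacement, recompute the statistic of interest on each resample, and use the spread of those replicates to estimate its variability. The function and data below are purely illustrative, not drawn from any particular publication.

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=0):
    """Estimate the standard error of `stat` by resampling `data` with replacement."""
    rng = random.Random(seed)
    n = len(data)
    replicates = [
        # one bootstrap resample: n draws with replacement from the original data
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    ]
    return statistics.stdev(replicates)

# Illustrative data: estimate the standard error of the sample mean
sample = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]
se = bootstrap_se(sample)
```

For the mean, the bootstrap estimate can be checked against the analytic standard error, the sample standard deviation divided by the square root of the sample size; for statistics with no simple variance formula, the same resampling recipe applies unchanged, which is the method's appeal.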
A list of my publications is available here.