Events

Upcoming Events

Past Events

  • November 8, 2018 • 4:15 PM – 5:15 PM
  • October 19, 2018 • 1 PM – 2 PM
  • October 11, 2018 • 4:15 PM – 5:15 PM
  • October 5, 2018 • 1 PM – 2 PM

Title: Exact inference for errors-in-variables regressions.  Abstract: When some or all regression covariates are observed with error, the standard least squares estimator is inconsistent.  Several available techniques can correct this inconsistency, but all pay a price: some require that additional observations be collected, and others lack validity in small samples.  We present a new technique, using inferential models, […]
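A quick simulation illustrates the inconsistency the abstract refers to: this is the classical attenuation-bias phenomenon, not the inferential-model correction of the talk, and the unit variances are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 2.0
x = rng.normal(size=n)              # true covariate
w = x + rng.normal(size=n)          # covariate observed with error
y = beta * x + rng.normal(size=n)

# Least squares of y on the noisy w converges to
# beta * Var(x) / (Var(x) + Var(error)) = 2 * 1/2 = 1, not the true beta = 2.
slope = np.cov(w, y)[0, 1] / np.var(w)
```

With unit-variance covariate and unit-variance measurement error, the fitted slope settles near half the true coefficient, which is the inconsistency the talk's method aims to correct.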

  • September 27, 2018 • 4:15 PM – 5:15 PM
  • September 21, 2018 • 1 PM – 2 PM

Title: Testing for Trends in High-Dimensional Time Series.  Abstract: This talk considers statistical inference for trends of high-dimensional time series. Based on a modified $\mathcal{L}^2$ distance between parametric and nonparametric trend estimators, we propose a de-diagonalized quadratic form test statistic for testing for patterns in trends, such as linear, quadratic, or parallel forms. We develop an […]

  • September 14, 2018 • 1 PM – 2 PM
  • Cupples I, Room 199

Title: Diffusion Approximation to Stochastic Mirror Descent with Statistical Applications.  Abstract: Stochastic gradient descent (SGD) is a popular algorithm that can handle extremely large data sets due to its low computational cost and low memory requirement at each iteration. Asymptotic distributional results for SGD are well known (Kushner and Yin, 2003). However, a major drawback […]
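For context, a minimal sketch of plain SGD for least squares (mirror descent with the identity mirror map reduces to ordinary SGD); the coefficients and the Robbins–Monro step size $a_t = 0.1/\sqrt{t}$ are illustrative choices, not those of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1.0, -2.0])
theta = np.zeros(2)

# One O(d) stochastic gradient step per observation for least squares,
# with decreasing step size a_t = 0.1 / sqrt(t).
for t in range(1, 50_001):
    xt = rng.normal(size=2)
    yt = xt @ theta_true + rng.normal()
    theta -= 0.1 / np.sqrt(t) * (xt @ theta - yt) * xt
```

Each iteration touches a single observation, which is what makes the method attractive for very large data sets; the distributional behavior of the iterates around `theta_true` is the subject of the asymptotic results mentioned above.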

  • May 10, 2017 • 10 AM – 11 AM

Title: Spot volatility estimation using delta sequences

Abstract: We introduce a unifying class of nonparametric spot volatility estimators based on delta sequences, conceived to include many existing estimators in the field as special cases. We first derive the full limit theory for unevenly sampled observations under infill asymptotics with a fixed time horizon, assuming the state variable follows a Brownian semimartingale. We then extend our class of estimators to accommodate Poisson jumps or financial microstructure noise in the observed price process. This work makes different approaches (kernels, wavelets, Fourier) comparable; for example, we explicitly illustrate some drawbacks of the Fourier estimator. Specific delta sequences are applied to data from the S&P 500 stock index futures market.
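As a toy instance of one member of such a class, a Gaussian-kernel spot volatility estimator applied to a simulated path (the model, bandwidth, and evaluation point are illustrative choices, not the estimators or data of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
t = np.linspace(0.0, 1.0, n + 1)
sigma = 0.5 + 0.3 * np.sin(2 * np.pi * t)              # deterministic spot volatility
dX = sigma[:-1] * rng.normal(size=n) * np.sqrt(1 / n)  # Brownian increments

def spot_vol(t0, h):
    """Gaussian-kernel (delta-sequence) estimator of sigma^2(t0):
    a kernel-weighted sum of squared increments near t0."""
    k = np.exp(-0.5 * ((t[:-1] - t0) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return np.sum(k * dX**2)

est = np.sqrt(spot_vol(0.5, h=0.02))                   # true sigma(0.5) = 0.5
```

Replacing the Gaussian kernel with other delta sequences (wavelet- or Fourier-based) yields other estimators in the unified class, which is what makes the approaches directly comparable.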

Host: J. Figueroa-Lopez

  • April 21, 2017 • 3 PM – 4 PM

Title: Sufficient dimension reduction for multiple populations

Abstract: Two topics in the area of dimension reduction for multiple populations will be explored. We will first propose a link-free test for testing whether two (or more) multi-index models share identical indices via the sufficient dimension reduction approach. Test statistics are developed based upon sufficient dimension reduction methods, and their asymptotic null distributions are derived. Next, we will propose a two-step dimension reduction method for multi-population data. Our method is the first in the area that can conduct a joint analysis while still retaining population-specific effects. Though partial dimension reduction (Chiaromonte et al., 2002) can be adopted for multi-population dimension reduction, it pools the related directions across all populations, so population-specific effects are ignored. On the other hand, unlike conditional analysis, which is carried out separately within each population, our method makes use of information across the multiple populations, which greatly improves estimation accuracy. Simulations and a real data example illustrate our methodology.

Hosts: N. Lin and T. Kuffner

  • April 14, 2017 • 3 PM – 4 PM

Title: Assessment of multiple-chain importance sampling estimators

Abstract: In Bayesian data analysis, there is often the need to compare many different possible models and priors. If the data are highly informative for the model parameters, the choice of prior has little effect on the posterior. Otherwise, if the data provide only indirect information about the parameters of interest, priors have to be chosen with care according to certain criteria, say, based on the Bayes factor. Computing the various posterior quantities and Bayes factors across different Bayesian models is a challenging problem. In this talk, we consider an importance sampling (IS) technique that efficiently combines Markov chain Monte Carlo (MCMC) samples from multiple posterior distributions. An important yet difficult problem for general MCMC estimators is assessing their standard errors. Such assessment is even more challenging for estimators that are constructed from multiple Markov chains. We provide an easy-to-implement tool to evaluate the standard errors of the multiple-chain IS estimators. The multiple-chain IS technique will be illustrated with two data analysis problems: one in Bayesian variable selection and the other in Bayesian spatial modeling.
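A self-normalized importance sampling sketch in this spirit, with i.i.d. draws standing in for MCMC output and balance-heuristic mixture weights as one standard way to pool samples from several distributions (the normal densities and target are illustrative assumptions, not the estimator of the talk):

```python
import numpy as np

def npdf(x, mu, s=1.0):
    """Normal density, used for source and target (possibly unnormalized) densities."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
# Stand-ins for output of two chains targeting "posteriors" N(0,1) and N(3,1).
x1 = rng.normal(0.0, 1.0, 50_000)
x2 = rng.normal(3.0, 1.0, 50_000)
x = np.concatenate([x1, x2])

# Mixture (balance-heuristic) importance weights: target density N(1.5, 1)
# over the equal-count mixture of the two source densities.
w = npdf(x, 1.5) / (0.5 * npdf(x, 0.0) + 0.5 * npdf(x, 3.0))
w /= w.sum()                 # self-normalize (normalizing constants unknown in practice)
est = np.sum(w * x)          # posterior mean under the target
```

Because the denominator is the mixture of all source densities, no single chain's tail dominates the weights; quantifying the Monte Carlo error of such pooled estimators is exactly the standard-error problem the talk addresses.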

Host: T. Kuffner

  • April 7, 2017 • 3 PM – 4 PM
  • Room 199

Title: On valid prior-free probabilistic inference

Abstract: Using probabilities to describe uncertainty in a statistical inference problem is a very reasonable approach. Getting probabilities is easy, but ensuring that they are scientifically meaningful/interpretable is not. Indeed, we all take for granted what it means for a probability to be “small” or “large”, but I argue that this is actually a practically important issue that requires serious care. Examples will be presented that highlight a fundamental but subtle issue concerning the interpretation of (default-prior) Bayesian posterior probabilities. In light of these concerns, perhaps we need to look beyond Bayes/probability to describe this kind of uncertainty. Toward this end, I will introduce a new approach, called inferential models (IMs), built around the theory of random sets, which provides provably valid prior-free probabilistic inference under very general conditions. The IM construction and its key properties will be discussed, along with some examples and further insights.

Hosts: J. Figueroa-Lopez and T. Kuffner

  • March 31, 2017 • 3 PM – 4 PM
  • Room 199

Title: Topological Data Analysis for Functional Magnetic Resonance Imaging Data

Abstract: Topological data analysis (TDA) is a relatively new approach for the analysis of high-dimensional data of complex structure. Functional magnetic resonance imaging (fMRI) is one source of such data. fMRI, which provides a window into the working human brain, yields high-dimensional, noisy data with complex temporal and spatial correlation structures. In this talk, I will first give an overview of fMRI data, highlighting some of the challenges for statistical analysis and how those challenges have traditionally been handled. A major drawback of many of the standard approaches is that they are “massively univariate,” that is, they are performed at the level of the volume element, which has no physiological or scientific meaning. Such analysis paths furthermore induce a serious multiple testing problem. TDA is one modern attempt to move away from a data array perspective to fMRI analysis. The second part of the talk will give a gentle introduction to TDA, along with the results of initial attempts at application to fMRI data from a schizophrenia study.

Host: T. Kuffner

  • March 28, 2017 • 1 PM – 2 PM
  • Room 199

Title: Analysis of asynchronous longitudinal data with partially linear models

Abstract: We study partially linear models for asynchronous longitudinal data to incorporate nonlinear time trend effects. Local and global estimating equations are developed for estimating the parametric and nonparametric effects. We show that with a proper choice of the kernel bandwidth parameter, one can obtain consistent and asymptotically normal parameter estimates for the linear effects. Asymptotic properties of the estimated nonlinear effects are established. Extensive simulation studies provide numerical support for the theoretical findings. Data from an HIV study are used to illustrate our methodology.

Host: T. Kuffner

  • March 23, 2017 • 4:15 PM – 5:15 PM
  • Room 199

Title: Optimization with nonconvex functions and nonconvex constraints

Abstract: Nonconvex optimization arises in many applications of high-dimensional statistics and data analysis, including medical imaging via computed tomography (CT) scans where the physical model for data acquisition is inherently nonconvex. While convex programs for structured signal recovery have been widely studied, comparatively little is known about the theoretical properties of nonconvex optimization methods. In this talk I will discuss two types of optimization problems where nonconvexity plays a key role: first, projected gradient descent over nonconvex constraints, where the local geometry of the constraint set is closely tied to its convergence behavior, and second, composite optimization problems, where we must simultaneously minimize multiple terms that may all be nonconvex and nondifferentiable. Image reconstruction results on real data from spectral CT scans, where undersampling poses a substantial challenge, demonstrate the benefit of working with nonconvex models.
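A small sketch of the first setting, projected gradient descent over a nonconvex constraint set, here the set of k-sparse vectors (this iterative hard thresholding example, with a made-up noiseless recovery problem, is illustrative rather than the CT application of the talk):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 100, 200, 5
A = rng.normal(size=(n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[:k] = rng.normal(size=k)
y = A @ x_true                       # noiseless sparse recovery problem

def project_sparse(v, k):
    """Euclidean projection onto the nonconvex set of k-sparse vectors:
    keep the k largest-magnitude entries and zero the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step from the spectral norm
x = np.zeros(d)
for _ in range(500):
    # Gradient step on the smooth least-squares loss, then (nonconvex) projection.
    x = project_sparse(x - step * A.T @ (A @ x - y), k)

err = np.linalg.norm(x - x_true)
```

The projection is exact despite the nonconvexity of the constraint set, and the local geometry of that set (here, a union of subspaces) governs whether the iterates converge, which is the kind of behavior the talk analyzes.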

Host: T. Kuffner

  • March 3, 2017 • 3 PM – 4 PM
  • Room 199

Title: Optimal block bootstrap estimation for sample quantiles under weak dependence

Abstract: When considering smooth functionals of dependent data, block bootstrap methods have enjoyed considerable success in theory and application. For nonsmooth functionals of dependent data, such as sample quantiles, the theory is less well-developed. In this talk, I will present a general theory of consistency and optimality for block bootstrap distribution estimation for sample quantiles under mild strong mixing assumptions. In contrast to existing results, we study the block bootstrap for varying numbers of blocks. This corresponds to a hybrid between the subsampling bootstrap and the moving block bootstrap (MBB). Examples of “time series” models illustrate the benefits of optimally choosing the number of blocks.
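A sketch of the moving block bootstrap for a sample median, with the number of resampled blocks left as a free parameter as in the hybrid described above (the AR(1) model, block length, and block count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
# AR(1) series x_t = 0.5 x_{t-1} + e_t, a strongly mixing process.
n = 500
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + e[t]

def mbb_median_se(x, block_len, n_blocks, n_boot=1000, rng=rng):
    """Moving block bootstrap SE of the sample median: resample n_blocks
    overlapping blocks of length block_len and concatenate them."""
    starts = np.arange(len(x) - block_len + 1)
    meds = np.empty(n_boot)
    for b in range(n_boot):
        s = rng.choice(starts, size=n_blocks, replace=True)
        meds[b] = np.median(np.concatenate([x[i:i + block_len] for i in s]))
    return meds.std()

# Choosing n_blocks * block_len below n interpolates toward subsampling;
# n_blocks = n / block_len recovers the usual MBB resample size.
se = mbb_median_se(x, block_len=25, n_blocks=20)
```

Taking fewer blocks than `n / block_len` shrinks the resample size toward a single block (subsampling), while the full count gives the standard MBB; optimizing over that count is the question the talk studies.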

Host: N. Lin

  • February 24, 2017 • 3 PM – 4 PM
  • Room 199

Title: An Investigation into the Human Face: Statistical Models for Manifold Data

Abstract: Three-dimensional surface imaging, through laser-scanning or stereo-photogrammetry, provides high-resolution data defining the surface shape of objects. Using a human face as this object, each image corresponds to an observation, a manifold, represented by a triangulated point cloud. In an anatomical setting this can provide invaluable quantitative information. Particular applications vary widely, including success or failure of cosmetic/reconstructive plastic surgery, facial recognition, facial asymmetry, concepts of sexual dimorphism, and even the survival of mussels (food we consume) given climate change. However, the initial challenge is to characterize these complex surfaces without laborious manual intervention. Surface curvature provides the key information for doing this, allowing for the creation of a surface “mask” replicable across all of these objects. Once the full surface representation has been obtained, the next issue is how best to characterize and visualize the differences in shape. The issues involved in analyzing these data, along with multiple visualization methods, will be discussed and illustrated.

Host: T. Kuffner

  • February 17, 2017 • 3 PM – 4 PM

Title: Asymptotic methods in financial mathematics

Abstract: Asymptotic analyses of financial problems have a wide spectrum of applications, ranging from nonparametric estimation methods based on high-frequency data to near-expiration characterizations of option prices and implied volatilities, and to Monte Carlo based methods for path-dependent options. These methods are especially useful for studying models with jumps, due to the lack of tractable formulas and efficient numerical procedures. In this talk, I will discuss some recent advances in the area and illustrate their broad relevance in several contexts.

Hosts: N. Lin and T. Kuffner