A characterization of all single-integral, non-kernel divergence estimators

IEEE Transactions on Information Theory, 2019

In this work, we characterize the class of divergences whose construction bypasses nonparametric smoothing, leading to minimum distance estimation free of the dimensionality and smoothness issues that the usual kernel density estimators exhibit.

with Ayanendranath Basu [Paper link]
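On a discrete alphabet, minimum distance estimation with such a divergence needs only relative frequencies, no density estimate. A toy sketch of the idea, assuming squared Hellinger distance and a binomial model family (both chosen here for illustration, not taken from the paper):

```python
import numpy as np
from math import comb

def hellinger_sq(p, q):
    """Squared Hellinger divergence between two discrete distributions,
    computed directly from probability vectors (no kernel smoothing)."""
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def min_hellinger_binomial(data, n):
    """Minimum-distance estimate of theta for a Binomial(n, theta) model:
    pick the theta whose model pmf is closest in Hellinger distance to the
    empirical pmf of the data (grid search, for illustration only)."""
    emp = np.bincount(data, minlength=n + 1) / len(data)
    ks = np.arange(n + 1)
    grid = np.linspace(0.01, 0.99, 99)
    return min(grid, key=lambda th: hellinger_sq(
        emp,
        np.array([comb(n, k) * th**k * (1 - th)**(n - k) for k in ks])))

# data with empirical pmf (1/6, 2/6, 3/6) on {0, 1, 2}
theta_hat = min_hellinger_binomial([0, 1, 1, 2, 2, 2], 2)
```

Because the data are discrete, the divergence is a single sum over the support; nothing in the estimator depends on a bandwidth or the ambient dimension.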

Extrapolating the profile of a finite population

Conference on Learning Theory, 2020

We determine the minimax rate of estimating the profile of a finite population in the small-sample regime. Our method characterizes both upper and lower bounds via an infinite-dimensional optimization problem based on Wolfowitz's minimum distance estimators.

with Yury Polyanskiy and Yihong Wu [Paper link]
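The profile (also called the fingerprint) of a sample records, for each j, how many distinct symbols appear exactly j times; it discards the symbol labels. A minimal sketch of computing it:

```python
from collections import Counter

def profile(sample):
    """Profile of a sample: maps j to the number of distinct
    symbols that appear exactly j times in the sample."""
    multiplicities = Counter(sample).values()
    return Counter(multiplicities)

# "abracadabra": a appears 5 times, b and r twice, c and d once,
# so the profile is {1: 2, 2: 2, 5: 1}
print(profile("abracadabra"))
```

The profile is a sufficient statistic for any question that is invariant to relabeling the symbols, which is why it is the natural object to extrapolate from a small sample.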

Optimal prediction of Markov chains with and without spectral gap

NeurIPS, 2021

Analyzing a prediction problem on first-order Markov chains, we determine the optimal minimax rate as a function of the state space size and the sample size. We also show how, for reversible chains, a spectral gap enables the parametric rate of estimation.

with Yanjun Han and Yihong Wu [Paper link]
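A natural baseline for this prediction problem is the smoothed empirical transition matrix built from a single observed path. A minimal sketch, using add-one (Laplace) smoothing as the illustrative choice (the paper's optimal estimator is more refined):

```python
import numpy as np

def add_one_transition_estimate(path, k):
    """Estimate the transition matrix of a first-order Markov chain on
    states {0, ..., k-1} from one sample path, with add-one smoothing so
    every row is a valid probability distribution."""
    counts = np.ones((k, k))  # Laplace pseudocounts
    for s, t in zip(path[:-1], path[1:]):
        counts[s, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)

path = [0, 1, 0, 1, 1, 0]
P_hat = add_one_transition_estimate(path, 2)
# row P_hat[s] is the predicted distribution of the next state given state s
```

Each row of the estimate is the predictive distribution of the next state; the quality of such predictions, measured in KL risk, is what the minimax rates in the paper quantify.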