Dimension-free Concentration Bounds on Hankel Matrices for Spectral Learning

François Denis, Mattias Gybels, Amaury Habrard; 17(31):1−32, 2016.

Abstract

Learning probabilistic models over strings is an important issue for many applications. Spectral methods offer elegant solutions to the problem of inferring weighted automata from finite samples of variable-length strings drawn from an unknown target distribution $p$. These methods rely on a singular value decomposition of a matrix $\mathbf{H}_S$, called the empirical Hankel matrix, that records the frequencies of (some of) the observed strings in $S$. The accuracy of the learned distribution depends both on the quantity of information embedded in $\mathbf{H}_S$ and on the distance between $\mathbf{H}_S$ and its mean $\mathbf{H}_p$. Existing concentration bounds seem to indicate that the concentration of $\mathbf{H}_S$ around $\mathbf{H}_p$ gets looser as its dimensions grow, suggesting that it might be necessary to bound the dimensions of $\mathbf{H}_S$ for learning. We prove new dimension-free concentration bounds for classical Hankel matrices, and for several variants based on prefixes or factors of strings, that are useful for learning. Experiments demonstrate that these bounds are tight and that they significantly improve on existing (dimension-dependent) bounds. One consequence of these results is that the spectral learning approach remains consistent even if all the observations are recorded in the empirical matrix.
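To make the objects in the abstract concrete, the sketch below builds a small empirical Hankel matrix from a toy sample of strings and computes its singular value decomposition. This is only an illustration under assumed choices (a hypothetical sample, a hand-picked prefix/suffix basis), not the paper's implementation, which is concerned with how far $\mathbf{H}_S$ can deviate from $\mathbf{H}_p$ rather than with this construction itself.

```python
# Illustrative sketch (not the paper's code): build a small empirical Hankel
# matrix H_S from observed string frequencies and inspect its SVD.
from collections import Counter
import numpy as np

# Hypothetical sample S of strings over the alphabet {a, b}.
S = ["ab", "a", "ab", "abb", "b", "ab", "a", ""]
N = len(S)
freq = Counter(S)

# Index H_S by a chosen basis of prefixes (rows) and suffixes (columns);
# entry (u, v) holds the empirical frequency of the concatenation uv in S.
prefixes = ["", "a", "b", "ab"]
suffixes = ["", "a", "b", "bb"]
H_S = np.array([[freq[u + v] / N for v in suffixes] for u in prefixes])

# Spectral methods use a (truncated) SVD of H_S to recover the parameters of
# a weighted automaton; here we only compute the decomposition itself.
U, s, Vt = np.linalg.svd(H_S)
print(np.round(H_S, 3))
print("singular values:", np.round(s, 3))
```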

© JMLR 2016.