
Estimation and Testing Under Sparsity [electronic resource] : École d'Été de Probabilités de Saint-Flour XLV – 2015 / by Sara van de Geer.

By: van de Geer, Sara
Material type: Text
Series: École d'Été de Probabilités de Saint-Flour ; 2159
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2016
Description: XIII, 274 p. online resource
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9783319327747
Additional physical formats: Printed edition
DDC classification:
  • 519.2 (23rd ed.)
LOC classification:
  • QA273.A1-274.9
  • QA274-274.9
Contents:
1 Introduction.- 2 The Lasso.- 3 The square-root Lasso.- 4 The bias of the Lasso and worst possible sub-directions.- 5 Confidence intervals using the Lasso.- 6 Structured sparsity.- 7 General loss with norm-penalty.- 8 Empirical process theory for dual norms.- 9 Probability inequalities for matrices.- 10 Inequalities for the centred empirical risk and its derivative.- 11 The margin condition.- 12 Some worked-out examples.- 13 Brouwer's fixed point theorem and sparsity.- 14 Asymptotically linear estimators of the precision matrix.- 15 Lower bounds for sparse quadratic forms.- 16 Symmetrization, contraction and concentration.- 17 Chaining including concentration.- 18 Metric structure of convex hulls.
In: Springer eBooks
Summary: Taking the Lasso method as its starting point, this book describes the main ingredients needed to study general loss functions and sparsity-inducing regularizers. It also provides a semi-parametric approach to establishing confidence intervals and tests. Sparsity-inducing methods have proven to be very useful in the analysis of high-dimensional data. Examples include the Lasso and group Lasso methods, and the least squares method with other norm-penalties, such as the nuclear norm. The illustrations provided include generalized linear models, density estimation, matrix completion and sparse principal components. Each chapter ends with a problem section. The book can be used as a textbook for a graduate or PhD course.
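For orientation, a minimal sketch of the Lasso estimator the summary takes as its starting point, written in generic textbook notation (the book's own normalization of the penalty and noise model may differ):

```latex
% Sketch of the Lasso in the standard linear model Y = X beta^0 + eps,
% with design X (n x p) and tuning parameter lambda > 0.
% Notation is a generic convention, not quoted from the book.
\hat{\beta} := \arg\min_{\beta \in \mathbb{R}^p}
  \left\{ \frac{1}{n}\,\| Y - X\beta \|_2^2 + 2\lambda \|\beta\|_1 \right\}
```

The ℓ1-penalty is what induces sparsity in the minimizer; replacing it by a group ℓ1-norm or the nuclear norm gives the group Lasso and the matrix-completion estimators the summary also mentions.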
No physical items for this record

