000 04063nam a22006135i 4500
001 978-3-540-44507-4
003 DE-He213
005 20190213151129.0
007 cr nn 008mamaa
008 121227s2004 gw | s |||| 0|eng d
020 _a9783540445074
_9978-3-540-44507-4
024 7 _a10.1007/b99352
_2doi
050 4 _aQA273.A1-274.9
050 4 _aQA274-274.9
072 7 _aPBT
_2bicssc
072 7 _aMAT029000
_2bisacsh
072 7 _aPBT
_2thema
072 7 _aPBWL
_2thema
082 0 4 _a519.2
_223
100 1 _aCatoni, Olivier.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
245 1 0 _aStatistical Learning Theory and Stochastic Optimization
_h[electronic resource] :
_bEcole d’Eté de Probabilités de Saint-Flour XXXI - 2001 /
_cby Olivier Catoni ; edited by Jean Picard.
264 1 _aBerlin, Heidelberg :
_bSpringer Berlin Heidelberg :
_bImprint: Springer,
_c2004.
300 _aVIII, 284 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aLecture Notes in Mathematics,
_x0075-8434 ;
_v1851
505 0 _aUniversal Lossless Data Compression -- Links Between Data Compression and Statistical Estimation -- Non Cumulated Mean Risk -- Gibbs Estimators -- Randomized Estimators and Empirical Complexity -- Deviation Inequalities -- Markov Chains with Exponential Transitions -- References -- Index.
520 _aStatistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea to use, as is often done in practice, a notoriously "wrong" (i.e. over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of stochastic optimization algorithms of common use in computing estimators. The author focuses on non-asymptotic bounds of the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.
650 0 _aDistribution (Probability theory)
650 0 _aMathematical statistics.
650 0 _aMathematical optimization.
650 0 _aArtificial intelligence.
650 0 _aMathematics.
650 0 _aNumerical analysis.
650 1 4 _aProbability Theory and Stochastic Processes.
_0http://scigraph.springernature.com/things/product-market-codes/M27004
650 2 4 _aStatistical Theory and Methods.
_0http://scigraph.springernature.com/things/product-market-codes/S11001
650 2 4 _aOptimization.
_0http://scigraph.springernature.com/things/product-market-codes/M26008
650 2 4 _aArtificial Intelligence.
_0http://scigraph.springernature.com/things/product-market-codes/I21000
650 2 4 _aInformation and Communication, Circuits.
_0http://scigraph.springernature.com/things/product-market-codes/M13038
650 2 4 _aNumerical Analysis.
_0http://scigraph.springernature.com/things/product-market-codes/M14050
700 1 _aPicard, Jean.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
710 2 _aSpringerLink (Online service)
773 0 _tSpringer eBooks
776 0 8 _iPrinted edition:
_z9783540225720
776 0 8 _iPrinted edition:
_z9783662203248
830 0 _aLecture Notes in Mathematics,
_x0075-8434 ;
_v1851
856 4 0 _uhttps://doi.org/10.1007/b99352
912 _aZDB-2-SMA
912 _aZDB-2-LNM
912 _aZDB-2-BAE
999 _c9622
_d9622