Statistics for High-Dimensional Data (高维数据统计学)

ISBN: 9787519211677
Authors: Peter Bühlmann / Sara van de Geer
Publisher: 世界图书出版公司
Publication date: May 2016
Series: Springer Series in Statistics (photographic reprint edition)
Binding: Paperback
Price: CNY 95.00
Pages: 556



Statistics for High-Dimensional Data: Methods, Theory and Applications

Peter Bühlmann / Sara van de Geer   

Synopsis

Peter Bühlmann, of ETH Zürich, is a well-known expert in high-dimensional statistics and causal inference. Statistics for High-Dimensional Data is a work at the frontier of statistics. The high-dimensional data it addresses is a hot topic in theoretical research and also has wide practical applications. The book focuses on the Lasso and other variants of ℓ1 methods, and also covers topics such as boosting.
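The Lasso, the book's central tool, adds an ℓ1 penalty to the least-squares criterion, which sets many coefficients exactly to zero. As a minimal illustrative sketch (not taken from the book's text): under an orthonormal design, which the book treats in Section 2.3, each Lasso coefficient is just the soft-thresholded ordinary-least-squares coefficient. The coefficient values and the choice λ = 1.0 below are hypothetical.

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator: shrink z toward zero by lam,
    setting it exactly to zero when |z| <= lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Hypothetical OLS coefficients; lambda = 1.0 is an arbitrary choice.
ols = [2.5, -0.25, 0.75, -1.5]
lasso = [soft_threshold(b, 1.0) for b in ols]
print(lasso)  # [1.5, 0.0, 0.0, -0.5]: small coefficients zeroed, large ones shrunk
```

This sparsity-inducing behavior is what makes the Lasso suitable for variable selection when the number of parameters exceeds the sample size.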

Contents

1 Introduction
1.1 The framework
1.2 The possibilities and challenges
1.3 About the book
1.3.1 Organization of the book
1.4 Some examples
1.4.1 Prediction and biomarker discovery in genomics
2 Lasso for linear models
2.1 Organization of the chapter
2.2 Introduction and preliminaries
2.2.1 The Lasso estimator
2.3 Orthonormal design
2.4 Prediction
2.4.1 Practical aspects about the Lasso for prediction
2.4.2 Some results from asymptotic theory
2.5 Variable screening and ‖β̂ − β⁰‖q-norms
2.5.1 Tuning parameter selection for variable screening
2.5.2 Motif regression for DNA binding sites
2.6 Variable selection
2.6.1 Neighborhood stability and irrepresentable condition
2.7 Key properties and corresponding assumptions: a summary
2.8 The adaptive Lasso: a two-stage procedure
2.8.1 An illustration: simulated data and motif regression
2.8.2 Orthonormal design
2.8.3 The adaptive Lasso: variable selection under weak conditions
2.8.4 Computation
2.8.5 Multi-step adaptive Lasso
2.8.6 Non-convex penalty functions
2.9 Thresholding the Lasso
2.10 The relaxed Lasso
2.11 Degrees of freedom of the Lasso
2.12 Path-following algorithms
2.12.1 Coordinatewise optimization and shooting algorithms
2.13 Elastic net: an extension
Problems
3 Generalized linear models and the Lasso
3.1 Organization of the chapter
3.2 Introduction and preliminaries
3.2.1 The Lasso estimator: penalizing the negative log-likelihood
3.3 Important examples of generalized linear models
3.3.1 Binary response variable and logistic regression
3.3.2 Poisson regression
3.3.3 Multi-category response variable and multinomial distribution
Problems
4 The group Lasso
4.1 Organization of the chapter
4.2 Introduction and preliminaries
4.2.1 The group Lasso penalty
4.3 Factor variables as covariates
4.3.1 Prediction of splice sites in DNA sequences
4.4 Properties of the group Lasso for generalized linear models
4.5 The generalized group Lasso penalty
4.5.1 Groupwise prediction penalty and parametrization invariance
4.6 The adaptive group Lasso
4.7 Algorithms for the group Lasso
4.7.1 Block coordinate descent
4.7.2 Block coordinate gradient descent
Problems
5 Additive models and many smooth univariate functions
5.1 Organization of the chapter
5.2 Introduction and preliminaries
5.2.1 Penalized maximum likelihood for additive models
5.3 The sparsity-smoothness penalty
5.3.1 Orthogonal basis and diagonal smoothing matrices
5.3.2 Natural cubic splines and Sobolev spaces
5.3.3 Computation
5.4 A sparsity-smoothness penalty of group Lasso type
5.4.1 Computational algorithm
5.4.2 Alternative approaches
5.5 Numerical examples
5.5.1 Simulated example
5.5.2 Motif regression
5.6 Prediction and variable selection
5.7 Generalized additive models
5.8 Linear model with varying coefficients
5.8.1 Properties for prediction
5.8.2 Multivariate linear model
5.9 Multitask learning
Problems
6 Theory for the Lasso
6.1 Organization of this chapter
6.2 Least squares and the Lasso
6.2.1 Introduction
6.2.2 The result assuming the truth is linear
6.2.3 Linear approximation of the truth
6.2.4 A further refinement: handling smallish coefficients
6.3 The setup for general convex loss
6.4 The margin condition
6.5 Generalized linear model without penalty
6.6 Consistency of the Lasso for general loss
6.7 An oracle inequality
6.8 The ℓq-error for 1 ≤ q ≤ 2
6.8.1 Application to least squares assuming the truth is linear
6.8.2 Application to general loss and a sparse approximation of the truth
6.9 The weighted Lasso
6.10 The adaptively weighted Lasso
6.11 Concave penalties
6.11.1 Sparsity oracle inequalities for least squares with ℓr-penalty
6.11.2 Proofs of this section (Section 6.11)
6.12 Compatibility and (random) matrices
6.13 On the compatibility condition
6.13.1 Direct bounds for the compatibility constant
6.13.2 Bounds using ‖βS‖₁² ≤ s‖βS‖₂²
6.13.3 Sets N containing S
6.13.4 Restricted isometry
6.13.5 Sparse eigenvalues
6.13.6 Further coherence notions
6.13.7 An overview of the various eigenvalue flavored constants
Problems
7 Variable selection with the Lasso
7.1 Introduction
7.2 Some results from the literature
7.3 Organization of this chapter
7.4 The beta-min condition
7.5 The irrepresentable condition in the noiseless case
7.5.1 Definition of the irrepresentable condition
7.5.2 The KKT conditions
7.5.3 Necessity and sufficiency for variable selection
7.5.4 The irrepresentable condition implies the compatibility condition
7.5.5 The irrepresentable condition and restricted regression
7.5.6 Selecting a superset of the true active set
7.5.7 The weighted irrepresentable condition
7.5.8 The weighted irrepresentable condition and restricted regression
7.5.9 The weighted Lasso with "ideal" weights
7.6 Definition of the adaptive and thresholded Lasso
7.6.1 Definition of the adaptive Lasso
7.6.2 Definition of the thresholded Lasso
7.6.3 Order symbols
7.7 A recollection of the results obtained in Chapter 6
7.8 The adaptive Lasso and thresholding: invoking sparse eigenvalues
7.8.1 The conditions on the tuning parameters
7.8.2 The results
7.8.3 Comparison with the Lasso
7.8.4 Comparison between adaptive and thresholded Lasso
7.8.5 Bounds for the number of false negatives
7.8.6 Imposing beta-min conditions
7.9 The adaptive Lasso without invoking sparse eigenvalues
7.9.1 The condition on the tuning parameter
7.9.2 The results
7.10 Some concluding remarks
7.11 Technical complements for the noiseless case without sparse eigenvalues
7.11.1 Prediction error for the noiseless (weighted) Lasso
7.11.2 The number of false positives of the noiseless (weighted) Lasso
7.11.3 Thresholding the noiseless initial estimator
7.11.4 The noiseless adaptive Lasso
7.12 Technical complements for the noisy case without sparse eigenvalues
7.13 Selection with concave penalties
Problems
8 Theory for ℓ1/ℓ2-penalty procedures
8.1 Introduction
8.2 Organization and notation of this chapter
8.3 Regression with group structure
8.3.1 The loss function and penalty
8.3.2 The empirical process
8.3.3 The group Lasso compatibility condition
8.3.4 A group Lasso sparsity oracle inequality
8.3.5 Extensions
8.4 High—dimensional additive model
8.4.1 The loss function and penalty
8.4.2 The empirical process
8.4.3 The smoothed Lasso compatibility condition
8.4.4 A smoothed group Lasso sparsity oracle inequality
8.4.5 On the choice of the penalty
8.5 Linear model with time-varying coefficients
8.5.1 The loss function and penalty
8.5.2 The empirical process
8.5.3 The compatibility condition for the time-varying coefficients model
8.5.4 A sparsity oracle inequality for the time-varying coefficients model
8.6 Multivariate linear model and multitask learning
8.6.1 The loss function and penalty
8.6.2 The empirical process
8.6.3 The multitask compatibility condition
8.6.4 A multitask sparsity oracle inequality
8.7 The approximation condition for the smoothed group Lasso
8.7.1 Sobolev smoothness
8.7.2 Diagonalized smoothness
Problems
……
9 Non-convex loss functions and ℓ1-regularization
10 Stable solutions
11 P-values for linear models and beyond
12 Boosting and greedy algorithms
13 Graphical modeling
14 Probability and moment inequalities
Author Index
Index
References
