Algorithmic learning in a random world / Vladimir Vovk, Alexander Gammerman, and Glenn Shafer
Author Vovk Vladimir <1960->
Edition [2nd ed.]
Publication/distribution/printing Cham, Switzerland : Springer International Publishing, [2022]
Physical description 1 online resource (490 pages)
Classification 518.1
Topical subject Algorithms
Algorithms - Study and teaching
Teoria de la predicció
Algorismes
Processos estocàstics
Genre/form subject Llibres electrònics
ISBN 3-031-06649-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Contents -- Preface to the Second Edition -- Preface to the First Edition -- Notation and Abbreviations -- Sets, Bags, and Sequences -- Stochastics -- Machine Learning -- Programming -- Confidence Prediction -- Other Notations -- Abbreviations -- 1 Introduction -- 1.1 Machine Learning -- 1.1.1 Learning Under Randomness -- 1.1.2 Learning Under Unconstrained Randomness -- 1.2 A Shortcoming of Statistical Learning Theory -- 1.2.1 The Hold-Out Estimate of Confidence -- 1.2.2 The Contribution of This Book -- 1.3 The Online Framework -- 1.3.1 Online Learning -- 1.3.2 Online/Offline Compromises -- 1.3.3 One-Off and Offline Learning -- 1.3.4 Induction, Transduction, and the Online Framework -- 1.4 Conformal Prediction -- 1.4.1 Nested Prediction Sets -- 1.4.2 Validity -- 1.4.3 Efficiency -- 1.4.4 Conditionality -- 1.4.5 Flexibility of Conformal Predictors -- 1.5 Probabilistic Prediction Under Unconstrained Randomness -- 1.5.1 Universally Consistent Probabilistic Predictor -- 1.5.2 Probabilistic Prediction Using a Finite Dataset -- 1.5.3 Venn Prediction -- 1.5.4 Conformal Predictive Distributions -- 1.6 Beyond Randomness -- 1.6.1 Testing Randomness -- 1.6.2 Online Compression Models -- 1.7 Context -- Part I Set Prediction -- 2 Conformal Prediction: General Case and Regression -- 2.1 Confidence Predictors -- 2.1.1 Assumptions -- 2.1.2 Simple Predictors and Confidence Predictors -- 2.1.3 Validity -- 2.1.4 Randomized Confidence Predictors -- 2.1.5 Confidence Predictors Over a Finite Horizon -- 2.1.6 One-Off and Offline Confidence Predictors -- 2.2 Conformal Predictors -- 2.2.1 Bags -- 2.2.2 Nonconformity and Conformity -- 2.2.3 p-Values -- 2.2.4 Definition of Conformal Predictors -- 2.2.5 Validity -- 2.2.6 Smoothed Conformal Predictors -- 2.2.7 Finite-Horizon Conformal Prediction -- 2.2.8 One-Off and Offline Conformal Predictors.
2.2.9 General Schemes for Defining Nonconformity -- Conformity to a Bag -- Conformity to a Property -- 2.2.10 Deleted Conformity Measures -- 2.3 Conformalized Ridge Regression -- 2.3.1 Least Squares and Ridge Regression -- 2.3.2 Basic CRR -- 2.3.3 Two Modifications -- 2.3.4 Dual Form Ridge Regression -- 2.4 Conformalized Nearest Neighbours Regression -- 2.5 Efficiency of Conformalized Ridge Regression -- 2.5.1 Hard and Soft Models -- 2.5.2 Bayesian Ridge Regression -- 2.5.3 Efficiency of CRR -- 2.6 Are There Other Ways to Achieve Validity? -- 2.7 Conformal Transducers -- 2.7.1 Definitions and Properties of Validity -- 2.7.2 Normalized Confidence Predictors and Confidence Transducers -- 2.8 Proofs -- 2.8.1 Proof of Theorem 2.2 -- 2.8.2 Proof of Theorem 2.7 -- Regularizing the Rays in Upper CRR -- Proof Proper -- 2.8.3 Proof of Theorem 2.10 -- 2.9 Context -- 2.9.1 Exchangeability vs Randomness -- 2.9.2 Conformal Prediction -- 2.9.3 Two Equivalent Definitions of Nonconformity Measures -- 2.9.4 The Two Meanings of Conformity in Conformal Prediction -- 2.9.5 Examples of Nonconformity Measures -- 2.9.6 Kernel Methods -- 2.9.7 Burnaev-Wasserman Programme -- 2.9.8 Completeness Results -- 3 Conformal Prediction: Classification and General Case -- 3.1 Criteria of Efficiency for Conformal Prediction -- 3.1.1 Basic Criteria -- 3.1.2 Other Prior Criteria -- 3.1.3 Observed Criteria -- 3.1.4 Idealised Setting -- 3.1.5 Conditionally Proper Criteria of Efficiency -- 3.1.6 Criteria of Efficiency that Are not Conditionally Proper -- 3.1.7 Discussion -- 3.2 More Ways of Computing Nonconformity Scores -- 3.2.1 Nonconformity Scores from Nearest Neighbours -- 3.2.2 Nonconformity Scores from Support Vector Machines -- 3.2.3 Reducing Classification Problems to the Binary Case -- 3.3 Weak Teachers -- 3.3.1 Imperfectly Taught Predictors -- 3.3.2 Weak Validity.
3.3.3 Strong Validity -- 3.3.4 Iterated Logarithm Validity -- 3.3.5 Efficiency -- 3.4 Proofs -- 3.4.1 Proofs for Sect.3.1 -- Proof of Theorem 3.1 -- Proof of Theorem 3.2 -- Proof of Theorem 3.3 -- Proof of Theorem 3.4 -- 3.4.2 Proofs for Sect.3.3 -- Proof of Theorem 3.7, Part I -- Proof of Theorem 3.7, Part II -- Proof of Theorem 3.9 -- Proof of Theorem 3.13 -- 3.5 Context -- 3.5.1 Criteria of Efficiency -- 3.5.2 Examples of Nonconformity Measures -- 3.5.3 Universal Predictors -- 3.5.4 Weak Teachers -- 4 Modifications of Conformal Predictors -- 4.1 The Topics of This Chapter -- 4.2 Inductive Conformal Predictors -- 4.2.1 Inductive Conformal Predictors in the Online Mode -- 4.2.2 Inductive Conformal Predictors in the Offline and Semi-Online Modes -- 4.2.3 The General Scheme for Defining Nonconformity -- 4.2.4 Normalization and Hyper-Parameter Selection -- 4.3 Further Ways of Computing Nonconformity Scores -- 4.3.1 Nonconformity Measures Considered Earlier -- 4.3.2 De-Bayesing -- 4.3.3 Neural Networks and Other Multiclass Scoring Classifiers -- 4.3.4 Decision Trees and Random Forests -- 4.3.5 Binary Scoring Classifiers -- 4.3.6 Logistic Regression -- 4.3.7 Regression and Bootstrap -- 4.3.8 Training Inductive Conformal Predictors -- 4.4 Cross-Conformal Prediction -- 4.4.1 Definition of Cross-Conformal Predictors -- 4.4.2 Computational Efficiency -- 4.4.3 Validity and Lack Thereof for Cross-Conformal Predictors -- 4.5 Transductive Conformal Predictors -- 4.5.1 Definition -- 4.5.2 Validity -- 4.6 Conditional Conformal Predictors -- 4.6.1 One-Off Conditional Conformal Predictors -- 4.6.2 Mondrian Conformal Predictors and Transducers -- 4.6.3 Using Mondrian Conformal Transducers for Prediction -- 4.6.4 Generality of Mondrian Taxonomies -- 4.6.5 Conformal Prediction -- 4.6.6 Inductive Conformal Prediction -- 4.6.7 Label-Conditional Conformal Prediction.
4.6.8 Object-Conditional Conformal Prediction -- 4.7 Training-Conditional Validity -- 4.7.1 Conditional Validity -- 4.7.2 Training-Conditional Validity of Inductive Conformal Predictors -- 4.8 Context -- 4.8.1 Computationally Efficient Hedged Prediction -- 4.8.2 Specific Learning Algorithms and Nonconformity Measures -- 4.8.3 Training Conformal Predictors -- 4.8.4 Cross-Conformal Predictors and Alternative Approaches -- 4.8.5 Transductive Conformal Predictors -- 4.8.6 Conditional Conformal Predictors -- Part II Probabilistic Prediction -- 5 Impossibility Results -- 5.1 Introduction -- 5.2 Diverse Datasets -- 5.3 Impossibility of Estimation of Probabilities -- 5.3.1 Binary Case -- 5.3.2 Multiclass Case -- 5.4 Proof of Theorem 5.2 -- 5.4.1 Probability Estimators and Statistical Tests -- 5.4.2 Complete Statistical Tests -- 5.4.3 Restatement of the Theorem in Terms of Statistical Tests -- 5.4.4 The Proof of the Theorem -- 5.5 Context -- 5.5.1 More Advanced Results -- 5.5.2 Density Estimation, Regression Estimation, and Regression with Deterministic Objects -- 5.5.3 Universal Probabilistic Predictors -- 5.5.4 Algorithmic Randomness Perspective -- 6 Probabilistic Classification: Venn Predictors -- 6.1 Introduction -- 6.2 Venn Predictors -- 6.2.1 Validity of One-Off Venn Predictors -- 6.2.2 Are There Other Ways to Achieve Perfect Calibration? -- 6.2.3 Venn Prediction with Binary Labels and No Objects -- 6.3 A Universal Venn Predictor -- 6.4 Venn-Abers Predictors -- 6.4.1 Full Venn-Abers Predictors -- 6.4.2 Inductive Venn-Abers Predictors -- 6.4.3 Probabilistic Predictors Derived from Venn Predictors -- 6.4.4 Cross Venn-Abers Predictors -- 6.4.5 Merging Multiprobability Predictions into a Probabilistic Prediction -- 6.5 Proofs -- 6.5.1 Proof of Theorem 6.4 -- 6.5.2 PAVA and the Proof of Lemma 6.6 -- 6.5.3 Proof of Proposition 6.7 -- 6.6 Context.
6.6.1 Risk and Uncertainty -- 6.6.2 John Venn, Frequentist Probability, and the Problem of the Reference Class -- 6.6.3 Online Venn Predictors Are Calibrated -- 6.6.4 Isotonic Regression -- 7 Probabilistic Regression: Conformal Predictive Systems -- 7.1 Introduction -- 7.2 Conformal Predictive Systems -- 7.2.1 Basic Definitions -- 7.2.2 Properties of Validity -- 7.2.3 Simplest Example: Monotonic Conformity Measures -- 7.2.4 Criterion of Being a CPS -- 7.3 Least Squares Prediction Machine -- 7.3.1 Three Kinds of LSPM -- 7.3.2 The Studentized LSPM in an Explicit Form -- 7.3.3 The Offline Version of the Studentized LSPM -- 7.3.4 The Ordinary LSPM -- 7.3.5 Asymptotic Efficiency of the LSPM -- 7.3.6 Illustrations -- 7.4 Kernel Ridge Regression Prediction Machine -- 7.4.1 Explicit Forms of the KRRPM -- 7.4.2 Limitation of the KRRPM -- 7.5 Nearest Neighbours Prediction Machine -- 7.6 Universal Conformal Predictive Systems -- 7.6.1 Definitions -- 7.6.2 Universal Conformal Predictive Systems -- 7.6.3 Universal Deterministic Predictive Systems -- 7.7 Applications to Decision Making -- 7.7.1 A Standard Problem of Decision Making -- 7.7.2 Examples -- 7.7.3 Asymptotically Efficient Decision Making -- 7.7.4 Dangers of Overfitting -- 7.8 Computationally Efficient Versions -- 7.8.1 Inductive Conformal Predictive Systems -- 7.8.2 Cross-Conformal Predictive Distributions -- 7.8.3 Practical Aspects -- 7.8.4 Beyond Randomness -- 7.9 Proofs and Calculations -- 7.9.1 Proofs for Sect.7.2 -- Proof of Lemma 7.1 -- Proof of Proposition 7.2 -- 7.9.2 Proofs for Sect.7.3 -- Proof of Proposition 7.4 -- Proof of Proposition 7.5 -- Proof of Proposition 7.6 -- Proof of Proposition 7.7 -- Proof of Proposition 7.8 -- Computations for the Studentized LSPM -- The Ordinary LSPM -- Proof of (7.22) -- 7.9.3 Proof of Theorem 7.16 -- 7.9.4 Proofs for Sect.7.8 -- Proof of Proposition 7.17.
Proof of Proposition 7.18.
Record Nr. UNISA-996503551103316
Vovk Vladimir <1960->  
Cham, Switzerland : Springer International Publishing, [2022]
Printed material
Available at: Univ. di Salerno
Algorithmic learning in a random world / Vladimir Vovk, Alexander Gammerman, and Glenn Shafer
Author Vovk Vladimir <1960->
Edition [2nd ed.]
Publication/distribution/printing Cham, Switzerland : Springer International Publishing, [2022]
Physical description 1 online resource (490 pages)
Classification 518.1
Topical subject Algorithms
Algorithms - Study and teaching
Teoria de la predicció
Algorismes
Processos estocàstics
Genre/form subject Llibres electrònics
ISBN 3-031-06649-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Contents -- Preface to the Second Edition -- Preface to the First Edition -- Notation and Abbreviations -- Sets, Bags, and Sequences -- Stochastics -- Machine Learning -- Programming -- Confidence Prediction -- Other Notations -- Abbreviations -- 1 Introduction -- 1.1 Machine Learning -- 1.1.1 Learning Under Randomness -- 1.1.2 Learning Under Unconstrained Randomness -- 1.2 A Shortcoming of Statistical Learning Theory -- 1.2.1 The Hold-Out Estimate of Confidence -- 1.2.2 The Contribution of This Book -- 1.3 The Online Framework -- 1.3.1 Online Learning -- 1.3.2 Online/Offline Compromises -- 1.3.3 One-Off and Offline Learning -- 1.3.4 Induction, Transduction, and the Online Framework -- 1.4 Conformal Prediction -- 1.4.1 Nested Prediction Sets -- 1.4.2 Validity -- 1.4.3 Efficiency -- 1.4.4 Conditionality -- 1.4.5 Flexibility of Conformal Predictors -- 1.5 Probabilistic Prediction Under Unconstrained Randomness -- 1.5.1 Universally Consistent Probabilistic Predictor -- 1.5.2 Probabilistic Prediction Using a Finite Dataset -- 1.5.3 Venn Prediction -- 1.5.4 Conformal Predictive Distributions -- 1.6 Beyond Randomness -- 1.6.1 Testing Randomness -- 1.6.2 Online Compression Models -- 1.7 Context -- Part I Set Prediction -- 2 Conformal Prediction: General Case and Regression -- 2.1 Confidence Predictors -- 2.1.1 Assumptions -- 2.1.2 Simple Predictors and Confidence Predictors -- 2.1.3 Validity -- 2.1.4 Randomized Confidence Predictors -- 2.1.5 Confidence Predictors Over a Finite Horizon -- 2.1.6 One-Off and Offline Confidence Predictors -- 2.2 Conformal Predictors -- 2.2.1 Bags -- 2.2.2 Nonconformity and Conformity -- 2.2.3 p-Values -- 2.2.4 Definition of Conformal Predictors -- 2.2.5 Validity -- 2.2.6 Smoothed Conformal Predictors -- 2.2.7 Finite-Horizon Conformal Prediction -- 2.2.8 One-Off and Offline Conformal Predictors.
2.2.9 General Schemes for Defining Nonconformity -- Conformity to a Bag -- Conformity to a Property -- 2.2.10 Deleted Conformity Measures -- 2.3 Conformalized Ridge Regression -- 2.3.1 Least Squares and Ridge Regression -- 2.3.2 Basic CRR -- 2.3.3 Two Modifications -- 2.3.4 Dual Form Ridge Regression -- 2.4 Conformalized Nearest Neighbours Regression -- 2.5 Efficiency of Conformalized Ridge Regression -- 2.5.1 Hard and Soft Models -- 2.5.2 Bayesian Ridge Regression -- 2.5.3 Efficiency of CRR -- 2.6 Are There Other Ways to Achieve Validity? -- 2.7 Conformal Transducers -- 2.7.1 Definitions and Properties of Validity -- 2.7.2 Normalized Confidence Predictors and Confidence Transducers -- 2.8 Proofs -- 2.8.1 Proof of Theorem 2.2 -- 2.8.2 Proof of Theorem 2.7 -- Regularizing the Rays in Upper CRR -- Proof Proper -- 2.8.3 Proof of Theorem 2.10 -- 2.9 Context -- 2.9.1 Exchangeability vs Randomness -- 2.9.2 Conformal Prediction -- 2.9.3 Two Equivalent Definitions of Nonconformity Measures -- 2.9.4 The Two Meanings of Conformity in Conformal Prediction -- 2.9.5 Examples of Nonconformity Measures -- 2.9.6 Kernel Methods -- 2.9.7 Burnaev-Wasserman Programme -- 2.9.8 Completeness Results -- 3 Conformal Prediction: Classification and General Case -- 3.1 Criteria of Efficiency for Conformal Prediction -- 3.1.1 Basic Criteria -- 3.1.2 Other Prior Criteria -- 3.1.3 Observed Criteria -- 3.1.4 Idealised Setting -- 3.1.5 Conditionally Proper Criteria of Efficiency -- 3.1.6 Criteria of Efficiency that Are not Conditionally Proper -- 3.1.7 Discussion -- 3.2 More Ways of Computing Nonconformity Scores -- 3.2.1 Nonconformity Scores from Nearest Neighbours -- 3.2.2 Nonconformity Scores from Support Vector Machines -- 3.2.3 Reducing Classification Problems to the Binary Case -- 3.3 Weak Teachers -- 3.3.1 Imperfectly Taught Predictors -- 3.3.2 Weak Validity.
3.3.3 Strong Validity -- 3.3.4 Iterated Logarithm Validity -- 3.3.5 Efficiency -- 3.4 Proofs -- 3.4.1 Proofs for Sect.3.1 -- Proof of Theorem 3.1 -- Proof of Theorem 3.2 -- Proof of Theorem 3.3 -- Proof of Theorem 3.4 -- 3.4.2 Proofs for Sect.3.3 -- Proof of Theorem 3.7, Part I -- Proof of Theorem 3.7, Part II -- Proof of Theorem 3.9 -- Proof of Theorem 3.13 -- 3.5 Context -- 3.5.1 Criteria of Efficiency -- 3.5.2 Examples of Nonconformity Measures -- 3.5.3 Universal Predictors -- 3.5.4 Weak Teachers -- 4 Modifications of Conformal Predictors -- 4.1 The Topics of This Chapter -- 4.2 Inductive Conformal Predictors -- 4.2.1 Inductive Conformal Predictors in the Online Mode -- 4.2.2 Inductive Conformal Predictors in the Offline and Semi-Online Modes -- 4.2.3 The General Scheme for Defining Nonconformity -- 4.2.4 Normalization and Hyper-Parameter Selection -- 4.3 Further Ways of Computing Nonconformity Scores -- 4.3.1 Nonconformity Measures Considered Earlier -- 4.3.2 De-Bayesing -- 4.3.3 Neural Networks and Other Multiclass Scoring Classifiers -- 4.3.4 Decision Trees and Random Forests -- 4.3.5 Binary Scoring Classifiers -- 4.3.6 Logistic Regression -- 4.3.7 Regression and Bootstrap -- 4.3.8 Training Inductive Conformal Predictors -- 4.4 Cross-Conformal Prediction -- 4.4.1 Definition of Cross-Conformal Predictors -- 4.4.2 Computational Efficiency -- 4.4.3 Validity and Lack Thereof for Cross-Conformal Predictors -- 4.5 Transductive Conformal Predictors -- 4.5.1 Definition -- 4.5.2 Validity -- 4.6 Conditional Conformal Predictors -- 4.6.1 One-Off Conditional Conformal Predictors -- 4.6.2 Mondrian Conformal Predictors and Transducers -- 4.6.3 Using Mondrian Conformal Transducers for Prediction -- 4.6.4 Generality of Mondrian Taxonomies -- 4.6.5 Conformal Prediction -- 4.6.6 Inductive Conformal Prediction -- 4.6.7 Label-Conditional Conformal Prediction.
4.6.8 Object-Conditional Conformal Prediction -- 4.7 Training-Conditional Validity -- 4.7.1 Conditional Validity -- 4.7.2 Training-Conditional Validity of Inductive Conformal Predictors -- 4.8 Context -- 4.8.1 Computationally Efficient Hedged Prediction -- 4.8.2 Specific Learning Algorithms and Nonconformity Measures -- 4.8.3 Training Conformal Predictors -- 4.8.4 Cross-Conformal Predictors and Alternative Approaches -- 4.8.5 Transductive Conformal Predictors -- 4.8.6 Conditional Conformal Predictors -- Part II Probabilistic Prediction -- 5 Impossibility Results -- 5.1 Introduction -- 5.2 Diverse Datasets -- 5.3 Impossibility of Estimation of Probabilities -- 5.3.1 Binary Case -- 5.3.2 Multiclass Case -- 5.4 Proof of Theorem 5.2 -- 5.4.1 Probability Estimators and Statistical Tests -- 5.4.2 Complete Statistical Tests -- 5.4.3 Restatement of the Theorem in Terms of Statistical Tests -- 5.4.4 The Proof of the Theorem -- 5.5 Context -- 5.5.1 More Advanced Results -- 5.5.2 Density Estimation, Regression Estimation, and Regression with Deterministic Objects -- 5.5.3 Universal Probabilistic Predictors -- 5.5.4 Algorithmic Randomness Perspective -- 6 Probabilistic Classification: Venn Predictors -- 6.1 Introduction -- 6.2 Venn Predictors -- 6.2.1 Validity of One-Off Venn Predictors -- 6.2.2 Are There Other Ways to Achieve Perfect Calibration? -- 6.2.3 Venn Prediction with Binary Labels and No Objects -- 6.3 A Universal Venn Predictor -- 6.4 Venn-Abers Predictors -- 6.4.1 Full Venn-Abers Predictors -- 6.4.2 Inductive Venn-Abers Predictors -- 6.4.3 Probabilistic Predictors Derived from Venn Predictors -- 6.4.4 Cross Venn-Abers Predictors -- 6.4.5 Merging Multiprobability Predictions into a Probabilistic Prediction -- 6.5 Proofs -- 6.5.1 Proof of Theorem 6.4 -- 6.5.2 PAVA and the Proof of Lemma 6.6 -- 6.5.3 Proof of Proposition 6.7 -- 6.6 Context.
6.6.1 Risk and Uncertainty -- 6.6.2 John Venn, Frequentist Probability, and the Problem of the Reference Class -- 6.6.3 Online Venn Predictors Are Calibrated -- 6.6.4 Isotonic Regression -- 7 Probabilistic Regression: Conformal Predictive Systems -- 7.1 Introduction -- 7.2 Conformal Predictive Systems -- 7.2.1 Basic Definitions -- 7.2.2 Properties of Validity -- 7.2.3 Simplest Example: Monotonic Conformity Measures -- 7.2.4 Criterion of Being a CPS -- 7.3 Least Squares Prediction Machine -- 7.3.1 Three Kinds of LSPM -- 7.3.2 The Studentized LSPM in an Explicit Form -- 7.3.3 The Offline Version of the Studentized LSPM -- 7.3.4 The Ordinary LSPM -- 7.3.5 Asymptotic Efficiency of the LSPM -- 7.3.6 Illustrations -- 7.4 Kernel Ridge Regression Prediction Machine -- 7.4.1 Explicit Forms of the KRRPM -- 7.4.2 Limitation of the KRRPM -- 7.5 Nearest Neighbours Prediction Machine -- 7.6 Universal Conformal Predictive Systems -- 7.6.1 Definitions -- 7.6.2 Universal Conformal Predictive Systems -- 7.6.3 Universal Deterministic Predictive Systems -- 7.7 Applications to Decision Making -- 7.7.1 A Standard Problem of Decision Making -- 7.7.2 Examples -- 7.7.3 Asymptotically Efficient Decision Making -- 7.7.4 Dangers of Overfitting -- 7.8 Computationally Efficient Versions -- 7.8.1 Inductive Conformal Predictive Systems -- 7.8.2 Cross-Conformal Predictive Distributions -- 7.8.3 Practical Aspects -- 7.8.4 Beyond Randomness -- 7.9 Proofs and Calculations -- 7.9.1 Proofs for Sect.7.2 -- Proof of Lemma 7.1 -- Proof of Proposition 7.2 -- 7.9.2 Proofs for Sect.7.3 -- Proof of Proposition 7.4 -- Proof of Proposition 7.5 -- Proof of Proposition 7.6 -- Proof of Proposition 7.7 -- Proof of Proposition 7.8 -- Computations for the Studentized LSPM -- The Ordinary LSPM -- Proof of (7.22) -- 7.9.3 Proof of Theorem 7.16 -- 7.9.4 Proofs for Sect.7.8 -- Proof of Proposition 7.17.
Proof of Proposition 7.18.
Record Nr. UNINA-9910635392203321
Vovk Vladimir <1960->  
Cham, Switzerland : Springer International Publishing, [2022]
Printed material
Available at: Univ. Federico II
Time series analysis : forecasting and control / George E.P. Box, Gwilym M. Jenkins, Gregory C. Reinsel
Author Box George E. P
Edition [4th ed.]
Publication/distribution/printing Hoboken, N.J. : John Wiley, c2008
Physical description 1 online resource (781 p.)
Classification 519.5/5
519.55
Other authors (Persons) Jenkins Gwilym M
Reinsel Gregory C
Series Wiley series in probability and statistics
Topical subject Anàlisi de sèries temporals
Teoria de la predicció
Sistemes de control per retroacció
Control automàtic
Models matemàtics
Time-series analysis
Prediction theory
Transfer functions
Feedback control systems - Mathematical models
ISBN 1-118-61919-6
1-118-61906-4
1-118-21087-5
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note pt. 1. Stochastic models and their forecasting -- pt. 2. Stochastic model building -- pt. 3. Transfer function and multivariate model building -- pt. 4. Design of discrete control schemes -- pt. 5. Charts and tables -- pt. 6. Exercises and problems.
Record Nr. UNINA-9910141180203321
Box George E. P  
Hoboken, N.J. : John Wiley, c2008
Printed material
Available at: Univ. Federico II