An introduction to bootstrap methods with applications to R / Michael R. Chernick, Robert A. LaBudde
Author Chernick Michael R.
Edition [1st edition]
Publication/distribution/printing Hoboken, New Jersey : Wiley, 2011
Physical description 1 online resource (236 p.)
Discipline 519.5/4
Topical subject Bootstrap (Statistics)
R (Computer program language)
Genre/form subject Electronic books.
ISBN 1-118-62541-2
1-118-62545-5
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover ; Title Page ; Copyright ; Contents ; Preface ; Acknowledgments ; List of Tables ; 1: INTRODUCTION ; 1.1 Historical Background ; 1.2 Definition and Relationship to the Delta Method and Other Resampling Methods ; 1.2.1 Jackknife ; 1.2.2 Delta Method ; 1.2.3 Cross Validation ; 1.2.4 Subsampling ; 1.3 Wide Range of Applications ; 1.4 The Bootstrap and the R Language System ; 1.5 Historical Notes ; 1.6 Exercises ; References ; 2: ESTIMATION ; 2.1 Estimating Bias ; 2.1.1 Bootstrap Adjustment ; 2.1.2 Error Rate Estimation in Discriminant Analysis ; 2.1.3 Simple Example of Linear Discrimination and Bootstrap Error Rate Estimation ; 2.1.4 Patch Data Example ; 2.2 Estimating Location ; 2.2.1 Estimating a Mean ; 2.2.2 Estimating a Median ; 2.3 Estimating Dispersion ; 2.3.1 Estimating an Estimate's Standard Error ; 2.3.2 Estimating Interquartile Range ; 2.4 Linear Regression ; 2.4.1 Overview ; 2.4.2 Bootstrapping Residuals ; 2.4.3 Bootstrapping Pairs (Response and Predictor Vector) ; 2.4.4 Heteroscedasticity of Variance: The Wild Bootstrap ; 2.4.5 A Special Class of Linear Regression Models: Multivariable Fractional Polynomials ; 2.5 Nonlinear Regression ; 2.5.1 Examples of Nonlinear Models ; 2.5.2 A Quasi Optical Experiment ; 2.6 Nonparametric Regression ; 2.6.1 Examples of Nonparametric Regression Models ; 2.6.2 Bootstrap Bagging ; 2.7 Historical Notes ; 2.8 Exercises ; References ; 3: CONFIDENCE INTERVALS ; 3.1 Subsampling, Typical Value Theorem, and Efron's Percentile Method ; 3.2 Bootstrap-t ; 3.3 Iterated Bootstrap ; 3.4 Bias Corrected (BC) Bootstrap ; 3.5 BCa and ABC ; 3.6 Tilted Bootstrap ; 3.7 Variance Estimation with Small Sample Sizes ; 3.8 Historical Notes ; 3.9 Exercises ; References ; 4: HYPOTHESIS TESTING ; 4.1 Relationship to Confidence Intervals ; 4.2 Why Test Hypotheses Differently? ; 4.3 Tendril DX Example ; 4.4 Klingenberg Example: Binary Dose-Response ; 4.5 Historical Notes ; 4.6 Exercises ; References ; 5: TIME SERIES ; 5.1 Forecasting Methods ; 5.2 Time Domain Models ; 5.3 Can Bootstrapping Improve Prediction Intervals? ; 5.4 Model-Based Methods ; 5.4.1 Bootstrapping Stationary Autoregressive Processes ; 5.4.2 Bootstrapping Explosive Autoregressive Processes ; 5.4.3 Bootstrapping Unstable Autoregressive Processes ; 5.4.4 Bootstrapping Stationary ARMA Processes ; 5.5 Block Bootstrapping for Stationary Time Series ; 5.6 Dependent Wild Bootstrap (DWB) ; 5.7 Frequency-Based Approaches for Stationary Time Series ; 5.8 Sieve Bootstrap ; 5.9 Historical Notes ; 5.10 Exercises ; References ; 6: BOOTSTRAP VARIANTS ; 6.1 Bayesian Bootstrap ; 6.2 Smoothed Bootstrap ; 6.3 Parametric Bootstrap ; 6.4 Double Bootstrap ; 6.5 The M-out-of-n Bootstrap ; 6.6 The Wild Bootstrap ; 6.7 Historical Notes ; 6.8 Exercises ; References ; 7: SPECIAL TOPICS ; 7.1 Spatial Data ; 7.1.1 Kriging ; 7.1.2 Asymptotics for Spatial Data ; 7.1.3 Block Bootstrap on Regular Grids ; 7.1.4 Block Bootstrap on Irregular Grids
Record No. UNINA-9910464196903321
Available at: Univ. Federico II
An introduction to bootstrap methods with applications to R / Michael R. Chernick, Robert A. LaBudde
Author Chernick Michael R.
Edition [1st edition]
Publication/distribution/printing Hoboken, New Jersey : Wiley, 2011
Physical description 1 online resource (236 p.)
Discipline 519.5/4
Series New York Academy of Sciences
Topical subject Bootstrap (Statistics)
R (Computer program language)
ISBN 1-118-62541-2
1-118-62545-5
Classification MAT029000
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover ; Title Page ; Copyright ; Contents ; Preface ; Acknowledgments ; List of Tables ; 1: INTRODUCTION ; 1.1 Historical Background ; 1.2 Definition and Relationship to the Delta Method and Other Resampling Methods ; 1.2.1 Jackknife ; 1.2.2 Delta Method ; 1.2.3 Cross Validation ; 1.2.4 Subsampling ; 1.3 Wide Range of Applications ; 1.4 The Bootstrap and the R Language System ; 1.5 Historical Notes ; 1.6 Exercises ; References ; 2: ESTIMATION ; 2.1 Estimating Bias ; 2.1.1 Bootstrap Adjustment ; 2.1.2 Error Rate Estimation in Discriminant Analysis ; 2.1.3 Simple Example of Linear Discrimination and Bootstrap Error Rate Estimation ; 2.1.4 Patch Data Example ; 2.2 Estimating Location ; 2.2.1 Estimating a Mean ; 2.2.2 Estimating a Median ; 2.3 Estimating Dispersion ; 2.3.1 Estimating an Estimate's Standard Error ; 2.3.2 Estimating Interquartile Range ; 2.4 Linear Regression ; 2.4.1 Overview ; 2.4.2 Bootstrapping Residuals ; 2.4.3 Bootstrapping Pairs (Response and Predictor Vector) ; 2.4.4 Heteroscedasticity of Variance: The Wild Bootstrap ; 2.4.5 A Special Class of Linear Regression Models: Multivariable Fractional Polynomials ; 2.5 Nonlinear Regression ; 2.5.1 Examples of Nonlinear Models ; 2.5.2 A Quasi Optical Experiment ; 2.6 Nonparametric Regression ; 2.6.1 Examples of Nonparametric Regression Models ; 2.6.2 Bootstrap Bagging ; 2.7 Historical Notes ; 2.8 Exercises ; References ; 3: CONFIDENCE INTERVALS ; 3.1 Subsampling, Typical Value Theorem, and Efron's Percentile Method ; 3.2 Bootstrap-t ; 3.3 Iterated Bootstrap ; 3.4 Bias Corrected (BC) Bootstrap ; 3.5 BCa and ABC ; 3.6 Tilted Bootstrap ; 3.7 Variance Estimation with Small Sample Sizes ; 3.8 Historical Notes ; 3.9 Exercises ; References ; 4: HYPOTHESIS TESTING ; 4.1 Relationship to Confidence Intervals ; 4.2 Why Test Hypotheses Differently? ; 4.3 Tendril DX Example ; 4.4 Klingenberg Example: Binary Dose-Response ; 4.5 Historical Notes ; 4.6 Exercises ; References ; 5: TIME SERIES ; 5.1 Forecasting Methods ; 5.2 Time Domain Models ; 5.3 Can Bootstrapping Improve Prediction Intervals? ; 5.4 Model-Based Methods ; 5.4.1 Bootstrapping Stationary Autoregressive Processes ; 5.4.2 Bootstrapping Explosive Autoregressive Processes ; 5.4.3 Bootstrapping Unstable Autoregressive Processes ; 5.4.4 Bootstrapping Stationary ARMA Processes ; 5.5 Block Bootstrapping for Stationary Time Series ; 5.6 Dependent Wild Bootstrap (DWB) ; 5.7 Frequency-Based Approaches for Stationary Time Series ; 5.8 Sieve Bootstrap ; 5.9 Historical Notes ; 5.10 Exercises ; References ; 6: BOOTSTRAP VARIANTS ; 6.1 Bayesian Bootstrap ; 6.2 Smoothed Bootstrap ; 6.3 Parametric Bootstrap ; 6.4 Double Bootstrap ; 6.5 The M-out-of-n Bootstrap ; 6.6 The Wild Bootstrap ; 6.7 Historical Notes ; 6.8 Exercises ; References ; 7: SPECIAL TOPICS ; 7.1 Spatial Data ; 7.1.1 Kriging ; 7.1.2 Asymptotics for Spatial Data ; 7.1.3 Block Bootstrap on Regular Grids ; 7.1.4 Block Bootstrap on Irregular Grids
Record No. UNINA-9910789481603321
Available at: Univ. Federico II
An introduction to bootstrap methods with applications to R / Michael R. Chernick, Robert A. LaBudde
Author Chernick Michael R.
Edition [1st edition]
Publication/distribution/printing Hoboken, New Jersey : Wiley, 2011
Physical description 1 online resource (236 p.)
Discipline 519.5/4
Series New York Academy of Sciences
Topical subject Bootstrap (Statistics)
R (Computer program language)
ISBN 1-118-62541-2
1-118-62545-5
Classification MAT029000
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover ; Title Page ; Copyright ; Contents ; Preface ; Acknowledgments ; List of Tables ; 1: INTRODUCTION ; 1.1 Historical Background ; 1.2 Definition and Relationship to the Delta Method and Other Resampling Methods ; 1.2.1 Jackknife ; 1.2.2 Delta Method ; 1.2.3 Cross Validation ; 1.2.4 Subsampling ; 1.3 Wide Range of Applications ; 1.4 The Bootstrap and the R Language System ; 1.5 Historical Notes ; 1.6 Exercises ; References ; 2: ESTIMATION ; 2.1 Estimating Bias ; 2.1.1 Bootstrap Adjustment ; 2.1.2 Error Rate Estimation in Discriminant Analysis ; 2.1.3 Simple Example of Linear Discrimination and Bootstrap Error Rate Estimation ; 2.1.4 Patch Data Example ; 2.2 Estimating Location ; 2.2.1 Estimating a Mean ; 2.2.2 Estimating a Median ; 2.3 Estimating Dispersion ; 2.3.1 Estimating an Estimate's Standard Error ; 2.3.2 Estimating Interquartile Range ; 2.4 Linear Regression ; 2.4.1 Overview ; 2.4.2 Bootstrapping Residuals ; 2.4.3 Bootstrapping Pairs (Response and Predictor Vector) ; 2.4.4 Heteroscedasticity of Variance: The Wild Bootstrap ; 2.4.5 A Special Class of Linear Regression Models: Multivariable Fractional Polynomials ; 2.5 Nonlinear Regression ; 2.5.1 Examples of Nonlinear Models ; 2.5.2 A Quasi Optical Experiment ; 2.6 Nonparametric Regression ; 2.6.1 Examples of Nonparametric Regression Models ; 2.6.2 Bootstrap Bagging ; 2.7 Historical Notes ; 2.8 Exercises ; References ; 3: CONFIDENCE INTERVALS ; 3.1 Subsampling, Typical Value Theorem, and Efron's Percentile Method ; 3.2 Bootstrap-t ; 3.3 Iterated Bootstrap ; 3.4 Bias Corrected (BC) Bootstrap ; 3.5 BCa and ABC ; 3.6 Tilted Bootstrap ; 3.7 Variance Estimation with Small Sample Sizes ; 3.8 Historical Notes ; 3.9 Exercises ; References ; 4: HYPOTHESIS TESTING ; 4.1 Relationship to Confidence Intervals ; 4.2 Why Test Hypotheses Differently? ; 4.3 Tendril DX Example ; 4.4 Klingenberg Example: Binary Dose-Response ; 4.5 Historical Notes ; 4.6 Exercises ; References ; 5: TIME SERIES ; 5.1 Forecasting Methods ; 5.2 Time Domain Models ; 5.3 Can Bootstrapping Improve Prediction Intervals? ; 5.4 Model-Based Methods ; 5.4.1 Bootstrapping Stationary Autoregressive Processes ; 5.4.2 Bootstrapping Explosive Autoregressive Processes ; 5.4.3 Bootstrapping Unstable Autoregressive Processes ; 5.4.4 Bootstrapping Stationary ARMA Processes ; 5.5 Block Bootstrapping for Stationary Time Series ; 5.6 Dependent Wild Bootstrap (DWB) ; 5.7 Frequency-Based Approaches for Stationary Time Series ; 5.8 Sieve Bootstrap ; 5.9 Historical Notes ; 5.10 Exercises ; References ; 6: BOOTSTRAP VARIANTS ; 6.1 Bayesian Bootstrap ; 6.2 Smoothed Bootstrap ; 6.3 Parametric Bootstrap ; 6.4 Double Bootstrap ; 6.5 The M-out-of-n Bootstrap ; 6.6 The Wild Bootstrap ; 6.7 Historical Notes ; 6.8 Exercises ; References ; 7: SPECIAL TOPICS ; 7.1 Spatial Data ; 7.1.1 Kriging ; 7.1.2 Asymptotics for Spatial Data ; 7.1.3 Block Bootstrap on Regular Grids ; 7.1.4 Block Bootstrap on Irregular Grids
Record No. UNINA-9910810818703321
Available at: Univ. Federico II
Introduction to linear models and statistical inference [electronic resource] / Steven J. Janke, Frederick Tinsley
Author Janke Steven J. <1947->
Publication/distribution/printing Hoboken, NJ : Wiley, c2005
Physical description 1 online resource (600 p.)
Discipline 519.5/4
519.54
Other authors (Persons) Tinsley Frederick <1951->
Topical subject Linear models (Statistics)
ISBN 1-280-27754-8
9786610277544
0-470-31534-2
0-471-74011-X
0-471-74010-1
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Introduction to Linear Models and Statistical Inference; Contents; INTRODUCTION: STATISTICAL QUESTIONS; 1. DATA: PLOTS AND LOCATION; 1.1 Plot the Data; 1.2 Measures of Location: Single Observations; 1.3 Measures of Location: Paired Observations; 1.4 Robust Measures of Location: Paired Observations; 1.5 Linear Algebra for Least Squares (Optional); Exercises; 2. DATA: DISPERSION AND CORRELATION; 2.1 Measures of Dispersion: Single Observations; 2.2 Measures of Dispersion: Paired Observations; 2.3 Robust Measures of Dispersion: Paired Observations; 2.4 Analysis of Variance; 2.5 Measures of Linear Relationship; 2.6 Analysis of Variance Using Linear Algebra (Optional); Exercises; 3. RANDOM VARIABLES: PROBABILITY AND DENSITY; 3.1 Random Variables; 3.2 Probability; 3.3 Finding Probabilities; 3.4 Densities: Discrete Random Variables; 3.5 Densities: Continuous Random Variables; 3.6 Binomial Random Variables; 3.7 Normal Random Variables; Exercises; 4. RANDOM VARIABLES: EXPECTATION AND VARIANCE; 4.1 Expectation of a Random Variable; 4.2 Properties of Expectation; 4.3 Independent Random Variables; 4.4 Variance of a Random Variable; 4.5 Correlation Coefficient; 4.6 Properties of Normal Random Variables; 4.7 Linear Algebra for Random Vectors (Optional); Exercises; 5. STATISTICAL INFERENCE; 5.1 Populations and Samples; 5.2 Unbiased Estimators; 5.3 Distribution of X; 5.4 Confidence Intervals; 5.5 Hypothesis Testing; 5.6 General Inference Problem; 5.7 The Runs Test for Randomness; 5.8 Testing for Normality; 5.9 Linear Algebra for Inference (Optional); Exercises; 6. SIMPLE LINEAR MODELS; 6.1 Basics of the Simple Linear Model; 6.2 Estimators for the Simple Linear Model; 6.3 Inference for the Slope; 6.4 Testing the Hypothesis b = 0; 6.5 Coefficient of Determination; 6.6 Inference for the Intercept; 6.7 Inference for the Variance; 6.8 Prediction Intervals; 6.9 Regression Through the Origin; 6.10 Earthquake Example; 6.11 Linear Algebra: The Simple Linear Model (Optional); Exercises; 7. LINEAR MODEL DIAGNOSTICS; 7.1 Residual Plots; 7.2 Standardized Residuals; 7.3 Testing Assumption 1: Is X a Valid Predictor?; 7.4 Testing Assumption 2: Does E(ei) = 0 for all i?; 7.5 Testing Assumption 2: Does Var(ei) = s2 for all i?; 7.6 Testing Assumption 3: Are the Errors Independent?; 7.7 Testing Assumption 4: Are the Errors Normal?; 7.8 Distribution of the Residuals; 7.9 Linear Algebra for Residuals (Optional); Exercises; 8. LINEAR MODELS: TWO INDEPENDENT VARIABLES; 8.1 Calculating Parameters; 8.2 Analysis of Variance; 8.3 The Effects of Independent Variables; 8.4 Inference for the Bivariate Linear Model; 8.5 Diagnostics for the Bivariate Linear Model; 8.6 Linear Algebra: Bivariate Linear Model (Optional); Exercises; 9. LINEAR MODELS: SEVERAL INDEPENDENT VARIABLES; 9.1 A Multivariate Example; 9.2 Analysis of Variance; 9.3 Inference for the Multivariate Linear Model; 9.4 Selecting Predictors; 9.5 Diagnostics for the Multivariate Model
Record No. UNINA-9910143571903321
Available at: Univ. Federico II
Introduction to statistics through resampling methods and R [electronic resource] / Phillip I. Good
Author Good Phillip I
Edition [2nd ed.]
Publication/distribution/printing Hoboken, N.J. : John Wiley & Sons, Inc., 2013
Physical description 1 online resource (224 p.)
Discipline 519.5/4
Topical subject Resampling (Statistics)
R (Computer program language)
ISBN 1-118-49759-7
1-118-49756-2
1-283-95001-4
1-118-49757-0
Classification MAT029000
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover; Title page; Copyright page; Contents; Preface; Chapter 1: Variation; 1.1 Variation; 1.2 Collecting Data; 1.2.1 A Worked-Through Example; 1.3 Summarizing Your Data; 1.3.1 Learning to Use R; 1.4 Reporting Your Results; 1.4.1 Picturing Data; 1.4.2 Better Graphics; 1.5 Types of Data; 1.5.1 Depicting Categorical Data; 1.6 Displaying Multiple Variables; 1.6.1 Entering Multiple Variables; 1.6.2 From Observations to Questions; 1.7 Measures of Location; 1.7.1 Which Measure of Location?; *1.7.2 The Geometric Mean; 1.7.3 Estimating Precision; 1.7.4 Estimating with the Bootstrap; 1.8 Samples and Populations; 1.8.1 Drawing a Random Sample; *1.8.2 Using Data That Are Already in Spreadsheet Form; 1.8.3 Ensuring the Sample Is Representative; 1.9 Summary and Review; Chapter 2: Probability; 2.1 Probability; 2.1.1 Events and Outcomes; 2.1.2 Venn Diagrams; 2.2 Binomial Trials; 2.2.1 Permutations and Rearrangements; *2.2.2 Programming Your Own Functions in R; 2.2.3 Back to the Binomial; 2.2.4 The Problem Jury; *2.3 Conditional Probability; 2.3.1 Market Basket Analysis; 2.3.2 Negative Results; 2.4 Independence; 2.5 Applications to Genetics; 2.6 Summary and Review; Chapter 3: Two Naturally Occurring Probability Distributions; 3.1 Distribution of Values; 3.1.1 Cumulative Distribution Function; 3.1.2 Empirical Distribution Function; 3.2 Discrete Distributions; 3.3 The Binomial Distribution; *3.3.1 Expected Number of Successes in n Binomial Trials; 3.3.2 Properties of the Binomial; 3.4 Measuring Population Dispersion and Sample Precision; 3.5 Poisson: Events Rare in Time and Space; 3.5.1 Applying the Poisson; 3.5.2 Comparing Empirical and Theoretical Poisson Distributions; 3.5.3 Comparing Two Poisson Processes; 3.6 Continuous Distributions; 3.6.1 The Exponential Distribution; 3.7 Summary and Review; Chapter 4: Estimation and the Normal Distribution; 4.1 Point Estimates; 4.2 Properties of the Normal Distribution; 4.2.1 Student's t-Distribution; 4.2.2 Mixtures of Normal Distributions; 4.3 Using Confidence Intervals to Test Hypotheses; 4.3.1 Should We Have Used the Bootstrap?; 4.3.2 The Bias-Corrected and Accelerated Nonparametric Bootstrap; 4.3.3 The Parametric Bootstrap; 4.4 Properties of Independent Observations; 4.5 Summary and Review; Chapter 5: Testing Hypotheses; 5.1 Testing a Hypothesis; 5.1.1 Analyzing the Experiment; 5.1.2 Two Types of Errors; 5.2 Estimating Effect Size; 5.2.1 Effect Size and Correlation; 5.2.2 Using Confidence Intervals to Test Hypotheses; 5.3 Applying the t-Test to Measurements; 5.3.1 Two-Sample Comparison; 5.3.2 Paired t-Test; 5.4 Comparing Two Samples; 5.4.1 What Should We Measure?; 5.4.2 Permutation Monte Carlo; 5.4.3 One- vs. Two-Sided Tests; 5.4.4 Bias-Corrected Nonparametric Bootstrap; 5.5 Which Test Should We Use?; 5.5.1 p-Values and Significance Levels; 5.5.2 Test Assumptions; 5.5.3 Robustness; 5.5.4 Power of a Test Procedure; 5.6 Summary and Review; Chapter 6: Designing an Experiment or Survey
Record No. UNINA-9910141528803321
Available at: Univ. Federico II
Introduction to statistics through resampling methods and R [electronic resource] / Phillip I. Good
Author Good Phillip I
Edition [2nd ed.]
Publication/distribution/printing Hoboken, N.J. : John Wiley & Sons, Inc., 2013
Physical description 1 online resource (224 p.)
Discipline 519.5/4
Topical subject Resampling (Statistics)
R (Computer program language)
ISBN 1-118-49759-7
1-118-49756-2
1-283-95001-4
1-118-49757-0
Classification MAT029000
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover; Title page; Copyright page; Contents; Preface; Chapter 1: Variation; 1.1 Variation; 1.2 Collecting Data; 1.2.1 A Worked-Through Example; 1.3 Summarizing Your Data; 1.3.1 Learning to Use R; 1.4 Reporting Your Results; 1.4.1 Picturing Data; 1.4.2 Better Graphics; 1.5 Types of Data; 1.5.1 Depicting Categorical Data; 1.6 Displaying Multiple Variables; 1.6.1 Entering Multiple Variables; 1.6.2 From Observations to Questions; 1.7 Measures of Location; 1.7.1 Which Measure of Location?; *1.7.2 The Geometric Mean; 1.7.3 Estimating Precision; 1.7.4 Estimating with the Bootstrap; 1.8 Samples and Populations; 1.8.1 Drawing a Random Sample; *1.8.2 Using Data That Are Already in Spreadsheet Form; 1.8.3 Ensuring the Sample Is Representative; 1.9 Summary and Review; Chapter 2: Probability; 2.1 Probability; 2.1.1 Events and Outcomes; 2.1.2 Venn Diagrams; 2.2 Binomial Trials; 2.2.1 Permutations and Rearrangements; *2.2.2 Programming Your Own Functions in R; 2.2.3 Back to the Binomial; 2.2.4 The Problem Jury; *2.3 Conditional Probability; 2.3.1 Market Basket Analysis; 2.3.2 Negative Results; 2.4 Independence; 2.5 Applications to Genetics; 2.6 Summary and Review; Chapter 3: Two Naturally Occurring Probability Distributions; 3.1 Distribution of Values; 3.1.1 Cumulative Distribution Function; 3.1.2 Empirical Distribution Function; 3.2 Discrete Distributions; 3.3 The Binomial Distribution; *3.3.1 Expected Number of Successes in n Binomial Trials; 3.3.2 Properties of the Binomial; 3.4 Measuring Population Dispersion and Sample Precision; 3.5 Poisson: Events Rare in Time and Space; 3.5.1 Applying the Poisson; 3.5.2 Comparing Empirical and Theoretical Poisson Distributions; 3.5.3 Comparing Two Poisson Processes; 3.6 Continuous Distributions; 3.6.1 The Exponential Distribution; 3.7 Summary and Review; Chapter 4: Estimation and the Normal Distribution; 4.1 Point Estimates; 4.2 Properties of the Normal Distribution; 4.2.1 Student's t-Distribution; 4.2.2 Mixtures of Normal Distributions; 4.3 Using Confidence Intervals to Test Hypotheses; 4.3.1 Should We Have Used the Bootstrap?; 4.3.2 The Bias-Corrected and Accelerated Nonparametric Bootstrap; 4.3.3 The Parametric Bootstrap; 4.4 Properties of Independent Observations; 4.5 Summary and Review; Chapter 5: Testing Hypotheses; 5.1 Testing a Hypothesis; 5.1.1 Analyzing the Experiment; 5.1.2 Two Types of Errors; 5.2 Estimating Effect Size; 5.2.1 Effect Size and Correlation; 5.2.2 Using Confidence Intervals to Test Hypotheses; 5.3 Applying the t-Test to Measurements; 5.3.1 Two-Sample Comparison; 5.3.2 Paired t-Test; 5.4 Comparing Two Samples; 5.4.1 What Should We Measure?; 5.4.2 Permutation Monte Carlo; 5.4.3 One- vs. Two-Sided Tests; 5.4.4 Bias-Corrected Nonparametric Bootstrap; 5.5 Which Test Should We Use?; 5.5.1 p-Values and Significance Levels; 5.5.2 Test Assumptions; 5.5.3 Robustness; 5.5.4 Power of a Test Procedure; 5.6 Summary and Review; Chapter 6: Designing an Experiment or Survey
Record No. UNINA-9910809729403321
Available at: Univ. Federico II
M-Statistics : Optimal Statistical Inference for a Small Sample
Author Demidenko Eugene
Edition [1st ed.]
Publication/distribution/printing Newark : John Wiley & Sons, Incorporated, 2023
Physical description 1 online resource (243 pages)
Discipline 519.5/4
Topical subject Mathematical statistics
ISBN 1-119-89182-5
1-119-89180-9
1-119-89181-7
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover -- Title Page -- Copyright -- Contents -- Preface -- Chapter 1 Limitations of classic statistics and motivation -- 1.1 Limitations of classic statistics -- 1.1.1 Mean -- 1.1.2 Unbiasedness -- 1.1.3 Limitations of equal‐tail statistical inference -- 1.2 The rationale for a new statistical theory -- 1.3 Motivating example: normal variance -- 1.3.1 Confidence interval for the normal variance -- 1.3.2 Hypothesis testing for the variance -- 1.3.3 MC and MO estimators of the variance -- 1.3.4 Sample size determination for variance -- 1.4 Neyman‐Pearson lemma and its extensions -- 1.4.1 Introduction -- 1.4.2 Two lemmas -- References -- Chapter 2 Maximum concentration statistics -- 2.1 Assumptions -- 2.2 Short confidence interval and MC estimator -- 2.3 Density level test -- 2.4 Efficiency and the sufficient statistic -- 2.5 Parameter is positive or belongs to a finite interval -- 2.5.1 Parameter is positive -- 2.5.2 Parameter belongs to a finite interval -- References -- Chapter 3 Mode statistics -- 3.1 Unbiased test -- 3.2 Unbiased CI and MO estimator -- 3.3 Cumulative information and the sufficient statistic -- References -- Chapter 4 P‐value and duality -- 4.1 P‐value for the double‐sided hypothesis -- 4.1.1 General definition -- 4.1.2 P‐value for normal variance -- 4.2 The overall powerful test -- 4.3 Duality: converting the CI into a hypothesis test -- 4.4 Bypassing assumptions -- 4.5 Overview -- References -- Chapter 5 M‐statistics for major statistical parameters -- 5.1 Exact statistical inference for standard deviation -- 5.1.1 MC‐statistics -- 5.1.2 MC‐statistics on the log scale -- 5.1.3 MO‐statistics -- 5.1.4 Computation of the p‐value -- 5.2 Pareto distribution -- 5.2.1 Confidence intervals -- 5.2.2 Hypothesis testing -- 5.3 Coefficient of variation for lognormal distribution -- 5.4 Statistical testing for two variances.
5.4.1 Computation of the p‐value -- 5.4.2 Optimal sample size -- 5.5 Inference for two‐sample exponential distribution -- 5.5.1 Unbiased statistical test -- 5.5.2 Confidence intervals -- 5.5.3 The MC estimator of ν -- 5.6 Effect size and coefficient of variation -- 5.6.1 Effect size -- 5.6.2 Coefficient of variation -- 5.6.3 Double‐sided hypothesis tests -- 5.6.4 Multivariate ES -- 5.7 Binomial probability -- 5.7.1 The MCL estimator -- 5.7.2 The MCL2 estimator -- 5.7.3 The MCL2 estimator of pn -- 5.7.4 Confidence interval on the double‐log scale -- 5.7.5 Equal‐tail and unbiased tests -- 5.8 Poisson rate -- 5.8.1 Two‐sided short CI on the log scale -- 5.8.2 Two‐sided tests and p‐value -- 5.8.3 The MCL estimator of the rate parameter -- 5.9 Meta‐analysis model -- 5.9.1 CI and MCL estimator -- 5.10 M‐statistics for the correlation coefficient -- 5.10.1 MC and MO estimators -- 5.10.2 Equal‐tail and unbiased tests -- 5.10.3 Power function and p‐value -- 5.10.4 Confidence intervals -- 5.11 The square multiple correlation coefficient -- 5.11.1 Unbiased statistical test -- 5.11.2 Computation of p‐value -- 5.11.3 Confidence intervals -- 5.11.4 The two‐sided CI on the log scale -- 5.11.5 The MCL estimator -- 5.12 Coefficient of determination for linear model -- 5.12.1 CoD and multiple correlation coefficient -- 5.12.2 Unbiased test -- 5.12.3 The MCL estimator for CoD -- References -- Chapter 6 Multidimensional parameter -- 6.1 Density level test -- 6.2 Unbiased test -- 6.3 Confidence region dual to the DL test -- 6.4 Unbiased confidence region -- 6.5 Simultaneous inference for normal mean and standard deviation -- 6.5.1 Statistical test -- 6.5.2 Confidence region -- 6.6 Exact confidence inference for parameters of the beta distribution -- 6.6.1 Statistical tests -- 6.6.2 Confidence regions -- 6.7 Two‐sample binomial probability -- 6.7.1 Hypothesis testing.
6.7.2 Confidence region -- 6.8 Exact and profile statistical inference for nonlinear regression -- 6.8.1 Statistical inference for the whole parameter -- 6.8.2 Statistical inference for an individual parameter of interest via profiling -- References -- Index -- EULA.
Record No. UNINA-9910830302503321
Available at: Univ. Federico II
M-Statistics : Optimal Statistical Inference for a Small Sample
Author Demidenko Eugene
Edition [1st ed.]
Publication/distribution/printing Newark : John Wiley & Sons, Incorporated, 2023
Physical description 1 online resource (243 pages)
Discipline 519.5/4
Topical subject Mathematical statistics
ISBN 1-119-89182-5
1-119-89180-9
1-119-89181-7
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover -- Title Page -- Copyright -- Contents -- Preface -- Chapter 1 Limitations of classic statistics and motivation -- 1.1 Limitations of classic statistics -- 1.1.1 Mean -- 1.1.2 Unbiasedness -- 1.1.3 Limitations of equal‐tail statistical inference -- 1.2 The rationale for a new statistical theory -- 1.3 Motivating example: normal variance -- 1.3.1 Confidence interval for the normal variance -- 1.3.2 Hypothesis testing for the variance -- 1.3.3 MC and MO estimators of the variance -- 1.3.4 Sample size determination for variance -- 1.4 Neyman‐Pearson lemma and its extensions -- 1.4.1 Introduction -- 1.4.2 Two lemmas -- References -- Chapter 2 Maximum concentration statistics -- 2.1 Assumptions -- 2.2 Short confidence interval and MC estimator -- 2.3 Density level test -- 2.4 Efficiency and the sufficient statistic -- 2.5 Parameter is positive or belongs to a finite interval -- 2.5.1 Parameter is positive -- 2.5.2 Parameter belongs to a finite interval -- References -- Chapter 3 Mode statistics -- 3.1 Unbiased test -- 3.2 Unbiased CI and MO estimator -- 3.3 Cumulative information and the sufficient statistic -- References -- Chapter 4 P‐value and duality -- 4.1 P‐value for the double‐sided hypothesis -- 4.1.1 General definition -- 4.1.2 P‐value for normal variance -- 4.2 The overall powerful test -- 4.3 Duality: converting the CI into a hypothesis test -- 4.4 Bypassing assumptions -- 4.5 Overview -- References -- Chapter 5 M‐statistics for major statistical parameters -- 5.1 Exact statistical inference for standard deviation -- 5.1.1 MC‐statistics -- 5.1.2 MC‐statistics on the log scale -- 5.1.3 MO‐statistics -- 5.1.4 Computation of the p‐value -- 5.2 Pareto distribution -- 5.2.1 Confidence intervals -- 5.2.2 Hypothesis testing -- 5.3 Coefficient of variation for lognormal distribution -- 5.4 Statistical testing for two variances.
5.4.1 Computation of the p‐value -- 5.4.2 Optimal sample size -- 5.5 Inference for two‐sample exponential distribution -- 5.5.1 Unbiased statistical test -- 5.5.2 Confidence intervals -- 5.5.3 The MC estimator of ν -- 5.6 Effect size and coefficient of variation -- 5.6.1 Effect size -- 5.6.2 Coefficient of variation -- 5.6.3 Double‐sided hypothesis tests -- 5.6.4 Multivariate ES -- 5.7 Binomial probability -- 5.7.1 The MCL estimator -- 5.7.2 The MCL2 estimator -- 5.7.3 The MCL2 estimator of pn -- 5.7.4 Confidence interval on the double‐log scale -- 5.7.5 Equal‐tail and unbiased tests -- 5.8 Poisson rate -- 5.8.1 Two‐sided short CI on the log scale -- 5.8.2 Two‐sided tests and p‐value -- 5.8.3 The MCL estimator of the rate parameter -- 5.9 Meta‐analysis model -- 5.9.1 CI and MCL estimator -- 5.10 M‐statistics for the correlation coefficient -- 5.10.1 MC and MO estimators -- 5.10.2 Equal‐tail and unbiased tests -- 5.10.3 Power function and p‐value -- 5.10.4 Confidence intervals -- 5.11 The square multiple correlation coefficient -- 5.11.1 Unbiased statistical test -- 5.11.2 Computation of p‐value -- 5.11.3 Confidence intervals -- 5.11.4 The two‐sided CI on the log scale -- 5.11.5 The MCL estimator -- 5.12 Coefficient of determination for linear model -- 5.12.1 CoD and multiple correlation coefficient -- 5.12.2 Unbiased test -- 5.12.3 The MCL estimator for CoD -- References -- Chapter 6 Multidimensional parameter -- 6.1 Density level test -- 6.2 Unbiased test -- 6.3 Confidence region dual to the DL test -- 6.4 Unbiased confidence region -- 6.5 Simultaneous inference for normal mean and standard deviation -- 6.5.1 Statistical test -- 6.5.2 Confidence region -- 6.6 Exact confidence inference for parameters of the beta distribution -- 6.6.1 Statistical tests -- 6.6.2 Confidence regions -- 6.7 Two‐sample binomial probability -- 6.7.1 Hypothesis testing.
6.7.2 Confidence region -- 6.8 Exact and profile statistical inference for nonlinear regression -- 6.8.1 Statistical inference for the whole parameter -- 6.8.2 Statistical inference for an individual parameter of interest via profiling -- References -- Index -- EULA.
Record Nr. UNINA-9910840711803321
Demidenko Eugene  
Newark : John Wiley & Sons, Incorporated, 2023
Printed material
Available at: Univ. Federico II
OPAC: Check availability here
Modes of parametric statistical inference [electronic resource] / Seymour Geisser with the assistance of Wesley Johnson
Author Geisser Seymour
Publication/distribution Hoboken, N.J. : Wiley-Interscience, c2006
Physical description 1 online resource (218 p.)
Discipline 519.5/4
519.54
Other authors (persons) Johnson, Wesley O.
Series Wiley series in probability and statistics
Topical subject Probabilities
Mathematical statistics
Distribution (Probability theory)
Genre/form subject Electronic books.
ISBN 1-280-28810-8
9786610288106
0-470-24458-5
0-471-74313-5
0-471-74312-7
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note 4.2 Remarks on Size; 4.3 Uniformly Most Powerful Tests; 4.4 Neyman-Pearson Fundamental Lemma; 4.5 Monotone Likelihood Ratio Property; 4.6 Decision Theory; 4.7 Two-Sided Tests; References; 5. Unbiased and Invariant Tests; 5.1 Unbiased Tests; 5.2 Admissibility and Tests Similar on the Boundary; 5.3 Neyman Structure and Completeness; 5.4 Invariant Tests; 5.5 Locally Best Tests; 5.6 Test Construction; 5.7 Remarks on N-P Theory; 5.8 Further Remarks on N-P Theory; 5.9 Law of the Iterated Logarithm (LIL); 5.10 Sequential Analysis; 5.11 Sequential Probability Ratio Test (SPRT); References
6. Elements of Bayesianism; 6.1 Bayesian Testing; 6.2 Testing a Composite vs. a Composite; 6.3 Some Remarks on Priors for the Binomial; 6.4 Coherence; 6.5 Model Selection; References; 7. Theories of Estimation; 7.1 Elements of Point Estimation; 7.2 Point Estimation; 7.3 Estimation Error Bounds; 7.4 Efficiency and Fisher Information; 7.5 Interpretations of Fisher Information; 7.6 The Information Matrix; 7.7 Sufficiency; 7.8 The Blackwell-Rao Result; 7.9 Bayesian Sufficiency; 7.10 Maximum Likelihood Estimation; 7.11 Consistency of the MLE; 7.12 Asymptotic Normality and "Efficiency" of the MLE
7.13 Sufficiency Principles; References; 8. Set and Interval Estimation; 8.1 Confidence Intervals (Sets); 8.2 Criteria for Confidence Intervals; 8.3 Conditioning; 8.4 Bayesian Intervals (Sets); 8.5 Highest Probability Density (HPD) Intervals; 8.6 Fiducial Inference; 8.7 Relation Between Fiducial and Bayesian Distributions; 8.8 Several Parameters; 8.9 The Fisher-Behrens Problem; 8.10 Confidence Solutions; 8.11 The Fieller-Creasy Problem; References; References; Index
Record Nr. UNINA-9910145035503321
Geisser Seymour  
Hoboken, N.J. : Wiley-Interscience, c2006
Printed material
Available at: Univ. Federico II
OPAC: Check availability here
Modes of parametric statistical inference [electronic resource] / Seymour Geisser with the assistance of Wesley Johnson
Author Geisser Seymour
Publication/distribution Hoboken, N.J. : Wiley-Interscience, c2006
Physical description 1 online resource (218 p.)
Discipline 519.5/4
519.54
Other authors (persons) Johnson, Wesley O.
Series Wiley series in probability and statistics
Topical subject Probabilities
Mathematical statistics
Distribution (Probability theory)
ISBN 1-280-28810-8
9786610288106
0-470-24458-5
0-471-74313-5
0-471-74312-7
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note 4.2 Remarks on Size; 4.3 Uniformly Most Powerful Tests; 4.4 Neyman-Pearson Fundamental Lemma; 4.5 Monotone Likelihood Ratio Property; 4.6 Decision Theory; 4.7 Two-Sided Tests; References; 5. Unbiased and Invariant Tests; 5.1 Unbiased Tests; 5.2 Admissibility and Tests Similar on the Boundary; 5.3 Neyman Structure and Completeness; 5.4 Invariant Tests; 5.5 Locally Best Tests; 5.6 Test Construction; 5.7 Remarks on N-P Theory; 5.8 Further Remarks on N-P Theory; 5.9 Law of the Iterated Logarithm (LIL); 5.10 Sequential Analysis; 5.11 Sequential Probability Ratio Test (SPRT); References
6. Elements of Bayesianism; 6.1 Bayesian Testing; 6.2 Testing a Composite vs. a Composite; 6.3 Some Remarks on Priors for the Binomial; 6.4 Coherence; 6.5 Model Selection; References; 7. Theories of Estimation; 7.1 Elements of Point Estimation; 7.2 Point Estimation; 7.3 Estimation Error Bounds; 7.4 Efficiency and Fisher Information; 7.5 Interpretations of Fisher Information; 7.6 The Information Matrix; 7.7 Sufficiency; 7.8 The Blackwell-Rao Result; 7.9 Bayesian Sufficiency; 7.10 Maximum Likelihood Estimation; 7.11 Consistency of the MLE; 7.12 Asymptotic Normality and "Efficiency" of the MLE
7.13 Sufficiency Principles; References; 8. Set and Interval Estimation; 8.1 Confidence Intervals (Sets); 8.2 Criteria for Confidence Intervals; 8.3 Conditioning; 8.4 Bayesian Intervals (Sets); 8.5 Highest Probability Density (HPD) Intervals; 8.6 Fiducial Inference; 8.7 Relation Between Fiducial and Bayesian Distributions; 8.8 Several Parameters; 8.9 The Fisher-Behrens Problem; 8.10 Confidence Solutions; 8.11 The Fieller-Creasy Problem; References; References; Index
Record Nr. UNINA-9910830797003321
Geisser Seymour  
Hoboken, N.J. : Wiley-Interscience, c2006
Printed material
Available at: Univ. Federico II
OPAC: Check availability here