Introduction to statistics through resampling methods and Microsoft Office Excel [electronic resource] / Phillip I. Good |
Autore | Good Phillip I |
Pubbl/distr/stampa | Hoboken, N.J. : Wiley-Interscience, c2005 |
Descrizione fisica | 1 online resource (245 p.) |
Disciplina |
519.52
519.54 |
Soggetto topico | Resampling (Statistics) |
Soggetto genere / forma | Electronic books. |
ISBN |
1-280-27720-3
9786610277209
0-470-32514-3
0-471-74177-9
0-471-74176-0 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Nota di contenuto |
INTRODUCTION TO STATISTICS THROUGH RESAMPLING METHODS AND MICROSOFT OFFICE EXCEL®; Contents; Preface; 1. Variation (or What Statistics Is All About); 1.1. Variation; 1.2. Collecting Data; 1.3. Summarizing Your Data; 1.3.1 Learning to Use Excel; 1.4. Reporting Your Results: the Classroom Data; 1.4.1 Picturing Data; 1.4.2 Displaying Multiple Variables; 1.4.3 Percentiles of the Distribution; 1.5. Types of Data; 1.5.1 Depicting Categorical Data; 1.5.2 From Observations to Questions; 1.6. Measures of Location; 1.6.1 Which Measure of Location?; 1.6.2 The Bootstrap; 1.7. Samples and Populations
1.7.1 Drawing a Random Sample; 1.7.2 Ensuring the Sample is Representative; 1.8. Variation-Within and Between; 1.9. Summary and Review; 2. Probability; 2.1. Probability; 2.1.1 Events and Outcomes; 2.1.2 Venn Diagrams; 2.2. Binomial; 2.2.1 Permutations and Rearrangements; 2.2.2 Back to the Binomial; 2.2.3 The Problem Jury; 2.2.4 Properties of the Binomial; 2.2.5 Multinomial; 2.3. Conditional Probability; 2.3.1 Market Basket Analysis; 2.3.2 Negative Results; 2.4. Independence; 2.5. Applications to Genetics; 2.6. Summary and Review; 3. Distributions; 3.1. Distribution of Values; 3.1.1 Cumulative Distribution Function; 3.1.2 Empirical Distribution Function; 3.2. Discrete Distributions; 3.3. Poisson: Events Rare in Time and Space; 3.3.1 Applying the Poisson; 3.3.2 Comparing Empirical and Theoretical Poisson Distributions; 3.4. Continuous Distributions; 3.4.1 The Exponential Distribution; 3.4.2 The Normal Distribution; 3.4.3 Mixtures of Normal Distributions; 3.5. Properties of Independent Observations; 3.6. Testing a Hypothesis; 3.6.1 Analyzing the Experiment; 3.6.2 Two Types of Errors; 3.7. Estimating Effect Size; 3.7.1 Confidence Interval for Difference in Means; 3.7.2 Are Two Variables Correlated?; 3.7.3 Using Confidence Intervals to Test Hypotheses; 3.8. Summary and Review; 4. Testing Hypotheses; 4.1. One-Sample Problems; 4.1.1 Percentile Bootstrap; 4.1.2 Parametric Bootstrap; 4.1.3 Student's t; 4.2. Comparing Two Samples; 4.2.1 Comparing Two Poisson Distributions; 4.2.2 What Should We Measure?; 4.2.3 Permutation Monte Carlo; 4.2.4 Two-Sample t-Test; 4.3. Which Test Should We Use?; 4.3.1 p Values and Significance Levels; 4.3.2 Test Assumptions; 4.3.3 Robustness; 4.3.4 Power of a Test Procedure; 4.3.5 Testing for Correlation; 4.4. Summary and Review; 5. Designing an Experiment or Survey; 5.1. The Hawthorne Effect; 5.1.1 Crafting an Experiment; 5.2. Designing an Experiment or Survey; 5.2.1 Objectives; 5.2.2 Sample from the Right Population; 5.2.3 Coping with Variation; 5.2.4 Matched Pairs; 5.2.5 The Experimental Unit; 5.2.6 Formulate Your Hypotheses; 5.2.7 What Are You Going to Measure?; 5.2.8 Random Representative Samples; 5.2.9 Treatment Allocation; 5.2.10 Choosing a Random Sample; 5.2.11 Ensuring that Your Observations are Independent; 5.3. How Large a Sample?; 5.3.1 Samples of Fixed Size; Known Distribution; Almost Normal Data; Bootstrap |
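Illustrative aside (not part of the catalogue record, and not the book's own worked example, which is carried out in Excel): the percentile bootstrap listed above in Sections 1.6.2 and 4.1.1 can be sketched in a few lines of R with a hypothetical data vector.
# Percentile bootstrap for a sample mean (hypothetical data; illustration only)
set.seed(1)
x <- c(141, 156.5, 162, 159, 157, 143.5, 154, 158, 140, 142)
boot_means <- replicate(1000, mean(sample(x, length(x), replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))  # approximate 95% percentile bootstrap interval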
Record Nr. | UNINA-9910143402803321 |
Lo trovi qui: Univ. Federico II |
|
Introduction to statistics through resampling methods and Microsoft Office Excel [electronic resource] / Phillip I. Good |
Autore | Good Phillip I |
Pubbl/distr/stampa | Hoboken, N.J. : Wiley-Interscience, c2005 |
Descrizione fisica | 1 online resource (245 p.) |
Disciplina |
519.52
519.54 |
Soggetto topico | Resampling (Statistics) |
ISBN |
1-280-27720-3
9786610277209
0-470-32514-3
0-471-74177-9
0-471-74176-0 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Nota di contenuto |
INTRODUCTION TO STATISTICS THROUGH RESAMPLING METHODS AND MICROSOFT OFFICE EXCEL®; Contents; Preface; 1. Variation (or What Statistics Is All About); 1.1. Variation; 1.2. Collecting Data; 1.3. Summarizing Your Data; 1.3.1 Learning to Use Excel; 1.4. Reporting Your Results: the Classroom Data; 1.4.1 Picturing Data; 1.4.2 Displaying Multiple Variables; 1.4.3 Percentiles of the Distribution; 1.5. Types of Data; 1.5.1 Depicting Categorical Data; 1.5.2 From Observations to Questions; 1.6. Measures of Location; 1.6.1 Which Measure of Location?; 1.6.2 The Bootstrap; 1.7. Samples and Populations
1.7.1 Drawing a Random Sample; 1.7.2 Ensuring the Sample is Representative; 1.8. Variation-Within and Between; 1.9. Summary and Review; 2. Probability; 2.1. Probability; 2.1.1 Events and Outcomes; 2.1.2 Venn Diagrams; 2.2. Binomial; 2.2.1 Permutations and Rearrangements; 2.2.2 Back to the Binomial; 2.2.3 The Problem Jury; 2.2.4 Properties of the Binomial; 2.2.5 Multinomial; 2.3. Conditional Probability; 2.3.1 Market Basket Analysis; 2.3.2 Negative Results; 2.4. Independence; 2.5. Applications to Genetics; 2.6. Summary and Review; 3. Distributions; 3.1. Distribution of Values; 3.1.1 Cumulative Distribution Function; 3.1.2 Empirical Distribution Function; 3.2. Discrete Distributions; 3.3. Poisson: Events Rare in Time and Space; 3.3.1 Applying the Poisson; 3.3.2 Comparing Empirical and Theoretical Poisson Distributions; 3.4. Continuous Distributions; 3.4.1 The Exponential Distribution; 3.4.2 The Normal Distribution; 3.4.3 Mixtures of Normal Distributions; 3.5. Properties of Independent Observations; 3.6. Testing a Hypothesis; 3.6.1 Analyzing the Experiment; 3.6.2 Two Types of Errors; 3.7. Estimating Effect Size; 3.7.1 Confidence Interval for Difference in Means; 3.7.2 Are Two Variables Correlated?; 3.7.3 Using Confidence Intervals to Test Hypotheses; 3.8. Summary and Review; 4. Testing Hypotheses; 4.1. One-Sample Problems; 4.1.1 Percentile Bootstrap; 4.1.2 Parametric Bootstrap; 4.1.3 Student's t; 4.2. Comparing Two Samples; 4.2.1 Comparing Two Poisson Distributions; 4.2.2 What Should We Measure?; 4.2.3 Permutation Monte Carlo; 4.2.4 Two-Sample t-Test; 4.3. Which Test Should We Use?; 4.3.1 p Values and Significance Levels; 4.3.2 Test Assumptions; 4.3.3 Robustness; 4.3.4 Power of a Test Procedure; 4.3.5 Testing for Correlation; 4.4. Summary and Review; 5. Designing an Experiment or Survey; 5.1. The Hawthorne Effect; 5.1.1 Crafting an Experiment; 5.2. Designing an Experiment or Survey; 5.2.1 Objectives; 5.2.2 Sample from the Right Population; 5.2.3 Coping with Variation; 5.2.4 Matched Pairs; 5.2.5 The Experimental Unit; 5.2.6 Formulate Your Hypotheses; 5.2.7 What Are You Going to Measure?; 5.2.8 Random Representative Samples; 5.2.9 Treatment Allocation; 5.2.10 Choosing a Random Sample; 5.2.11 Ensuring that Your Observations are Independent; 5.3. How Large a Sample?; 5.3.1 Samples of Fixed Size; Known Distribution; Almost Normal Data; Bootstrap |
Record Nr. | UNINA-9910831196003321 |
Lo trovi qui: Univ. Federico II |
|
Introduction to statistics through resampling methods and Microsoft Office Excel / Phillip I. Good |
Autore | Good Phillip I |
Pubbl/distr/stampa | Hoboken, N.J. : Wiley-Interscience, c2005 |
Descrizione fisica | 1 online resource (245 p.) |
Disciplina | 519.5/4 |
Soggetto topico | Resampling (Statistics) |
ISBN |
1-280-27720-3
9786610277209
0-470-32514-3
0-471-74177-9
0-471-74176-0 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Nota di contenuto |
INTRODUCTION TO STATISTICS THROUGH RESAMPLING METHODS AND MICROSOFT OFFICE EXCEL®; Contents; Preface; 1. Variation (or What Statistics Is All About); 1.1. Variation; 1.2. Collecting Data; 1.3. Summarizing Your Data; 1.3.1 Learning to Use Excel; 1.4. Reporting Your Results: the Classroom Data; 1.4.1 Picturing Data; 1.4.2 Displaying Multiple Variables; 1.4.3 Percentiles of the Distribution; 1.5. Types of Data; 1.5.1 Depicting Categorical Data; 1.5.2 From Observations to Questions; 1.6. Measures of Location; 1.6.1 Which Measure of Location?; 1.6.2 The Bootstrap; 1.7. Samples and Populations
1.7.1 Drawing a Random Sample; 1.7.2 Ensuring the Sample is Representative; 1.8. Variation-Within and Between; 1.9. Summary and Review; 2. Probability; 2.1. Probability; 2.1.1 Events and Outcomes; 2.1.2 Venn Diagrams; 2.2. Binomial; 2.2.1 Permutations and Rearrangements; 2.2.2 Back to the Binomial; 2.2.3 The Problem Jury; 2.2.4 Properties of the Binomial; 2.2.5 Multinomial; 2.3. Conditional Probability; 2.3.1 Market Basket Analysis; 2.3.2 Negative Results; 2.4. Independence; 2.5. Applications to Genetics; 2.6. Summary and Review; 3. Distributions; 3.1. Distribution of Values; 3.1.1 Cumulative Distribution Function; 3.1.2 Empirical Distribution Function; 3.2. Discrete Distributions; 3.3. Poisson: Events Rare in Time and Space; 3.3.1 Applying the Poisson; 3.3.2 Comparing Empirical and Theoretical Poisson Distributions; 3.4. Continuous Distributions; 3.4.1 The Exponential Distribution; 3.4.2 The Normal Distribution; 3.4.3 Mixtures of Normal Distributions; 3.5. Properties of Independent Observations; 3.6. Testing a Hypothesis; 3.6.1 Analyzing the Experiment; 3.6.2 Two Types of Errors; 3.7. Estimating Effect Size; 3.7.1 Confidence Interval for Difference in Means; 3.7.2 Are Two Variables Correlated?; 3.7.3 Using Confidence Intervals to Test Hypotheses; 3.8. Summary and Review; 4. Testing Hypotheses; 4.1. One-Sample Problems; 4.1.1 Percentile Bootstrap; 4.1.2 Parametric Bootstrap; 4.1.3 Student's t; 4.2. Comparing Two Samples; 4.2.1 Comparing Two Poisson Distributions; 4.2.2 What Should We Measure?; 4.2.3 Permutation Monte Carlo; 4.2.4 Two-Sample t-Test; 4.3. Which Test Should We Use?; 4.3.1 p Values and Significance Levels; 4.3.2 Test Assumptions; 4.3.3 Robustness; 4.3.4 Power of a Test Procedure; 4.3.5 Testing for Correlation; 4.4. Summary and Review; 5. Designing an Experiment or Survey; 5.1. The Hawthorne Effect; 5.1.1 Crafting an Experiment; 5.2. Designing an Experiment or Survey; 5.2.1 Objectives; 5.2.2 Sample from the Right Population; 5.2.3 Coping with Variation; 5.2.4 Matched Pairs; 5.2.5 The Experimental Unit; 5.2.6 Formulate Your Hypotheses; 5.2.7 What Are You Going to Measure?; 5.2.8 Random Representative Samples; 5.2.9 Treatment Allocation; 5.2.10 Choosing a Random Sample; 5.2.11 Ensuring that Your Observations are Independent; 5.3. How Large a Sample?; 5.3.1 Samples of Fixed Size; Known Distribution; Almost Normal Data; Bootstrap |
Record Nr. | UNINA-9910878086203321 |
Lo trovi qui: Univ. Federico II |
|
Introduction to statistics through resampling methods and R [electronic resource] / Phillip I. Good |
Autore | Good Phillip I |
Edizione | [2nd ed.] |
Pubbl/distr/stampa | Hoboken, N.J. : John Wiley & Sons, Inc., 2013 |
Descrizione fisica | 1 online resource (224 p.) |
Disciplina | 519.5/4 |
Soggetto topico |
Resampling (Statistics)
R (Computer program language) |
ISBN |
1-118-49759-7
1-118-49756-2
1-283-95001-4
1-118-49757-0 |
Classificazione | MAT029000 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Nota di contenuto |
Cover; Title page; Copyright page; Contents; Preface; Chapter 1: Variation; 1.1 Variation; 1.2 Collecting Data; 1.2.1 A Worked-Through Example; 1.3 Summarizing Your Data; 1.3.1 Learning to Use R; 1.4 Reporting Your Results; 1.4.1 Picturing Data; 1.4.2 Better Graphics; 1.5 Types of Data; 1.5.1 Depicting Categorical Data; 1.6 Displaying Multiple Variables; 1.6.1 Entering Multiple Variables; 1.6.2 From Observations to Questions; 1.7 Measures of Location; 1.7.1 Which Measure of Location?; *1.7.2 The Geometric Mean; 1.7.3 Estimating Precision; 1.7.4 Estimating with the Bootstrap
1.8 Samples and Populations; 1.8.1 Drawing a Random Sample; *1.8.2 Using Data That Are Already in Spreadsheet Form; 1.8.3 Ensuring the Sample Is Representative; 1.9 Summary and Review; Chapter 2: Probability; 2.1 Probability; 2.1.1 Events and Outcomes; 2.1.2 Venn Diagrams; 2.2 Binomial Trials; 2.2.1 Permutations and Rearrangements; *2.2.2 Programming Your Own Functions in R; 2.2.3 Back to the Binomial; 2.2.4 The Problem Jury; *2.3 Conditional Probability; 2.3.1 Market Basket Analysis; 2.3.2 Negative Results; 2.4 Independence; 2.5 Applications to Genetics; 2.6 Summary and Review; Chapter 3: Two Naturally Occurring Probability Distributions; 3.1 Distribution of Values; 3.1.1 Cumulative Distribution Function; 3.1.2 Empirical Distribution Function; 3.2 Discrete Distributions; 3.3 The Binomial Distribution; *3.3.1 Expected Number of Successes in n Binomial Trials; 3.3.2 Properties of the Binomial; 3.4 Measuring Population Dispersion and Sample Precision; 3.5 Poisson: Events Rare in Time and Space; 3.5.1 Applying the Poisson; 3.5.2 Comparing Empirical and Theoretical Poisson Distributions; 3.5.3 Comparing Two Poisson Processes; 3.6 Continuous Distributions; 3.6.1 The Exponential Distribution; 3.7 Summary and Review; Chapter 4: Estimation and the Normal Distribution; 4.1 Point Estimates; 4.2 Properties of the Normal Distribution; 4.2.1 Student's t-Distribution; 4.2.2 Mixtures of Normal Distributions; 4.3 Using Confidence Intervals to Test Hypotheses; 4.3.1 Should We Have Used the Bootstrap?; 4.3.2 The Bias-Corrected and Accelerated Nonparametric Bootstrap; 4.3.3 The Parametric Bootstrap; 4.4 Properties of Independent Observations; 4.5 Summary and Review; Chapter 5: Testing Hypotheses; 5.1 Testing a Hypothesis; 5.1.1 Analyzing the Experiment; 5.1.2 Two Types of Errors; 5.2 Estimating Effect Size; 5.2.1 Effect Size and Correlation; 5.2.2 Using Confidence Intervals to Test Hypotheses; 5.3 Applying the t-Test to Measurements; 5.3.1 Two-Sample Comparison; 5.3.2 Paired t-Test; 5.4 Comparing Two Samples; 5.4.1 What Should We Measure?; 5.4.2 Permutation Monte Carlo; 5.4.3 One- vs. Two-Sided Tests; 5.4.4 Bias-Corrected Nonparametric Bootstrap; 5.5 Which Test Should We Use?; 5.5.1 p-Values and Significance Levels; 5.5.2 Test Assumptions; 5.5.3 Robustness; 5.5.4 Power of a Test Procedure; 5.6 Summary and Review; Chapter 6: Designing an Experiment or Survey |
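Illustrative aside (an assumed sketch, not code reproduced from the book): Section 5.4.2 above lists permutation Monte Carlo for comparing two samples; in R the idea amounts to repeatedly relabelling the pooled observations and recomputing the difference in means.
# Two-sample permutation Monte Carlo test of a difference in means (hypothetical data; illustration only)
set.seed(2)
treated <- c(121, 118, 110, 90, 34, 22)
control <- c(95, 34, 22, 80, 27, 12)
observed <- mean(treated) - mean(control)
pooled <- c(treated, control)
perm_diffs <- replicate(10000, {
  idx <- sample(length(pooled), length(treated))  # random relabelling of the pooled data
  mean(pooled[idx]) - mean(pooled[-idx])
})
mean(abs(perm_diffs) >= abs(observed))  # two-sided Monte Carlo p-value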
Record Nr. | UNINA-9910141528803321 |
Lo trovi qui: Univ. Federico II |
|
Introduction to statistics through resampling methods and R / Phillip I. Good |
Autore | Good Phillip I |
Edizione | [2nd ed.] |
Pubbl/distr/stampa | Hoboken, N.J. : John Wiley & Sons, Inc., 2013 |
Descrizione fisica | 1 online resource (224 p.) |
Disciplina | 519.5/4 |
Soggetto topico |
Resampling (Statistics)
R (Computer program language) |
ISBN |
1-118-49759-7
1-118-49756-2
1-283-95001-4
1-118-49757-0 |
Classificazione | MAT029000 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Nota di contenuto |
Cover; Title page; Copyright page; Contents; Preface; Chapter 1: Variation; 1.1 Variation; 1.2 Collecting Data; 1.2.1 A Worked-Through Example; 1.3 Summarizing Your Data; 1.3.1 Learning to Use R; 1.4 Reporting Your Results; 1.4.1 Picturing Data; 1.4.2 Better Graphics; 1.5 Types of Data; 1.5.1 Depicting Categorical Data; 1.6 Displaying Multiple Variables; 1.6.1 Entering Multiple Variables; 1.6.2 From Observations to Questions; 1.7 Measures of Location; 1.7.1 Which Measure of Location?; *1.7.2 The Geometric Mean; 1.7.3 Estimating Precision; 1.7.4 Estimating with the Bootstrap
1.8 Samples and Populations; 1.8.1 Drawing a Random Sample; *1.8.2 Using Data That Are Already in Spreadsheet Form; 1.8.3 Ensuring the Sample Is Representative; 1.9 Summary and Review; Chapter 2: Probability; 2.1 Probability; 2.1.1 Events and Outcomes; 2.1.2 Venn Diagrams; 2.2 Binomial Trials; 2.2.1 Permutations and Rearrangements; *2.2.2 Programming Your Own Functions in R; 2.2.3 Back to the Binomial; 2.2.4 The Problem Jury; *2.3 Conditional Probability; 2.3.1 Market Basket Analysis; 2.3.2 Negative Results; 2.4 Independence; 2.5 Applications to Genetics; 2.6 Summary and Review; Chapter 3: Two Naturally Occurring Probability Distributions; 3.1 Distribution of Values; 3.1.1 Cumulative Distribution Function; 3.1.2 Empirical Distribution Function; 3.2 Discrete Distributions; 3.3 The Binomial Distribution; *3.3.1 Expected Number of Successes in n Binomial Trials; 3.3.2 Properties of the Binomial; 3.4 Measuring Population Dispersion and Sample Precision; 3.5 Poisson: Events Rare in Time and Space; 3.5.1 Applying the Poisson; 3.5.2 Comparing Empirical and Theoretical Poisson Distributions; 3.5.3 Comparing Two Poisson Processes; 3.6 Continuous Distributions; 3.6.1 The Exponential Distribution; 3.7 Summary and Review; Chapter 4: Estimation and the Normal Distribution; 4.1 Point Estimates; 4.2 Properties of the Normal Distribution; 4.2.1 Student's t-Distribution; 4.2.2 Mixtures of Normal Distributions; 4.3 Using Confidence Intervals to Test Hypotheses; 4.3.1 Should We Have Used the Bootstrap?; 4.3.2 The Bias-Corrected and Accelerated Nonparametric Bootstrap; 4.3.3 The Parametric Bootstrap; 4.4 Properties of Independent Observations; 4.5 Summary and Review; Chapter 5: Testing Hypotheses; 5.1 Testing a Hypothesis; 5.1.1 Analyzing the Experiment; 5.1.2 Two Types of Errors; 5.2 Estimating Effect Size; 5.2.1 Effect Size and Correlation; 5.2.2 Using Confidence Intervals to Test Hypotheses; 5.3 Applying the t-Test to Measurements; 5.3.1 Two-Sample Comparison; 5.3.2 Paired t-Test; 5.4 Comparing Two Samples; 5.4.1 What Should We Measure?; 5.4.2 Permutation Monte Carlo; 5.4.3 One- vs. Two-Sided Tests; 5.4.4 Bias-Corrected Nonparametric Bootstrap; 5.5 Which Test Should We Use?; 5.5.1 p-Values and Significance Levels; 5.5.2 Test Assumptions; 5.5.3 Robustness; 5.5.4 Power of a Test Procedure; 5.6 Summary and Review; Chapter 6: Designing an Experiment or Survey |
Record Nr. | UNINA-9910809729403321 |
Lo trovi qui: Univ. Federico II |
|
Introductory applied statistics : with resampling methods & R / Bruce Blaine |
Autore | Blaine Bruce |
Pubbl/distr/stampa | Cham, Switzerland : Springer, [2023] |
Descrizione fisica | 1 online resource (197 pages) |
Disciplina | 610 |
Soggetto topico |
R (Computer program language)
Resampling (Statistics)
Statistics - Data processing
Estadística
Matemàtica discreta
R (Llenguatge de programació) |
Soggetto genere / forma | Llibres electrònics |
ISBN |
9783031277412
9783031277405 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Record Nr. | UNINA-9910720061803321 |
Lo trovi qui: Univ. Federico II |
|
The jackknife, the bootstrap, and other resampling plans / Bradley Efron |
Autore | Efron, Bradley |
Pubbl/distr/stampa | Philadelphia, Pa. : SIAM (Society for Industrial and Applied Mathematics), 1982 |
Descrizione fisica | vii, 92 p. : ill. ; 25 cm. |
Disciplina | 519.52 |
Collana | CBMS-NSF Regional conference series in applied mathematics ; 38 |
Soggetto topico |
Bootstrap (Statistics)
Error analysis (Mathematics)
Estimation theory
Jackknife (Statistics)
Resampling (Statistics) |
ISBN | 0898711797 |
Classificazione | AMS 62D05 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
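Illustrative aside (not taken from Efron's monograph): the jackknife named in the title estimates the standard error of a statistic from leave-one-out recomputations; a minimal R sketch with simulated data follows.
# Leave-one-out jackknife standard error of a statistic (simulated data; illustration only)
jackknife_se <- function(x, theta) {
  n <- length(x)
  theta_i <- sapply(seq_len(n), function(i) theta(x[-i]))  # statistic with each observation left out
  sqrt((n - 1) / n * sum((theta_i - mean(theta_i))^2))
}
set.seed(3)
x <- rexp(20, rate = 0.5)
jackknife_se(x, mean)  # for the mean this agrees with sd(x) / sqrt(length(x))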
Record Nr. | UNISALENTO-991001048589707536 |
Lo trovi qui: Univ. del Salento |
|
Mathematical statistics with resampling and R / Laura Chihara, Tim Hesterberg |
Autore | Chihara Laura <1957-> |
Edizione | [1st edition] |
Pubbl/distr/stampa | Hoboken, New Jersey : Wiley, 2011 |
Descrizione fisica | 1 online resource (723 p.) |
Disciplina | 310 |
Soggetto topico |
Resampling (Statistics)
Statistics |
Soggetto genere / forma | Electronic books. |
ISBN |
1-118-62575-7
1-118-51895-0 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Nota di contenuto |
Cover; Half Title page; Title page; Copyright page; Preface; Acknowledgments; Chapter 1: Data and Case Studies; 1.1 Case Study: Flight Delays; 1.2 Case Study: Birth Weights of Babies; 1.3 Case Study: Verizon Repair Times; 1.4 Sampling; 1.5 Parameters and Statistics; 1.6 Case Study: General Social Survey; 1.7 Sample Surveys; 1.8 Case Study: Beer and Hot Wings; 1.9 Case Study: Black Spruce Seedlings; 1.10 Studies; 1.11 Exercises; Chapter 2: Exploratory Data Analysis; 2.1 Basic Plots; 2.2 Numeric Summaries; 2.3 Boxplots; 2.4 Quantiles and Normal Quantile Plots
2.5 Empirical Cumulative Distribution Functions; 2.6 Scatter Plots; 2.7 Skewness and Kurtosis; 2.8 Exercises; Chapter 3: Hypothesis Testing; 3.1 Introduction to Hypothesis Testing; 3.2 Hypotheses; 3.3 Permutation Tests; 3.4 Contingency Tables; 3.5 Chi-Square Test of Independence; 3.6 Test of Homogeneity; 3.7 Goodness-of-Fit: All Parameters Known; 3.8 Goodness-of-Fit: Some Parameters Estimated; 3.9 Exercises; Chapter 4: Sampling Distributions; 4.1 Sampling Distributions; 4.2 Calculating Sampling Distributions; 4.3 The Central Limit Theorem; 4.4 Exercises; Chapter 5: The Bootstrap; 5.1 Introduction to the Bootstrap; 5.2 The Plug-in Principle; 5.3 Bootstrap Percentile Intervals; 5.4 Two Sample Bootstrap; 5.5 Other Statistics; 5.6 Bias; 5.7 Monte Carlo Sampling: The "Second Bootstrap Principle"; 5.8 Accuracy of Bootstrap Distributions; 5.9 How Many Bootstrap Samples are Needed?; 5.10 Exercises; Chapter 6: Estimation; 6.1 Maximum Likelihood Estimation; 6.2 Method of Moments; 6.3 Properties of Estimators; 6.4 Exercises; Chapter 7: Classical Inference: Confidence Intervals; 7.1 Confidence Intervals for Means; 7.2 Confidence Intervals in General; 7.3 One-Sided Confidence Intervals; 7.4 Confidence Intervals for Proportions; 7.5 Bootstrap t Confidence Intervals; 7.6 Exercises; Chapter 8: Classical Inference: Hypothesis Testing; 8.1 Hypothesis Tests for Means and Proportions; 8.2 Type I and Type II Errors; 8.3 More on Testing; 8.4 Likelihood Ratio Tests; 8.5 Exercises; Chapter 9: Regression; 9.1 Covariance; 9.2 Correlation; 9.3 Least-Squares Regression; 9.4 The Simple Linear Model; 9.5 Resampling Correlation and Regression; 9.6 Logistic Regression; 9.7 Exercises; Chapter 10: Bayesian Methods; 10.1 Bayes' Theorem; 10.2 Binomial Data, Discrete Prior Distributions; 10.3 Binomial Data, Continuous Prior Distributions; 10.4 Continuous Data; 10.5 Sequential Data; 10.6 Exercises; Chapter 11: Additional Topics; 11.1 Smoothed Bootstrap; 11.2 Parametric Bootstrap; 11.3 The Delta Method; 11.4 Stratified Sampling; 11.5 Computational Issues in Bayesian Analysis; 11.6 Monte Carlo Integration; 11.7 Importance Sampling; 11.8 Exercises; Appendix A: Review of Probability; A.1 Basic Probability; A.2 Mean and Variance; A.3 The Mean of a Sample of Random Variables; A.4 The Law of Averages; A.5 The Normal Distribution; A.6 Sums of Normal Random Variables |
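Illustrative aside (an assumed sketch, not code from the text): Section 9.5.1 above covers permutation tests for correlation; shuffling one variable breaks any association and yields a reference distribution for the observed correlation.
# Permutation test for a correlation coefficient (simulated data; illustration only)
set.seed(4)
x <- rnorm(30)
y <- 0.5 * x + rnorm(30)
observed <- cor(x, y)
perm_cors <- replicate(9999, cor(x, sample(y)))  # correlations under random relabelling
(sum(abs(perm_cors) >= abs(observed)) + 1) / (9999 + 1)  # two-sided p-value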
Record Nr. | UNINA-9910461263703321 |
Lo trovi qui: Univ. Federico II |
|
Mathematical Statistics with Resampling and R |
Autore | Chihara Laura M |
Edizione | [1st ed.] |
Pubbl/distr/stampa | Somerset : John Wiley & Sons, Incorporated, 2011 |
Descrizione fisica | 1 online resource (434 pages) |
Disciplina | 310 |
Altri autori (Persone) | Hesterberg Tim C |
Soggetto topico |
Mathematics
Resampling (Statistics)
Statistics |
Soggetto genere / forma | Electronic books. |
ISBN |
9781118518953
9781118029855 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Nota di contenuto |
Cover -- Title Page -- Copyright -- Contents -- Preface -- Acknowledgments -- 1: Data and Case Studies -- 1.1 Case Study: Flight Delays -- 1.2 Case Study: Birth Weights of Babies -- 1.3 Case Study: Verizon Repair Times -- 1.4 Sampling -- 1.5 Parameters and Statistics -- 1.6 Case Study: General Social Survey -- 1.7 Sample Surveys -- 1.8 Case Study: Beer and Hot Wings -- 1.9 Case Study: Black Spruce Seedlings -- 1.10 Studies -- 1.11 Exercises -- 2: Exploratory Data Analysis -- 2.1 Basic Plots -- 2.2 Numeric Summaries -- 2.2.1 Center -- 2.2.2 Spread -- 2.2.3 Shape -- 2.3 Boxplots -- 2.4 Quantiles and Normal Quantile Plots -- 2.5 Empirical Cumulative Distribution Functions -- 2.6 Scatter Plots -- 2.7 Skewness and Kurtosis -- 2.8 Exercises -- 3: Hypothesis Testing -- 3.1 Introduction to Hypothesis Testing -- 3.2 Hypotheses -- 3.3 Permutation Tests -- 3.3.1 Implementation Issues -- 3.3.2 One-sided and Two-sided Tests -- 3.3.3 Other Statistics -- 3.3.4 Assumptions -- 3.4 Contingency Tables -- 3.4.1 Permutation Test for Independence -- 3.4.2 Chi-square Reference Distribution -- 3.5 Chi-square Test of Independence -- 3.6 Test of Homogeneity -- 3.7 Goodness-of-fit: All Parameters Known -- 3.8 Goodness-of-fit: Some Parameters Estimated -- 3.9 Exercises -- 4: Sampling Distributions -- 4.1 Sampling Distributions -- 4.2 Calculating Sampling Distributions -- 4.3 The Central Limit Theorem -- 4.3.1 CLT for Binomial Data -- 4.3.2 Continuity Correction for Discrete Random Variables -- 4.3.3 Accuracy of the Central Limit Theorem -- 4.3.4 CLT for Sampling without Replacement -- 4.4 Exercises -- 5: The Bootstrap -- 5.1 Introduction to the Bootstrap -- 5.2 The Plug-in Principle -- 5.2.1 Estimating the Population Distribution -- 5.2.2 How Useful Is the Bootstrap Distribution? -- 5.3 Bootstrap Percentile Intervals -- 5.4 Two Sample Bootstrap --
5.4.1 The Two Independent Populations Assumption -- 5.5 Other Statistics -- 5.6 Bias -- 5.7 Monte Carlo Sampling: the "Second Bootstrap Principle" -- 5.8 Accuracy of Bootstrap Distributions -- 5.8.1 Sample Mean: Large Sample Size -- 5.8.2 Sample Mean: Small Sample Size -- 5.8.3 Sample Median -- 5.9 How Many Bootstrap Samples Are Needed? -- 5.10 Exercises -- 6: Estimation -- 6.1 Maximum Likelihood Estimation -- 6.1.1 Maximum Likelihood for Discrete Distributions -- 6.1.2 Maximum Likelihood for Continuous Distributions -- 6.1.3 Maximum Likelihood for Multiple Parameters -- 6.2 Method of Moments -- 6.3 Properties of Estimators -- 6.3.1 Unbiasedness -- 6.3.2 Efficiency -- 6.3.3 Mean Square Error -- 6.3.4 Consistency -- 6.3.5 Transformation Invariance -- 6.4 Exercises -- 7: Classical Inference: Confidence Intervals -- 7.1 Confidence Intervals for Means -- 7.1.1 Confidence Intervals for a Mean, σ Known -- 7.1.2 Confidence Intervals for a Mean, σ Unknown -- 7.1.3 Confidence Intervals for a Difference in Means -- 7.2 Confidence Intervals in General -- 7.2.1 Location and Scale Parameters -- 7.3 One-sided Confidence Intervals -- 7.4 Confidence Intervals for Proportions -- 7.4.1 The Agresti-Coull Interval for a Proportion -- 7.4.2 Confidence Interval for the Difference of Proportions -- 7.5 Bootstrap t Confidence Intervals -- 7.5.1 Comparing Bootstrap t and Formula t Confidence Intervals -- 7.6 Exercises -- 8: Classical Inference: Hypothesis Testing -- 8.1 Hypothesis Tests for Means and Proportions -- 8.1.1 One Population -- 8.1.2 Comparing Two Populations -- 8.2 Type I and Type II Errors -- 8.2.1 Type I Errors -- 8.2.2 Type II Errors and Power -- 8.3 More on Testing -- 8.3.1 On Significance -- 8.3.2 Adjustments for Multiple Testing -- 8.3.3 P-values Versus Critical Regions -- 8.4 Likelihood Ratio Tests -- 8.4.1 Simple Hypotheses and the Neyman-Pearson Lemma --
8.4.2 Generalized Likelihood Ratio Tests -- 8.5 Exercises -- 9: Regression -- 9.1 Covariance -- 9.2 Correlation -- 9.3 Least-squares Regression -- 9.3.1 Regression Toward the Mean -- 9.3.2 Variation -- 9.3.3 Diagnostics -- 9.3.4 Multiple Regression -- 9.4 The Simple Linear Model -- 9.4.1 Inference for α and β -- 9.4.2 Inference for the Response -- 9.4.3 Comments About Assumptions for the Linear Model -- 9.5 Resampling Correlation and Regression -- 9.5.1 Permutation Tests -- 9.5.2 Bootstrap Case Study: Bushmeat -- 9.6 Logistic Regression -- 9.6.1 Inference for Logistic Regression -- 9.7 Exercises -- 10: Bayesian Methods -- 10.1 Bayes' Theorem -- 10.2 Binomial Data, Discrete Prior Distributions -- 10.3 Binomial Data, Continuous Prior Distributions -- 10.4 Continuous Data -- 10.5 Sequential Data -- 10.6 Exercises -- 11: Additional Topics -- 11.1 Smoothed Bootstrap -- 11.1.1 Kernel Density Estimate -- 11.2 Parametric Bootstrap -- 11.3 The Delta Method -- 11.4 Stratified Sampling -- 11.5 Computational Issues in Bayesian Analysis -- 11.6 Monte Carlo Integration -- 11.7 Importance Sampling -- 11.7.1 Ratio Estimate for Importance Sampling -- 11.7.2 Importance Sampling in Bayesian Applications -- 11.8 Exercises -- Appendix A: Review of Probability -- A.1 Basic Probability -- A.2 Mean and Variance -- A.3 The Mean of a Sample of Random Variables -- A.4 The Law of Averages -- A.5 The Normal Distribution -- A.6 Sums of Normal Random Variables -- A.7 Higher Moments and the Moment Generating Function -- Appendix B: Probability Distributions -- B.1 The Bernoulli and Binomial Distributions -- B.2 The Multinomial Distribution -- B.3 The Geometric Distribution -- B.4 The Negative Binomial Distribution -- B.5 The Hypergeometric Distribution -- B.6 The Poisson Distribution -- B.7 The Uniform Distribution -- B.8 The Exponential Distribution -- B.9 The Gamma Distribution -- B.10 The Chi-square Distribution -- B.11 The Student's t Distribution -- B.12 The Beta Distribution -- B.13 The F Distribution -- B.14 Exercises -- Appendix C: Distributions Quick Reference -- Solutions to Odd-numbered Exercises -- Bibliography -- Index. |
Record Nr. | UNINA-9910795981403321 |
Lo trovi qui: Univ. Federico II |
|
Mathematical Statistics with Resampling and R |
Autore | Chihara Laura M |
Edizione | [1st ed.] |
Pubbl/distr/stampa | Somerset : John Wiley & Sons, Incorporated, 2011 |
Descrizione fisica | 1 online resource (434 pages) |
Disciplina | 310 |
Altri autori (Persone) | Hesterberg Tim C |
Soggetto topico |
Mathematics
Resampling (Statistics)
Statistics |
ISBN |
9781118518953
9781118029855 |
Formato | Materiale a stampa |
Livello bibliografico | Monografia |
Lingua di pubblicazione | eng |
Nota di contenuto |
Cover -- Title Page -- Copyright -- Contents -- Preface -- Acknowledgments -- 1: Data and Case Studies -- 1.1 Case Study: Flight Delays -- 1.2 Case Study: Birth Weights of Babies -- 1.3 Case Study: Verizon Repair Times -- 1.4 Sampling -- 1.5 Parameters and Statistics -- 1.6 Case Study: General Social Survey -- 1.7 Sample Surveys -- 1.8 Case Study: Beer and Hot Wings -- 1.9 Case Study: Black Spruce Seedlings -- 1.10 Studies -- 1.11 Exercises -- 2: Exploratory Data Analysis -- 2.1 Basic Plots -- 2.2 Numeric Summaries -- 2.2.1 Center -- 2.2.2 Spread -- 2.2.3 Shape -- 2.3 Boxplots -- 2.4 Quantiles and Normal Quantile Plots -- 2.5 Empirical Cumulative Distribution Functions -- 2.6 Scatter Plots -- 2.7 Skewness and Kurtosis -- 2.8 Exercises -- 3: Hypothesis Testing -- 3.1 Introduction to Hypothesis Testing -- 3.2 Hypotheses -- 3.3 Permutation Tests -- 3.3.1 Implementation Issues -- 3.3.2 One-sided and Two-sided Tests -- 3.3.3 Other Statistics -- 3.3.4 Assumptions -- 3.4 Contingency Tables -- 3.4.1 Permutation Test for Independence -- 3.4.2 Chi-square Reference Distribution -- 3.5 Chi-square Test of Independence -- 3.6 Test of Homogeneity -- 3.7 Goodness-of-fit: All Parameters Known -- 3.8 Goodness-of-fit: Some Parameters Estimated -- 3.9 Exercises -- 4: Sampling Distributions -- 4.1 Sampling Distributions -- 4.2 Calculating Sampling Distributions -- 4.3 The Central Limit Theorem -- 4.3.1 CLT for Binomial Data -- 4.3.2 Continuity Correction for Discrete Random Variables -- 4.3.3 Accuracy of the Central Limit Theorem -- 4.3.4 CLT for Sampling without Replacement -- 4.4 Exercises -- 5: The Bootstrap -- 5.1 Introduction to the Bootstrap -- 5.2 The Plug-in Principle -- 5.2.1 Estimating the Population Distribution -- 5.2.2 How Useful Is the Bootstrap Distribution? -- 5.3 Bootstrap Percentile Intervals -- 5.4 Two Sample Bootstrap --
5.4.1 The Two Independent Populations Assumption -- 5.5 Other Statistics -- 5.6 Bias -- 5.7 Monte Carlo Sampling: the "Second Bootstrap Principle" -- 5.8 Accuracy of Bootstrap Distributions -- 5.8.1 Sample Mean: Large Sample Size -- 5.8.2 Sample Mean: Small Sample Size -- 5.8.3 Sample Median -- 5.9 How Many Bootstrap Samples Are Needed? -- 5.10 Exercises -- 6: Estimation -- 6.1 Maximum Likelihood Estimation -- 6.1.1 Maximum Likelihood for Discrete Distributions -- 6.1.2 Maximum Likelihood for Continuous Distributions -- 6.1.3 Maximum Likelihood for Multiple Parameters -- 6.2 Method of Moments -- 6.3 Properties of Estimators -- 6.3.1 Unbiasedness -- 6.3.2 Efficiency -- 6.3.3 Mean Square Error -- 6.3.4 Consistency -- 6.3.5 Transformation Invariance -- 6.4 Exercises -- 7: Classical Inference: Confidence Intervals -- 7.1 Confidence Intervals for Means -- 7.1.1 Confidence Intervals for a Mean, σ Known -- 7.1.2 Confidence Intervals for a Mean, σ Unknown -- 7.1.3 Confidence Intervals for a Difference in Means -- 7.2 Confidence Intervals in General -- 7.2.1 Location and Scale Parameters -- 7.3 One-sided Confidence Intervals -- 7.4 Confidence Intervals for Proportions -- 7.4.1 The Agresti-Coull Interval for a Proportion -- 7.4.2 Confidence Interval for the Difference of Proportions -- 7.5 Bootstrap t Confidence Intervals -- 7.5.1 Comparing Bootstrap t and Formula t Confidence Intervals -- 7.6 Exercises -- 8: Classical Inference: Hypothesis Testing -- 8.1 Hypothesis Tests for Means and Proportions -- 8.1.1 One Population -- 8.1.2 Comparing Two Populations -- 8.2 Type I and Type II Errors -- 8.2.1 Type I Errors -- 8.2.2 Type II Errors and Power -- 8.3 More on Testing -- 8.3.1 On Significance -- 8.3.2 Adjustments for Multiple Testing -- 8.3.3 P-values Versus Critical Regions -- 8.4 Likelihood Ratio Tests -- 8.4.1 Simple Hypotheses and the Neyman-Pearson Lemma --
8.4.2 Generalized Likelihood Ratio Tests -- 8.5 Exercises -- 9: Regression -- 9.1 Covariance -- 9.2 Correlation -- 9.3 Least-squares Regression -- 9.3.1 Regression Toward the Mean -- 9.3.2 Variation -- 9.3.3 Diagnostics -- 9.3.4 Multiple Regression -- 9.4 The Simple Linear Model -- 9.4.1 Inference for α and β -- 9.4.2 Inference for the Response -- 9.4.3 Comments About Assumptions for the Linear Model -- 9.5 Resampling Correlation and Regression -- 9.5.1 Permutation Tests -- 9.5.2 Bootstrap Case Study: Bushmeat -- 9.6 Logistic Regression -- 9.6.1 Inference for Logistic Regression -- 9.7 Exercises -- 10: Bayesian Methods -- 10.1 Bayes' Theorem -- 10.2 Binomial Data, Discrete Prior Distributions -- 10.3 Binomial Data, Continuous Prior Distributions -- 10.4 Continuous Data -- 10.5 Sequential Data -- 10.6 Exercises -- 11: Additional Topics -- 11.1 Smoothed Bootstrap -- 11.1.1 Kernel Density Estimate -- 11.2 Parametric Bootstrap -- 11.3 The Delta Method -- 11.4 Stratified Sampling -- 11.5 Computational Issues in Bayesian Analysis -- 11.6 Monte Carlo Integration -- 11.7 Importance Sampling -- 11.7.1 Ratio Estimate for Importance Sampling -- 11.7.2 Importance Sampling in Bayesian Applications -- 11.8 Exercises -- Appendix A: Review of Probability -- A.1 Basic Probability -- A.2 Mean and Variance -- A.3 The Mean of a Sample of Random Variables -- A.4 The Law of Averages -- A.5 The Normal Distribution -- A.6 Sums of Normal Random Variables -- A.7 Higher Moments and the Moment Generating Function -- Appendix B: Probability Distributions -- B.1 The Bernoulli and Binomial Distributions -- B.2 The Multinomial Distribution -- B.3 The Geometric Distribution -- B.4 The Negative Binomial Distribution -- B.5 The Hypergeometric Distribution -- B.6 The Poisson Distribution -- B.7 The Uniform Distribution -- B.8 The Exponential Distribution -- B.9 The Gamma Distribution -- B.10 The Chi-square Distribution -- B.11 The Student's t Distribution -- B.12 The Beta Distribution -- B.13 The F Distribution -- B.14 Exercises -- Appendix C: Distributions Quick Reference -- Solutions to Odd-numbered Exercises -- Bibliography -- Index. |
Record Nr. | UNINA-9910809142703321 |
Lo trovi qui: Univ. Federico II |
|