Record ID: 9910299968303321 (UNINA; last transaction 2020-07-03 03:17:33)
Record type: BOOK
Leader: 12803nam 22006615 450
Record created: 2014-10-31
Cataloging source: MiAaPQ
Bibliographic level / mode of issuance: Monograph
Language: English

Title: Introductory Statistical Inference with the Likelihood Function / by Charles A. Rohde
Edition: 1st ed. 2014.
Published: Cham : Springer International Publishing : Imprint: Springer, 2014.
Description: 1 online resource (XVI, 332 p., 12 illus.)
ISBN: 3-319-10461-6 (electronic); 3-319-10460-8 (print)
DOI: 10.1007/978-3-319-10461-4
Other identifiers: (CKB)3710000000269883; (SSID)ssj0001372672; (PQKBManifestationID)11782335; (PQKBTitleCode)TC0001372672; (PQKBWorkID)11311560; (PQKB)11545132; (DE-He213)978-3-319-10461-4; (MiAaPQ)EBC6312387; (MiAaPQ)EBC5579503; (Au-PeEL)EBL5579503; (OCoLC)896126358; (PPN)182094308; (EXLCZ)993710000000269883

Contents:
Intro -- Preface -- Contents
1 Introduction -- 1.1 Introductory Example -- 1.1.1 Likelihood Approach -- 1.1.2 Bayesian Approach -- 1.1.3 Frequentist Approach -- 1.1.4 Comments on the Example -- 1.1.5 Choice of k and α -- 1.2 What Is Statistics? -- 1.2.1 General Setup -- 1.2.2 Scope of Statistical Inference
2 The Statistical Approach -- 2.1 The Setup -- 2.2 Approaches to Statistical Inference -- 2.3 Types of Statistical Inference -- 2.4 Statistics and Combinants -- 2.4.1 Statistics and Sampling Distributions -- 2.4.2 Combinants -- 2.4.3 Frequentist Inference -- 2.4.4 Bayesian Inference -- 2.4.5 Likelihood Inference -- 2.5 Exercises
3 Estimation -- 3.1 Frequentist Concepts -- 3.1.1 Bias, Unbiasedness, and Standard Errors -- 3.1.2 Consistency -- 3.1.3 Mean Square Error -- 3.1.4 Asymptotic Distributions -- 3.1.5 Efficiency -- 3.1.6 Equivariance -- 3.2 Bayesian and Likelihood Paradigms -- 3.3 Exercises
4 Interval Estimation -- 4.1 The Problem -- 4.2 Frequentist Approach -- 4.2.1 Importance of the Long Run -- 4.2.2 Application to the Normal Distribution -- 4.3 Pivots -- 4.4 Likelihood Intervals -- 4.5 Bayesian Approach -- 4.6 Objective Bayes -- 4.7 Comparing the Intervals -- 4.8 Interval Estimation Example -- 4.9 Exercises
5 Hypothesis Testing -- 5.1 Law of Likelihood -- 5.2 Neyman-Pearson Theory -- 5.2.1 Introduction -- 5.2.2 Neyman-Pearson Lemma -- 5.2.3 Using the Neyman-Pearson Lemma -- 5.2.4 Uniformly Most Powerful Tests -- 5.2.5 A Complication -- 5.2.6 Comments and Examples -- 5.2.7 A Different Criterion -- 5.2.8 Inductive Behavior -- 5.2.9 Still Another Criterion -- 5.3 p-Values -- 5.4 Duality of Confidence Intervals and Tests -- 5.5 Composite Hypotheses -- 5.5.1 One-Way Analysis of Variance -- 5.6 The Multiple Testing Problem -- 5.7 Exercises
6 Standard Practice of Statistics -- 6.1 Introduction -- 6.2 Frequentist Statistical Procedures -- 6.2.1 Estimation -- 6.2.2 Confidence Intervals -- 6.2.3 Hypothesis Testing -- 6.2.4 Significance Tests
7 Maximum Likelihood: Basic Results -- 7.1 Basic Properties -- 7.2 Consistency of Maximum Likelihood -- 7.3 General Results on the Score Function -- 7.4 General Maximum Likelihood -- 7.5 Cramér-Rao Inequality -- 7.6 Summary Properties of Maximum Likelihood -- 7.7 Multiparameter Case -- 7.8 Maximum Likelihood in the Multivariate Normal -- 7.9 Multinomial
8 Linear Models -- 8.1 Introduction -- 8.2 Basic Results -- 8.2.1 The Fitted Values and the Residuals -- 8.3 The Basic "Regression" Model -- 8.3.1 Adding Covariates -- 8.3.2 Interpretation of Regression Coefficients -- 8.3.3 Added Sum of Squares -- 8.3.4 Identity of Regression Coefficients -- 8.3.5 Likelihood and Bayesian Results -- 8.4 Interpretation of the Coefficients -- 8.5 Factors as Covariates -- 8.6 Exercises
9 Other Estimation Methods -- 9.1 Estimation Using Empirical Distributions -- 9.1.1 Empirical Distribution Functions -- 9.1.2 Statistical Functionals -- 9.1.3 Linear Statistical Functionals -- 9.1.4 Quantiles -- 9.1.5 Confidence Intervals for Quantiles -- 9.2 Method of Moments -- 9.2.1 Technical Details of the Method of Moments -- 9.2.2 Application to the Normal Distribution -- 9.3 Estimating Functions -- 9.3.1 General Linear Model -- 9.3.2 Maximum Likelihood -- 9.3.3 Method of Moments -- 9.3.4 Generalized Linear Models -- 9.3.5 Quasi-Likelihood -- 9.3.6 Generalized Estimating Equations -- 9.4 Generalized Method of Moments -- 9.5 The Bootstrap -- 9.5.1 Basic Ideas -- 9.5.2 Simulation Background -- 9.5.3 Variance Estimation Using the Bootstrap -- 9.6 Confidence Intervals Using the Bootstrap -- 9.6.1 Normal Interval -- 9.6.2 Pivotal Interval -- 9.6.3 Percentile Interval -- 9.6.4 Parametric Version -- 9.6.5 Dangers of the Bootstrap -- 9.6.6 The Number of Possible Bootstrap Samples
10 Decision Theory -- 10.1 Introduction -- 10.1.1 Actions, Losses, and Risks -- 10.2 Admissibility -- 10.3 Bayes Risk and Bayes Rules -- 10.4 Examples of Bayes Rules -- 10.5 Stein's Result -- 10.6 Exercises
11 Sufficiency -- 11.1 Families of Distributions -- 11.1.1 Introduction to Sufficiency -- 11.1.2 Rationale for Sufficiency -- 11.1.3 Factorization Criterion -- 11.1.4 Sketch of Proof of the Factorization Criterion -- 11.1.5 Properties of Sufficient Statistics -- 11.1.6 Minimal Sufficient Statistics -- 11.2 Importance of Sufficient Statistics in Inference -- 11.2.1 Frequentist Statistics -- 11.2.2 Bayesian Inference -- 11.2.3 Likelihood Inference -- 11.3 Alternative Proof of Factorization Theorem -- 11.4 Exercises
12 Conditionality -- 12.1 Ancillarity -- 12.2 Problems with Conditioning
13 Statistical Principles -- 13.1 Introduction -- 13.1.1 Birnbaum's Formulation -- 13.1.2 Framework and Notation -- 13.1.3 Mathematical Equivalence -- 13.1.4 Irrelevant Noise -- 13.2 Likelihood Principle -- 13.3 Equivalence of Likelihood and Irrelevant Noise Plus Mathematical Equivalence -- 13.4 Sufficiency, Conditionality, and Likelihood Principles -- 13.5 Fundamental Result -- 13.6 Stopping Rules -- 13.6.1 Comments -- 13.6.2 Jeffreys/Lindley Paradox -- 13.6.3 Randomization -- 13.6.4 Permutation or Randomization Tests -- 13.6.4.1 Lindley's Example on Permutation Tests
14 Bayesian Inference -- 14.1 Frequentist vs Bayesian -- 14.2 The Bayesian Model for Inference -- 14.3 Why Bayesian? Exchangeability -- 14.4 Stable Estimation -- 14.5 Bayesian Consistency -- 14.6 Relation to Maximum Likelihood -- 14.7 Priors -- 14.7.1 Different Types of Priors -- 14.7.1.1 Conjugate Priors -- 14.7.2 Vague Priors -- 14.7.2.1 Jeffreys Priors -- 14.7.2.2 Reference Priors -- 14.7.2.3 Subjective Priors
15 Bayesian Statistics: Computation -- 15.1 Computation -- 15.1.1 Monte Carlo Integration -- 15.1.2 Importance Sampling -- 15.1.3 Markov Chain Monte Carlo -- 15.1.4 The Gibbs Sampler -- 15.1.5 Software
16 Bayesian Inference: Miscellaneous -- 16.1 Bayesian Updating -- 16.2 Bayesian Prediction -- 16.3 Stopping Rules in Bayesian Inference -- 16.4 Nuisance Parameters -- 16.5 Summing Up
17 Pure Likelihood Methods -- 17.1 Introduction -- 17.2 Misleading Statistical Evidence -- 17.2.1 Weak Statistical Evidence -- 17.2.2 Sample Size -- 17.3 Birnbaum's Confidence Concept -- 17.4 Combining Evidence -- 17.5 Exercises
18 Pure Likelihood Methods and Nuisance Parameters -- 18.1 Nuisance Parameters -- 18.1.1 Introduction -- 18.1.2 Neyman-Scott Problem -- 18.2 Elimination Methods -- 18.3 Evidence in the Presence of Nuisance Parameters -- 18.3.1 Orthogonal Parameters -- 18.3.2 Orthogonal Reparametrizations -- 18.4 Varieties of Likelihood -- 18.5 Information Loss -- 18.6 Marginal Likelihood -- 18.7 Conditional Likelihood -- 18.7.1 Estimated Likelihoods -- 18.8 Profile Likelihood -- 18.8.1 Introduction -- 18.8.2 Misleading Evidence Using Profile Likelihoods -- 18.8.3 General Linear Model Likelihood Functions -- 18.8.4 Using Profile Likelihoods -- 18.8.5 Profile Likelihoods for Unknown Variance -- 18.9 Computation of Profile Likelihoods -- 18.10 Summary
19 Other Inference Methods and Concepts -- 19.1 Fiducial Probability and Inference -- 19.1.1 Good's Example -- 19.1.2 Edwards' Example -- 19.2 Confidence Distributions -- 19.2.1 Bootstrap Connections -- 19.2.2 Likelihood Connections -- 19.2.3 Confidence Curves -- 19.3 P-Values Again -- 19.3.1 Sampling Distribution of P-Values -- 19.4 Severe Testing -- 19.5 Cornfield on Testing and Confidence Intervals
20 Finite Population Sampling -- 20.1 Introduction -- 20.2 Populations and Samples -- 20.3 Principal Types of Sampling Methods -- 20.4 Simple Random Sampling -- 20.5 Horvitz-Thompson Estimator -- 20.5.1 Basu's Elephant -- 20.5.2 An Unmentioned Assumption -- 20.6 Prediction Approach -- 20.6.1 Proof of the Prediction Result -- 20.7 Stratified Sampling -- 20.7.1 Basic Results -- 20.8 Cluster Sampling -- 20.9 Practical Considerations -- 20.9.1 Sampling Frame Problems -- 20.9.2 Nonresponse -- 20.9.3 Sampling Errors -- 20.9.4 Non-sampling Errors -- 20.10 Role of Finite Population Sampling in Modern Statistics
21 Appendix: Probability and Mathematical Concepts -- 21.1 Probability Models -- 21.1.1 Definitions -- 21.1.2 Properties of Probability -- 21.1.3 Continuity Properties of Probability Measures -- 21.1.4 Conditional Probability -- 21.1.5 Properties of Conditional Probability -- 21.1.6 Finite and Denumerable Sample Spaces -- 21.1.6.1 Random Sampling from a Population -- 21.1.6.2 Combinatorics -- 21.1.6.3 Two Important Discrete Probability Models -- 21.1.7 Independence -- 21.1.7.1 Independent Trial Models -- 21.1.7.2 Bernoulli Trial Models -- 21.1.7.3 Results on Bernoulli Trial Models -- 21.1.7.4 Multinomial Trial Models
21.2 Random Variables and Probability Distributions -- 21.2.1 Measurable Functions -- 21.2.2 Random Variables: Definitions -- 21.2.3 Distribution Functions -- 21.2.3.1 Properties of Distribution Functions -- 21.2.4 Discrete Random Variables -- 21.2.4.1 Results and Examples: Discrete Random Variables -- 21.2.5 Continuous Random Variables -- 21.2.5.1 Properties and Examples of Continuous Random Variables -- 21.2.6 Functions of Random Variables
21.3 Random Vectors -- 21.3.1 Definitions -- 21.3.1.1 Properties of Random Vectors -- 21.3.2 Discrete and Continuous Random Vectors -- 21.3.3 Marginal Distributions -- 21.3.4 The Multinomial Distribution -- 21.3.4.1 Multinomial Results -- 21.3.5 Independence of Random Variables -- 21.3.5.1 Properties of Independent Random Variables

Summary:
This textbook covers the fundamentals of statistical inference and statistical theory, including Bayesian and frequentist approaches and methodology, without excessive emphasis on the underlying mathematics. It treats the basic principles of statistics that are necessary to understand and evaluate methods for analyzing complex data sets. The likelihood function is used for pure likelihood inference throughout the book, and there is also coverage of severity and finite population sampling. The material was developed from an introductory statistical theory course taught by the author in the Department of Biostatistics at Johns Hopkins University. Students and instructors in public health programs will benefit from the likelihood modeling approach used throughout the text, which will also appeal to epidemiologists and psychometricians. After a brief introduction there are chapters on estimation, hypothesis testing, and maximum likelihood modeling, and the book concludes with sections on Bayesian computation and inference. An appendix contains unique coverage of the interpretation of probability, together with probability and mathematical concepts.

Subjects: Statistics; Statistics for Life Sciences, Medicine, Health Sciences (https://scigraph.springernature.com/ontologies/product-market-codes/S17030); Statistical Theory and Methods (https://scigraph.springernature.com/ontologies/product-market-codes/S11001); Statistics, general (https://scigraph.springernature.com/ontologies/product-market-codes/S0000X)
Dewey classification: 519.5
Author: Rohde, Charles A. (author; http://id.loc.gov/vocabulary/relators/aut)
Variant title: Introductory statistical inference with the likelihood function