LEADER 06340nam 22006615 450
001 9910300145603321
005 20250413114354.0
010 $a4-431-54321-X
010 $a9784431543213$b(eBook)
010 $z9784431543206$b(print)
024 7 $a10.1007/978-4-431-54321-3
035 $a(CKB)3710000000025712
035 $a(EBL)1538802
035 $a(SSID)ssj0001049510
035 $a(PQKBManifestationID)11682087
035 $a(PQKBTitleCode)TC0001049510
035 $a(PQKBWorkID)11019572
035 $a(PQKB)11656704
035 $a(MiAaPQ)EBC1538802
035 $a(DE-He213)978-4-431-54321-3
035 $a(PPN)176126023
035 $a(EXLCZ)993710000000025712
100 $a20131008d2014 u| 0
101 0 $aeng
135 $aur|n|---|||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aLearning Regression Analysis by Simulation /$fby Kunio Takezawa
205 $a1st ed. 2014.
210 1$aTokyo :$cSpringer Japan :$cImprint: Springer,$d2014.
215 $a1 online resource (xii, 300 pages) $cillustrations
311 08$a4-431-54320-1
320 $aIncludes bibliographical references and index.
327 $aPreface; Acknowledgments; Contents; Chapter 1: Linear Algebra; 1.1 Starting Up and Executing R; 1.2 Vectors; 1.3 Matrices; 1.4 Addition of Two Matrices; 1.5 Multiplying Two Matrices; 1.6 Identity and Inverse Matrices; 1.7 Simultaneous Equations; 1.8 Diagonalization of a Symmetric Matrix; 1.9 Quadratic Forms; References; Chapter 2: Distributions and Tests; 2.1 Sampling and Random Variables; 2.2 Probability Distribution; 2.3 Normal Distribution and the Central Limit Theorem; 2.4 Interval Estimation by t Distribution; 2.5 t-Test
327 $a2.6 Interval Estimation of Population Variance and the χ² Distribution; 2.7 F Distribution and F-Test; 2.8 Wilcoxon Signed-Rank Sum Test; References; Chapter 3: Simple Regression; 3.1 Derivation of Regression Coefficients; 3.2 Exchange Between Predictor Variable and Target Variable; 3.3 Regression to the Mean; 3.4 Confidence Interval of Regression Coefficients in Simple Regression; 3.5 t-Test in Simple Regression; 3.6 F-Test on Simple Regression; 3.7 Selection Between Constant and Nonconstant Regression Equations; 3.8 Prediction Error of Simple Regression; 3.9 Weighted Regression
327 $a3.10 Least Squares Method and Prediction Error; References; Chapter 4: Multiple Regression; 4.1 Derivation of Regression Coefficients; 4.2 Test on Multiple Regression; 4.3 Prediction Error on Multiple Regression; 4.4 Notes on Model Selection Using Prediction Error; 4.5 Polynomial Regression; 4.6 Variance of Regression Coefficient and Multicollinearity; 4.7 Detection of Multicollinearity Using Variance Inflation Factors; 4.8 Hessian Matrix of Log-Likelihood; References; Chapter 5: Akaike's Information Criterion (AIC) and the Third Variance; 5.1 Cp and FPE
327 $a5.2 AIC of a Multiple Regression Equation with Independent and Identical Normal Distribution; 5.3 Derivation of AIC for Multiple Regression; 5.4 AIC with Unbiased Estimator for Error Variance; 5.5 Error Variance by Maximizing Expectation of Log-Likelihood in Light of the Data in the Future and the "Third Variance"; 5.6 Relationship Between AIC (or GCV) and F-Test; 5.7 AIC on Poisson Regression; References; Chapter 6: Linear Mixed Model; 6.1 Random-Effects Model; 6.2 Random Intercept Model; 6.3 Random Intercept and Slope Model; 6.4 Generalized Linear Mixed Model
327 $a6.5 Generalized Additive Mixed Model; Index
330 $aThe standard approach of most introductory books for practical statistics is that readers first learn the minimum mathematical basics of statistics and rudimentary concepts of statistical methodology.
They are then given examples of analyses of data obtained from natural and social phenomena so that they can grasp practical definitions of statistical methods. Finally, they acquaint themselves with statistical software on a PC and analyze similar data to expand and deepen their understanding of statistical methods. This book, however, takes a slightly different approach, using simulated data instead of actual data to illustrate how statistical methods function. The "R" programs listed in the book also help readers see clearly how these methods work to bring the intrinsic values of data to the surface. "R" is free software that enables users to handle vectors, matrices, data frames, and so on. For example, when a statistical theory indicates that an event happens with a 5% probability, readers can confirm with "R" programs, using data generated by pseudo-random numbers, that the event actually occurs with roughly that probability. Simulation gives readers populations with known backgrounds, and the nature of the population can be adjusted easily. This feature of simulated data helps provide a clear picture of statistical methods painlessly. Most readers of introductory books on practical statistics do not like complex mathematical formulae, but they do not mind using a PC to produce numbers and graphs from a wide variety of data. If they know the characteristics of these numbers beforehand, they can treat them with ease. Struggling with actual data should come later. Conventional books on this topic intimidate readers by presenting data of unknown background indiscriminately. This book provides a new path to statistical concepts and practical skills in a readily accessible manner.
606 $aStatistics
606 $aMathematical statistics$xData processing
606 $aStatistics
606 $aStatistical Theory and Methods
606 $aStatistics and Computing
606 $aStatistics in Engineering, Physics, Computer Science, Chemistry and Earth Sciences
615 0$aStatistics.
615 0$aMathematical statistics$xData processing.
615 0$aStatistics.
615 14$aStatistical Theory and Methods.
615 24$aStatistics and Computing.
615 24$aStatistics in Engineering, Physics, Computer Science, Chemistry and Earth Sciences.
676 $a519.536
700 $aTakezawa$b Kunio$f1959-$0520704
906 $aBOOK
912 $a9910300145603321
996 $aLearning regression analysis by simulation$91409956
997 $aUNINA
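The abstract's example of confirming a nominal 5% probability by simulation can be sketched in a few lines of "R", the software it names. This is a minimal illustration under an assumed event definition (the upper 5% tail of a standard normal distribution), not code taken from the book:

    set.seed(1)                    # reproducible pseudo-random numbers
    n <- 100000                    # number of simulated observations
    x <- rnorm(n)                  # data drawn from a known population (standard normal)
    rate <- mean(x > qnorm(0.95))  # relative frequency of the assumed 5%-probability event
    print(rate)                    # roughly 0.05, as the theory indicates

Because the population is known in advance, the observed frequency can be compared directly with the theoretical 5%, which is the feature of simulated data the abstract emphasizes.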