Linear models [electronic resource] / S. R. Searle. New York : Wiley, c1997.
Description: 1 online resource (560 p.)
Series: Wiley Classics Library; Wiley Series in Probability and Statistics - Applied Probability and Statistics Section.
Note: Description based upon print version of record.
Language: English.
ISBN (electronic): 1-283-59289-4; 9786613905345; 1-118-49178-5; 1-118-49176-9; 1-118-49177-7.
ISBN (print): 0-471-76950-9; 0-471-18499-3.
System identifiers: (CKB)2670000000237810; (EBL)1011370; (OCoLC)809539052; (SSID)ssj0000715402; (PQKBManifestationID)11454833; (PQKBTitleCode)TC0000715402; (PQKBWorkID)10700962; (PQKB)10726895; (MiAaPQ)EBC1011370; (EXLCZ)992670000000237810.
Subjects: Linear models (Statistics); Statistics; Electronic books.
Dewey classification: 519.5; 519.538.
Author: Searle, S. R. (Shayle R.), 1928-

Summary: This 1971 classic on linear models is once again available as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

Contents:
1. Generalized Inverse Matrices
   1. Introduction
      a. Definition and existence
      b. An algorithm
   2. Solving linear equations
      a. Consistent equations
      b. Obtaining solutions
      c. Properties of solutions
   3. The Penrose inverse
   4. Other definitions
   5. Symmetric matrices
      a. Properties of a generalized inverse
      b. Two methods of derivation
   6. Arbitrariness in a generalized inverse
   7. Other results
   8. Exercises
2. Distributions and Quadratic Forms
   1. Introduction
   2. Symmetric matrices
   3. Positive definiteness
   4. Distributions
      a. Multivariate density functions
      b. Moments
      c. Linear transformations
      d. Moment generating functions
      e. Univariate normal
      f. Multivariate normal
         (i) Density function
         (ii) Aitken's integral
         (iii) Moment generating function
         (iv) Marginal distributions
         (v) Conditional distributions
         (vi) Independence
      g. Central χ², F and t
      h. Non-central χ²
      i. Non-central F
      j. Other non-central distributions
   5. Distribution of quadratic forms
      a. Cumulants
      b. Distributions
      c. Independence
   6. Bilinear forms
   7. The singular normal distribution
   8. Exercises
3. Regression, or the Full Rank Model
   1. Introduction
      a. The model
      b. Observations
      c. Estimation
      d. Example
      e. The general case of k x-variables
      f. Example (continued)
      g. Intercept and no-intercept models
      h. Example (continued)
   2. Deviations from means
   3. Four methods of estimation
      a. Ordinary least squares
      b. Generalized least squares
      c. Maximum likelihood
      d. The best linear unbiased estimator (b.l.u.e.)
   4. Consequences of estimation
      a. Unbiasedness
      b. Variances
      c. Estimating E(y)
      d. Residual error sum of squares
      e. Estimating the residual error variance
      f. Partitioning the total sum of squares
      g. Multiple correlation
      h. Example (continued)
   5. Distributional properties
      a. y is normal
      b. b is normal
      c. b and σ² are independent
      d. SSE/σ² has a χ²-distribution
      e. Non-central χ²'s
      f. F-distributions
      g. Analyses of variance
      h. Pure error
      i. Tests of hypotheses
      j. Example (continued)
      k. Confidence intervals
      l. Example (continued)
   6. The general linear hypothesis
      a. Testing linear hypotheses
      b. Estimation under the null hypothesis
      c. Four common hypotheses
         (i) H: b = 0
         (ii) H: b = b0
         (iii) H: λ'b = m
         (iv) H: bq = 0
      d. Reduced models
         (i) K'b = m
         (ii) K'b = 0
         (iii) bq = 0
   7. Related topics
      a. The likelihood ratio test
      b. Type I and II errors
      c. The power of a test
      d. Examining residuals
   8. Summary of regression calculations
   9. Exercises
4. Introducing Linear Models: Regression on Dummy Variables
   1. Regression on allocated codes
      a. Allocated codes
      b. Difficulties and criticism
      c. Grouped variables
      d. Unbalanced data
   2. Regression on dummy (0, 1) variables
      a. Factors and levels
      b. The regression
   3. Describing linear models
      a. A 1-way classification
      b. A 2-way classification
      c. A 3-way classification
      d. Main effects and interactions
         (i) Main effects
         (ii) Interactions

Cataloging source: MiAaPQ. Record ID: 9910141414903321. Material type: BOOK.