LEADER 05568nam 22006974a 450 001 996211655103316 005 20230617031032.0 010 $a1-280-36625-7 010 $a9786610366255 010 $a0-470-30781-1 010 $a0-471-45864-3 010 $a0-471-44835-4 035 $a(CKB)1000000000018977 035 $a(EBL)159847 035 $a(OCoLC)123112222 035 $a(SSID)ssj0000295984 035 $a(PQKBManifestationID)11250991 035 $a(PQKBTitleCode)TC0000295984 035 $a(PQKBWorkID)10322357 035 $a(PQKB)10628633 035 $a(MiAaPQ)EBC159847 035 $a(EXLCZ)991000000000018977 100 $a20021105d2003 uy 0 101 0 $aeng 135 $aur|n|---||||| 181 $ctxt 182 $cc 183 $acr 200 10$aExploratory data mining and data cleaning$b[electronic resource] /$fTamraparni Dasu, Theodore Johnson 210 $aNew York $cWiley-Interscience$d2003 215 $a1 online resource (226 p.) 225 1 $aWiley series in probability and statistics 300 $aDescription based upon print version of record. 311 $a0-471-26851-8 320 $aIncludes bibliographical references (p. 189-195) and index. 327 $aExploratory Data Mining and Data Cleaning; Contents; Preface; 1. Exploratory Data Mining and Data Cleaning: An Overview; 1.1 Introduction; 1.2 Cautionary Tales; 1.3 Taming the Data; 1.4 Challenges; 1.5 Methods; 1.6 EDM; 1.6.1 EDM Summaries-Parametric; 1.6.2 EDM Summaries-Nonparametric; 1.7 End-to-End Data Quality (DQ); 1.7.1 DQ in Data Preparation; 1.7.2 EDM and Data Glitches; 1.7.3 Tools for DQ; 1.7.4 End-to-End DQ: The Data Quality Continuum; 1.7.5 Measuring Data Quality; 1.8 Conclusion; 2. 
Exploratory Data Mining; 2.1 Introduction; 2.2 Uncertainty; 2.2.1 Annotated Bibliography 327 $a2.3 EDM: Exploratory Data Mining; 2.4 EDM Summaries; 2.4.1 Typical Values; 2.4.2 Attribute Variation; 2.4.3 Example; 2.4.4 Attribute Relationships; 2.4.5 Annotated Bibliography; 2.5 What Makes a Summary Useful?; 2.5.1 Statistical Properties; 2.5.2 Computational Criteria; 2.5.3 Annotated Bibliography; 2.6 Data-Driven Approach-Nonparametric Analysis; 2.6.1 The Joy of Counting; 2.6.2 Empirical Cumulative Distribution Function (ECDF); 2.6.3 Univariate Histograms; 2.6.4 Annotated Bibliography; 2.7 EDM in Higher Dimensions; 2.8 Rectilinear Histograms; 2.9 Depth and Multivariate Binning 327 $a2.9.1 Data Depth; 2.9.2 Aside: Depth-Related Topics; 2.9.3 Annotated Bibliography; 2.10 Conclusion; 3. Partitions and Piecewise Models; 3.1 Divide and Conquer; 3.1.1 Why Do We Need Partitions?; 3.1.2 Dividing Data; 3.1.3 Applications of Partition-Based EDM Summaries; 3.2 Axis-Aligned Partitions and Data Cubes; 3.2.1 Annotated Bibliography; 3.3 Nonlinear Partitions; 3.3.1 Annotated Bibliography; 3.4 DataSpheres (DS); 3.4.1 Layers; 3.4.2 Data Pyramids; 3.4.3 EDM Summaries; 3.4.4 Annotated Bibliography; 3.5 Set Comparison Using EDM Summaries; 3.5.1 Motivation; 3.5.2 Comparison Strategy 327 $a3.5.3 Statistical Tests for Change; 3.5.4 Application-Two Case Studies; 3.5.5 Annotated Bibliography; 3.6 Discovering Complex Structure in Data with EDM Summaries; 3.6.1 Exploratory Model Fitting in Interactive Response Time; 3.6.2 Annotated Bibliography; 3.7 Piecewise Linear Regression; 3.7.1 An Application; 3.7.2 Regression Coefficients; 3.7.3 Improvement in Fit; 3.7.4 Annotated Bibliography; 3.8 One-Pass Classification; 3.8.1 Quantile-Based Prediction with Piecewise Models; 3.8.2 Simulation Study; 3.8.3 Annotated Bibliography; 3.9 Conclusion; 4. 
Data Quality; 4.1 Introduction 327 $a4.2 The Meaning of Data Quality; 4.2.1 An Example; 4.2.2 Data Glitches; 4.2.3 Conventional Definition of DQ; 4.2.4 Times Have Changed; 4.2.5 Annotated Bibliography; 4.3 Updating DQ Metrics: Data Quality Continuum; 4.3.1 Data Gathering; 4.3.2 Data Delivery; 4.3.3 Data Monitoring; 4.3.4 Data Storage; 4.3.5 Data Integration; 4.3.6 Data Retrieval; 4.3.7 Data Mining/Analysis; 4.3.8 Annotated Bibliography; 4.4 The Meaning of Data Quality Revisited; 4.4.1 Data Interpretation; 4.4.2 Data Suitability; 4.4.3 Dataset Type; 4.4.4 Attribute Type; 4.4.5 Application Type 327 $a4.4.6 Data Quality-A Many Splendored Thing 330 $aWritten for practitioners of data mining, data cleaning and database management. Presents a technical treatment of data quality including process, metrics, tools and algorithms. Focuses on developing an evolving modeling strategy through an iterative data exploration loop and incorporation of domain knowledge. Addresses methods of detecting, quantifying and correcting data quality issues that can have a significant impact on findings and decisions, using commercially available tools as well as new algorithmic approaches. Uses case studies to illustrate applications in real 410 0$aWiley series in probability and statistics. 606 $aData mining 606 $aElectronic data processing$xData preparation 606 $aElectronic data processing$xQuality control 615 0$aData mining. 615 0$aElectronic data processing$xData preparation. 615 0$aElectronic data processing$xQuality control. 
676 $a005.741 676 $a006.3 676 $a006.312 700 $aDasu$b Tamraparni$0281835 701 $aJohnson$b Theodore$0281836 801 0$bMiAaPQ 801 1$bMiAaPQ 801 2$bMiAaPQ 906 $aBOOK 912 $a996211655103316 996 $aExploratory data mining and data cleaning$9673537 997 $aUNISA LEADER 06873oam 2200757Ka 450 001 9910781517103321 005 20190503073400.0 010 $a0-262-30056-7 010 $a1-283-42078-3 010 $a9786613420787 010 $a0-262-30134-2 024 8 $a9786613420787 035 $a(CKB)2550000000075160 035 $a(EBL)3339352 035 $a(SSID)ssj0000570683 035 $a(PQKBManifestationID)11364049 035 $a(PQKBTitleCode)TC0000570683 035 $a(PQKBWorkID)10610924 035 $a(PQKB)10580704 035 $a(MiAaPQ)EBC3339352 035 $a(OCoLC)768348820$z(OCoLC)775992319$z(OCoLC)777333211$z(OCoLC)804683202$z(OCoLC)817055259$z(OCoLC)838809769$z(OCoLC)939263724$z(OCoLC)961532528$z(OCoLC)962590615$z(OCoLC)988410484$z(OCoLC)992039248$z(OCoLC)994931060$z(OCoLC)1013301831$z(OCoLC)1037930736$z(OCoLC)1038699725$z(OCoLC)1051626817$z(OCoLC)1054959793$z(OCoLC)1055363967$z(OCoLC)1065806688$z(OCoLC)1081292746 035 $a(OCoLC-P)768348820 035 $a(MaCbMITP)8731 035 $a(Au-PeEL)EBL3339352 035 $a(CaPaEBR)ebr10520614 035 $a(CaONFJC)MIL342078 035 $a(OCoLC)939263724 035 $a(EXLCZ)992550000000075160 100 $a20111214d2012 uy 0 101 0 $aeng 135 $aurcnu---unuuu 181 $ctxt 182 $cc 183 $acr 200 10$aGetting it wrong $ehow faulty monetary statistics undermine the Fed, the financial system, and the economy /$fWilliam A. Barnett 210 $aCambridge, Mass. $cMIT Press$d©2012 215 $a1 online resource (357 p.) 300 $aDescription based upon print version of record. 311 $a0-262-51688-8 311 $a0-262-01691-5 320 $aIncludes bibliographical references and index. 327 $aContents; Foreword: Macroeconomics as a Science; Preface; Acknowledgments; I. The Facts without the Math; 1. Introduction; 1.1 Whose Greed?; 1.2 The Great Moderation; 1.3 The Maestro; 1.4 Paradoxes; 1.5 Conclusion; 2. 
Monetary Aggregation Theory; 2.1 Adding Apples and Oranges; 2.2 Dual Price Aggregation; 2.3 Financial Aggregation; 2.4 The Commerce Department and the Department of Labor; 2.5 The Major Academic Players; 2.6 Banks throughout the World; 2.7 Mechanism Design: Why Is the Fed Getting It Wrong?; 2.8 Conclusion; 3. The History; 3.1 The 1960's and 1970's 327 $a3.2 The Monetarist Experiment: October 1979 to September 1982; 3.3 The End of the Monetarist Experiment: 1983 to 1984; 3.4 The Rise of Risk-Adjustment Concerns: 1984 to 1993; 3.5 The Y2K Computer Bug: 1999 to 2000; 3.6 Conclusion; 4. Current Policy Problems; 4.1 European ECB Data; 4.2 The Most Recent Data: Would You Believe This?; 4.3 The Current Crisis; 4.4 Conclusion; 5. Summary and Conclusion; II. Mathematical Appendixes; A. Monetary Aggregation Theory under Perfect Certainty; A.1 Introduction; A.2 Consumer Demand for Monetary Assets; A.3 Supply of Monetary Assets by Financial Intermediaries 327 $aA.4 Demand for Monetary Assets by Manufacturing Firms; A.5 Aggregation Theory under Homogeneity; A.6 Index-Number Theory under Homogeneity; A.7 Aggregation Theory without Homotheticity; A.8 Index-Number Theory under Nonhomogeneity; A.9 Aggregation over Consumers and Firms; A.10 Technical Change; A.11 Value Added; A.12 Macroeconomic and General Equilibrium Theory; A.13 Aggregation Error from Simple-Sum Aggregation; A.14 Conclusion; B. Discounted Capital Stock of Money with Risk Neutrality; B.1 Introduction; B.2 Economic Stock of Money (ESM) under Perfect Foresight; B.3 Extension to Risk 327 $aB.4 CE and Simple Sum as Special Cases of the ESM; B.5 Measurement of the Economic Stock of Money; C. Multilateral Aggregation within a Multicountry Economic Union; C.1 Introduction; C.2 Definition of Variables; C.3 Aggregation within Countries; C.4 Aggregation over Countries; C.5 Special Cases; C.6 Interest Rate Aggregation; C.7 Divisia Second Moments; C.8 Conclusion; D. 
Extension to Risk Aversion; D.1 Introduction; D.2 Consumer Demand for Monetary Assets; D.3 The Perfect-Certainty Case; D.4 The New Generalized Divisia Index; D.5 The CCAPM Special Case; D.6 The Magnitude of the Adjustment 327 $aD.7 Intertemporal Nonseparability; D.8 Consumer's Nonseparable Optimization Problem; D.9 Extended Risk-Adjusted User Cost of Monetary Assets; D.10 Conclusion; E. The Middle Ground: Understanding Divisia Aggregation; E.1 Introduction; E.2 The Divisia Index; E.3 The Weights; E.4 Is It a Quantity or Price Index?; E.5 Stocks versus Flows; E.6 Conclusion; References; Index 330 $aBlame for the recent financial crisis and subsequent recession has commonly been assigned to everyone from Wall Street firms to individual homeowners. It has been widely argued that the crisis and recession were caused by "greed" and the failure of mainstream economics. In this book, leading economist William Barnett argues instead that there was too little use of the relevant economics, especially from the literature on economic measurement. Barnett contends that as financial instruments became more complex, the simple-sum monetary aggregation formulas used by central banks, including the U.S. Federal Reserve, became obsolete. Instead, a major increase in public availability of best-practice data was needed. Households, firms, and governments, lacking the requisite information, incorrectly assessed systemic risk and significantly increased their leverage and risk-taking activities. Better financial data, Barnett argues, could have signaled the misperceptions and prevented the erroneous systemic-risk assessments. When extensive, best-practice information is not available from the central bank, increased regulation can constrain the adverse consequences of ill-informed decisions. Instead, there was deregulation. 
The result, Barnett argues, was a worst-case toxic mix: increasing complexity of financial instruments, inadequate and poor-quality data, and declining regulation. Following his accessible narrative of the deep causes of the crisis and the long history of private and public errors, Barnett provides technical appendixes, containing the mathematical analysis supporting his arguments. -- Back Cover. 606 $aEconometrics 606 $aFinance$xMathematical models 606 $aFinancial crises 606 $aMonetary policy$zUnited States 607 $aUnited States$xEconomic policy$y2009- 610 $aECONOMICS/Macroeconomics 610 $aECONOMICS/Finance 615 0$aEconometrics. 615 0$aFinance$xMathematical models. 615 0$aFinancial crises. 615 0$aMonetary policy 676 $a332.401/5195 700 $aBarnett$b William A$01121739 801 0$bOCoLC-P 801 1$bOCoLC-P 906 $aBOOK 912 $a9910781517103321 996 $aGetting it wrong$93748240 997 $aUNINA