LEADER 06433nam 22007575 450
001 9910300151503321
005 20200703024908.0
010 $a1-306-54344-4
010 $a3-319-04181-9
024 7 $a10.1007/978-3-319-04181-0
035 $a(CKB)3710000000083700
035 $a(SSID)ssj0001163078
035 $a(PQKBManifestationID)11745397
035 $a(PQKBTitleCode)TC0001163078
035 $a(PQKBWorkID)11141630
035 $a(PQKB)10094734
035 $a(MiAaPQ)EBC1636536
035 $a(DE-He213)978-3-319-04181-0
035 $a(PPN)176109129
035 $a(EXLCZ)993710000000083700
100 $a20140103d2014 u| 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt
182 $cc
183 $acr
200 10$aMachine Learning in Medicine - Cookbook /$fby Ton J. Cleophas, Aeilko H. Zwinderman
205 $a1st ed. 2014.
210 1$aCham :$cSpringer International Publishing :$cImprint: Springer,$d2014.
215 $a1 online resource (131 pages) $cillustrations
225 1 $aSpringerBriefs in Statistics,$x2191-544X
300 $aIncludes index.
311 $a3-319-04180-0
327 $aI Cluster Models -- Hierarchical Clustering and K-means Clustering to Identify Subgroups in Surveys (50 Patients) -- Density-based Clustering to Identify Outlier Groups in Otherwise Homogeneous Data (50 Patients) -- Two Step Clustering to Identify Subgroups and Predict Subgroup Memberships in Individual Future Patients (120 Patients) -- II Linear Models -- Linear, Logistic and Cox Regression for Outcome Prediction with Unpaired Data (20, 55 and 60 Patients) -- Generalized Linear Models for Outcome Prediction with Paired Data (100 Patients and 139 Physicians) -- Generalized Linear Models for Predicting Event-Rates (50 Patients) Exact P-Values -- Factor Analysis and Partial Least Squares (PLS) for Complex-Data Reduction (250 Patients) -- Optimal Scaling of High-sensitivity Analysis of Health Predictors (250 Patients) -- Discriminant Analysis for Making a Diagnosis from Multiple Outcomes (45 Patients) -- Weighted Least Squares for Adjusting Efficacy Data with Inconsistent Spread (78 Patients) -- Partial Correlations for Removing Interaction Effects from Efficacy Data (64 Patients) -- Canonical Regression for Overall Statistics of Multivariate Data (250 Patients) -- III Rules Models -- Neural Networks for Assessing Relationships that are Typically Nonlinear (90 Patients) -- Complex Samples Methodologies for Unbiased Sampling (9,678 Persons) -- Correspondence Analysis for Identifying the Best of Multiple Treatments in Multiple Groups (217 Patients) -- Decision Trees for Decision Analysis (1004 and 953 Patients) -- Multidimensional Scaling for Visualizing Experienced Drug Efficacies (14 Pain-killers and 42 Patients) -- Stochastic Processes for Long Term Predictions from Short Term Observations -- Optimal Binning for Finding High Risk Cut-offs (1445 Families) -- Conjoint Analysis for Determining the Most Appreciated Properties of Medicines to Be Developed (15 Physicians) -- Index.
330 $aThe amount of data in medical databases doubles every 20 months, and physicians are at a loss to analyze them. Moreover, traditional methods of data analysis have difficulty identifying outliers and patterns in big data and in data with multiple exposure/outcome variables, and analysis rules for surveys and questionnaires, currently common methods of data collection, are essentially missing. Clearly, it is time for medical and health professionals to overcome their reluctance to use machine learning, and the current 100-page cookbook should be helpful to that aim. It covers in condensed form the subjects reviewed in the 750-page, three-volume textbook by the same authors, entitled "Machine Learning in Medicine I-III" (ed. by Springer, Heidelberg, Germany, 2013), and was written as a hand-holding presentation and must-read publication. It was written not only for investigators and students in the field, but also for jaded clinicians new to the methods and lacking the time to read the entire textbooks. General purposes and scientific questions of the methods are only briefly mentioned, but full attention is given to the technical details. The two authors, a statistician and current president of the International Association of Biostatistics and a clinician and past-president of the American College of Angiology, provide plenty of step-by-step analyses from their own research; data files for self-assessment are available at extras.springer.com. From their experience the authors demonstrate that machine learning sometimes performs better than traditional statistics. Machine learning may have few options for adjusting for confounding and interaction, but propensity scores and interaction variables can be added to almost any machine learning method.
410 0$aSpringerBriefs in Statistics,$x2191-544X
606 $aMedicine
606 $aBiostatistics
606 $aStatistics
606 $aApplication software
606 $aBiometrics (Biology)
606 $aMedicine/Public Health, general$3https://scigraph.springernature.com/ontologies/product-market-codes/H00007
606 $aBiostatistics$3https://scigraph.springernature.com/ontologies/product-market-codes/L15020
606 $aStatistics for Life Sciences, Medicine, Health Sciences$3https://scigraph.springernature.com/ontologies/product-market-codes/S17030
606 $aComputer Applications$3https://scigraph.springernature.com/ontologies/product-market-codes/I23001
606 $aBiometrics$3https://scigraph.springernature.com/ontologies/product-market-codes/I22040
615 0$aMedicine.
615 0$aBiostatistics.
615 0$aStatistics.
615 0$aApplication software.
615 0$aBiometrics (Biology).
615 14$aMedicine/Public Health, general.
615 24$aBiostatistics.
615 24$aStatistics for Life Sciences, Medicine, Health Sciences.
615 24$aComputer Applications.
615 24$aBiometrics.
676 $a006.31
700 $aCleophas$b Ton J$4aut$4http://id.loc.gov/vocabulary/relators/aut$0472359
702 $aZwinderman$b Aeilko H$4aut$4http://id.loc.gov/vocabulary/relators/aut
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910300151503321
996 $aMachine Learning in Medicine - Cookbook$92522504
997 $aUNINA