LEADER 06021nam 2200613 450
001 9910157529303321
005 20221206182701.0
010 $a1-62705-967-9
024 7 $a10.2200/S00743ED1V01Y201611ICR053
035 $a(CKB)3710000001001279
035 $a(MiAaPQ)EBC4774128
035 $a(CaBNVSL)swl00407068
035 $a(OCoLC)970006781
035 $a(IEEE)7809449
035 $a(MOCL)201611ICR053
035 $a(EXLCZ)993710000001001279
100 $a20170111h20172017 uy 0
101 0 $aeng
135 $aurcnu||||||||
181 $2rdacontent
182 $2rdamedia
183 $2rdacarrier
200 10$aQuantifying Research Integrity /$fMichael Seadle
210 1$a[San Rafael, California] :$cMorgan & Claypool,$d2017.
210 4$d©2017
215 $a1 online resource (143 pages) $cillustrations
225 1 $aSynthesis Lectures on Information Concepts, Retrieval, and Services,$x1947-9468 ;$v53
300 $aPart of: Synthesis digital library of engineering and computer science.
311 $a1-62705-640-8
320 $aIncludes bibliographical references.
327 $a1. Introduction -- 1.1 Overview -- 1.2 Context -- 1.3 Time -- 1.4 Images --
327 $a2. State of the art -- 2.1 Introduction -- 2.2 Legal issues -- 2.3 Ethics -- 2.3.1 Second-language students -- 2.3.2 Self-plagiarism -- 2.4 Prevention -- 2.4.1 Education -- 2.4.2 Detection as prevention -- 2.5 Detection tools -- 2.5.1 Plagiarism tools -- 2.5.2 iThenticate -- 2.5.3 Crowdsourcing -- 2.5.4 Image-manipulation tools -- 2.6 Replication --
327 $a3. Quantifying plagiarism -- 3.1 Overview -- 3.1.1 History -- 3.1.2 Definition -- 3.1.3 Pages and percents -- 3.1.4 Context, quotes, and references -- 3.1.5 Sentences, paragraphs, and other units -- 3.1.6 Self-plagiarism -- 3.2 In the humanities -- 3.2.1 Overview -- 3.2.2 Paragraph-length examples -- 3.2.3 Book-length examples -- 3.3 In the social sciences -- 3.3.1 Overview -- 3.3.2 Example 1 -- 3.3.3 Example 2 -- 3.4 In the natural sciences -- 3.4.1 Overview -- 3.4.2 Example 1 -- 3.4.3 Example 2 -- 3.5 Conclusion: plagiarism --
327 $a4. Quantifying data falsification -- 4.1 Introduction -- 4.2 Metadata -- 4.3 Humanities -- 4.3.1 Introduction -- 4.3.2 History -- 4.3.3 Art and art history -- 4.3.4 Ethnography -- 4.3.5 Literature -- 4.4 Social sciences -- 4.4.1 Introduction -- 4.4.2 Replication studies -- 4.4.3 Diederik Stapel -- 4.4.4 James Hunton -- 4.4.5 Database revisions -- 4.4.6 Data manipulation -- 4.5 Natural sciences -- 4.5.1 Introduction -- 4.5.2 Lab sciences -- 4.5.3 Medical sciences -- 4.5.4 Computing and statistics -- 4.5.5 Other non-lab sciences -- 4.6 Conclusion --
327 $a5. Quantifying image manipulation -- 5.1 Introduction -- 5.2 Digital imaging technology -- 5.2.1 Background -- 5.2.2 How a digital camera works -- 5.2.3 Raw format -- 5.2.4 Discovery analytics -- 5.2.5 Digital video -- 5.3 Arts and humanities -- 5.3.1 Introduction -- 5.3.2 Arts -- 5.3.3 Humanities -- 5.4 Social sciences and computing -- 5.4.1 Overview -- 5.4.2 Training and visualization -- 5.4.3 Standard manipulations -- 5.5 Biology -- 5.5.1 Legitimate manipulations -- 5.5.2 Illegitimate manipulations -- 5.6 Medicine -- 5.6.1 Limits -- 5.6.2 Case 1 -- 5.6.3 Case 2 -- 5.7 Other natural sciences -- 5.8 Detection tools and services -- 5.9 Conclusion --
327 $a6. Applying the metrics -- 6.1 Introduction -- 6.2 Detecting gray zones -- 6.3 Determining falsification -- 6.4 Prevention -- 6.5 Conclusion -- 6.6 HEADT Centre --
327 $aBibliography -- Author's biography.
330 3 $aInstitutions typically treat research integrity violations as black and white, right or wrong. The result is that the wide range of grayscale nuances that separate accident, carelessness, and bad practice from deliberate fraud and malpractice often gets lost. This lecture looks at how to quantify the grayscale range in three kinds of research integrity violations: plagiarism, data falsification, and image manipulation. Quantification works best with plagiarism, because the essential one-to-one matching algorithms are well known and established tools exist for detecting matches. Questions remain, however, about how many matching words of what kind, in what location, and in which discipline constitute reasonable suspicion of fraudulent intent. Different disciplines take different perspectives on quantity and location. Quantification is harder with data falsification, because the original data are often not available and because experimental replication remains surprisingly difficult. The same is true of image manipulation, where tools exist for detecting certain kinds of manipulations but are also easily defeated. This lecture also looks at how to prevent violations of research integrity from a pragmatic viewpoint, and at what steps institutions and publishers can take to discourage problems beyond the usual ethical admonitions. There are no simple answers, but two measures can help: the systematic use of detection tools and the requirement of original data and images. These alone do not suffice, but they represent a start. The scholarly community needs a better awareness of the complexity of research integrity decisions. Only an open and widespread international discussion can bring about a consensus on where the boundary lines are and when grayscale problems shade into black. One goal of this work is to move that discussion forward.
410 0$aSynthesis lectures on information concepts, retrieval, and services ;$v53.
606 $aResearch$xMoral and ethical aspects
606 $aPlagiarism
606 $aFraud
615 0$aResearch$xMoral and ethical aspects.
615 0$aPlagiarism.
615 0$aFraud.
676 $a174.95
700 $aSeadle$b Michael$01268231
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910157529303321
996 $aQuantifying Research Integrity$92983012
997 $aUNINA