
Record no.

UNINA9910157529303321

Author

Seadle Michael

Title

Quantifying Research Integrity / / Michael Seadle

Publication/distribution/printing

[San Rafael, California] : Morgan & Claypool, 2017

©2017

ISBN

1-62705-967-9

Physical description

1 online resource (143 pages) : illustrations

Series

Synthesis Lectures on Information Concepts, Retrieval, and Services, 1947-9468 ; 53

Discipline

174.95

Subjects

Research -- Moral and ethical aspects

Plagiarism

Fraud

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

General notes

Part of: Synthesis digital library of engineering and computer science.

Bibliography note

Includes bibliographical references.

Contents note

1. Introduction -- 1.1 Overview -- 1.2 Context -- 1.3 Time -- 1.4 Images --

2. State of the art -- 2.1 Introduction -- 2.2 Legal issues -- 2.3 Ethics -- 2.3.1 Second-language students -- 2.3.2 Self-plagiarism -- 2.4 Prevention -- 2.4.1 Education -- 2.4.2 Detection as prevention -- 2.5 Detection tools -- 2.5.1 Plagiarism tools -- 2.5.2 iThenticate -- 2.5.3 Crowdsourcing -- 2.5.4 Image-manipulation tools -- 2.6 Replication --

3. Quantifying plagiarism -- 3.1 Overview -- 3.1.1 History -- 3.1.2 Definition -- 3.1.3 Pages and percents -- 3.1.4 Context, quotes, and references -- 3.1.5 Sentences, paragraphs, and other units -- 3.1.6 Self-plagiarism -- 3.2 In the humanities -- 3.2.1 Overview -- 3.2.2 Paragraph-length examples -- 3.2.3 Book-length examples -- 3.3 In the social sciences -- 3.3.1 Overview -- 3.3.2 Example 1 -- 3.3.3 Example 2 -- 3.4 In the natural sciences -- 3.4.1 Overview -- 3.4.2 Example 1 -- 3.4.3 Example 2 -- 3.5 Conclusion: plagiarism --

4. Quantifying data falsification -- 4.1 Introduction -- 4.2 Metadata -- 4.3 Humanities -- 4.3.1 Introduction -- 4.3.2 History -- 4.3.3 Art and art history -- 4.3.4 Ethnography -- 4.3.5 Literature -- 4.4 Social sciences -- 4.4.1 Introduction -- 4.4.2 Replication studies -- 4.4.3 Diederik Stapel -- 4.4.4 James Hunton -- 4.4.5 Database revisions -- 4.4.6 Data manipulation -- 4.5 Natural sciences -- 4.5.1 Introduction -- 4.5.2 Lab sciences -- 4.5.3 Medical sciences -- 4.5.4 Computing and statistics -- 4.5.5 Other non-lab sciences -- 4.6 Conclusion --

5. Quantifying image manipulation -- 5.1 Introduction -- 5.2 Digital imaging technology -- 5.2.1 Background -- 5.2.2 How a digital camera works -- 5.2.3 Raw format -- 5.2.4 Discovery analytics -- 5.2.5 Digital video -- 5.3 Arts and humanities -- 5.3.1 Introduction -- 5.3.2 Arts -- 5.3.3 Humanities -- 5.4 Social sciences and computing -- 5.4.1 Overview -- 5.4.2 Training and visualization -- 5.4.3 Standard manipulations -- 5.5 Biology -- 5.5.1 Legitimate manipulations -- 5.5.2 Illegitimate manipulations -- 5.6 Medicine -- 5.6.1 Limits -- 5.6.2 Case 1 -- 5.6.3 Case 2 -- 5.7 Other natural sciences -- 5.8 Detection tools and services -- 5.9 Conclusion --

6. Applying the metrics -- 6.1 Introduction -- 6.2 Detecting gray zones -- 6.3 Determining falsification -- 6.4 Prevention -- 6.5 Conclusion -- 6.6 HEADT Centre --

Bibliography -- Author's biography.

Summary/abstract

Institutions typically treat research integrity violations as black and white, right or wrong. As a result, the wide range of grayscale nuances that separate accident, carelessness, and bad practice from deliberate fraud and malpractice often gets lost. This lecture looks at how to quantify the grayscale range in three kinds of research integrity violations: plagiarism, data falsification, and image manipulation. Quantification works best with plagiarism, because the essential one-to-one matching algorithms are well known and established tools exist for detecting matches. Questions remain, however, about how many matching words of what kind, in what location, and in which discipline constitute reasonable suspicion of fraudulent intent. Different disciplines take different perspectives on quantity and location. Quantification is harder with data falsification, because the original data are often not available and because experimental replication remains surprisingly difficult. The same is true of image manipulation, where tools exist for detecting certain kinds of manipulation but are also easily defeated. This lecture looks at how to prevent violations of research integrity from a pragmatic viewpoint, and at what steps institutions and publishers can take to discourage problems beyond the usual ethical admonitions. There are no simple answers, but two measures can help: the systematic use of detection tools and the requirement to provide original data and images. These alone do not suffice, but they represent a start. The scholarly community needs a better awareness of the complexity of research integrity decisions. Only an open and widespread international discussion can bring about a consensus on where the boundary lines lie and when grayscale problems shade into black. One goal of this work is to move that discussion forward.
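
As an illustration of the kind of one-to-one matching the abstract refers to, the sketch below computes the share of overlapping word n-grams between a suspect passage and a source passage, yielding a grayscale score rather than a yes/no verdict. It is a minimal, hypothetical example: the function names, the 5-word window, and the whitespace tokenization are assumptions made for illustration, not Seadle's method or the algorithm of any particular detection tool.

```python
# Minimal sketch of one-to-one n-gram matching for quantifying text overlap.
# All names and parameters here are illustrative assumptions.

def ngrams(text, n=5):
    """Return the set of word n-grams in a text (lowercased, whitespace-split)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(suspect, source, n=5):
    """Fraction of the suspect text's n-grams that also occur in the source.

    A value near 1 means near-verbatim copying; a value near 0 suggests
    independent wording; values in between fall into the grayscale range
    that still needs human judgment about context, quotation, and citation.
    """
    suspect_grams = ngrams(suspect, n)
    if not suspect_grams:
        return 0.0
    return len(suspect_grams & ngrams(source, n)) / len(suspect_grams)

if __name__ == "__main__":
    source = ("Quantification works best with plagiarism because "
              "one-to-one matching algorithms are well known.")
    suspect = ("Quantification works best with plagiarism because "
               "one-to-one matching algorithms are well established.")
    print(f"overlap: {overlap_ratio(suspect, source):.2f}")
```

Such a score only measures matching text; deciding whether a given percentage in a given location and discipline amounts to misconduct remains the interpretive question the lecture addresses.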