
Artificial intelligence in brain and mental health : philosophical, ethical and policy issues / Fabrice Jotterand and Marcello Ienca, editors




Title: Artificial intelligence in brain and mental health : philosophical, ethical and policy issues / Fabrice Jotterand and Marcello Ienca, editors
Publication: Cham, Switzerland : Springer, [2022]
©2022
Physical description: 1 online resource (270 pages)
Discipline: 006.3
Topical subject: Artificial intelligence
Artificial intelligence in medicine
Mental health
Neurosciences
Genre/form subject: Electronic books
Persona (resp. second.): JotterandFabrice <1967->
IencaMarcello
Contents note: Intro -- Acknowledgments -- Contents -- About the Authors -- 1: Introduction -- References -- Part I: Big Data and Automated Learning: Scientific and Ethical Considerations -- 2: Big Data in Medical AI: How Larger Data Sets Lead to Robust, Automated Learning for Medicine -- 2.1 Why the Big Data Revolution? -- 2.1.1 More Samples, More Features -- 2.1.2 Hardware Improvements -- 2.2 Precision Medicine -- 2.2.1 Challenges and Opportunities Unique to Mental Health and Wellness -- 2.3 Tools for Big Data in Medicine -- 2.3.1 Standardization Tools -- 2.3.2 Analytics to Leverage Big Data -- 2.3.2.1 Unsupervised and Semi-Supervised Learning Methods -- 2.3.2.2 High-Throughput Model Selection and Testing -- 2.3.2.3 Ensemble Methods -- 2.3.2.4 Deep Learning -- 2.4 Big Data Concerns -- 2.4.1 The Need for Proper Validation -- 2.4.1.1 Test Models with Separate Training, Validation, and Test Sets -- 2.4.1.2 Assure Samples Adequately Represent the Reported Populations and Contexts -- 2.4.1.3 Avoid Contamination Between Training and Test Sets -- 2.4.1.4 Select the Right Performance Metrics -- 2.4.2 Security -- 2.4.3 Ethical Challenges -- 2.5 Conclusion -- References -- 3: Automatic Diagnosis and Screening of Personality Dimensions and Mental Health Problems -- 3.1 Introduction: Automatic Analysis of Personality -- 3.1.1 Personality and Diagnosis -- 3.1.2 Diagnosis Versus Screening -- 3.1.3 The Clinical Versus the Nonclinical Context -- 3.2 Computational Personality Analysis -- 3.2.1 The Relevance of Computational Personality Analysis in the Current Culture -- 3.2.2 Computational Personality Analysis in a Nutshell -- 3.3 Computational Personality Analysis Further Detailed -- 3.4 A Critical Perspective -- 3.5 Summary and Conclusions -- References.
4: Intelligent Virtual Agents in Behavioral and Mental Healthcare: Ethics and Application Considerations -- 4.1 Introduction -- 4.1.1 Technical Overview -- 4.2 Practical Applications in Healthcare and Benefits -- 4.2.1 Use in Care Settings -- 4.2.2 IVAs in Serious Games -- 4.3 Ethical Issues -- 4.3.1 User Safety -- 4.3.2 Risks Associated with Overreliance on Technology -- 4.3.3 Risks to Privacy -- 4.3.4 Deception -- 4.3.5 Artificial Relationships -- 4.3.6 Bias in Design -- 4.3.7 Black-Box Problem -- 4.3.8 Legal Responsibility and Liability -- 4.4 Recommendations -- 4.5 Summary and Conclusions -- References -- 5: Machine Learning in Stroke Medicine: Opportunities and Challenges for Risk Prediction and Prevention -- 5.1 Introduction -- 5.2 Burden of Stroke -- 5.3 Stroke Prevention: A Public Health Priority -- 5.4 The Advent of Data-Driven Risk Prediction Models -- 5.5 From Data-Driven Risk Prediction to Stroke Prevention -- 5.6 Technological, Methodological, and Ethical Challenges -- 5.6.1 Data Sourcing -- 5.6.2 Application Development -- 5.6.3 Deployment in Clinical Practice -- 5.7 Conclusion -- References -- 6: Respect for Persons and Artificial Intelligence in the Age of Big Data -- 6.1 Introduction -- 6.2 Big Data and AI in Biomedical Research -- 6.3 Big Data Health Research and the Belmont Principles -- 6.4 Privacy and Confidentiality -- 6.5 Notification and Broad Consent: Ethically Insufficient -- 6.6 A Broader Conceptualization of Respect for Persons and a Balance with Other Principles -- 6.7 Beyond Informed Consent -- 6.8 Include Participant Representatives in Project Leadership -- 6.9 Stakeholder Engagement in AI Mental Health Research -- 6.10 Conclusion -- References -- Part II: AI for Digital Mental Health and Assistive Robotics: Philosophical and Regulatory Challenges.
7: Social Robots and Dark Patterns: Where Does Persuasion End and Deception Begin? -- 7.1 Introduction -- 7.2 Robots in the Wild -- 7.2.1 Robots in the Realm of Personal Human Experience -- 7.2.2 Socially Interactive Robots -- 7.2.3 The Challenge of Long-Term Interaction with Social Robots -- 7.3 Social Robot Design -- 7.3.1 Anthropomorphism in Social Robot Design -- 7.3.2 Social Robot Design Is Not Neutral -- 7.3.3 Social Robots as Exemplars of Persuasive Design -- 7.4 Dark Patterns Meet Robotics -- 7.4.1 What Are Dark Patterns? -- 7.4.2 Pervasiveness of Dark Patterns -- 7.4.3 When Social Robots Meet Dark Patterns -- 7.4.3.1 aibo -- 7.4.3.2 Pepper -- 7.5 Discussion -- 7.6 Conclusions -- References -- 8: Minding the AI: Ethical Challenges and Practice for AI Mental Health Care Tools -- 8.1 Introduction -- 8.2 Artificial Intelligence -- 8.3 AI Mental Health Applications -- 8.4 Ethical Challenges -- 8.5 Therapeutic Relationship -- 8.6 Safety and Effectiveness -- 8.7 Bias/Fairness -- 8.8 Privacy/Trust -- 8.9 Surveillance -- 8.10 Conclusion -- References -- 9: Digital Behavioral Technology, Deep Learning, and Self-Optimization -- 9.1 Introduction -- 9.2 Digital Behavioral Technology -- 9.2.1 Functionalities -- 9.2.2 Usage Profiles -- 9.3 Digital Behavioral Technology and Self-Optimization -- 9.3.1 Information -- 9.3.2 Parameterization of Behavior -- 9.3.3 Direct Interaction -- 9.4 Digital Behavioral Technology and Artificial Intelligence -- 9.5 Problems with Artificial Intelligence -- 9.6 Possible Effects of Problems with AI on DBT's Role in Self-Optimization -- 9.7 Conclusion -- References -- 10: Mental Health Chatbots, Moral Bio-Enhancement, and the Paradox of Weak Moral AI -- 10.1 Major Concerns Regarding Moral Bio-Enhancement -- 10.2 Weak Moral AI as a Socratic Assistant.
10.3 The Paradox of a Weak Moral AI: A Philosophical Argument -- 10.3.1 The Impotence of Moral Judgment: Humeanism Vs. Anti-Humeanism -- 10.3.2 Moral Hard-Wiring and the Limits of Weak Moral AI -- 10.3.3 Motivation Ethics: The Evaluation of Moral Agents and Actions -- 10.4 Chatbots Used in Mental Health and How the Case Sheds Light on the Feasibility of a Weak Moral AI -- 10.5 Can Moral Info-enhancement Interventions Be "Medically Indicated"? -- 10.6 Concluding Remarks -- References -- 11: The AI-Powered Digital Health Sector: Ethical and Regulatory Considerations When Developing Digital Mental Health Tools for the Older Adult Demographic -- 11.1 Introduction -- 11.2 Regulatory Gaps -- 11.3 Ethical Principles -- 11.4 Smart Home Use Case -- 11.5 Discussion and Case Analysis: Ethical Dimensions of Smart Home Technologies -- 11.5.1 Informed Consent and Agency -- 11.5.1.1 Content and Delivery -- 11.5.1.2 Cognitive Capacity -- 11.5.1.3 Bystanders -- 11.5.2 Usability and Accessibility -- 11.5.2.1 Usability -- 11.5.2.2 Accessibility -- 11.5.3 Risks and Benefits -- 11.5.4 Privacy -- 11.5.5 Data Management -- 11.6 Conclusion -- References -- 12: AI Extenders and the Ethics of Mental Health -- 12.1 Introduction -- 12.2 What Is the Extended Mind Thesis? -- 12.3 What Is an "AI Extender"? How This Differs from Standard Kinds of Cognitive Extension -- 12.4 How the Extended Mind Can Change Our Understanding, Assessment, and Treatment of Cognitive Disorders -- 12.4.1 Alzheimer's Disease -- 12.4.2 Learning Disabilities and Disorders -- 12.4.3 Addiction -- 12.4.4 Borderline Personality Disorder -- 12.4.5 Autistic Disorders -- 12.5 The Specific Effects of AI Extenders on Mental Health -- 12.6 Potentialities and Challenges of AI Extenders -- 12.7 Recommendations -- Appendix -- References.
Part III: AI in Neuroscience and Neurotechnology: Ethical, Social and Policy Issues -- 13: The Importance of Expiry Dates: Evaluating the Societal Impact of AI-Based Neuroimaging -- 13.1 Introduction -- 13.2 The Risks of an Early Technology Assessment -- 13.3 Evaluating Implications of AI in Combination with Neuroimaging (AINI) -- 13.4 AINI in Society: Are We Ready Just Yet? -- 13.5 AINI and Expiry Dates -- 13.6 Conclusion -- References -- 14: Does Closed-Loop DBS for Treatment of Psychiatric Disorders Raise Salient Authenticity Concerns? -- 14.1 Introduction -- 14.2 Deep Brain Stimulation and Psychiatric Disorders -- 14.3 Authenticity and Treatment of Psychiatric Disorders -- 14.3.1 Sense of Authenticity -- 14.3.2 Narrative Authenticity -- 14.3.3 Assessing Authenticity -- 14.3.4 Why Authenticity? -- 14.4 Authenticity and Closed-Loop DBS -- 14.4.1 Reading Data -- 14.4.2 Analyzing Data -- 14.4.3 Stimulation -- 14.5 Future Directions -- References -- 15: Matter Over Mind: Liability Considerations Surrounding Artificial Intelligence in Neuroscience -- 15.1 Introduction -- 15.2 Traditional Tort Liability -- 15.2.1 Physicians -- 15.2.2 Health Care Organizations -- 15.2.3 Medical Device Manufacturers -- 15.3 Machine Learning -- 15.4 Liability Surrounding Neurological Medical Devices That Utilize AI -- 15.4.1 Physicians -- 15.4.2 Hospitals -- 15.4.3 Manufacturers -- 15.4.4 Regulatory Considerations -- 15.4.4.1 Software -- 15.4.4.2 Devices -- 15.5 Data Collection Considerations -- 15.5.1 Invasive BCIs -- 15.5.2 Neurowearables -- 15.6 Pathways Forward -- 15.6.1 Ethics of Informed Consent -- 15.6.2 Neuro Rights -- 15.6.3 Physician Education -- 15.6.4 Hospitals -- 15.6.5 FDA Updates -- 15.7 Conclusion -- References -- 16: A Common Ground for Human Rights, AI, and Brain and Mental Health -- 16.1 Introduction.
16.2 Human Rights in AI and Healthcare.
Authorized title: Artificial intelligence in brain and mental health
ISBN: 3-030-74188-5
Format: Printed material
Bibliographic level: Monograph
Language of publication: English
Record no.: 9910544864703321
Held at: Univ. Federico II
Series: Advances in neuroethics series.