Title: Artificial intelligence in HCI : second International Conference, AI-HCI 2021, held as part of the 23rd HCI International Conference, HCII 2021, Virtual event, July 24-29, 2021, Proceedings / Helmut Degen, Stavroula Ntoa (editors)
Published: Cham, Switzerland : Springer, [2021], ©2021
Description: 1 online resource (565 pages)
Series: Lecture notes in computer science ; 12797
Language: English
ISBN: 3-030-77772-3
ISBN (print version): 3-030-77771-5
Other identifiers: (CKB)4100000011979497; (MiAaPQ)EBC6676367; (Au-PeEL)EBL6676367; (OCoLC)1259338936; (PPN)258058757; (EXLCZ)994100000011979497

Contents:
Intro -- Foreword -- HCI International 2021 Thematic Areas and Affiliated Conferences -- Contents

Ethics, Trust and Explainability:
Can You Trust the Black Box? The Effect of Personality Traits on Trust in AI-Enabled User Interfaces -- 1 Introduction -- 2 Research Background and Related Work -- 2.1 Human-Centred AI -- 2.2 Trust in AI -- 2.3 Best Practices and Guidelines for the Design of Intelligent Solutions -- 2.4 User-Type Models -- 3 Research Design -- 4 Results -- 4.1 Demographic Information -- 4.2 Personality Traits and Trust in AI-Enabled User Interfaces -- 5 Conclusion -- 6 Limitations and Future Research -- References
Towards Design Principles for User-Centric Explainable AI in Fraud Detection -- 1 Introduction -- 2 Theoretical Foundation and Related Work -- 3 Research Methodology -- 4 Design Principles -- 5 Evaluation and Results -- 5.1 Evaluation and Experiment Design -- 5.2 Evaluation Results -- 5.3 Information Quality of Design Principles with Syntactic and Semantic Criteria -- 5.4 Quality of Design Principles Instantiation Through a Simulation -- 6 Discussion and Implications -- 6.1 Practical Implications of Design Principles -- 6.2 Simulation Findings -- 7 Conclusions -- References
Disentangling Trust and Anthropomorphism Toward the Design of Human-Centered AI Systems -- 1 Introduction -- 2 Trust -- 2.1 The Function of Trust -- 2.2 Trust in Human-Machine Interaction -- 3 Anthropomorphism -- 3.1 The Function of Anthropomorphism -- 3.2 Anthropomorphism in Human-Machine Interaction -- 4 Trust and Anthropomorphism -- 4.1 Similarities -- 4.2 Differences -- 5 Toward Human-Centered Design -- 5.1 Designing for Trust Repair and Trust Dampening -- 5.2 Anthropomorphism as a Tool for User Understanding -- 6 Conclusion -- References
Designing a Gender-Inclusive Conversational Agent For Pair Programming: An Empirical Investigation -- 1 Introduction -- 2 Methodology -- 2.1 Human-Human Study -- 2.2 Human-Agent Study -- 2.3 Analysis of Data -- 3 Results -- 4 Discussions -- 5 Conclusion -- References
The Challenge of Digital Education and Equality in Taiwan -- 1 Introduction -- 2 Digital Divide: A Daily Scenario of Regular Life -- 3 The Challenges: IT Arm Races or Digital Equality -- 4 Conclusion -- References
Morality Beyond the Lines: Detecting Moral Sentiment Using AI-Generated Synthetic Context -- 1 Introduction -- 2 Methodologies -- 2.1 Public Opinion Problem Under Study: Local Attitudes Toward US Oversea Bases and US Troop Presence -- 2.2 Fine-Tuning Data Collection -- 2.3 Fine-Tuning GPT-2 Language Model -- 2.4 Synthetic Context Generation -- 2.5 Keyword Identification and Counts -- 3 Results and Discussions -- 3.1 Number of MFD-Related Keywords Before and After Fine-Tuning -- 3.2 Discussion -- 4 Conclusion -- References
Whoops! Something Went Wrong: Errors, Trust, and Trust Repair Strategies in Human Agent Teaming -- 1 Introduction -- 2 Trust Development in HAT -- 2.1 Tangibility and Immediacy -- 2.2 Transparency -- 2.3 Reliability -- 2.4 Task -- 3 Trust Violation in HAT -- 4 Trust Repair in HAT -- 4.1 Apologies -- 4.2 Denial -- 4.3 Context Specific Strategies -- 4.4 Interactions Between Trust Violation and Trust Repair -- 4.5 Moving Beyond Repair and into Calibration -- 5 Conclusion -- References
What Does It Mean to Explain? A User-Centered Study on AI Explainability -- 1 Introduction -- 1.1 Background -- 2 Related Work -- 2.1 XAI Definition -- 2.2 XAI Techniques -- 2.3 XAI HCI Approaches -- 3 User Study -- 3.1 Individual User Interview -- 3.2 Group Workshop -- 4 Discussion -- 4.1 Explanation Focus -- 4.2 User Profiles -- 4.3 Mapping -- 5 Conclusion -- References

Human-Centered AI:
How Intuitive Is It? Comparing Metrics for Attitudes in Argumentation with a Human Baseline -- 1 Introduction -- 2 Definitions -- 3 Distance Functions for Argumentations -- 4 Comparison with a Human Baseline -- 5 Discussion -- 6 Related Work -- 7 Conclusion and Future Work -- References
A Contextual Bayesian User Experience Model for Scholarly Recommender Systems -- 1 Introduction and Problem Statement -- 2 Research Background -- 2.1 Contextual Approaches -- 2.2 Contextual User Modeling -- 2.3 User eXperience (UX) -- 2.4 Bayesian Network -- 2.5 Bayesian Network Algorithms -- 3 Related Work -- 4 Why Bayesian Network Modeling? -- 4.1 Suitable to Deal with Uncertain and Dynamic Contexts -- 4.2 Well Adapted for Diagnose of Users' Preferences -- 4.3 Appropriate for Diagnose of User's Information Needs in SRSs -- 4.4 Appropriate for Representation of Casualty Relationships -- 4.5 Well Adapted to Other Recommending Approaches -- 5 Bayesian UM Development -- 5.1 Dataset Preparation -- 5.2 BN Structure Learning -- 5.3 BN Parameter Learning and Inference -- 6 Bayesian UM Evaluation and Results -- 6.1 Robustness of BN Structure: Sensitivity Analysis -- 6.2 Comparison of BN Algorithm-Expected Loss -- 6.3 Predictive Performance Assessment -- 7 Discussion: Proposed Model Advantages -- 8 Recommendations for Future Studies -- Appendix A: Data Codebook -- References
From a Workshop to a Framework for Human-Centered Artificial Intelligence -- 1 Introduction -- 2 The Workshop -- 2.1 Workshop Scope -- 2.2 Workshop Format and Participants -- 3 Results -- 3.1 Workshop Expectations and Concerns -- 3.2 AI in HCI Research Topics -- 3.3 AI and Trust -- 3.4 Ethical AI -- 3.5 Human-Centered AI -- 3.6 Workshop Evaluation -- 3.7 The AI in HCI Research Map -- 4 A Framework for Human-Centered AI -- 5 Discussion and Conclusions -- References
Collaborative Human-AI Sensemaking for Intelligence Analysis -- 1 Intelligence Analysis -- 1.1 Challenges and Trends -- 1.2 Sensemaking in Intelligence Analysis -- 1.3 Artificial Intelligence and Machine Learning -- 1.4 Research Questions and Use Case -- 2 Theoretical Foundations of Human-AI Sensemaking -- 3 Methods -- 4 Results -- 4.1 Algorithms Matter -- 4.2 Features Matter -- 4.3 Outcomes Matter -- 5 Discussion -- 5.1 A Model of Collaborative Sensemaking with AI -- 5.2 Conclusions -- 5.3 Future Work -- References
Design Intelligence - Taking Further Steps Towards New Methods and Tools for Designing in the Age of AI -- 1 Introduction -- 2 Overview Design for AI Challenges -- 2.1 Case Studies from B2B Factory Automation -- 2.2 Additional Findings -- 2.3 Related Work -- 2.4 Compare the Findings -- 3 Overview Human-Centered-AI Principles -- 3.1 Purpose and Definition -- 3.2 Proposed Solutions -- 3.3 Explanations and Analysis -- 3.4 Conclusion and Missing Pieces -- 4 Mapping Challenges and Principles -- 4.1 Which Design Challenges Are Addressed? -- 4.2 Which Aspects Are Still Missing? -- 5 Conclusion -- References
Towards Incorporating AI into the Mission Planning Process -- 1 Introduction -- 1.1 Background -- 2 Key Challenges and Considerations -- 2.1 Data Challenges -- 2.2 Mission Plan Exploration -- 2.3 3rd Wave AI -- 3 AI Framework Design -- 3.1 Deep Reinforcement Learning -- 3.2 Neural Policy Programs -- 3.3 Human-NPP Teaming -- 4 Prototyping and Demonstration Environment -- 4.1 StarCraft II Background -- 4.2 StarCraft II Similarities to Navy Mission Environment -- 4.3 Prototyping Environment in StarCraft II -- 5 Conclusions and Future Work -- References
Putting a Face on Algorithms: Personas for Modeling Artificial Intelligence -- 1 Introduction -- 1.1 Personas -- 2 Method -- 3 AI Persona Prototype -- 4 Experience with Collaboration -- 5 Discussion and Future Work -- References
Tool or Partner: The Designer's Perception of an AI-Style Generating Service -- 1 Introduction -- 2 Related Work -- 2.1 Changes in the Creativity Concept and Research -- 2.2 System and Tool Evolution -- 2.3 The Emergence of New Works -- 2.4 Visual Field Expansion Through Tools -- 2.5 Summary -- 3 Research Verification: Case Analysis -- 3.1 Purpose of Analysis -- 3.2 Participants -- 3.3 Differences of Expression Type by Artists -- 3.4 Change and Discovery of Work Process -- 3.5 Semi-structured Interview -- 3.6 Results and Implications -- 4 Research Evaluation: Expert Perspective & Evaluation -- 4.1 Purpose of Analysis -- 4.2 Participants -- 4.3 Keywords and Questionnaire -- 4.4 Various Perspectives and Implications -- 4.5 Result -- 4.6 Summary -- 5 Conclusion -- 6 Discussion and Future Work -- References
Human-Centered Artificial Intelligence Considerations and Implementations: A Case Study from Software Product Development -- 1 Introduction -- 2 Artificial Intelligence -- 3 Human-Centered AI -- 4 Case Study -- 4.1 AI Vision -- 4.2 Approach and Process -- 4.3 Selected Applications of AI -- 5 Conclusion and Outlook -- References
Sage Advice? The Impacts of Explanations for Machine Learning Models on Human Decision-Making in Spam Detection -- 1 Introduction -- 2 Method -- 2.1 Participants -- 2.2 Materials -- 2.3 Procedure -- 3 Results -- 3.1 Effects of Visualization Type -- 3.2 Effects of Trial Type -- 3.3 Individual Differences in Model Trust -- 4 Discussion -- References
HCD3A: An HCD Model to Design Data-Driven Apps -- 1 Introduction -- 2 Theory -- 2.1 Human-Centered Design Process -- 2.2 Artificial Intelligence and Machine Learning -- 3 Research Question, Hypothesis and the HCD3A Model -- 3.1 Research Question and Hypothesis -- 3.2 The HCD3A Model -- 4 Project Work -- 4.1 Project Idea and Background Information.

Subjects: Human-computer interaction -- Congresses; Human-computer interaction
Dewey classification: 004.019
Editors: Degen, Helmut; Ntoa, Stavroula
Cataloging source: MiAaPQ
Material type: BOOK
Record number: 996464488303316 (UNISA)