LEADER 13082nam 22008415 450
001 996538665503316
005 20230624230202.0
010 $a3-031-36272-1
024 7 $a10.1007/978-3-031-36272-9
035 $a(MiAaPQ)EBC30607319
035 $a(Au-PeEL)EBL30607319
035 $a(DE-He213)978-3-031-36272-9
035 $a(PPN)272259691
035 $a(EXLCZ)9927213348000041
100 $a20230624d2023 u| 0
101 0 $aeng
135 $aurcnu||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aArtificial Intelligence in Education$b[electronic resource] $e24th International Conference, AIED 2023, Tokyo, Japan, July 3-7, 2023, Proceedings /$fedited by Ning Wang, Genaro Rebolledo-Mendez, Noboru Matsuda, Olga C. Santos, Vania Dimitrova
205 $a1st ed. 2023.
210 1$aCham :$cSpringer Nature Switzerland :$cImprint: Springer,$d2023.
215 $a1 online resource (863 pages)
225 1 $aLecture Notes in Artificial Intelligence,$x2945-9141 ;$v13916
311 08$aPrint version: Wang, Ning Artificial Intelligence in Education Cham : Springer,c2023 9783031362712
327 $aIntro -- Preface -- Organization -- International Artificial Intelligence in Education Society -- Contents -- Full Papers -- Machine-Generated Questions Attract Instructors When Acquainted with Learning Objectives -- 1 Introduction -- 2 Related Work -- 3 Overview of Quadl -- 4 Evaluation Study -- 4.1 Model Implementation -- 4.2 Survey Study -- 5 Results -- 5.1 Instructor Survey -- 5.2 Accuracy of the Answer Prediction Model -- 5.3 Qualitative Analysis of Questions Generated by Quadl -- 6 Discussion -- 7 Conclusion -- References -- SmartPhone: Exploring Keyword Mnemonic with Auto-generated Verbal and Visual Cues -- 1 Introduction -- 2 Methodology -- 2.1 Pipeline for Auto-generating Verbal and Visual Cues -- 3 Experimental Evaluation -- 3.1 Experimental Design -- 3.2 Experimental Conditions -- 3.3 Evaluation Metrics -- 3.4 Results and Discussion -- 4 Conclusions and Future Work -- References -- Implementing and Evaluating ASSISTments Online Math Homework Support At Large Scale over Two Years: Findings and Lessons Learned -- 1 Introduction -- 2 Background -- 2.1 The ASSISTments Program -- 2.2 Theoretical Framework -- 2.3 Research Design -- 3 Implementation of ASSISTments at Scale -- 3.1 Recruitment -- 3.2 Understanding School Context -- 3.3 Training and Continuous Support -- 3.4 Specifying a Use Model and Expectation -- 3.5 Monitoring Dosage and Evaluating Quality of Implementation -- 4 Data Collection -- 5 Analysis and Results -- 6 Conclusion -- References -- The Development of Multivariable Causality Strategy: Instruction or Simulation First? -- 1 Introduction -- 2 Literature Review -- 2.1 Learning Multivariable Causality Strategy with Interactive Simulation -- 2.2 Problem Solving Prior to Instruction Approach to Learning -- 3 Method -- 3.1 Participants -- 3.2 Design and Procedure -- 3.3 Materials -- 3.4 Data Sources and Analysis -- 4 Results.
327 $a5 Discussion -- 6 Conclusions, Limitations, and Future Work -- References -- Content Matters: A Computational Investigation into the Effectiveness of Retrieval Practice and Worked Examples -- 1 Introduction -- 2 A Computational Model of Human Learning -- 3 Simulation Studies -- 3.1 Data -- 3.2 Method -- 4 Results -- 4.1 Pretest -- 4.2 Learning Gain -- 4.3 Error Type -- 5 General Discussion -- 6 Future Work -- 7 Conclusions -- References -- Investigating the Utility of Self-explanation Through Translation Activities with a Code-Tracing Tutor -- 1 Introduction -- 1.1 Code Tracing: Related Work -- 2 Current Study -- 2.1 Translation Tutor vs. Standard Tutor -- 2.2 Participants -- 2.3 Materials -- 2.4 Experimental Design and Procedure -- 3 Results -- 4 Discussion and Future Work -- References -- Reducing the Cost: Cross-Prompt Pre-finetuning for Short Answer Scoring -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Task Definition -- 3.2 Scoring Model -- 4 Method -- 5 Experiment -- 5.1 Dataset -- 5.2 Setting -- 5.3 Results -- 5.4 Analysis: What Does the SAS Model Learn from Pre-finetuning on Cross Prompt Data? -- 6 Conclusion -- References -- Go with the Flow: Personalized Task Sequencing Improves Online Language Learning -- 1 Introduction -- 2 Related Work -- 2.1 Adaptive Item Sequencing -- 2.2 Individual Adjustment of Difficulty Levels in Language Learning -- 3 Methodology -- 3.1 Online-Controlled Experiment -- 4 Results -- 4.1 H1 - Incorrect Answers -- 4.2 H2 - Dropout -- 4.3 H3 - User Competency -- 5 Discussion -- 6 Conclusion -- References -- Automated Hand-Raising Detection in Classroom Videos: A View-Invariant and Occlusion-Robust Machine Learning Approach -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Data -- 3.2 Skeleton-Based Hand-Raising Detection -- 3.3 Automated Hand-Raising Annotation -- 4 Results.
327 $a4.1 Relation Between Hand-Raising and Self-reported Learning Activities -- 4.2 Hand-Raising Classification -- 4.3 Automated Hand-Raising Annotation -- 5 Discussion -- 6 Conclusion -- References -- Robust Educational Dialogue Act Classifiers with Low-Resource and Imbalanced Datasets -- 1 Introduction -- 2 Background -- 2.1 Educational Dialogue Act Classification -- 2.2 AUC Maximization on Imbalanced Data Distribution -- 3 Methods -- 3.1 Dataset -- 3.2 Scheme for Educational Dialogue Act -- 3.3 Approaches for Model Optimization -- 3.4 Model Architecture by AUC Maximization -- 3.5 Study Setup -- 4 Results -- 4.1 AUC Maximization Under Low-Resource Scenario -- 4.2 AUC Maximization Under Imbalanced Scenario -- 5 Discussion and Conclusion -- References -- What and How You Explain Matters: Inquisitive Teachable Agent Scaffolds Knowledge-Building for Tutor Learning -- 1 Introduction -- 2 SimStudent: The Teachable Agent -- 3 Constructive Tutee Inquiry -- 3.1 Motivation -- 3.2 Response Classifier -- 3.3 Dialog Manager -- 4 Method -- 5 Results -- 5.1 RQ1: Can we Identify Knowledge-Building and Knowledge-Telling from Tutor Responses to Drive CTI? -- 5.2 RQ2: Does CTI Facilitate Tutor Learning? -- 5.3 RQ3: Does CTI Help Tutors Learn to Engage in Knowledge-Building? -- 6 Discussion -- 7 Conclusion -- References -- Help Seekers vs. Help Accepters: Understanding Student Engagement with a Mentor Agent -- 1 Introduction -- 2 Mr. Davis and Betty's Brain -- 3 Methods -- 3.1 Participants -- 3.2 Interaction Log Data -- 3.3 In-situ Interviews -- 3.4 Learning and Anxiety Measures -- 4 Results -- 4.1 Help Acceptance -- 4.2 Help Seeking -- 4.3 Learning Outcomes -- 4.4 Insights from Qualitative Interviews -- 5 Conclusions -- References -- Adoption of Artificial Intelligence in Schools: Unveiling Factors Influencing Teachers' Engagement -- 1 Introduction.
327 $a2 Context and the Adaptive Learning Platform Studied -- 3 Methodology -- 4 Results -- 4.1 Teachers' Responses to the Items -- 4.2 Predicting Teachers' Engagement with the Adaptive Learning Platform -- 5 Discussion -- 6 Conclusion -- Appendix -- References -- The Road Not Taken: Preempting Dropout in MOOCs -- 1 Introduction -- 2 Related Work -- 3 Method -- 3.1 Dataset -- 3.2 Modeling Student Engagement by HMM -- 3.3 Study Setup -- 4 Results -- 5 Discussion and Conclusion -- References -- Does Informativeness Matter? Active Learning for Educational Dialogue Act Classification -- 1 Introduction -- 2 Related Work -- 2.1 Educational Dialogue Act Classification -- 2.2 Sample Informativeness -- 2.3 Statistical Active Learning -- 3 Methods -- 3.1 Dataset -- 3.2 Educational Dialogue Act Scheme and Annotation -- 3.3 Identifying Sample Informativeness via Data Maps -- 3.4 Active Learning Selection Strategies -- 3.5 Study Setup -- 4 Results -- 4.1 Estimation of Sample Informativeness -- 4.2 Efficacy of Statistical Active Learning Methods -- 5 Conclusion -- References -- Can Virtual Agents Scale Up Mentoring?: Insights from College Students' Experiences Using the CareerFair.ai Platform at an American Hispanic-Serving Institution -- 1 Introduction -- 2 CareerFair.ai Design -- 3 Research Design -- 4 Results -- 5 Discussion -- 6 Conclusions and Future Directions -- References -- Real-Time AI-Driven Assessment and Scaffolding that Improves Students' Mathematical Modeling during Science Investigations -- 1 Introduction -- 1.1 Related Work -- 2 Methods -- 2.1 Participants and Materials -- 2.2 Inq-ITS Virtual Lab Activities with Mathematical Modeling -- 2.3 Approach to Automated Assessment and Scaffolding of Science Practices -- 2.4 Measures and Analyses -- 3 Results -- 4 Discussion -- References.
327 $aImproving Automated Evaluation of Student Text Responses Using GPT-3.5 for Text Data Augmentation -- 1 Introduction -- 2 Background and Research Questions -- 3 Methods -- 3.1 Data Sets -- 3.2 Augmentation Approach -- 3.3 Model Classification -- 3.4 Baseline Evaluation -- 4 Results -- 5 Discussion -- 6 Conclusion -- 7 Future Work -- References -- The Automated Model of Comprehension Version 3.0: Paying Attention to Context -- 1 Introduction -- 2 Method -- 2.1 Processing Flow -- 2.2 Features Derived from AMoC -- 2.3 Experimental Setup -- 2.4 Comparison Between AMoC Versions -- 3 Results -- 3.1 Use Case -- 3.2 Correlations to the Landscape Model -- 3.3 Differentiating Between High-Low Cohesion Texts -- 4 Conclusions and Future Work -- References -- Analysing Verbal Communication in Embodied Team Learning Using Multimodal Data and Ordered Network Analysis -- 1 Introduction -- 2 Methods -- 3 Results -- 3.1 Primary Tasks -- 3.2 Secondary Tasks -- 4 Discussion -- References -- Improving Adaptive Learning Models Using Prosodic Speech Features -- 1 Introduction -- 2 Methods -- 2.1 Participants -- 2.2 Design and Procedure -- 2.3 Materials -- 2.4 Speech Feature Extraction -- 2.5 Data and Statistical Analyses -- 3 Results -- 3.1 The Association Between Speech Prosody and Memory Retrieval Performance -- 3.2 Improving Predictions of Future Performance Using Speech Prosody -- 4 Discussion -- References -- Neural Automated Essay Scoring Considering Logical Structure -- 1 Introduction -- 2 Conventional Neural AES Model Using BERT -- 3 Argument Mining -- 4 Proposed Method -- 4.1 DNN Model for Processing Logical Structure -- 4.2 Neural AES Model Considering Logical Structure -- 5 Experiment -- 5.1 Setup -- 5.2 Experimental Results -- 5.3 Analysis -- 6 Conclusion -- References.
327 $a"Why My Essay Received a 4?": A Natural Language Processing Based Argumentative Essay Structure Analysis.
330 $aThis book constitutes the refereed proceedings of the 24th International Conference on Artificial Intelligence in Education, AIED 2023, held in Tokyo, Japan, during July 3-7, 2023. This event took place in hybrid mode. The 53 full papers and 26 short papers presented in this book were carefully reviewed and selected from 311 submissions. The papers present the results of high-quality research on intelligent systems and the cognitive sciences for the improvement and advancement of education. The conference was hosted by the prestigious International Artificial Intelligence in Education Society, a global association of researchers and academics specializing in the many fields that comprise AIED, including, but not limited to, computer science, learning sciences, and education.
410 0$aLecture Notes in Artificial Intelligence,$x2945-9141 ;$v13916
606 $aArtificial intelligence
606 $aDatabase management
606 $aData mining
606 $aApplication software
606 $aUser interfaces (Computer systems)
606 $aHuman-computer interaction
606 $aEducation--Data processing
606 $aArtificial Intelligence
606 $aDatabase Management
606 $aData Mining and Knowledge Discovery
606 $aComputer and Information Systems Applications
606 $aUser Interfaces and Human Computer Interaction
606 $aComputers and Education
615 0$aArtificial intelligence.
615 0$aDatabase management.
615 0$aData mining.
615 0$aApplication software.
615 0$aUser interfaces (Computer systems).
615 0$aHuman-computer interaction.
615 0$aEducation--Data processing.
615 14$aArtificial Intelligence.
615 24$aDatabase Management.
615 24$aData Mining and Knowledge Discovery.
615 24$aComputer and Information Systems Applications.
615 24$aUser Interfaces and Human Computer Interaction.
615 24$aComputers and Education.
676 $a006.3
700 $aWang$b Ning$0674438
701 $aRebolledo-Mendez$b Genaro$01012022
701 $aMatsuda$b Noboru$01369715
701 $aSantos$b Olga C$01369716
701 $aDimitrova$b Vania$01369717
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a996538665503316
996 $aArtificial Intelligence in Education$93396451
997 $aUNISA