LEADER 01940nam0 2200337 i 450
001 SUN0051867
005 20161013110101.931
010 $a978-08-987141-2-8$d0.00
100 $a20060911d1998 |0engc50 ba
101 $aeng
102 $aUS
105 $a|||| |||||
200 1 $a*Computer methods for ordinary differential equations and differential-algebraic equations$fUri M. Ascher, Linda R. Petzold
210 $aPhiladelphia$cSIAM$d1998
215 $aXVII, 314 p.$cill.$d25 cm.
606 $a65-XX$xNumerical analysis [MSC 2020]$2MF$3SUNC019772
606 $a65Lxx$xNumerical methods for ordinary differential equations [MSC 2020]$2MF$3SUNC020052
620 $dPhiladelphia$3SUNL000037
700 1$aAscher$b, Uri M.$3SUNV040779$021815
701 1$aPetzold$b, Linda R.$3SUNV040784$0496795
712 $aSIAM$3SUNV001157$4650
790 1$aPetzold, L. R.$zPetzold, Linda R.$3SUNV042110
790 1$aPetzold, Linda Ruth$zPetzold, Linda R.$3SUNV060849
801 $aIT$bSOL$c20201026$gRICA
856 4 $uhttps://books.google.it/books?id=VL51G5JYYAYC&printsec=frontcover&dq=Computer+methods+for+ordinary+differential+equations+and+differential-algebraic+equations&hl=it&sa=X&ved=0ahUKEwiSspaptdfPAhVEShQKHUPyBM8Q6AEIJTAA#v=onepage&q=Computer%20m
912 $aSUN0051867
950 $aUFFICIO DI BIBLIOTECA DEL DIPARTIMENTO DI MATEMATICA E FISICA$d08CONS 65-XX 0168 $e08 4681 II a 20060911
950 $aUFFICIO DI BIBLIOTECA DEL DIPARTIMENTO DI MATEMATICA E FISICA$d08PREST 65-XX 0168 $e08 5032 II b 20060911
950 $aUFFICIO DI BIBLIOTECA DEL DIPARTIMENTO DI MATEMATICA E FISICA$d08PREST 65-XX 0168 $e08 5195 II c 20060911
996 $aComputer methods for ordinary differential equations and differential-algebraic equations$9749559
997 $aUNICAMPANIA

LEADER 00810nam0-22002771i-450
001 990005382910403321
005 20230328162944.0
035 $a000538291
035 $aFED01000538291
035 $a(Aleph)000538291FED01
035 $a000538291
100 $a19990530d1964----km-y0itay50------ba
101 0 $afre
105 $aa-------00---
200 1 $aPaléontologie végétale$fLéon Moret
205 $a3. éd. revue, corrigée et augmentée d'un addendum
210 $aParis$cMasson et C.ie$d1964
215 $aVIII, 244 p.$cill.$d24 cm
700 1$aMoret,$bLéon$f<1890-1972>$0333714
801 0$aIT$bUNINA$gRICA$2UNIMARC
901 $aBK
912 $a990005382910403321
952 $aARCH. PR 155 8$bARCH. 11905$fFLFBC
959 $aFLFBC
996 $aPaléontologie végétale$9596626
997 $aUNINA

LEADER 01135nam2 22003253i 450
001 UTO1110187
005 20231121125917.0
010 $a2070366693
100 $a20161128d1975 ||||0itac50 ba
101 | $afre
102 $afr
181 1$6z01$ai $bxxxe
182 1$6z01$an
200 1 $a3: L'insurgé$fJules Vallès.$géd. présentée, établie et annotée par Marie-Claire Bancquart
210 $aParis$cGallimard$d1975
215 $a410 p.$d18 cm.
225 | $aFolio$v669
410 0$1001RAV0110397$12001 $aFolio$v669
461 1$1001UFI0264608$12001 $aJacques Vingtras$fJules Vallès$v3
700 1$aVallès$b, Jules$3CFIV081011$0396131
702 1$aBancquart$b, Marie-Claire$3BVEV012347
790 1$aLa Rue$b, Jean$3MILV157776$zVallès, Jules
801 3$aIT$bIT-01$c20161128
850 $aIT-FR0017
899 $aBiblioteca umanistica Giorgio Aprea$bFR0017 $eN
912 $aUTO1110187
950 2$aBiblioteca umanistica Giorgio Aprea$d 52FOLIO 669$e 52FLS0000059795 VMB RS $fA $h20161128$i20161128
977 $a 52
996 $aInsurgé$9193607
997 $aUNICAS

LEADER 10976nam 22005293 450
001 996594166503316
005 20240503084506.0
010 $a981-9722-66-7
035 $a(CKB)31801766700041
035 $a(MiAaPQ)EBC31305356
035 $a(Au-PeEL)EBL31305356
035 $a(MiAaPQ)EBC31319818
035 $a(Au-PeEL)EBL31319818
035 $a(EXLCZ)9931801766700041
100 $a20240503d2024 uy 0
101 0 $aeng
135 $aur|||||||||||
181 $ctxt$2rdacontent
182 $cc$2rdamedia
183 $acr$2rdacarrier
200 10$aAdvances in Knowledge Discovery and Data Mining $e28th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2024, Taipei, Taiwan, May 7-10, 2024, Proceedings, Part VI
205 $a1st ed.
210 1$aSingapore :$cSpringer Singapore Pte. Limited,$d2024.
210 4$d©2024.
215 $a1 online resource (329 pages)
225 1 $aLecture Notes in Computer Science Series ;$vv.14650
311 $a981-9722-65-9
327 $aIntro -- General Chairs' Preface -- PC Chairs' Preface -- Organization -- Contents - Part VI -- Scientific Data -- FR3LS: A Forecasting Model with Robust and Reduced Redundancy Latent Series -- 1 Introduction -- 2 Related Work -- 3 Problem Setup -- 4 Model Architecture -- 4.1 Temporal Contextual Consistency -- 4.2 Non-contrastive Representations Learning -- 4.3 Deterministic Forecasting -- 4.4 Probabilistic Forecasting -- 4.5 End-to-End Training -- 5 Experiments -- 5.1 Experimental Results -- 5.2 Visualization of Latent and Original Series Forecasts -- 5.3 Further Experimental Setup Details -- 6 Conclusion -- References -- Knowledge-Infused Optimization for Parameter Selection in Numerical Simulations -- 1 Introduction -- 2 Preliminaries on Numerical Simulations -- 3 Identification and Analysis of Relevant Metadata -- 4 Efficient Metadata Capture for Parameter Optimization -- 4.1 Early Termination of Farming Runs -- 4.2 PROBE: Probing Specific Parameter Combinations -- 5 Experimental Evaluation -- 5.1 Quality of Parameter Optimization Using PROBE -- 5.2 Efficiency Evaluation -- 5.3 Reuse of Metadata Acquired Through PROBE -- 5.4 Generalization to Other Model Problems and Schemes -- 6 Conclusion and Outlook -- References -- Material Microstructure Design Using VAE-Regression with a Multimodal Prior -- 1 Introduction -- 2 Methodology -- 3 Related Work -- 4 Experimental Results -- 5 Summary and Conclusions -- References -- A Weighted Cross-Modal Feature Aggregation Network for Rumor Detection -- 1 Introduction -- 2 Related Work -- 2.1 Rumor Detection -- 2.2 Multimodal Alignment -- 3 Methodology -- 3.1 Overview of WCAN -- 3.2 Feature Extraction -- 3.3 Weighted Cross-Modal Aggregation Module -- 3.4 Multimodal Feature Fusion -- 3.5 Objective Function -- 4 Experiments -- 4.1 Datasets -- 4.2 Baselines -- 4.3 Ablation Experiment.
327 $a4.4 Hyper-parameter Analysis -- 4.5 Visualization on the Representations -- 4.6 Case Study -- 5 Conclusions -- References -- Texts, Web, Social Network -- Quantifying Opinion Rejection: A Method to Detect Social Media Echo Chambers -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 4 Echo Chamber Detection in Signed Networks -- 5 SEcho Method -- 5.1 SEcho Metric -- 5.2 Greedy Optimisation -- 6 Experiments -- 7 Conclusion -- References -- KiProL: A Knowledge-Injected Prompt Learning Framework for Language Generation -- 1 Introduction -- 2 Methodology -- 2.1 Problem Statement -- 2.2 Knowledge-Injected Prompt Learning Generation -- 2.3 Training and Inference -- 3 Experiments -- 3.1 Datasets -- 3.2 Settings -- 3.3 Automatic Evaluation -- 3.4 Human Annotation -- 3.5 Ablation Study -- 3.6 In-Depth Analysis -- 3.7 Case Study -- 4 Conclusion -- References -- GViG: Generative Visual Grounding Using Prompt-Based Language Modeling for Visual Question Answering -- 1 Introduction -- 2 Related Work -- 2.1 Pix2Seq Framework -- 2.2 Prompt Tuning -- 3 Methodology -- 3.1 Prompt Tuning Module -- 3.2 VG Module -- 3.3 Conditional Trie-Based Search Algorithm (CTS) -- 4 Results -- 4.1 Dataset Description -- 4.2 Results on WSDM 2023 Toloka VQA Dataset Benchmark -- 5 Discussion -- 5.1 Prompt Study -- 5.2 Interpretable Attention -- 6 Conclusion -- References -- Aspect-Based Fake News Detection -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Problem Definition and Model Overview -- 3.2 Aspect Learning and Extraction -- 3.3 News Article Classification -- 4 Experiments -- 4.1 Datasets -- 4.2 Experimental Settings -- 4.3 Evaluation -- 5 Analysing the Effect of Aspects Across Topics -- 6 Discussion and Future Work -- 7 Conclusion -- References -- DQAC: Detoxifying Query Auto-completion with Adapters -- 1 Introduction -- 2 Related Work -- 3 Methodology. 
327 $a3.1 QDetoxify: Toxicity Classifier for Search Queries -- 3.2 The DQAC Model -- 4 Experimental Setup -- 5 Results and Analyses -- 6 Conclusions -- References -- Graph Neural Network Approach to Semantic Type Detection in Tables -- 1 Introduction -- 2 Related Works -- 3 Problem Definition -- 4 GAIT -- 4.1 Single-Column Prediction -- 4.2 Graph-Based Prediction -- 4.3 Overall Prediction -- 5 Evaluation -- 5.1 Evaluation Method -- 5.2 Results -- 6 Conclusion -- References -- TCGNN: Text-Clustering Graph Neural Networks for Fake News Detection on Social Media -- 1 Introduction -- 2 Related Work -- 3 The Proposed TCGNN Method -- 3.1 Text-Clustering Graph Construction -- 3.2 Model Training -- 4 Experiments -- 5 Conclusions and Discussion -- References -- Exploiting Adaptive Contextual Masking for Aspect-Based Sentiment Analysis -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Problem Formulation and Motivation -- 3.2 Standalone ABSA Tasks -- 3.3 Adaptive Contextual Threshold Masking (ACTM) -- 3.4 Adaptive Attention Masking (AAM) -- 3.5 Adaptive Mask Over Masking (AMOM) -- 3.6 Training Procedure for ATE and ASC -- 4 Experiments and Results -- 5 Conclusion -- References -- An Automated Approach for Generating Conceptual Riddles -- 1 Introduction -- 2 Related Work -- 3 Methodology -- 3.1 Triples Creator -- 3.2 Properties Classifier -- 3.3 Generator -- 3.4 Validator -- 4 Evaluation and Results -- 5 Conclusion and Future Work -- References -- Time-Series and Streaming Data -- DiffFind: Discovering Differential Equations from Time Series -- 1 Introduction -- 2 Background and Related Work -- 2.1 Related Work -- 2.2 Background - Genetic Algorithms for Architecture Search -- 3 Proposed Method: DiffFind -- 4 Experiments -- 4.1 Q1 - DiffFind is Effective -- 4.2 Q2 - DiffFind is Explainable -- 4.3 Q3 - DiffFind is Scalable -- 5 Conclusions -- References. 
327 $aDEAL: Data-Efficient Active Learning for Regression Under Drift -- 1 Introduction -- 2 Related Work -- 3 Problem Statement and Notation -- 4 Our Method: DEAL -- 4.1 The Adapted Stream-Based AL Cycle -- 4.2 Our Drift-Aware Estimation Model -- 5 Experimental Design -- 5.1 Baselines -- 5.2 Evaluation Data -- 5.3 Evaluation Metrics -- 6 Evaluation -- 6.1 Comparison of DEAL Against Baselines -- 6.2 Impact of the User-Required Error Threshold -- 7 Conclusion -- References -- Evolving Super Graph Neural Networks for Large-Scale Time-Series Forecasting -- 1 Introduction -- 2 Related Models -- 3 Evolving Super Graph Neural Networks -- 3.1 Preliminary Notations -- 3.2 Super Graph Construction -- 4 Diffusion on Evolving Super Graphs -- 4.1 Predictor -- 5 Experiments on Large-Scale Datasets -- 5.1 Forecasting Result and Analysis -- 5.2 Runtime and Space Usage Analysis -- 5.3 Ablation Study -- 6 Conclusion -- References -- Unlearnable Examples for Time Series -- 1 Introduction -- 2 Related Work -- 2.1 Data Poisoning -- 2.2 Adversarial Attack -- 2.3 Unlearnable Examples -- 3 Error-Minimizing Noise for Time Series -- 3.1 Objective -- 3.2 Threat Model -- 3.3 Challenges -- 3.4 Problem Formulation -- 3.5 A Straightforward Baseline Approach -- 3.6 Controllable Noise on Partial Time Series Samples -- 4 Experiments -- 4.1 Experiment Setup -- 4.2 Against Classification Models -- 4.3 Against Generative Models -- 5 Conclusion -- References -- Learning Disentangled Task-Related Representation for Time Series -- 1 Introduction -- 2 Related Work -- 3 The Proposed Method -- 3.1 Overview -- 3.2 Task-Relevant Feature Disentangled -- 3.3 Task-Adaptive Augmentation Selection -- 4 Experiments and Discussions -- 4.1 Datasets and Implementation Details -- 4.2 Ablation Analysis -- 4.3 Results on Classification Tasks -- 4.4 Results on Forecasting Tasks -- 4.5 Visualization Analysis. 
327 $a5 Conclusion -- References -- A Multi-view Feature Construction and Multi-Encoder-Decoder Transformer Architecture for Time Series Classification -- 1 Introduction -- 2 Related Works -- 3 Problem Formulation -- 4 Methodology -- 4.1 Feature Construction -- 4.2 Multi-view Representation -- 4.3 Multi-Encoder-Decoder Transformer (MEDT) Classification -- 5 Experiments -- 5.1 Experiments Using Multivariate Time Series Data Benchmarks -- 5.2 Experiment Using a Real-World Physical Activities Dataset -- 6 Conclusion -- References -- Kernel Representation Learning with Dynamic Regime Discovery for Time Series Forecasting -- 1 Introduction -- 2 Related Work -- 3 Preliminaries -- 3.1 Key Concepts -- 3.2 Self-representation Learning in Time Series -- 3.3 Kernel Trick for Modeling Time Series -- 4 Proposed Method -- 4.1 Kernel Representation Learning: Modeling Regime Behavior -- 4.2 Forecasting -- 5 Experiments -- 5.1 Data -- 5.2 Experimental Setup and Evaluation -- 5.3 Regime Identification -- 5.4 Benchmark Comparison -- 5.5 Ablation Study -- 6 Conclusion -- References -- Hyperparameter Tuning MLP's for Probabilistic Time Series Forecasting -- 1 Introduction -- 2 Problem Statement -- 3 MLPs for Time Series Forecasting -- 3.1 Nlinear Model -- 4 Hyperparameters -- 4.1 Time Series Specific Configuration -- 4.2 Training Specific Configurations -- 4.3 TSBench-Metadataset -- 5 Experimental Setup -- 6 Results -- 7 Conclusion -- References -- Efficient and Accurate Similarity-Aware Graph Neural Network for Semi-supervised Time Series Classification -- 1 Introduction -- 2 Related Work -- 2.1 Graph-Based Time Series Classification -- 2.2 Lower Bound of DTW -- 3 Problem Formulation -- 4 Methodology -- 4.1 Batch Sampling -- 4.2 LB_Keogh Graph Construction -- 4.3 Graph Convolution and Classification -- 4.4 Advantages of Our Model -- 5 Experimental Evaluation.
327 $a5.1 Comparing with 1NN-DTW.
410 0$aLecture Notes in Computer Science Series
700 $aYang$b De-Nian$01737375
701 $aXie$b Xing$01734375
701 $aTseng$b Vincent S$01737376
701 $aPei$b Jian$0868267
701 $aHuang$b Jen-Wei$01737377
701 $aLin$b Jerry Chun-Wei$01453271
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a996594166503316
996 $aAdvances in Knowledge Discovery and Data Mining$94159072
997 $aUNISA
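All four records above share the same textual UNIMARC layout: a LEADER line opens each record, followed by one field per line, where a field is a three-character tag, optional indicators, and one or more $-prefixed subfields (control fields such as 001 and 005 carry a bare value instead). The sketch below shows one way such a dump could be split back into records and subfields. It is a minimal illustration based only on the layout visible here; parse_records and the returned structure are illustrative names, not part of any catalogue export tool.

```python
# Minimal sketch (assumption: dump uses "LEADER" to start a record,
# one field per line, "$" as the subfield delimiter, as in the records above).
def parse_records(dump: str):
    records, current = [], None
    for raw in dump.splitlines():
        line = raw.strip()
        if not line:
            continue  # blank lines only separate records
        if line.startswith("LEADER"):
            # Start a new record and keep the leader string verbatim.
            current = {"LEADER": line[len("LEADER"):].strip(), "fields": []}
            records.append(current)
            continue
        if current is None:
            continue  # ignore stray lines before the first LEADER
        tag, _, rest = line.partition(" ")
        if "$" in rest:
            # Data field: indicators precede the first "$"; subfields are
            # code + value pairs, e.g. $aTitle$fAuthor.
            indicators, _, body = rest.partition("$")
            subfields = [(s[0], s[1:]) for s in ("$" + body).split("$") if s]
            current["fields"].append((tag, indicators.strip(), subfields))
        else:
            # Control field (001, 005, ...): a single undelimited value.
            current["fields"].append((tag, "", [("", rest)]))
    return records

if __name__ == "__main__":
    sample = (
        "LEADER 01940nam0 2200337 i 450\n"
        "001 SUN0051867\n"
        "200 1 $a*Computer methods for ordinary differential equations"
        " and differential-algebraic equations$fUri M. Ascher, Linda R. Petzold"
    )
    for record in parse_records(sample):
        print(record["LEADER"])
        for tag, indicators, subfields in record["fields"]:
            print(tag, indicators, subfields)
```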