Machine learning and data analytics for solving business problems : methods, applications, and case studies / edited by Bader Alyoubi [and four others]
Publication/distribution Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (214 pages)
Discipline 780
Series Unsupervised and Semi-Supervised Learning
Topical subject Machine learning
Aprenentatge automàtic
Presa de decisions
Processament de dades
Genre/form subject Llibres electrònics
ISBN 3-031-18483-1
Format Printed material
Bibliographic level Monograph
Publication language eng
Record no. UNINA-9910635386903321
Find it here: Univ. Federico II
Machine learning and data analytics for solving business problems : methods, applications, and case studies / edited by Bader Alyoubi [and four others]
Publication/distribution Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (214 pages)
Discipline 780
Series Unsupervised and Semi-Supervised Learning
Topical subject Machine learning
Aprenentatge automàtic
Presa de decisions
Processament de dades
Genre/form subject Llibres electrònics
ISBN 3-031-18483-1
Format Printed material
Bibliographic level Monograph
Publication language eng
Record no. UNISA-996503550603316
Find it here: Univ. di Salerno
Machine learning applications in electronic design automation / edited by Haoxing Ren, Jiang Hu
Publication/distribution Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (585 pages)
Discipline 929.374
Topical subject Engineering
Disseny de circuits electrònics
Automatització
Aprenentatge automàtic
Aplicacions industrials
Genre/form subject Llibres electrònics
ISBN 3-031-13074-X
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Intro -- Preface -- Contents -- About the Editors -- Part I Machine Learning-Based Design Prediction Techniques -- 1 ML for Design QoR Prediction -- 1.1 Introduction -- 1.2 Challenges of Design QoR Prediction -- 1.2.1 Limited Number of Samples -- 1.2.2 Chaotic Behaviors of EDA Tools -- 1.2.3 Actionable Predictions -- 1.2.4 Infrastructure Needs -- 1.2.5 The Bar for Design QoR Prediction -- 1.3 ML Techniques in QoR Prediction -- 1.3.1 Graph Neural Networks -- 1.3.2 Long Short-Term Memory (LSTM) Networks -- 1.3.3 Reinforcement Learning -- 1.3.4 Other Models -- 1.4 Timing Estimation -- 1.4.1 Problem Formulation -- 1.4.2 Estimation Flow -- 1.4.3 Feature Engineering -- 1.4.4 Machine Learning Engines -- 1.5 Design Space Exploration -- 1.5.1 Problem Formulation -- 1.5.2 Estimation Flow -- 1.5.3 Feature Engineering -- 1.5.4 Machine Learning Engines -- 1.6 Summary -- References -- 2 Deep Learning for Routability -- 2.1 Introduction -- 2.2 Background on DL for Routability -- 2.2.1 Routability Prediction Background -- 2.2.1.1 Design Rule Checking (DRC) Violations -- 2.2.1.2 Routing Congestion and Pin Accessibility -- 2.2.1.3 Relevant Physical Design Steps -- 2.2.1.4 Routability Prediction -- 2.2.2 DL Techniques in Routability Prediction -- 2.2.2.1 CNN Methods -- 2.2.2.2 FCN Methods -- 2.2.2.3 GAN Methods -- 2.2.2.4 NAS Methods -- 2.2.3 Why DL for Routability -- 2.3 DL for Routability Prediction Methodologies -- 2.3.1 Data Preparation and Augmentation -- 2.3.2 Feature Engineering -- 2.3.2.1 Blockage -- 2.3.2.2 Wire Density -- 2.3.2.3 Routing Congestion -- 2.3.2.4 Pin Accessibility -- 2.3.2.5 Routability Label -- 2.3.3 DL Model Architecture Design -- 2.3.3.1 Common Operators and Connections -- 2.3.3.2 Case Study: RouteNet 2:xie2018routenet -- 2.3.3.3 Case Study: PROS 2:chen2020pros -- 2.3.3.4 Case Study: J-Net 2:liang2020drc.
2.3.3.5 Case Study: Painting 2:yu2019painting -- 2.3.3.6 Case Study: Automated Model Development 2:chang2021auto -- 2.3.4 DL Model Training and Inference -- 2.4 DL for Routability Deployment -- 2.4.1 Direct Feedback to Engineers -- 2.4.2 Macro Location Optimization -- 2.4.3 White Space-Driven Model-Guided Detailed Placement -- 2.4.4 Pin Accessibility-Driven Model-Guided Detailed Placement -- 2.4.5 Integration in Routing Flow -- 2.4.6 Explicit Routability Optimization During Global Placement -- 2.4.7 Visualization of Routing Utilization -- 2.4.8 Optimization with Reinforcement Learning (RL) -- 2.5 Summary -- References -- 3 Net-Based Machine Learning-Aided Approaches for Timing and Crosstalk Prediction -- 3.1 Introduction -- 3.2 Backgrounds on Machine Learning-Aided Timing and Crosstalk Estimation -- 3.2.1 Timing Prediction Background -- 3.2.2 Crosstalk Prediction Background -- 3.2.3 Relevant Design Steps -- 3.2.4 ML Techniques in Net-Based Prediction -- 3.2.5 Why ML for Timing and Crosstalk Prediction -- 3.3 Preplacement Net Length and Timing Prediction -- 3.3.1 Problem Formulation -- 3.3.2 Prediction Flow -- 3.3.3 Feature Engineering -- 3.3.3.1 Features for Net Length Prediction -- 3.3.3.2 Features for Timing Prediction -- 3.3.4 Machine Learning Engines -- 3.3.4.1 Machine Learning Engine for Net Length Prediction -- 3.3.4.2 Machine Learning Engine for Preplacement Timing Prediction -- 3.4 Pre-Routing Timing Prediction -- 3.4.1 Problem Formulation -- 3.4.2 Prediction Flow -- 3.4.3 Feature Engineering -- 3.4.4 Machine Learning Engines -- 3.5 Pre-Routing Crosstalk Prediction -- 3.5.1 Problem Formulation -- 3.5.2 Prediction Flow -- 3.5.3 Feature Engineering -- 3.5.3.1 Probabilistic Congestion Estimation -- 3.5.3.2 Net Physical Information -- 3.5.3.3 Product of the Wirelength and Congestion -- 3.5.3.4 Electrical and Logic Features.
3.5.3.5 Timing Information -- 3.5.3.6 Neighboring Net Information -- 3.5.4 Machine Learning Engines -- 3.6 Interconnect Coupling Delay and Transition Effect Prediction at Sign-Off -- 3.6.1 Problem Formulation -- 3.6.2 Prediction Flow -- 3.6.3 Feature Engineering -- 3.6.4 Machine Learning Engines -- 3.7 Summary -- References -- 4 Deep Learning for Power and Switching Activity Estimation -- 4.1 Introduction -- 4.2 Background on Modeling Methods for Switching Activity Estimators -- 4.2.1 Statistical Approaches to Switching Activity Estimators -- 4.2.2 ``Cost-of-Action''-Based Power Estimation Models -- 4.2.3 Learning/Regression-Based Power Estimation Models -- 4.3 Deep Learning Models for Power Estimation -- 4.4 A Case Study on Using Deep Learning Models for Per Design Power Estimation -- 4.4.1 PRIMAL Methodology -- 4.4.2 List of PRIMAL ML Models for Experimentation -- 4.4.2.1 Feature Construction Techniques in PRIMAL -- 4.4.2.2 Feature Encoding for Cycle-by-Cycle Power Estimation -- 4.4.2.3 Mapping Registers and Signals to Pixels -- 4.5 PRIMAL Experiments -- 4.5.1 Power Estimation Results of PRIMAL -- 4.5.2 Results Analysis -- 4.6 A Case Study on Using Graph Neural Networks for Generalizable Power Estimation -- 4.6.1 GRANNITE Introduction -- 4.6.2 The Role of GPUs in Gate-Level Simulation and Power Estimation -- 4.6.3 GRANNITE Implementation -- 4.6.3.1 Toggle Rate Features -- 4.6.3.2 Graph Object Creation -- 4.6.3.3 GRANNITE Architecture -- 4.7 GRANNITE Results -- 4.7.1 Analysis -- 4.8 Conclusion -- References -- 5 Deep Learning for Analyzing Power Delivery Networks and Thermal Networks -- 5.1 Introduction -- 5.2 Deep Learning for PDN Analysis -- 5.2.1 CNNs for IR Drop Estimation -- 5.2.1.1 PowerNet Input Feature Representation -- 5.2.1.2 PowerNet Architecture -- 5.2.1.3 Evaluation of PowerNet -- 5.2.2 Encoder-Decoder Networks for PDN Analysis.
5.2.2.1 PDN Analysis as an Image-to-Image Translation Task -- 5.2.2.2 U-Nets for PDN Analysis -- 5.2.2.3 3D U-Nets for IR Drop Sequence-to-Sequence Translation -- 5.2.2.4 Regression-Like Layer for Instance-Level IR Drop Prediction -- 5.2.2.5 Encoder-Decoder Network Training -- 5.2.2.6 Evaluation of EDGe Networks for PDN Analysis -- 5.3 Deep Learning for Thermal Analysis -- 5.3.1 Problem Formulation -- 5.3.2 Model Architecture for Thermal Analysis -- 5.3.3 Model Training and Data Generation -- 5.3.4 Evaluation of ThermEDGe -- 5.4 Deep Learning for PDN Synthesis -- 5.4.1 Template-Driven PDN Optimization -- 5.4.2 PDN Synthesis as an Image Classification Task -- 5.4.3 Principle of Locality for Region Size Selection -- 5.4.4 ML-Based PDN Synthesis and Refinement Through the Design Flow -- 5.4.5 Neural Network Architectures for PDN Synthesis -- 5.4.6 Transfer Learning-Based CNN Training -- 5.4.6.1 Synthetic Input Feature Set Generation -- 5.4.6.2 Transfer Learning Model -- 5.4.6.3 Training Data Generation -- 5.4.7 Evaluation of OpeNPDN for PDN Synthesis -- 5.4.7.1 Justification for Transfer Learning -- 5.4.7.2 Validation on Real Design Testcases -- 5.5 DL for PDN Benchmark Generation -- 5.5.1 Introduction -- 5.5.2 GANs for PDN Benchmark Generation -- 5.5.2.1 Synthetic Image Generation for GAN Pretraining -- 5.5.2.2 GAN Architecture and Training -- 5.5.2.3 GAN Inference for Current Map Generation -- 5.5.3 Evaluation of GAN-Generated PDN Benchmarks -- 5.6 Conclusion -- References -- 6 Machine Learning for Testability Prediction -- 6.1 Introduction -- 6.2 Classical Testability Measurements -- 6.2.1 Approximate Measurements -- 6.2.1.1 SCOAP -- 6.2.1.2 Random Testability -- 6.2.2 Simulation-Based Measurements -- 6.3 Learning-Based Testability Prediction -- 6.3.1 Node-Level Testability Prediction -- 6.3.1.1 Conventional Machine Learning Methods.
6.3.1.2 Graph-Based Deep Learning Methods -- 6.3.2 Circuit-Level Testability Prediction -- 6.3.2.1 Fault Coverage Prediction -- 6.3.2.2 Test Cost Prediction -- 6.3.2.3 X-Sensitivity Prediction -- 6.4 Additional Considerations -- 6.4.1 Imbalanced Dataset -- 6.4.2 Scalability of Graph Neural Networks -- 6.4.3 Integration with Design Flow -- 6.4.4 Robustness of Machine Learning Model and Metrics -- 6.5 Summary -- References -- Part II Machine Learning-Based Design Optimization Techniques -- 7 Machine Learning for Logic Synthesis -- 7.1 Introduction -- 7.2 Supervised and Reinforcement Learning -- 7.2.1 Supervised Learning -- 7.2.2 Reinforcement Learning -- 7.3 Supervised Learning for Guiding Logic Synthesis Algorithms -- 7.3.1 Guiding Logic Network Type for Logic Network Optimization -- 7.3.2 Guiding Logic Synthesis Flow Optimization -- 7.3.3 Guiding Cut Choices for Technology Mapping -- 7.3.4 Guiding Delay Constraints for Technology Mapping -- 7.4 Reinforcement Learning Formulations for Logic Synthesis Algorithms -- 7.4.1 Logic Network Optimization -- 7.4.2 Logic Synthesis Flow Optimization -- 7.4.2.1 Synthesis Flow Optimization for Circuit Area and Delay -- 7.4.2.2 Synthesis Flow Optimization for Logic Network Node and Level Counts -- 7.4.3 Datapath Logic Optimization -- 7.5 Scalability Considerations for Reinforcement Learning -- References -- 8 RL for Placement and Partitioning -- 8.1 Introduction -- 8.2 Background -- 8.3 RL for Combinatorial Optimization -- 8.3.1 How to Perform Decision-Making with RL -- 8.4 RL for Placement Optimization -- 8.4.1 The Action Space for Chip Placement -- 8.4.2 Engineering the Reward Function -- 8.4.2.1 Wirelength -- 8.4.2.2 Routing Congestion -- 8.4.2.3 Density and Macro Overlap -- 8.4.2.4 State Representation -- 8.4.3 Generating Adjacency Matrix for a Chip Netlist -- 8.4.4 Learning RL Policies that Generalize.
8.5 Future Directions.
Record no. UNISA-996503548803316
Find it here: Univ. di Salerno
Machine learning applications in electronic design automation / edited by Haoxing Ren, Jiang Hu
Publication/distribution Cham, Switzerland : Springer, [2022]
Physical description 1 online resource (585 pages)
Discipline 929.374
Topical subject Engineering
Disseny de circuits electrònics
Automatització
Aprenentatge automàtic
Aplicacions industrials
Genre/form subject Llibres electrònics
ISBN 3-031-13074-X
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Intro -- Preface -- Contents -- About the Editors -- Part I Machine Learning-Based Design Prediction Techniques -- 1 ML for Design QoR Prediction -- 1.1 Introduction -- 1.2 Challenges of Design QoR Prediction -- 1.2.1 Limited Number of Samples -- 1.2.2 Chaotic Behaviors of EDA Tools -- 1.2.3 Actionable Predictions -- 1.2.4 Infrastructure Needs -- 1.2.5 The Bar for Design QoR Prediction -- 1.3 ML Techniques in QoR Prediction -- 1.3.1 Graph Neural Networks -- 1.3.2 Long Short-Term Memory (LSTM) Networks -- 1.3.3 Reinforcement Learning -- 1.3.4 Other Models -- 1.4 Timing Estimation -- 1.4.1 Problem Formulation -- 1.4.2 Estimation Flow -- 1.4.3 Feature Engineering -- 1.4.4 Machine Learning Engines -- 1.5 Design Space Exploration -- 1.5.1 Problem Formulation -- 1.5.2 Estimation Flow -- 1.5.3 Feature Engineering -- 1.5.4 Machine Learning Engines -- 1.6 Summary -- References -- 2 Deep Learning for Routability -- 2.1 Introduction -- 2.2 Background on DL for Routability -- 2.2.1 Routability Prediction Background -- 2.2.1.1 Design Rule Checking (DRC) Violations -- 2.2.1.2 Routing Congestion and Pin Accessibility -- 2.2.1.3 Relevant Physical Design Steps -- 2.2.1.4 Routability Prediction -- 2.2.2 DL Techniques in Routability Prediction -- 2.2.2.1 CNN Methods -- 2.2.2.2 FCN Methods -- 2.2.2.3 GAN Methods -- 2.2.2.4 NAS Methods -- 2.2.3 Why DL for Routability -- 2.3 DL for Routability Prediction Methodologies -- 2.3.1 Data Preparation and Augmentation -- 2.3.2 Feature Engineering -- 2.3.2.1 Blockage -- 2.3.2.2 Wire Density -- 2.3.2.3 Routing Congestion -- 2.3.2.4 Pin Accessibility -- 2.3.2.5 Routability Label -- 2.3.3 DL Model Architecture Design -- 2.3.3.1 Common Operators and Connections -- 2.3.3.2 Case Study: RouteNet 2:xie2018routenet -- 2.3.3.3 Case Study: PROS 2:chen2020pros -- 2.3.3.4 Case Study: J-Net 2:liang2020drc.
2.3.3.5 Case Study: Painting 2:yu2019painting -- 2.3.3.6 Case Study: Automated Model Development 2:chang2021auto -- 2.3.4 DL Model Training and Inference -- 2.4 DL for Routability Deployment -- 2.4.1 Direct Feedback to Engineers -- 2.4.2 Macro Location Optimization -- 2.4.3 White Space-Driven Model-Guided Detailed Placement -- 2.4.4 Pin Accessibility-Driven Model-Guided Detailed Placement -- 2.4.5 Integration in Routing Flow -- 2.4.6 Explicit Routability Optimization During Global Placement -- 2.4.7 Visualization of Routing Utilization -- 2.4.8 Optimization with Reinforcement Learning (RL) -- 2.5 Summary -- References -- 3 Net-Based Machine Learning-Aided Approaches for Timing and Crosstalk Prediction -- 3.1 Introduction -- 3.2 Backgrounds on Machine Learning-Aided Timing and Crosstalk Estimation -- 3.2.1 Timing Prediction Background -- 3.2.2 Crosstalk Prediction Background -- 3.2.3 Relevant Design Steps -- 3.2.4 ML Techniques in Net-Based Prediction -- 3.2.5 Why ML for Timing and Crosstalk Prediction -- 3.3 Preplacement Net Length and Timing Prediction -- 3.3.1 Problem Formulation -- 3.3.2 Prediction Flow -- 3.3.3 Feature Engineering -- 3.3.3.1 Features for Net Length Prediction -- 3.3.3.2 Features for Timing Prediction -- 3.3.4 Machine Learning Engines -- 3.3.4.1 Machine Learning Engine for Net Length Prediction -- 3.3.4.2 Machine Learning Engine for Preplacement Timing Prediction -- 3.4 Pre-Routing Timing Prediction -- 3.4.1 Problem Formulation -- 3.4.2 Prediction Flow -- 3.4.3 Feature Engineering -- 3.4.4 Machine Learning Engines -- 3.5 Pre-Routing Crosstalk Prediction -- 3.5.1 Problem Formulation -- 3.5.2 Prediction Flow -- 3.5.3 Feature Engineering -- 3.5.3.1 Probabilistic Congestion Estimation -- 3.5.3.2 Net Physical Information -- 3.5.3.3 Product of the Wirelength and Congestion -- 3.5.3.4 Electrical and Logic Features.
3.5.3.5 Timing Information -- 3.5.3.6 Neighboring Net Information -- 3.5.4 Machine Learning Engines -- 3.6 Interconnect Coupling Delay and Transition Effect Prediction at Sign-Off -- 3.6.1 Problem Formulation -- 3.6.2 Prediction Flow -- 3.6.3 Feature Engineering -- 3.6.4 Machine Learning Engines -- 3.7 Summary -- References -- 4 Deep Learning for Power and Switching Activity Estimation -- 4.1 Introduction -- 4.2 Background on Modeling Methods for Switching Activity Estimators -- 4.2.1 Statistical Approaches to Switching Activity Estimators -- 4.2.2 ``Cost-of-Action''-Based Power Estimation Models -- 4.2.3 Learning/Regression-Based Power Estimation Models -- 4.3 Deep Learning Models for Power Estimation -- 4.4 A Case Study on Using Deep Learning Models for Per Design Power Estimation -- 4.4.1 PRIMAL Methodology -- 4.4.2 List of PRIMAL ML Models for Experimentation -- 4.4.2.1 Feature Construction Techniques in PRIMAL -- 4.4.2.2 Feature Encoding for Cycle-by-Cycle Power Estimation -- 4.4.2.3 Mapping Registers and Signals to Pixels -- 4.5 PRIMAL Experiments -- 4.5.1 Power Estimation Results of PRIMAL -- 4.5.2 Results Analysis -- 4.6 A Case Study on Using Graph Neural Networks for Generalizable Power Estimation -- 4.6.1 GRANNITE Introduction -- 4.6.2 The Role of GPUs in Gate-Level Simulation and Power Estimation -- 4.6.3 GRANNITE Implementation -- 4.6.3.1 Toggle Rate Features -- 4.6.3.2 Graph Object Creation -- 4.6.3.3 GRANNITE Architecture -- 4.7 GRANNITE Results -- 4.7.1 Analysis -- 4.8 Conclusion -- References -- 5 Deep Learning for Analyzing Power Delivery Networks and Thermal Networks -- 5.1 Introduction -- 5.2 Deep Learning for PDN Analysis -- 5.2.1 CNNs for IR Drop Estimation -- 5.2.1.1 PowerNet Input Feature Representation -- 5.2.1.2 PowerNet Architecture -- 5.2.1.3 Evaluation of PowerNet -- 5.2.2 Encoder-Decoder Networks for PDN Analysis.
5.2.2.1 PDN Analysis as an Image-to-Image Translation Task -- 5.2.2.2 U-Nets for PDN Analysis -- 5.2.2.3 3D U-Nets for IR Drop Sequence-to-Sequence Translation -- 5.2.2.4 Regression-Like Layer for Instance-Level IR Drop Prediction -- 5.2.2.5 Encoder-Decoder Network Training -- 5.2.2.6 Evaluation of EDGe Networks for PDN Analysis -- 5.3 Deep Learning for Thermal Analysis -- 5.3.1 Problem Formulation -- 5.3.2 Model Architecture for Thermal Analysis -- 5.3.3 Model Training and Data Generation -- 5.3.4 Evaluation of ThermEDGe -- 5.4 Deep Learning for PDN Synthesis -- 5.4.1 Template-Driven PDN Optimization -- 5.4.2 PDN Synthesis as an Image Classification Task -- 5.4.3 Principle of Locality for Region Size Selection -- 5.4.4 ML-Based PDN Synthesis and Refinement Through the Design Flow -- 5.4.5 Neural Network Architectures for PDN Synthesis -- 5.4.6 Transfer Learning-Based CNN Training -- 5.4.6.1 Synthetic Input Feature Set Generation -- 5.4.6.2 Transfer Learning Model -- 5.4.6.3 Training Data Generation -- 5.4.7 Evaluation of OpeNPDN for PDN Synthesis -- 5.4.7.1 Justification for Transfer Learning -- 5.4.7.2 Validation on Real Design Testcases -- 5.5 DL for PDN Benchmark Generation -- 5.5.1 Introduction -- 5.5.2 GANs for PDN Benchmark Generation -- 5.5.2.1 Synthetic Image Generation for GAN Pretraining -- 5.5.2.2 GAN Architecture and Training -- 5.5.2.3 GAN Inference for Current Map Generation -- 5.5.3 Evaluation of GAN-Generated PDN Benchmarks -- 5.6 Conclusion -- References -- 6 Machine Learning for Testability Prediction -- 6.1 Introduction -- 6.2 Classical Testability Measurements -- 6.2.1 Approximate Measurements -- 6.2.1.1 SCOAP -- 6.2.1.2 Random Testability -- 6.2.2 Simulation-Based Measurements -- 6.3 Learning-Based Testability Prediction -- 6.3.1 Node-Level Testability Prediction -- 6.3.1.1 Conventional Machine Learning Methods.
6.3.1.2 Graph-Based Deep Learning Methods -- 6.3.2 Circuit-Level Testability Prediction -- 6.3.2.1 Fault Coverage Prediction -- 6.3.2.2 Test Cost Prediction -- 6.3.2.3 X-Sensitivity Prediction -- 6.4 Additional Considerations -- 6.4.1 Imbalanced Dataset -- 6.4.2 Scalability of Graph Neural Networks -- 6.4.3 Integration with Design Flow -- 6.4.4 Robustness of Machine Learning Model and Metrics -- 6.5 Summary -- References -- Part II Machine Learning-Based Design Optimization Techniques -- 7 Machine Learning for Logic Synthesis -- 7.1 Introduction -- 7.2 Supervised and Reinforcement Learning -- 7.2.1 Supervised Learning -- 7.2.2 Reinforcement Learning -- 7.3 Supervised Learning for Guiding Logic Synthesis Algorithms -- 7.3.1 Guiding Logic Network Type for Logic Network Optimization -- 7.3.2 Guiding Logic Synthesis Flow Optimization -- 7.3.3 Guiding Cut Choices for Technology Mapping -- 7.3.4 Guiding Delay Constraints for Technology Mapping -- 7.4 Reinforcement Learning Formulations for Logic Synthesis Algorithms -- 7.4.1 Logic Network Optimization -- 7.4.2 Logic Synthesis Flow Optimization -- 7.4.2.1 Synthesis Flow Optimization for Circuit Area and Delay -- 7.4.2.2 Synthesis Flow Optimization for Logic Network Node and Level Counts -- 7.4.3 Datapath Logic Optimization -- 7.5 Scalability Considerations for Reinforcement Learning -- References -- 8 RL for Placement and Partitioning -- 8.1 Introduction -- 8.2 Background -- 8.3 RL for Combinatorial Optimization -- 8.3.1 How to Perform Decision-Making with RL -- 8.4 RL for Placement Optimization -- 8.4.1 The Action Space for Chip Placement -- 8.4.2 Engineering the Reward Function -- 8.4.2.1 Wirelength -- 8.4.2.2 Routing Congestion -- 8.4.2.3 Density and Macro Overlap -- 8.4.2.4 State Representation -- 8.4.3 Generating Adjacency Matrix for a Chip Netlist -- 8.4.4 Learning RL Policies that Generalize.
8.5 Future Directions.
Record no. UNINA-9910637722203321
Find it here: Univ. Federico II
Machine Learning Applied to Composite Materials [electronic resource] / edited by Vinod Kushvaha, M. R. Sanjay, Priyanka Madhushri, Suchart Siengchin
Edition [1st ed. 2022.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2022
Physical description 1 online resource (202 pages)
Discipline 006.31
Series Composites Science and Technology
Topical subject Composite materials
Machine learning
Computational intelligence
Materials science - Data processing
Composites
Machine Learning
Computational Intelligence
Computational Materials Science
Materials compostos
Simulació per ordinador
Aprenentatge automàtic
Genre/form subject Llibres electrònics
ISBN 981-19-6278-2
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Importance of machine learning in material science -- Machine Learning: A methodology to explain and predict material behavior -- Effect of aspect ratio on dynamic fracture toughness of particulate polymer composite using artificial neural network -- Methodology of K-Nearest Neighbor for predicting the fracture toughness of polymer composites -- Forward machine learning technique to predict dynamic fracture behavior of particulate composite -- Predictive modelling of fracture behavior in silica-filled polymer composite subjected to impact with varying loading rates -- Machine learning approach to determine the elastic modulus of Carbon fiber-reinforced laminates -- Effect of weight ratio on mechanical behaviour of natural fiber based biocomposite using machine learning -- Effect of natural fiber's mechanical properties and fiber matrix adhesion strength to design biocomposite -- Comparison of various machine learning algorithms to predict material behavior in GFRP.
Record no. UNINA-9910633937803321
Find it here: Univ. Federico II
Machine Learning Applied to Composite Materials [electronic resource] / edited by Vinod Kushvaha, M. R. Sanjay, Priyanka Madhushri, Suchart Siengchin
Edition [1st ed. 2022.]
Publication/distribution Singapore : Springer Nature Singapore : Imprint: Springer, 2022
Physical description 1 online resource (202 pages)
Discipline 006.31
Series Composites Science and Technology
Topical subject Composite materials
Machine learning
Computational intelligence
Materials science - Data processing
Composites
Machine Learning
Computational Intelligence
Computational Materials Science
Materials compostos
Simulació per ordinador
Aprenentatge automàtic
Genre/form subject Llibres electrònics
ISBN 981-19-6278-2
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Importance of machine learning in material science -- Machine Learning: A methodology to explain and predict material behavior -- Effect of aspect ratio on dynamic fracture toughness of particulate polymer composite using artificial neural network -- Methodology of K-Nearest Neighbor for predicting the fracture toughness of polymer composites -- Forward machine learning technique to predict dynamic fracture behavior of particulate composite -- Predictive modelling of fracture behavior in silica-filled polymer composite subjected to impact with varying loading rates -- Machine learning approach to determine the elastic modulus of Carbon fiber-reinforced laminates -- Effect of weight ratio on mechanical behaviour of natural fiber based biocomposite using machine learning -- Effect of natural fiber's mechanical properties and fiber matrix adhesion strength to design biocomposite -- Comparison of various machine learning algorithms to predict material behavior in GFRP.
Record no. UNISA-996499867603316
Find it here: Univ. di Salerno
Machine learning control by symbolic regression / Askhat Diveev, Elizaveta Shmalko
Author Diveev Askhat
Publication/distribution Cham, Switzerland : Springer, [2021]
Physical description 1 online resource (162 pages)
Discipline 629.8
Topical subject Machine learning
Control automàtic
Processament de dades
Aprenentatge automàtic
Genre/form subject Llibres electrònics
ISBN 3-030-83213-9
Format Printed material
Bibliographic level Monograph
Publication language eng
Record no. UNISA-996466411203316
Find it here: Univ. di Salerno
Machine learning control by symbolic regression / Askhat Diveev, Elizaveta Shmalko
Author Diveev Askhat
Publication/distribution Cham, Switzerland : Springer, [2021]
Physical description 1 online resource (162 pages)
Discipline 629.8
Topical subject Machine learning
Control automàtic
Processament de dades
Aprenentatge automàtic
Genre/form subject Llibres electrònics
ISBN 3-030-83213-9
Format Printed material
Bibliographic level Monograph
Publication language eng
Record no. UNINA-9910506390903321
Find it here: Univ. Federico II
Machine Learning for Data Science Handbook : Data Mining and Knowledge Discovery Handbook / edited by Lior Rokach, Oded Maimon, Erez Shmueli
Author Rokach Lior
Edition [3rd ed. 2023.]
Publication/distribution Cham : Springer International Publishing : Imprint: Springer, 2023
Physical description 1 online resource (975 pages)
Discipline 006.312
Other authors (persons) Maimon, Oded
Shmueli, Erez
Topical subject Machine learning
Artificial intelligence
Data mining
Information storage and retrieval systems
Machine Learning
Artificial Intelligence
Data Mining and Knowledge Discovery
Information Storage and Retrieval
Mineria de dades
Aprenentatge automàtic
Genre/form subject Llibres electrònics
ISBN 3-031-24628-4
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Introduction to Knowledge Discovery and Data Mining -- Preprocessing Methods -- Data Cleansing: A Prelude to Knowledge Discovery -- Handling Missing Attribute Values -- Geometric Methods for Feature Extraction and Dimensional Reduction - A Guided Tour -- Dimension Reduction and Feature Selection -- Discretization Methods -- Outlier Detection -- Supervised Methods -- Supervised Learning -- Classification Trees -- Bayesian Networks -- Data Mining within a Regression Framework -- Support Vector Machines -- Rule Induction -- Unsupervised Methods -- A survey of Clustering Algorithms -- Association Rules -- Frequent Set Mining -- Constraint-based Data Mining -- Link Analysis -- Soft Computing Methods -- A Review of Evolutionary Algorithms for Data Mining -- A Review of Reinforcement Learning Methods -- Neural Networks For Data Mining -- Granular Computing and Rough Sets - An Incremental Development -- Pattern Clustering Using a Swarm Intelligence Approach -- Using Fuzzy Logic in Data Mining -- Supporting Methods -- Statistical Methods for Data Mining -- Logics for Data Mining -- Wavelet Methods in Data Mining -- Fractal Mining - Self Similarity-based Clustering and its Applications -- Visual Analysis of Sequences Using Fractal Geometry -- Interestingness Measures - On Determining What Is Interesting -- Quality Assessment Approaches in Data Mining -- Data Mining Model Comparison -- Data Mining Query Languages -- Advanced Methods -- Mining Multi-label Data -- Privacy in Data Mining -- Meta-Learning - Concepts and Techniques -- Bias vs Variance Decomposition for Regression and Classification -- Mining with Rare Cases -- Data Stream Mining -- Mining Concept-Drifting Data Streams -- Mining High-Dimensional Data -- Text Mining and Information Extraction -- Spatial Data Mining -- Spatio-temporal clustering -- Data Mining for Imbalanced Datasets: An Overview -- Relational Data Mining -- Web Mining -- A Review of Web Document Clustering Approaches -- Causal Discovery -- Ensemble Methods in Supervised Learning -- Data Mining using Decomposition Methods -- Information Fusion - Methods and Aggregation Operators -- Parallel and Grid-Based Data Mining - Algorithms, Models and Systems for High-Performance KDD -- Collaborative Data Mining -- Organizational Data Mining -- Mining Time Series Data -- Applications -- Multimedia Data Mining -- Data Mining in Medicine -- Learning Information Patterns in Biological Databases - Stochastic Data Mining -- Data Mining for Financial Applications -- Data Mining for Intrusion Detection -- Data Mining for CRM -- Data Mining for Target Marketing -- NHECD - Nano Health and Environmental Commented Database -- Software -- Commercial Data Mining Software -- Weka - A Machine Learning Workbench for Data Mining.
Record no. UNINA-9910739470003321
Find it here: Univ. Federico II
Opac: Controlla la disponibilità qui
Machine learning for practical decision making : a multidisciplinary perspective with applications from healthcare, engineering and business analytics / Christo El Morr [and three others]
Autore El Morr Christo <1966->
Pubbl/distr/stampa Cham, Switzerland : Springer, [2022]
Descrizione fisica 1 online resource (475 pages)
Disciplina 658.403
Collana International series in operations research & management science
Soggetto topico Decision making - Data processing
Machine learning
Presa de decisions
Processament de dades
Aprenentatge automàtic
Soggetto genere / forma Llibres electrònics
ISBN 3-031-16990-5
Formato Materiale a stampa
Livello bibliografico Monografia
Lingua di pubblicazione eng
Nota di contenuto Intro -- Preface -- Contents -- Chapter 1: Introduction to Machine Learning -- 1.1 Introduction to Machine Learning -- 1.2 Origin of Machine Learning -- 1.3 Growth of Machine Learning -- 1.4 How Machine Learning Works -- 1.5 Machine Learning Building Blocks -- 1.5.1 Data Management and Exploration -- 1.5.1.1 Data, Information, and Knowledge -- 1.5.1.2 Big Data -- 1.5.1.3 OLAP Versus OLTP -- 1.5.1.4 Databases, Data Warehouses, and Data Marts -- 1.5.1.5 Multidimensional Analysis Techniques -- 1.5.1.5.1 Slicing and Dicing -- 1.5.1.5.2 Pivoting -- 1.5.1.5.3 Drill-Down, Roll-Up, and Drill-Across -- 1.5.2 The Analytics Landscape -- 1.5.2.1 Types of Analytics (Descriptive, Diagnostic, Predictive, Prescriptive) -- 1.5.2.1.1 Descriptive Analytics -- 1.5.2.1.2 Diagnostic Analytics -- 1.5.2.1.3 Predictive Analytics -- 1.5.2.1.4 Prescriptive Analytics -- 1.6 Conclusion -- 1.7 Key Terms -- 1.8 Test Your Understanding -- 1.9 Read More -- 1.10 Lab -- 1.10.1 Introduction to R -- 1.10.2 Introduction to RStudio -- 1.10.2.1 RStudio Download and Installation -- 1.10.2.2 Install a Package -- 1.10.2.3 Activate Package -- 1.10.2.4 Use Readr to Load Data -- 1.10.2.5 Run a Function -- 1.10.2.6 Save Status -- 1.10.3 Introduction to Python and Jupyter Notebook IDE -- 1.10.3.1 Python Download and Installation -- 1.10.3.2 Jupyter Download and Installation -- 1.10.3.3 Load Data and Plot It Visually -- 1.10.3.4 Save the Execution -- 1.10.3.5 Load a Saved Execution -- 1.10.3.6 Upload a Jupyter Notebook File -- 1.10.4 Do It Yourself -- References -- Chapter 2: Statistics -- 2.1 Overview of the Chapter -- 2.2 Definition of General Terms -- 2.3 Types of Variables -- 2.3.1 Measures of Central Tendency -- 2.3.1.1 Measures of Dispersion -- 2.4 Inferential Statistics -- 2.4.1 Data Distribution -- 2.4.2 Hypothesis Testing -- 2.4.3 Type I and II Errors.
2.4.4 Steps for Performing Hypothesis Testing -- 2.4.5 Test Statistics -- 2.4.5.1 Student's t-test -- 2.4.5.2 One-Way Analysis of Variance -- 2.4.5.3 Chi-Square Statistic -- 2.4.5.4 Correlation -- 2.4.5.5 Simple Linear Regression -- 2.5 Conclusion -- 2.6 Key Terms -- 2.7 Test Your Understanding -- 2.8 Read More -- 2.9 Lab -- 2.9.1 Working Example in R -- 2.9.1.1 Statistical Measures Overview -- 2.9.1.2 Central Tendency Measures in R -- 2.9.1.3 Dispersion in R -- 2.9.1.4 Statistical Test Using p-value in R -- 2.9.2 Working Example in Python -- 2.9.2.1 Central Tendency Measure in Python -- 2.9.2.2 Dispersion Measures in Python -- 2.9.2.3 Statistical Testing Using p-value in Python -- 2.9.3 Do It Yourself -- 2.9.4 Do More Yourself (Links to Available Datasets for Use) -- References -- Chapter 3: Overview of Machine Learning Algorithms -- 3.1 Introduction -- 3.2 Data Mining -- 3.3 Analytics and Machine Learning -- 3.3.1 Terminology Used in Machine Learning -- 3.3.2 Machine Learning Algorithms: A Classification -- 3.4 Supervised Learning -- 3.4.1 Multivariate Regression -- 3.4.1.1 Multiple Linear Regression -- 3.4.1.2 Multiple Logistic Regression -- 3.4.2 Decision Trees -- 3.4.3 Artificial Neural Networks -- 3.4.3.1 Perceptron -- 3.4.4 Naïve Bayes Classifier -- 3.4.5 Random Forest -- 3.4.6 Support Vector Machines (SVM) -- 3.5 Unsupervised Learning -- 3.5.1 K-Means -- 3.5.2 K-Nearest Neighbors (KNN) -- 3.5.3 AdaBoost -- 3.6 Applications of Machine Learning -- 3.6.1 Machine Learning Demand Forecasting and Supply Chain Performance [42] -- 3.6.2 A Case Study on Cervical Pain Assessment with Motion Capture [43] -- 3.6.3 Predicting Bank Insolvencies Using Machine Learning Techniques [44] -- 3.6.4 Deep Learning with Convolutional Neural Network for Objective Skill Evaluation in Robot-Assisted Surgery [45] -- 3.7 Conclusion -- 3.8 Key Terms.
3.9 Test Your Understanding -- 3.10 Read More -- 3.11 Lab -- 3.11.1 Machine Learning Overview in R -- 3.11.1.1 Caret Package -- 3.11.1.2 ggplot2 Package -- 3.11.1.3 mlBench Package -- 3.11.1.4 Class Package -- 3.11.1.5 DataExplorer Package -- 3.11.1.6 Dplyr Package -- 3.11.1.7 KernLab Package -- 3.11.1.8 Mlr3 Package -- 3.11.1.9 Plotly Package -- 3.11.1.10 Rpart Package -- 3.11.2 Supervised Learning Overview -- 3.11.2.1 KNN Diamonds Example -- 3.11.2.1.1 Loading KNN Algorithm Package -- 3.11.2.1.2 Loading Dataset for KNN -- 3.11.2.1.3 Preprocessing Data -- 3.11.2.1.4 Scaling Data -- 3.11.2.1.5 Splitting Data and Applying KNN Algorithm -- 3.11.2.1.6 Model Performance -- 3.11.3 Unsupervised Learning Overview -- 3.11.3.1 Loading K-Means Clustering Package -- 3.11.3.2 Loading Dataset for K-Means Clustering Algorithm -- 3.11.3.3 Preprocessing Data -- 3.11.3.4 Executing K-Means Clustering Algorithm -- 3.11.3.5 Results Discussion -- 3.11.4 Python Scikit-Learn Package Overview -- 3.11.5 Python Supervised Learning Machine (SML) -- 3.11.5.1 Using Scikit-Learn Package -- 3.11.5.2 Loading Diamonds Dataset Using Python -- 3.11.5.3 Preprocessing Data -- 3.11.5.4 Splitting Data and Executing Linear Regression Algorithm -- 3.11.5.5 Model Performance Explanation -- 3.11.5.6 Classification Performance -- 3.11.6 Unsupervised Machine Learning (UML) -- 3.11.6.1 Loading Dataset for Hierarchical Clustering Algorithm -- 3.11.6.2 Running Hierarchical Algorithm and Plotting Data -- 3.11.7 Do It Yourself -- 3.11.8 Do More Yourself -- References -- Chapter 4: Data Preprocessing -- 4.1 The Problem -- 4.2 Data Preprocessing Steps -- 4.2.1 Data Collection -- 4.2.2 Data Profiling, Discovery, and Access -- 4.2.3 Data Cleansing and Validation -- 4.2.4 Data Structuring -- 4.2.5 Feature Selection -- 4.2.6 Data Transformation and Enrichment.
4.2.7 Data Validation, Storage, and Publishing -- 4.3 Feature Engineering -- 4.3.1 Feature Creation -- 4.3.2 Transformation -- 4.3.3 Feature Extraction -- 4.4 Feature Engineering Techniques -- 4.4.1 Imputation -- 4.4.1.1 Numerical Imputation -- 4.4.1.2 Categorical Imputation -- 4.4.2 Discretizing Numerical Features -- 4.4.3 Converting Categorical Discrete Features to Numeric (Binarization) -- 4.4.4 Log Transformation -- 4.4.5 One-Hot Encoding -- 4.4.6 Scaling -- 4.4.6.1 Normalization (Min-Max Normalization) -- 4.4.6.2 Standardization (Z-Score Normalization) -- 4.4.7 Reduce the Features Dimensionality -- 4.5 Overfitting -- 4.6 Underfitting -- 4.7 Model Selection: Selecting the Best Performing Model of an Algorithm -- 4.7.1 Model Selection Using the Holdout Method -- 4.7.2 Model Selection Using Cross-Validation -- 4.7.3 Evaluating Model Performance in Python -- 4.8 Data Quality -- 4.9 Key Terms -- 4.10 Test Your Understanding -- 4.11 Read More -- 4.12 Lab -- 4.12.1 Working Example in Python -- 4.12.1.1 Read the Dataset -- 4.12.1.2 Split the Dataset -- 4.12.1.3 Impute Data -- 4.12.1.4 One-Hot-Encode Data -- 4.12.1.5 Scale Numeric Data: Standardization -- 4.12.1.6 Create Pipelines -- 4.12.1.7 Creating Models -- 4.12.1.8 Cross-Validation -- 4.12.1.9 Hyperparameter Finetuning -- 4.12.2 Working Example in Weka -- 4.12.2.1 Missing Values -- 4.12.2.2 Discretization (or Binning) -- 4.12.2.3 Data Normalization and Standardization -- 4.12.2.4 One-Hot-Encoding (Nominal to Numeric) -- 4.12.3 Do It Yourself -- 4.12.3.1 Lenses Dataset -- 4.12.3.2 Nested Cross-Validation -- 4.12.4 Do More Yourself -- References -- Chapter 5: Data Visualization -- 5.1 Introduction -- 5.2 Presentation and Visualization of Information -- 5.2.1 A Taxonomy of Graphs -- 5.2.2 Relationships and Graphs -- 5.2.3 Dashboards -- 5.2.4 Infographics -- 5.3 Building Effective Visualizations.
5.4 Data Visualization Software -- 5.5 Conclusion -- 5.6 Key Terms -- 5.7 Test Your Understanding -- 5.8 Read More -- 5.9 Lab -- 5.9.1 Working Example in Tableau -- 5.9.1.1 Getting a Student Copy of Tableau Desktop -- 5.9.1.2 Learning with Tableau's how-to Videos and Resources -- 5.9.2 Do It Yourself -- 5.9.2.1 Assignment 1: Introduction to Tableau -- 5.9.2.2 Assignment 2: Data Manipulation and Basic Charts with Tableau -- 5.9.3 Do More Yourself -- 5.9.3.1 Assignment 3: Charts and Dashboards with Tableau -- 5.9.3.2 Assignment 4: Analytics with Tableau -- References -- Chapter 6: Linear Regression -- 6.1 The Problem -- 6.2 A Practical Example -- 6.3 The Algorithm -- 6.3.1 Modeling the Linear Regression -- 6.3.2 Gradient Descent -- 6.3.3 Gradient Descent Example -- 6.3.4 Batch Versus Stochastic Gradient Descent -- 6.3.5 Examples of Error Functions -- 6.3.6 Gradient Descent Types -- 6.3.6.1 Stochastic Gradient Descent -- 6.3.6.2 Batch Gradient -- 6.4 Final Notes: Advantages, Disadvantages, and Best Practices -- 6.5 Key Terms -- 6.6 Test Your Understanding -- 6.7 Read More -- 6.8 Lab -- 6.8.1 Working Example in R -- 6.8.1.1 Load Diabetes Dataset -- 6.8.1.2 Preprocess Diabetes Dataset -- 6.8.1.3 Choose Dependent and Independent Variables -- 6.8.1.4 Visualize Your Dataset -- 6.8.1.5 Split Data into Test and Train Datasets -- 6.8.1.6 Create Linear Regression Model and Visualize it -- 6.8.1.7 Calculate Confusion Matrix -- 6.8.1.8 Gradient Descent -- 6.8.2 Working Example in Python -- 6.8.2.1 Load USA House Prices Dataset -- 6.8.2.2 Explore Housing Prices Visually -- 6.8.2.3 Preprocess Data -- 6.8.2.4 Split Data and Scale Features -- 6.8.2.5 Create and Visualize Model Using the LinearRegression Algorithm -- 6.8.2.6 Evaluate Performance of LRM -- 6.8.2.7 Optimize LRM Manually with Gradient Descent.
6.8.2.8 Create and Visualize a Model Using the Stochastic Gradient Descent (SGD).
Record Nr. UNINA-9910633918303321
El Morr Christo <1966->  
Cham, Switzerland : Springer, [2022]
Materiale a stampa
Lo trovi qui: Univ. Federico II
Opac: Controlla la disponibilità qui