Analysis of microarray data : a network-based approach / / edited by Frank Emmert-Streib and Matthias Dehmer
Publication/distribution Weinheim, [Germany] : Wiley-VCH Verlag GmbH & Co. KGaA, 2008
Physical description 1 online resource (440 p.)
Discipline 572.8636
Topical subject DNA microarrays
Genre/form subject Electronic books.
ISBN 1-281-94703-2
9786611947033
3-527-62281-0
3-527-62282-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Analysis of Microarray Data; Contents; Preface; List of Contributors; 1 Introduction to DNA Microarrays; 1.1 Introduction; 1.1.1 The Genome is an Information Scaffold; 1.1.2 Gene Expression is Detected by Hybridization; 1.1.2.1 Hybridization is Used to Measure Gene Expression; 1.1.2.2 Microarrays Provide a New Twist to an Old Technique; 1.2 Types of Arrays; 1.2.1 Spotted Microarrays; 1.2.2 Affymetrix GeneChips; 1.2.2.1 Other In Situ Synthesis Platforms; 1.2.2.2 Uses of Microarrays; 1.3 Array Content; 1.3.1 ESTs Are the First View; 1.3.1.1 Probe Design; 1.4 Normalization and Scaling
1.4.1 Be Unbiased, Be Complete; 1.4.2 Sequence Counts; References; 2 Comparative Analysis of Clustering Methods for Microarray Data; 2.1 Introduction; 2.2 Measuring Distance Between Genes or Clusters; 2.3 Network Models; 2.3.1 Boolean Network; 2.3.2 Coexpression Network; 2.3.3 Bayesian Network; 2.3.4 Co-Occurrence Network; 2.4 Network Constrained Clustering Method; 2.4.1 Extract the Giant Connected Component; 2.4.2 Compute "Network Constrained Distance Matrix"; 2.5 Network Constrained Clustering Results; 2.5.1 Yeast Galactose Metabolism Pathway; 2.5.2 Retinal Gene Expression Data
2.5.3 Mouse Segmentation Clock Data; 2.6 Discussion and Conclusion; References; 3 Finding Verified Edges in Genetic/Gene Networks: Bilayer Verification for Network Recovery in the Presence of Hidden Confounders; 3.1 Introduction: Gene and Genetic Networks; 3.2 Background and Prior Theory; 3.2.1 Motivation; 3.2.2 Bayesian Networks Theory; 3.2.2.1 d-Separation at Colliders; 3.2.2.2 Placing Genetic Tests Within the Bayesian Network Framework; 3.2.3 Learning Network Structure from Observed Conditional Independencies; 3.2.4 Prior Work: The PC Algorithm; 3.2.4.1 PC Algorithm
3.5 Results and Further Application; 3.5.1 Estimating α False-Positive Rates for the v-Structure Test; 3.5.2 Learning an Aortic Lesion Network; 3.5.3 Further Utilizing Networks: Assigning Functional Roles to Genes; 3.5.4 Future Work; References; 4 Computational Inference of Biological Causal Networks - Analysis of Therapeutic Compound Effects; 4.1 Introduction; 4.2 Basic Theory of Bayesian Networks; 4.2.1 Bayesian Scoring Metrics; 4.2.2 Heuristic Search Methods; 4.2.3 Inference Score; 4.3 Methods; 4.3.1 Experimental Design; 4.3.2 Tissue Contamination; 4.3.3 Gene List Prefiltering
4.3.4 Outlier Removal
Record no. UNINA-9910144107303321
Available at: Univ. Federico II
Analysis of microarray data : a network-based approach / / edited by Frank Emmert-Streib and Matthias Dehmer
Publication/distribution Weinheim, [Germany] : Wiley-VCH Verlag GmbH & Co. KGaA, 2008
Physical description 1 online resource (440 p.)
Discipline 572.8636
Topical subject DNA microarrays
ISBN 1-281-94703-2
9786611947033
3-527-62281-0
3-527-62282-9
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Analysis of Microarray Data; Contents; Preface; List of Contributors; 1 Introduction to DNA Microarrays; 1.1 Introduction; 1.1.1 The Genome is an Information Scaffold; 1.1.2 Gene Expression is Detected by Hybridization; 1.1.2.1 Hybridization is Used to Measure Gene Expression; 1.1.2.2 Microarrays Provide a New Twist to an Old Technique; 1.2 Types of Arrays; 1.2.1 Spotted Microarrays; 1.2.2 Affymetrix GeneChips; 1.2.2.1 Other In Situ Synthesis Platforms; 1.2.2.2 Uses of Microarrays; 1.3 Array Content; 1.3.1 ESTs Are the First View; 1.3.1.1 Probe Design; 1.4 Normalization and Scaling
1.4.1 Be Unbiased, Be Complete; 1.4.2 Sequence Counts; References; 2 Comparative Analysis of Clustering Methods for Microarray Data; 2.1 Introduction; 2.2 Measuring Distance Between Genes or Clusters; 2.3 Network Models; 2.3.1 Boolean Network; 2.3.2 Coexpression Network; 2.3.3 Bayesian Network; 2.3.4 Co-Occurrence Network; 2.4 Network Constrained Clustering Method; 2.4.1 Extract the Giant Connected Component; 2.4.2 Compute "Network Constrained Distance Matrix"; 2.5 Network Constrained Clustering Results; 2.5.1 Yeast Galactose Metabolism Pathway; 2.5.2 Retinal Gene Expression Data
2.5.3 Mouse Segmentation Clock Data; 2.6 Discussion and Conclusion; References; 3 Finding Verified Edges in Genetic/Gene Networks: Bilayer Verification for Network Recovery in the Presence of Hidden Confounders; 3.1 Introduction: Gene and Genetic Networks; 3.2 Background and Prior Theory; 3.2.1 Motivation; 3.2.2 Bayesian Networks Theory; 3.2.2.1 d-Separation at Colliders; 3.2.2.2 Placing Genetic Tests Within the Bayesian Network Framework; 3.2.3 Learning Network Structure from Observed Conditional Independencies; 3.2.4 Prior Work: The PC Algorithm; 3.2.4.1 PC Algorithm
3.5 Results and Further Application; 3.5.1 Estimating α False-Positive Rates for the v-Structure Test; 3.5.2 Learning an Aortic Lesion Network; 3.5.3 Further Utilizing Networks: Assigning Functional Roles to Genes; 3.5.4 Future Work; References; 4 Computational Inference of Biological Causal Networks - Analysis of Therapeutic Compound Effects; 4.1 Introduction; 4.2 Basic Theory of Bayesian Networks; 4.2.1 Bayesian Scoring Metrics; 4.2.2 Heuristic Search Methods; 4.2.3 Inference Score; 4.3 Methods; 4.3.1 Experimental Design; 4.3.2 Tissue Contamination; 4.3.3 Gene List Prefiltering
4.3.4 Outlier Removal
Record no. UNINA-9910830082603321
Available at: Univ. Federico II
Applied statistics for network biology : methods in systems biology / / edited by Matthias Dehmer ... [et al.]
Publication/distribution Weinheim, Germany : Wiley-Blackwell, 2011
Physical description 1 online resource (480 p.)
Discipline 570.727
Other authors (persons) Dehmer, Matthias
Series Quantitative and network biology
Topical subject Systems biology - Statistical methods
Genomics
Computational biology
Bioinformatics
ISBN 1-283-14095-0
9786613140951
3-527-63808-3
3-527-63809-1
3-527-63807-5
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note pt. 1. Modeling, simulation, and meaning of gene networks -- pt. 2. Inference of gene networks -- pt. 3. Analysis of gene networks -- pt. 4. Systems approach to diseases.
Record no. UNINA-9910131029103321
Available at: Univ. Federico II
Computational network analysis with R : applications in biology, medicine, and chemistry / / edited by Matthias Dehmer, Yongtang Shi, and Frank Emmert-Streib
Publication/distribution Weinheim : Wiley-VCH, [2017]
Physical description 1 online resource (365 p.)
Series Quantitative and network biology
Topical subject R (Computer program language)
Medicine - Computer programs
Application software
ISBN 3-527-69437-4
3-527-69440-4
3-527-69436-6
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Cover; Title Page; Copyright; Contents; List of Contributors; Chapter 1 Using the DiffCorr Package to Analyze and Visualize Differential Correlations in Biological Networks; 1.1 Introduction; 1.1.1 An Introduction to Omics and Systems Biology; 1.1.2 Correlation Networks in Omics and Systems Biology; 1.1.3 Network Modules and Differential Network Approaches; 1.1.4 Aims of this Chapter; 1.2 What is DiffCorr?; 1.2.1 Background; 1.2.2 Methods; 1.2.3 Main Functions in DiffCorr; 1.2.4 Installing the DiffCorr Package
1.3 Constructing Co-Expression (Correlation) Networks from Omics Data - Transcriptome Data set; 1.3.1 Downloading the Transcriptome Data set; 1.3.2 Data Filtering; 1.3.3 Calculation of the Correlation and Visualization of Correlation Networks; 1.3.4 Graph Clustering; 1.3.5 Gene Ontology Enrichment Analysis; 1.4 Differential Correlation Analysis by DiffCorr Package; 1.4.1 Calculation of Differential Co-Expression between Organs in Arabidopsis; 1.4.2 Exploring the Metabolome Data of Flavonoid-Deficient Arabidopsis; 1.4.3 Avoiding Pitfalls in (Differential) Correlation Analysis; 1.5 Conclusion
Acknowledgments; Conflicts of Interest; References; Chapter 2 Analytical Models and Methods for Anomaly Detection in Dynamic, Attributed Graphs; 2.1 Introduction; 2.2 Chapter Definitions and Notation; 2.3 Anomaly Detection in Graph Data; 2.3.1 Neighborhood-Based Techniques; 2.3.2 Frequent Subgraph Techniques; 2.3.3 Anomalies in Random Graphs; 2.4 Random Graph Models; 2.4.1 Models with Attributes; 2.4.2 Dynamic Graph Models; 2.5 Spectral Subgraph Detection in Dynamic, Attributed Graphs; 2.5.1 Problem Model; 2.5.2 Filter Optimization; 2.5.3 Residuals Analysis in Attributed Graphs
2.6 Implementation in R; 2.7 Demonstration in Random Synthetic Backgrounds; 2.8 Data Analysis Example; 2.9 Summary; Acknowledgments; References; Chapter 3 Bayesian Computational Algorithms for Social Network Analysis; 3.1 Introduction; 3.2 Social Networks as Random Graphs; 3.3 Statistical Modeling Approaches to Social Network Analysis; 3.3.1 Exponential Random Graph Models (ERGMs); 3.3.2 Latent Space Models (LSMs); 3.4 Bayesian Inference for Social Network Models; 3.4.1 R-Based Software Tools; 3.5 Data; 3.5.1 Bayesian Inference for Exponential Random Graph Models
3.5.2 Bayesian Inference for Latent Space Models; 3.5.3 Predictive Goodness-of-Fit (GoF) Diagnostics; 3.6 Conclusions; References; Chapter 4 Threshold Degradation in R Using iDEMO; 4.1 Introduction; 4.2 Statistical Overview: Degradation Models; 4.2.1 Wiener Degradation-Based Process; 4.2.1.1 Lifetime Information; 4.2.1.2 Log-Likelihood Function; 4.2.2 Gamma Degradation-Based Process; 4.2.2.1 Lifetime Information; 4.2.2.2 Log-Likelihood Function; 4.2.3 Inverse Gaussian Degradation-Based Process; 4.2.3.1 Lifetime Distribution; 4.2.3.2 Log-Likelihood Function; 4.2.4 Model Selection Criteria
4.2.5 Choice of (t)
Record no. UNINA-9910134853503321
Available at: Univ. Federico II
Computational network theory : theoretical foundations and applications / / Edited by Matthias Dehmer, Frank Emmert-Streib, and Stefan Pickl
Publication/distribution Weinheim, Germany : Wiley-VCH Verlag GmbH & Co. KGaA, 2015
Physical description 1 online resource (281 p.)
Discipline 006.3
Series Quantitative and network biology
Soggetto topico Electronic commerce
Computational intelligence
Genre/form subject Electronic books.
ISBN 3-527-69154-5
3-527-69151-0
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Cover; Title Page; Copyright; Dedication; Contents; Color Plates; Preface; List of Contributors; Chapter 1 Model Selection for Neural Network Models: A Statistical Perspective; 1.1 Introduction; 1.2 Feedforward Neural Network Models; 1.3 Model Selection; 1.3.1 Feature Selection by Relevance Measures; 1.3.2 Some Numerical Examples; 1.3.3 Application to Real Data; 1.4 The Selection of the Hidden Layer Size; 1.4.1 A Reality Check Approach; 1.4.2 Numerical Examples by Using the Reality Check; 1.4.3 Testing Superior Predictive Ability for Neural Network Modeling
1.4.4 Some Numerical Results Using Test of Superior Predictive Ability; 1.4.5 An Application to Real Data; 1.5 Concluding Remarks; References; Chapter 2 Measuring Structural Correlations in Graphs; 2.1 Introduction; 2.1.1 Solutions for Measuring Structural Correlations; 2.2 Related Work; 2.3 Self Structural Correlation; 2.3.1 Problem Formulation; 2.3.2 The Measure; 2.3.2.1 Random Walk and Hitting Time; 2.3.2.2 Decayed Hitting Time; 2.3.3 Computing Decayed Hitting Time; 2.3.3.1 Iterative Approximation; 2.3.3.2 A Sampling Algorithm for h(vi,B); 2.3.3.3 Complexity; 2.3.4 Assessing SSC
2.3.4.1 Estimating ρ (Vq); 2.3.4.2 Estimating the Significance of ρ (Vq); 2.3.5 Empirical Studies; 2.3.5.1 Datasets; 2.3.5.2 Performance of DHT Approximation; 2.3.5.3 Effectiveness on Synthetic Events; 2.3.5.4 SSC of Real Event; 2.3.5.5 Scalability of Sampling-alg; 2.3.6 Discussions; 2.4 Two-Event Structural Correlation; 2.4.1 Preliminaries and Problem Formulation; 2.4.2 Measuring TESC; 2.4.2.1 The Test; 2.4.2.2 Reference Nodes; 2.4.3 Reference Node Sampling; 2.4.3.1 Batch_BFS; 2.4.3.2 Importance Sampling; 2.4.3.3 Global Sampling in Whole Graph; 2.4.3.4 Complexity Analysis; 2.4.4 Experiments
2.4.4.1 Graph Datasets; 2.4.4.2 Event Simulation Methodology; 2.4.4.3 Performance Comparison; 2.4.4.4 Batch Importance Sampling; 2.4.4.5 Impact of Graph Density; 2.4.4.6 Efficiency and Scalability; 2.4.4.7 Real Events; 2.4.5 Discussions; 2.5 Conclusions; Acknowledgments; References; Chapter 3 Spectral Graph Theory and Structural Analysis of Complex Networks: An Introduction; 3.1 Introduction; 3.2 Graph Theory: Some Basic Concepts; 3.2.1 Connectivity in Graphs; 3.2.2 Subgraphs and Special Graphs; 3.3 Matrix Theory: Some Basic Concepts; 3.3.1 Trace and Determinant of a Matrix
3.3.2 Eigenvalues and Eigenvectors of a Matrix; 3.4 Graph Matrices; 3.4.1 Adjacency Matrix; 3.4.2 Incidence Matrix; 3.4.3 Degree Matrix and Diffusion Matrix; 3.4.4 Laplace Matrix; 3.4.5 Cut-Set Matrix; 3.4.6 Path Matrix; 3.5 Spectral Graph Theory: Some Basic Results; 3.5.1 Spectral Characterization of Graph Connectivity; 3.5.1.1 Spectral Theory and Walks; 3.5.2 Spectral Characteristics of some Special Graphs and Subgraphs; 3.5.2.1 Tree; 3.5.2.2 Bipartite Graph; 3.5.2.3 Complete Graph; 3.5.2.4 Regular Graph; 3.5.2.5 Line Graph; 3.5.3 Spectral Theory and Graph Colouring
3.5.4 Spectral Theory and Graph Drawing
Record no. UNINA-9910463483103321
Available at: Univ. Federico II
Computational network theory : theoretical foundations and applications / / Edited by Matthias Dehmer, Frank Emmert-Streib, and Stefan Pickl
Publication/distribution Weinheim, Germany : Wiley-VCH Verlag GmbH & Co. KGaA, 2015
Physical description 1 online resource (281 p.)
Discipline 006.3
Series Quantitative and network biology
Soggetto topico Electronic commerce
Computational intelligence
ISBN 3-527-69154-5
3-527-69151-0
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Cover; Title Page; Copyright; Dedication; Contents; Color Plates; Preface; List of Contributors; Chapter 1 Model Selection for Neural Network Models: A Statistical Perspective; 1.1 Introduction; 1.2 Feedforward Neural Network Models; 1.3 Model Selection; 1.3.1 Feature Selection by Relevance Measures; 1.3.2 Some Numerical Examples; 1.3.3 Application to Real Data; 1.4 The Selection of the Hidden Layer Size; 1.4.1 A Reality Check Approach; 1.4.2 Numerical Examples by Using the Reality Check; 1.4.3 Testing Superior Predictive Ability for Neural Network Modeling
1.4.4 Some Numerical Results Using Test of Superior Predictive Ability; 1.4.5 An Application to Real Data; 1.5 Concluding Remarks; References; Chapter 2 Measuring Structural Correlations in Graphs; 2.1 Introduction; 2.1.1 Solutions for Measuring Structural Correlations; 2.2 Related Work; 2.3 Self Structural Correlation; 2.3.1 Problem Formulation; 2.3.2 The Measure; 2.3.2.1 Random Walk and Hitting Time; 2.3.2.2 Decayed Hitting Time; 2.3.3 Computing Decayed Hitting Time; 2.3.3.1 Iterative Approximation; 2.3.3.2 A Sampling Algorithm for h(vi,B); 2.3.3.3 Complexity; 2.3.4 Assessing SSC
2.3.4.1 Estimating ρ (Vq); 2.3.4.2 Estimating the Significance of ρ (Vq); 2.3.5 Empirical Studies; 2.3.5.1 Datasets; 2.3.5.2 Performance of DHT Approximation; 2.3.5.3 Effectiveness on Synthetic Events; 2.3.5.4 SSC of Real Event; 2.3.5.5 Scalability of Sampling-alg; 2.3.6 Discussions; 2.4 Two-Event Structural Correlation; 2.4.1 Preliminaries and Problem Formulation; 2.4.2 Measuring TESC; 2.4.2.1 The Test; 2.4.2.2 Reference Nodes; 2.4.3 Reference Node Sampling; 2.4.3.1 Batch_BFS; 2.4.3.2 Importance Sampling; 2.4.3.3 Global Sampling in Whole Graph; 2.4.3.4 Complexity Analysis; 2.4.4 Experiments
2.4.4.1 Graph Datasets; 2.4.4.2 Event Simulation Methodology; 2.4.4.3 Performance Comparison; 2.4.4.4 Batch Importance Sampling; 2.4.4.5 Impact of Graph Density; 2.4.4.6 Efficiency and Scalability; 2.4.4.7 Real Events; 2.4.5 Discussions; 2.5 Conclusions; Acknowledgments; References; Chapter 3 Spectral Graph Theory and Structural Analysis of Complex Networks: An Introduction; 3.1 Introduction; 3.2 Graph Theory: Some Basic Concepts; 3.2.1 Connectivity in Graphs; 3.2.2 Subgraphs and Special Graphs; 3.3 Matrix Theory: Some Basic Concepts; 3.3.1 Trace and Determinant of a Matrix
3.3.2 Eigenvalues and Eigenvectors of a Matrix; 3.4 Graph Matrices; 3.4.1 Adjacency Matrix; 3.4.2 Incidence Matrix; 3.4.3 Degree Matrix and Diffusion Matrix; 3.4.4 Laplace Matrix; 3.4.5 Cut-Set Matrix; 3.4.6 Path Matrix; 3.5 Spectral Graph Theory: Some Basic Results; 3.5.1 Spectral Characterization of Graph Connectivity; 3.5.1.1 Spectral Theory and Walks; 3.5.2 Spectral Characteristics of some Special Graphs and Subgraphs; 3.5.2.1 Tree; 3.5.2.2 Bipartite Graph; 3.5.2.3 Complete Graph; 3.5.2.4 Regular Graph; 3.5.2.5 Line Graph; 3.5.3 Spectral Theory and Graph Colouring
3.5.4 Spectral Theory and Graph Drawing
Record no. UNINA-9910788289203321
Available at: Univ. Federico II
Computational network theory : theoretical foundations and applications / / Edited by Matthias Dehmer, Frank Emmert-Streib, and Stefan Pickl
Publication/distribution Weinheim, Germany : Wiley-VCH Verlag GmbH & Co. KGaA, 2015
Physical description 1 online resource (281 p.)
Discipline 006.3
Series Quantitative and network biology
Soggetto topico Electronic commerce
Computational intelligence
ISBN 3-527-69154-5
3-527-69151-0
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Cover; Title Page; Copyright; Dedication; Contents; Color Plates; Preface; List of Contributors; Chapter 1 Model Selection for Neural Network Models: A Statistical Perspective; 1.1 Introduction; 1.2 Feedforward Neural Network Models; 1.3 Model Selection; 1.3.1 Feature Selection by Relevance Measures; 1.3.2 Some Numerical Examples; 1.3.3 Application to Real Data; 1.4 The Selection of the Hidden Layer Size; 1.4.1 A Reality Check Approach; 1.4.2 Numerical Examples by Using the Reality Check; 1.4.3 Testing Superior Predictive Ability for Neural Network Modeling
1.4.4 Some Numerical Results Using Test of Superior Predictive Ability; 1.4.5 An Application to Real Data; 1.5 Concluding Remarks; References; Chapter 2 Measuring Structural Correlations in Graphs; 2.1 Introduction; 2.1.1 Solutions for Measuring Structural Correlations; 2.2 Related Work; 2.3 Self Structural Correlation; 2.3.1 Problem Formulation; 2.3.2 The Measure; 2.3.2.1 Random Walk and Hitting Time; 2.3.2.2 Decayed Hitting Time; 2.3.3 Computing Decayed Hitting Time; 2.3.3.1 Iterative Approximation; 2.3.3.2 A Sampling Algorithm for h(vi,B); 2.3.3.3 Complexity; 2.3.4 Assessing SSC
2.3.4.1 Estimating ρ (Vq); 2.3.4.2 Estimating the Significance of ρ (Vq); 2.3.5 Empirical Studies; 2.3.5.1 Datasets; 2.3.5.2 Performance of DHT Approximation; 2.3.5.3 Effectiveness on Synthetic Events; 2.3.5.4 SSC of Real Event; 2.3.5.5 Scalability of Sampling-alg; 2.3.6 Discussions; 2.4 Two-Event Structural Correlation; 2.4.1 Preliminaries and Problem Formulation; 2.4.2 Measuring TESC; 2.4.2.1 The Test; 2.4.2.2 Reference Nodes; 2.4.3 Reference Node Sampling; 2.4.3.1 Batch_BFS; 2.4.3.2 Importance Sampling; 2.4.3.3 Global Sampling in Whole Graph; 2.4.3.4 Complexity Analysis; 2.4.4 Experiments
2.4.4.1 Graph Datasets; 2.4.4.2 Event Simulation Methodology; 2.4.4.3 Performance Comparison; 2.4.4.4 Batch Importance Sampling; 2.4.4.5 Impact of Graph Density; 2.4.4.6 Efficiency and Scalability; 2.4.4.7 Real Events; 2.4.5 Discussions; 2.5 Conclusions; Acknowledgments; References; Chapter 3 Spectral Graph Theory and Structural Analysis of Complex Networks: An Introduction; 3.1 Introduction; 3.2 Graph Theory: Some Basic Concepts; 3.2.1 Connectivity in Graphs; 3.2.2 Subgraphs and Special Graphs; 3.3 Matrix Theory: Some Basic Concepts; 3.3.1 Trace and Determinant of a Matrix
3.3.2 Eigenvalues and Eigenvectors of a Matrix; 3.4 Graph Matrices; 3.4.1 Adjacency Matrix; 3.4.2 Incidence Matrix; 3.4.3 Degree Matrix and Diffusion Matrix; 3.4.4 Laplace Matrix; 3.4.5 Cut-Set Matrix; 3.4.6 Path Matrix; 3.5 Spectral Graph Theory: Some Basic Results; 3.5.1 Spectral Characterization of Graph Connectivity; 3.5.1.1 Spectral Theory and Walks; 3.5.2 Spectral Characteristics of some Special Graphs and Subgraphs; 3.5.2.1 Tree; 3.5.2.2 Bipartite Graph; 3.5.2.3 Complete Graph; 3.5.2.4 Regular Graph; 3.5.2.5 Line Graph; 3.5.3 Spectral Theory and Graph Colouring
3.5.4 Spectral Theory and Graph Drawing
Record no. UNINA-9910815219603321
Available at: Univ. Federico II
Elements of Data Science, Machine Learning, and Artificial Intelligence Using R / / Frank Emmert-Streib, Salissou Moutari, and Matthias Dehmer
Author Emmert-Streib, Frank
Edition [First edition.]
Publication/distribution Cham, Switzerland : Springer, [2023]
Physical description 1 online resource (582 pages)
Discipline 060
Soggetto topico Artificial intelligence
Machine learning
R (Computer program language)
ISBN 3-031-13339-0
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Intro -- Preface -- Contents -- 1 Introduction to Learning from Data -- 1.1 What Is Data Science? -- 1.2 Converting Data into Knowledge -- 1.2.1 Big Aims: Big Questions -- 1.2.2 Generating Insights by Visualization -- 1.3 Structure of the Book -- 1.3.1 Part I -- 1.3.2 Part II -- 1.3.3 Part III -- 1.4 Our Motivation for Writing This Book -- 1.5 How to Use This Book -- 1.6 Summary -- Part I General Topics -- 2 General Prediction Models -- 2.1 Introduction -- 2.2 Categorization of Methods -- 2.2.1 Properties of the Data -- 2.2.2 Properties of the Optimization Algorithm -- 2.2.3 Properties of the Model -- 2.2.4 Summary -- 2.3 Overview of Prediction Models -- 2.4 Causal Model versus Predictive Model -- 2.5 Explainable AI -- 2.6 Fundamental Statistical Characteristics of Prediction Models -- 2.6.1 Example -- 2.7 Summary -- 2.8 Exercises -- 3 General Error Measures -- 3.1 Introduction -- 3.2 Motivation -- 3.3 Fundamental Error Measures -- 3.4 Error Measures -- 3.4.1 True-Positive Rate and True-Negative Rate -- 3.4.2 Positive Predictive Value and Negative Predictive Value -- 3.4.3 Accuracy -- 3.4.4 F-Score -- 3.4.5 False Discovery Rate and False Omission Rate -- 3.4.6 False-Negative Rate and False-Positive Rate -- 3.4.7 Matthews Correlation Coefficient -- 3.4.8 Cohen's Kappa -- 3.4.9 Normalized Mutual Information -- 3.4.10 Area Under the Receiver Operator Characteristic Curve -- 3.5 Evaluation of Outcome -- 3.5.1 Evaluation of an Individual Method -- 3.5.2 Comparing Multiple Binary Decision-Making Methods -- 3.6 Summary -- 3.7 Exercises -- 4 Resampling Methods -- 4.1 Introduction -- 4.2 Resampling Methods for Error Estimation -- 4.2.1 Holdout Set -- 4.2.2 Leave-One-Out CV -- 4.2.3 K-Fold Cross-Validation -- 4.3 Extended Resampling Methods for Error Estimation -- 4.3.1 Repeated Holdout Set -- 4.3.2 Repeated K-Fold CV -- 4.3.3 Stratified K-Fold CV.
4.4 Bootstrap -- 4.4.1 Resampling With versus Resampling Without Replacement -- 4.5 Subsampling -- 4.6 Different Types of Prediction Data Sets -- 4.7 Sampling from a Distribution -- 4.8 Standard Error -- 4.9 Summary -- 4.10 Exercises -- 5 Data -- 5.1 Introduction -- 5.2 Data Types -- 5.2.1 Genomic Data -- 5.2.2 Network Data -- 5.2.3 Text Data -- 5.2.4 Time-to-Event Data -- 5.2.5 Business Data -- 5.3 Summary -- Part II Core Methods -- 6 Statistical Inference -- 6.1 Exploratory Data Analysis and Descriptive Statistics -- 6.1.1 Data Structure -- 6.1.2 Data Preprocessing -- 6.1.3 Summary Statistics and Presentation of Information -- 6.1.4 Measures of Location -- 6.1.4.1 Sample Mean -- 6.1.4.2 Trimmed Sample Mean -- 6.1.4.3 Sample Median -- 6.1.4.4 Quartile -- 6.1.4.5 Percentile -- 6.1.4.6 Mode -- 6.1.4.7 Proportion -- 6.1.5 Measures of Scale -- 6.1.5.1 Sample Variance -- 6.1.5.2 Range -- 6.1.5.3 Interquartile Range -- 6.1.6 Measures of Shape -- 6.1.6.1 Skewness -- 6.1.6.2 Kurtosis -- 6.1.7 Data Transformation -- 6.1.8 Example: Summary of Data and EDA -- 6.2 Sample Estimators -- 6.2.1 Point Estimation -- 6.2.2 Unbiased Estimators -- 6.2.3 Biased Estimators -- 6.2.4 Sufficiency -- 6.3 Bayesian Inference -- 6.3.1 Conjugate Priors -- 6.3.2 Continuous Parameter Estimation -- 6.3.2.1 Example: Continuous Bayesian Inference Using R -- 6.3.3 Discrete Parameter Estimation -- 6.3.4 Bayesian Credible Intervals -- 6.3.5 Prediction -- 6.3.6 Model Selection -- 6.4 Maximum Likelihood Estimation -- 6.4.1 Asymptotic Confidence Intervals for MLE -- 6.4.2 Bootstrap Confidence Intervals for MLE -- 6.4.3 Meaning of Confidence Intervals -- 6.5 Expectation-Maximization Algorithm -- 6.5.1 Example: EM Algorithm -- 6.6 Summary -- 6.7 Exercises -- 7 Clustering -- 7.1 Introduction -- 7.2 What Is Clustering? -- 7.3 Comparison of Data Points -- 7.3.1 Distance Measures.
7.3.2 Similarity Measures -- 7.4 Basic Principle of Clustering Algorithms -- 7.5 Non-hierarchical Clustering Methods -- 7.5.1 K-Means Clustering -- 7.5.2 K-Medoids Clustering -- 7.5.3 Partitioning Around Medoids (PAM) -- 7.6 Hierarchical Clustering -- 7.6.1 Dendrograms -- 7.6.2 Two Types of Dissimilarity Measures -- 7.6.3 Linkage Functions for Agglomerative Clustering -- 7.6.4 Example -- 7.7 Defining Feature Vectors for General Objects -- 7.8 Cluster Validation -- 7.8.1 External Criteria -- 7.8.2 Assessing the Numerical Values of Indices -- 7.8.3 Internal Criteria -- 7.9 Summary -- 7.10 Exercises -- 8 Dimension Reduction -- 8.1 Introduction -- 8.2 Feature Extraction -- 8.2.1 An Overview of PCA -- 8.2.2 Geometrical Interpretation of PCA -- 8.2.3 PCA Procedure -- 8.2.4 Underlying Mathematical Problems in PCA -- 8.2.5 PCA Using Singular Value Decomposition -- 8.2.6 Assessing PCA Results -- 8.2.7 Illustration of PCA Using R -- 8.2.8 Kernel PCA -- 8.2.9 Discussion -- 8.2.10 Non-negative Matrix Factorization -- 8.2.10.1 NNMF Using the Frobenius Norm as Objective Function -- 8.2.10.2 NNMF Using the Generalized Kullback-Leibler Divergence as Objective Function -- 8.2.10.3 Example of NNMF Using R -- 8.3 Feature Selection -- 8.3.1 Filter Methods Using Mutual Information -- 8.4 Summary -- 8.5 Exercises -- 9 Classification -- 9.1 Introduction -- 9.2 What Is Classification? -- 9.3 Common Aspects of Classification Methods -- 9.3.1 Basic Idea of a Classifier -- 9.3.2 Training and Test Data -- 9.3.3 Error Measures -- 9.3.3.1 Error Measures for Multi-class Classification -- 9.4 Naive Bayes Classifier -- 9.4.1 Educational Example -- 9.4.2 Example -- 9.5 Linear Discriminant Analysis -- 9.5.1 Extensions -- 9.6 Logistic Regression -- 9.7 k-Nearest Neighbor Classifier -- 9.8 Support Vector Machine -- 9.8.1 Linearly Separable Data -- 9.8.2 Nonlinearly Separable Data.
9.8.3 Nonlinear Support Vector Machines -- 9.8.4 Examples -- 9.9 Decision Tree -- 9.9.1 What Is a Decision Tree? -- 9.9.1.1 Three Principal Steps to Get a Decision Tree -- 9.9.2 Step 1: Growing a Decision Tree -- 9.9.3 Step 2: Assessing the Size of a Decision Tree -- 9.9.3.1 Intuitive Approach -- 9.9.3.2 Formal Approach -- 9.9.4 Step 3: Pruning a Decision Tree -- 9.9.4.1 Alternative Way to Construct Optimal Decision Trees: Stopping Rules -- 9.9.5 Predictions -- 9.10 Summary -- 9.11 Exercises -- 10 Hypothesis Testing -- 10.1 Introduction -- 10.2 What Is Hypothesis Testing? -- 10.3 Key Components of Hypothesis Testing -- 10.3.1 Step 1: Select Test Statistic -- 10.3.2 Step 2: Null Hypothesis H0 and Alternative Hypothesis H1 -- 10.3.3 Step 3: Sampling Distribution -- 10.3.3.1 Examples -- 10.3.4 Step 4: Significance Level α -- 10.3.5 Step 5: Evaluate the Test Statistic from Data -- 10.3.6 Step 6: Determine the p-Value -- 10.3.7 Step 7: Make a Decision about the Null Hypothesis -- 10.4 Type 2 Error and Power -- 10.4.1 Connections between Power and Errors -- 10.5 Confidence Intervals -- 10.5.1 Confidence Intervals for a Population Mean with Known Variance -- 10.5.2 Confidence Intervals for a Population Mean with Unknown Variance -- 10.5.3 Bootstrap Confidence Intervals -- 10.6 Important Hypothesis Tests -- 10.6.1 Student's t-Test -- 10.6.1.1 One-Sample t-Test -- 10.6.1.2 Two-Sample t-Test -- 10.6.1.3 Extensions -- 10.6.2 Correlation Tests -- 10.6.3 Hypergeometric Test -- 10.6.3.1 Null Hypothesis and Sampling Distribution -- 10.6.3.2 Examples -- 10.6.4 Finding the Correct Hypothesis Test -- 10.7 Permutation Tests -- 10.8 Understanding versus Applying Hypothesis Tests -- 10.9 Historical Notes and Misinterpretations -- 10.10 Summary -- 10.11 Exercises -- 11 Linear Regression Models -- 11.1 Introduction -- 11.1.1 What Is Linear Regression?.
11.1.2 Motivating Example -- 11.2 Simple Linear Regression -- 11.2.1 Ordinary Least Squares Estimation of Coefficients -- 11.2.2 Variability of the Coefficients -- 11.2.3 Testing the Necessity of Coefficients -- 11.2.4 Assessing the Quality of a Fit -- 11.3 Preprocessing -- 11.4 Multiple Linear Regression -- 11.4.1 Testing the Necessity of Coefficients -- 11.4.2 Assessing the Quality of a Fit -- 11.5 Diagnosing Linear Models -- 11.5.1 Error Assumptions -- 11.5.2 Linearity Assumption of the Model -- 11.5.3 Leverage Points -- 11.5.4 Outliers -- 11.5.5 Collinearity -- 11.5.6 Discussion -- 11.6 Advanced Topics -- 11.6.1 Interactions -- 11.6.2 Nonlinearities -- 11.6.3 Categorical Predictors -- 11.6.4 Generalized Linear Models -- 11.6.4.1 How to Determine Which Family to Use When Fitting a GLM -- 11.6.4.2 Advantages of GLMs over Traditional OLS Regression -- 11.6.4.3 Example: Poisson Regression -- 11.6.4.4 Example: Logistic Regression -- 11.7 Summary -- 11.8 Exercises -- 12 Model Selection -- 12.1 Introduction -- 12.2 Difference Between Model Selection and Model Assessment -- 12.3 General Approach to Model Selection -- 12.4 Model Selection for Multiple Linear Regression Models -- 12.4.1 R2 and Adjusted R2 -- 12.4.2 Mallow's Cp Statistic -- 12.4.3 Akaike's Information Criterion (AIC) and Schwarz's BIC -- 12.4.4 Best Subset Selection -- 12.4.5 Stepwise Selection -- 12.4.5.1 Forward Stepwise Selection -- 12.4.5.2 Backward Stepwise Selection -- 12.5 Model Selection for Generalized Linear Models -- 12.5.1 Negative Binomial Regression Model -- 12.5.2 Zero-Inflated Poisson Model -- 12.5.3 Quasi-Poisson Model -- 12.5.4 Comparison of GLMs -- 12.6 Model Selection for Bayesian Models -- 12.7 Nonparametric Model Selection for General Models with Resampling -- 12.8 Summary -- 12.9 Exercises -- Part III Advanced Topics -- 13 Regularization -- 13.1 Introduction.
13.2 Preliminaries.
Record no. UNINA-9910746976103321
Emmert-Streib Frank
Cham, Switzerland : Springer, [2023]
Printed material
Available at: Univ. Federico II
Graph polynomials / / [edited by] Yongtang Shi, Matthias Dehmer, Xueliang Li, Ivan Gutman
Publication/distribution Boca Raton : CRC Press, [2017]
Physical description 1 online resource (262 pages) : illustrations
Classification 511/.5
Series Discrete Mathematics and Its Applications
Topical subject Graph theory
Combinatorial analysis
Polynomials
ISBN 1-315-35096-3
1-315-36799-8
1-4987-5591-7
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note 1. The Interlace Polynomial / Ada Morse -- 2. Independence Polynomials of k-Trees and Compound Graphs / William Staton and Bing Wei -- 3. New Aspects of the Abelian Sandpile Model on Graphs and Their Polynomials / Mark Dukes and Yvan Le Borgne -- 4. Second Quantization of Recurrences / Philip Feinsilver and John P. McSorley -- 5. A Survey on the Matching Polynomial / Ivan Gutman -- 6. On the Permanental Polynomials of Graphs / Wei Li, Shunyi Liu, Tingzeng Wu, and Heping Zhang -- 7. From the Ising and Potts Models to the General Graph Homomorphism Polynomial / Klas Markstrom -- 8. Derivatives and Real Roots of Graph Polynomials / Xueliang Li and Yongtang Shi -- 9. Logic-Based Computation of Graph Polynomials / Tomer Kotek -- 10. Alliance Polynomial / Walter Carballosa, Jose M. Rodriguez, Jose M. Sigarreta, and Yadira Torres-Nuez -- 11. Graph Polynomials and Set Functions / Bodo Lass.
Record no. UNINA-9910153182003321
Printed material
Available at: Univ. Federico II
Mathematical foundations and applications of graph entropy Edited by Matthias Dehmer [and four others]
Publication/distribution Weinheim, [Germany] : Wiley-VCH Verlag GmbH & Co. KGaA, 2016
Physical description 1 online resource (299 p.)
Classification 511.5
Series Quantitative and Network Biology
Topical subject Graph theory - Data processing
Genre/form subject Electronic books.
ISBN 3-527-69325-4
3-527-69322-X
3-527-69324-6
Format Printed material
Bibliographic level Monograph
Language of publication eng
Contents note Cover; Title Page; Copyright; Contents; List of Contributors; Preface; Chapter 1 Entropy and Renormalization in Chaotic Visibility Graphs; 1.1 Mapping Time Series to Networks; 1.1.1 Natural and Horizontal Visibility Algorithms; 1.1.2 A Brief Overview of Some Initial Applications; 1.1.2.1 Seismicity; 1.1.2.2 Hurricanes; 1.1.2.3 Turbulence; 1.1.2.4 Financial Applications; 1.1.2.5 Physiology; 1.2 Visibility Graphs and Entropy; 1.2.1 Definitions of Entropy in Visibility Graphs; 1.2.2 Pesin Theorem in Visibility Graphs; 1.2.3 Graph Entropy Optimization and Critical Points
1.3 Renormalization Group Transformations of Horizontal Visibility Graphs; 1.3.1 Tangent Bifurcation; 1.3.2 Period-Doubling Accumulation Point; 1.3.3 Quasi-Periodicity; 1.3.4 Entropy Extrema and RG Transformation; 1.3.4.1 Intermittency; 1.3.4.2 Period Doubling; 1.3.4.3 Quasi-periodicity; 1.4 Summary; 1.5 Acknowledgments; References; Chapter 2 Generalized Entropies of Complex and Random Networks; 2.1 Introduction; 2.2 Generalized Entropies; 2.3 Entropy of Networks: Definition and Properties; 2.4 Application of Generalized Entropy for Network Analysis; 2.5 Open Networks; 2.6 Summary; References
Chapter 3 Information Flow and Entropy Production on Bayesian Networks; 3.1 Introduction; 3.1.1 Background; 3.1.2 Basic Ideas of Information Thermodynamics; 3.1.3 Outline of this Chapter; 3.2 Brief Review of Information Contents; 3.2.1 Shannon Entropy; 3.2.2 Relative Entropy; 3.2.3 Mutual Information; 3.2.4 Transfer Entropy; 3.3 Stochastic Thermodynamics for Markovian Dynamics; 3.3.1 Setup; 3.3.2 Energetics; 3.3.3 Entropy Production and Fluctuation Theorem; 3.4 Bayesian Networks; 3.5 Information Thermodynamics on Bayesian Networks; 3.5.1 Setup; 3.5.2 Information Contents on Bayesian Networks
3.5.3 Entropy Production; 3.5.4 Generalized Second Law; 3.6 Examples; 3.6.1 Example 1: Markov Chain; 3.6.2 Example 2: Feedback Control with a Single Measurement; 3.6.3 Example 3: Repeated Feedback Control with Multiple Measurements; 3.6.4 Example 4: Markovian Information Exchanges; 3.6.5 Example 5: Complex Dynamics; 3.7 Summary and Prospects; References; Chapter 4 Entropy, Counting, and Fractional Chromatic Number; 4.1 Entropy of a Random Variable; 4.2 Relative Entropy and Mutual Information; 4.3 Entropy and Counting; 4.4 Graph Entropy; 4.5 Entropy of a Convex Corner; 4.6 Entropy of a Graph
4.7 Basic Properties of Graph Entropy; 4.8 Entropy of Some Special Graphs; 4.9 Graph Entropy and Fractional Chromatic Number; 4.10 Symmetric Graphs with respect to Graph Entropy; 4.11 Conclusion; Appendix 4.A; References; Chapter 5 Graph Entropy: Recent Results and Perspectives; 5.1 Introduction; 5.2 Inequalities and Extremal Properties on (Generalized) Graph Entropies; 5.2.1 Inequalities for Classical Graph Entropies and Parametric Measures; 5.2.2 Graph Entropy Inequalities with Information Functions fV, fP and fC; 5.2.3 Information Theoretic Measures of UHG Graphs
5.2.4 Bounds for the Entropies of Rooted Trees and Generalized Trees
Record no. UNINA-9910134854503321
Printed material
Available at: Univ. Federico II