1.

Record Nr.

UNISA990003280270203316

Author

MACGREGOR, Malcolm H.

Title

The nature of the elementary particle / Malcolm H. MacGregor

Publication/distribution/printing

Berlin : Springer, copyr. 1978

Physical description

XXII, 482 p. : ill. ; 24 cm

Series

Lecture notes in physics ; 81

Discipline

539.72

Subjects

Elementary particles

Shelfmark

530 LNP (81) A

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

2.

Record Nr.

UNINA9910151959503321

Author

Chen Zhiyuan (Computer scientist)

Title

Lifelong machine learning / Zhiyuan Chen, Bing Liu

Publication/distribution/printing

[San Rafael, California] : Morgan & Claypool, 2017

ISBN

1-62705-877-X

Physical description

1 online resource (147 pages) : illustrations (some color)

Series

Synthesis lectures on artificial intelligence and machine learning, 1939-4616 ; #33

Discipline

006.31

Subjects

Machine learning

Language of publication

English

Format

Electronic resource

Bibliographic level

Monograph

General notes

Part of: Synthesis digital library of engineering and computer science.

Bibliography note

Includes bibliographical references (pages 111-125).

Contents note

1. Introduction -- 1.1 A brief history of lifelong learning -- 1.2 Definition of lifelong learning -- 1.3 Lifelong learning system architecture -- 1.4 Evaluation methodology -- 1.5 Role of big data in lifelong learning -- 1.6 Outline of the book --

2. Related learning paradigms -- 2.1 Transfer learning -- 2.1.1 Structural correspondence learning -- 2.1.2 Naïve Bayes transfer classifier -- 2.1.3 Deep learning in transfer learning -- 2.1.4 Difference from lifelong learning -- 2.2 Multi-task learning -- 2.2.1 Task relatedness in multi-task learning -- 2.2.2 GO-MTL: multi-task learning using latent basis -- 2.2.3 Deep learning in multi-task learning -- 2.2.4 Difference from lifelong learning -- 2.3 Online learning -- 2.3.1 Difference from lifelong learning -- 2.4 Reinforcement learning -- 2.4.1 Difference from lifelong learning -- 2.5 Summary --

3. Lifelong supervised learning -- 3.1 Definition and overview -- 3.2 Lifelong memory-based learning -- 3.2.1 Two memory-based learning methods -- 3.2.2 Learning a new representation for lifelong learning -- 3.3 Lifelong neural networks -- 3.3.1 MTL Net -- 3.3.2 Lifelong EBNN -- 3.4 Cumulative learning and self-motivated learning -- 3.4.1 Training a cumulative learning model -- 3.4.2 Testing a cumulative learning model -- 3.4.3 Open world learning for unseen class detection -- 3.5 ELLA: an efficient lifelong learning algorithm -- 3.5.1 Problem setting -- 3.5.2 Objective function -- 3.5.3 Dealing with the first inefficiency -- 3.5.4 Dealing with the second inefficiency -- 3.5.5 Active task selection -- 3.6 LSC: lifelong sentiment classification -- 3.6.1 Naïve Bayesian text classification -- 3.6.2 Basic ideas of LSC -- 3.6.3 LSC technique -- 3.7 Summary and evaluation datasets --

4. Lifelong unsupervised learning -- 4.1 Lifelong topic modeling -- 4.2 LTM: a lifelong topic model -- 4.2.1 LTM model -- 4.2.2 Topic knowledge mining -- 4.2.3 Incorporating past knowledge -- 4.2.4 Conditional distribution of Gibbs sampler -- 4.3 AMC: a lifelong topic model for small data -- 4.3.1 Overall algorithm of AMC -- 4.3.2 Mining must-link knowledge -- 4.3.3 Mining cannot-link knowledge -- 4.3.4 Extended Pólya Urn model -- 4.3.5 Sampling distributions in Gibbs sampler -- 4.4 Lifelong information extraction -- 4.4.1 Lifelong learning through recommendation -- 4.4.2 AER algorithm -- 4.4.3 Knowledge learning -- 4.4.4 Recommendation using past knowledge -- 4.5 Lifelong-RL: lifelong relaxation labeling -- 4.5.1 Relaxation labeling -- 4.5.2 Lifelong relaxation labeling -- 4.6 Summary and evaluation datasets --

5. Lifelong semi-supervised learning for information extraction -- 5.1 NELL: a never ending language learner -- 5.2 NELL architecture -- 5.3 Extractors and learning in NELL -- 5.4 Coupling constraints in NELL -- 5.5 Summary --

6. Lifelong reinforcement learning -- 6.1 Lifelong reinforcement learning through multiple environments -- 6.1.1 Acquiring and incorporating bias -- 6.2 Hierarchical Bayesian lifelong reinforcement learning -- 6.2.1 Motivation -- 6.2.2 Hierarchical Bayesian approach -- 6.2.3 MTRL algorithm -- 6.2.4 Updating hierarchical model parameters -- 6.2.5 Sampling an MDP -- 6.3 PG-ELLA: lifelong policy gradient reinforcement learning -- 6.3.1 Policy gradient reinforcement learning -- 6.3.2 Policy gradient lifelong learning setting -- 6.3.3 Objective function and optimization -- 6.3.4 Safe policy search for lifelong learning -- 6.3.5 Cross-domain lifelong reinforcement learning -- 6.4 Summary and evaluation datasets --

7. Conclusion and future directions -- Bibliography -- Authors' biographies.

Summary/abstract

Lifelong Machine Learning (or Lifelong Learning) is an advanced machine learning paradigm that learns continuously, accumulates the knowledge learned in previous tasks, and uses it to help future learning. In the process, the learner becomes more and more knowledgeable and effective at learning. This learning ability is one of the hallmarks of human intelligence. However, the current dominant machine learning paradigm learns in isolation: given a training dataset, it runs a machine learning algorithm on the dataset to produce a model. It makes no attempt to retain the learned knowledge and use it in future learning. Although this isolated learning paradigm has been very successful, it requires a large number of training examples, and is only suitable for well-defined and narrow tasks. In comparison, we humans can learn effectively with a few examples because we have accumulated so much knowledge in the past, which enables us to learn with little data or effort. Lifelong learning aims to achieve this capability. As statistical machine learning matures, it is time to make a major effort to break the isolated learning tradition and to study lifelong learning to bring machine learning to new heights. Applications such as intelligent assistants, chatbots, and physical robots that interact with humans and systems in real-life environments are also calling for such lifelong learning capabilities. Without the ability to accumulate the learned knowledge and use it to learn more knowledge incrementally, a system will probably never be truly intelligent. This book serves as an introductory text and survey to lifelong learning.
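
The contrast the abstract draws between isolated learning and knowledge accumulation can be illustrated with a minimal, purely hypothetical Python sketch; the class, method, and variable names below are illustrative assumptions and are not code or terminology taken from the book.

# Hypothetical sketch: isolated learning vs. a learner that retains knowledge across tasks.

class LifelongLearner:
    """Minimal illustration: each new task can draw on knowledge retained from earlier tasks."""

    def __init__(self):
        self.knowledge_base = []  # accumulated knowledge from past tasks

    def learn_task(self, task_name, data):
        prior = self._retrieve_relevant(task_name)   # use past knowledge to help the new task
        model = self._train(data, prior)             # placeholder for a real learning algorithm
        self.knowledge_base.append((task_name, model))  # retain what was learned for the future
        return model

    def _retrieve_relevant(self, task_name):
        # Naive retrieval: reuse everything learned so far.
        return [model for _, model in self.knowledge_base]

    def _train(self, data, prior):
        # Stand-in training step: record how much data and prior knowledge were available.
        return {"n_examples": len(data), "n_prior_models": len(prior)}


def isolated_learning(data):
    # The dominant paradigm described above: train on one dataset, produce one model,
    # retain nothing for the next task.
    return {"n_examples": len(data)}


learner = LifelongLearner()
print(learner.learn_task("task_1", [1, 2, 3]))  # no prior knowledge yet
print(learner.learn_task("task_2", [4, 5]))     # reuses knowledge retained from task_1
print(isolated_learning([6, 7, 8]))             # starts from scratch every time

The sketch only mirrors the architectural idea stated in the abstract (retain knowledge, reuse it on later tasks); the book's chapters 3-6 describe concrete algorithms (e.g., ELLA, LTM, AMC, NELL, PG-ELLA) that realize this idea in specific settings.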