
Record no.

UNISA996547955203316

Author

Wu Hao

Title

Dynamic network representation based on latent factorization of tensors / Hao Wu, Xuke Wu, Xin Luo

Publication/distribution

Singapore : Springer, [2023]

©2023

ISBN

981-19-8934-6

Edition

[1st ed. 2023.]

Physical description

1 online resource (89 pages)

Series

SpringerBriefs in computer science

Discipline

515.63

Subjects

Calculus of tensors

Language of publication

English

Format

Printed material

Bibliographic level

Monograph

Bibliography note

Includes bibliographical references and index.

Contents note

Chapter 1 Introduction -- Chapter 2 Multiple Biases-Incorporated Latent Factorization of Tensors -- Chapter 3 PID-Incorporated Latent Factorization of Tensors -- Chapter 4 Diverse Biases Nonnegative Latent Factorization of Tensors -- Chapter 5 ADMM-Based Nonnegative Latent Factorization of Tensors -- Chapter 6 Perspectives and Conclusion.

Summary/abstract

A dynamic network is frequently encountered in various real industrial applications, such as the Internet of Things. It is composed of numerous nodes and large-scale dynamic real-time interactions among them, where each node indicates a specified entity, each directed link indicates a real-time interaction, and the strength of an interaction can be quantified as the weight of a link. As the number of involved nodes increases drastically, it becomes impossible to observe their full interactions at each time slot, making the resultant dynamic network High-Dimensional and Incomplete (HDI). Despite its HDI nature, an HDI dynamic network with directed and weighted links contains rich knowledge regarding the involved nodes' various behavior patterns. Therefore, it is essential to study how to build efficient and effective representation learning models for acquiring useful knowledge. In this book, we first model a dynamic network as an HDI tensor and present the basic latent factorization of tensors (LFT) model. Then, we propose four representative LFT-based network representation methods. The first method integrates the short-time bias, long-time bias, and preprocessing bias to precisely represent the volatility of network data. The second method utilizes a proportional-integral-derivative (PID) controller to construct an adjusted instance error and thereby achieve a higher convergence rate. The third method considers the non-negativity of fluctuating network data by constraining the latent features to be non-negative and incorporating an extended linear bias. The fourth method adopts an alternating direction method of multipliers (ADMM) framework to build a learning model that represents dynamic networks with high accuracy and efficiency.
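As a rough illustration of the basic LFT idea summarized above, the following Python sketch factorizes a sparse (source node × target node × time slot) tensor from its observed entries only, using CP-style latent factors trained by stochastic gradient descent. It is a minimal sketch under assumed names and hyperparameters (lft_sgd, factor matrices S, D, T, rank, learning rate, regularization), not the implementation proposed in the book.

```python
# Minimal sketch of latent factorization of tensors (LFT) for an HDI
# dynamic-network tensor. All names and hyperparameters are illustrative
# assumptions, not taken from the book.
import numpy as np

def lft_sgd(entries, shape, rank=4, lr=0.01, reg=0.05, epochs=500, seed=0):
    """entries: list of observed (i, j, k, y) link weights;
    shape: (I, J, K) = (#source nodes, #target nodes, #time slots)."""
    rng = np.random.default_rng(seed)
    I, J, K = shape
    S = rng.uniform(0.0, 0.1, (I, rank))   # source-node latent factors
    D = rng.uniform(0.0, 0.1, (J, rank))   # target-node latent factors
    T = rng.uniform(0.0, 0.1, (K, rank))   # time-slot latent factors
    for _ in range(epochs):
        for idx in rng.permutation(len(entries)):   # SGD over observed entries only
            i, j, k, y = entries[idx]
            pred = float(np.dot(S[i] * D[j], T[k])) # CP-style estimate of y_ijk
            e = y - pred                            # instance error
            s, d, t = S[i].copy(), D[j].copy(), T[k].copy()
            # Gradient steps with L2 regularization on the involved factors.
            S[i] += lr * (e * d * t - reg * s)
            D[j] += lr * (e * s * t - reg * d)
            T[k] += lr * (e * s * d - reg * t)
    return S, D, T

# Usage: estimate an unobserved interaction weight.
observed = [(0, 1, 0, 2.0), (1, 2, 0, 1.5), (0, 2, 1, 0.5), (2, 0, 1, 1.0)]
S, D, T = lft_sgd(observed, shape=(3, 3, 2))
print(float(np.dot(S[0] * D[2], T[0])))  # predicted weight of link 0 -> 2 at time slot 0
```

The book's four methods then refine this base model, for example with additional bias terms, a PID-adjusted instance error, non-negativity constraints, or ADMM-based learning.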