
Record No.

UNINA9910845485003321

Author

Swinbank Richard

Title

Azure Data Factory by Example [electronic resource] : Practical Implementation for Data Engineers / by Richard Swinbank

Publication/distribution

Berkeley, CA : Apress : Imprint: Apress, 2024

ISBN

979-88-6880-218-8

Edition

[2nd ed. 2024.]

Physical description

1 online resource (XXIII, 421 p. 206 illus.)

Discipline

005.268

Subjects

Microsoft software

Microsoft .NET Framework

Database management

Microsoft

Database Management

Language of publication

English

Format

Electronic resource

Bibliographic level

Monograph

Contents note

1. Creating an Azure Data Factory Instance -- 2. Your First Pipeline -- 3. The Copy Data Activity -- 4. Expressions -- 5. Parameters -- 6. Controlling Flow -- 7. Data Flows -- 8. Integration Runtimes -- 9. Power Query in ADF -- 10. Publishing to ADF -- 11. Triggers -- 12. Change Monitoring -- 13. Tools and Other Services.

Summary/abstract

Data engineers who need to hit the ground running will use this book to build skills in Azure Data Factory v2 (ADF). The tutorial-first approach to ADF taken in this book gets you working from the first chapter, explaining key ideas naturally as you encounter them. From creating your first data factory to building complex, metadata-driven nested pipelines, the book guides you through essential concepts in Microsoft’s cloud-based ETL/ELT platform. It introduces components indispensable for the movement and transformation of data in the cloud. Then it demonstrates the tools necessary to orchestrate, monitor, and manage those components. This edition, updated for 2024, includes the latest developments to the Azure Data Factory service: enhancements to existing pipeline activities such as Execute Pipeline, along with the introduction of new activities such as Script, and activities designed specifically to interact with Azure Synapse Analytics; improvements to flow control provided by activity deactivation and the Fail activity; the introduction of reusable data flow components such as user-defined functions and flowlets; extensions to integration runtime capabilities, including Managed VNet support; the ability to trigger pipelines in response to custom events; and tools for implementing boilerplate processes such as change data capture and metadata-driven data copying.