AWS Certified AI Practitioner Study Guide : Foundational (AIF-C01) Exam
Author Elango Vikram
Edition [1st ed.]
Publication/distribution Newark : John Wiley & Sons, Incorporated, 2025
Physical description 1 online resource (322 pages)
Classification 006.3
Other authors (Persons) Gangasani Vivek
Subramanian Shreyas
Series Sybex Study Guide Series
Topical subject Artificial intelligence - Examinations
ISBN 1-394-32821-4
1-394-40669-X
1-394-32820-6
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover -- Half Title Page -- Title Page -- Copyright -- Contents at a Glance -- Contents -- Introduction -- Assessment Test -- Answers to Assessment Test -- Part I: Introduction to AI and ML -- Chapter 1: Basic AI Concepts and Terminology -- A Brief History of AI -- Diving Deeper into Terms You Should Know -- The Learning Paradigms: Supervised, Unsupervised, and Reinforcement Learning -- When to Use the Different Types of Learning -- The Deep Learning Revolution -- Training: Learning Model Parameters Values -- Feature Engineering and Data Preprocessing -- Evolution of Specialized Architectures for Complex Data -- The Transformer Revolution -- Generative AI -- The Relationship Among AI, ML, and Deep Learning -- Hierarchical Relationship -- Artificial Intelligence -- Machine Learning -- Deep Learning -- Similarities Among AI, ML, and DL -- Differences Among AI, ML, and DL -- Understanding Data Types in AI Models -- Labeled vs. Unlabeled Data -- Structured Data -- Tabular Data -- Time-Series Data -- Log Data -- Unstructured Data -- Text Data -- Image Data -- Video Data -- Audio Data -- Making Predictions Using Trained Models -- Batch Inference -- Real-Time Inference -- Asynchronous Inference -- Summary -- Exam Essentials -- Review Questions -- Chapter 2: Basic Concepts of Generative AI -- A New Way to Interact with AI -- From Text to Numbers: Tokens, Chunking, and Embeddings -- The Transformer Architecture and Foundation Models -- From Embeddings to Attention -- Beyond Attention: Why Transformers Are So Powerful -- Beyond Text: Multi-modal Models -- Diffusion Models -- Key Use Cases for Multi-modal AI -- Prompt Engineering -- The Upsides and Downsides of Gen AI -- Summary -- Exam Essentials -- Review Questions -- Part II: Building AI Applications with AWS -- Chapter 3: Applications of AI and ML in Real-World Use Cases.
Key Trends in AI and ML Applications -- Automation of Repetitive Tasks -- Predictive Analytics and Forecasting -- Personalization -- Enhanced Decision-Making -- Cost Optimization -- Use Cases Unsuitable for AI and ML Applications -- Choosing the Right ML Techniques for Different Use Cases -- Regression -- Data Labeling Requirements for Regression -- Regression Metrics to Evaluate -- Mapping Domain Use Cases to Regression -- Classification -- Data Labeling Requirements for Classification -- Classification Metrics to Evaluate -- Mapping Domain Use Cases to Classification -- Clustering -- Mapping Domain Use Cases to Clustering -- Clustering Metrics to Evaluate -- Use Cases and Applications for Deep Learning Algorithms -- High-Level Approach for Deep Learning Workflows -- Computer Vision (CV) Use Cases -- Natural Language Processing (NLP) Use Cases -- Generative AI Use Cases -- Consumer-Focused Applications -- Enterprise Applications -- Summary -- Exam Essentials -- Review Questions -- Chapter 4: AWS AI and ML Services -- An Overview of AWS Managed AI and ML Services -- Amazon Generative AI Service -- Amazon Bedrock -- Choice of Models -- Bedrock Playground -- Agents for Amazon Bedrock -- Amazon Bedrock Knowledge Bases -- Amazon Bedrock Data Automation -- Model Customization -- Foundation Model Evaluation -- Guardrails -- Pricing -- Example Use Case for Amazon Bedrock -- Amazon Q -- Amazon Q for Business -- Amazon Q for Developers -- AWS PartyRock -- AWS AI Services -- Amazon Comprehend -- Amazon Textract -- Amazon Transcribe -- Amazon Translate -- AWS ML Services -- Amazon SageMaker AI -- Amazon SageMaker Studio -- SageMaker JumpStart -- SageMaker Canvas -- SageMaker Studio Lab -- Pricing -- Summary -- Exam Essentials -- Review Questions -- Part III: Common GenAI Patterns -- Chapter 5: Model Selection and Prompt Engineering.
Selecting the Right Foundation Model for Your Use Case -- Decision Framework for Selecting Foundation Models -- Model Cards and Documentation -- Modality Support and Integration -- Multilingual Capabilities -- Cost-Performance Ratio -- Latency, Infrastructure, and Scale -- Model Size and Performance Benchmarks -- The Effect of Inference Parameters on Model Responses -- Temperature -- Maximum Output Length -- Top-k Sampling -- Top-p (Nucleus) Sampling -- Stop Words and Stop Sequences -- Prompt Engineering -- Prompt Engineering Fundamentals -- Providing Clear Instructions -- Constraining Outputs -- Role-Playing -- Use Examples in Context -- Model-Specific Considerations -- Handling Long Context -- Thinking Step-by-Step -- Cost-Performance Trade-offs -- Understanding Risks and Limitations in Prompt Engineering -- Summary -- Exam Essentials -- Review Questions -- Chapter 6: Generative AI Applications with RAG and Agents -- Retrieval-Augmented Generation Workflow -- The Data Ingestion Phase -- The Retrieval and Generation Phase -- Amazon Bedrock Knowledge Bases -- Amazon Bedrock Data Automation -- How Amazon Knowledge Bases Work -- Knowledge Bases Data Ingestion Workflow -- Connecting to a Data Source -- Unstructured Document Parsing Strategy -- Chunking -- Embedding Model -- Vector Store -- Knowledge Base Retrieval and Generation Workflow -- Knowledge Base Retrieve API -- Knowledge Bases RetrieveAndGenerate API -- Guardrails with Knowledge Bases -- Evaluating RAG Workflows on Amazon Bedrock -- Amazon Bedrock Agents -- Amazon Bedrock Agents Components -- Agents in Action -- Preprocessing Prompt -- Orchestration Step -- Post-processing -- Multi-agent Collaboration -- Portfolio Assistant Agent -- Stock Data Researcher Agent -- Stock News Researcher Agent -- Financial Analyst Agent -- Summary -- Exam Essentials -- Review Questions.
Chapter 7: Model Customization and Evaluation -- Overview of Customization Techniques -- Pre-training Models: Building the Foundation -- Self-supervised Learning -- Data Selection and Collection -- Steps Involved in Pre-training a Model -- GPU Memory Considerations -- Floating-point Precision and Mixed-precision Training -- Distributed Training Frameworks -- Fine-tuning -- Continuous Pre-training -- PEFT and LoRA Fine-tuning -- AWS Services for Pre-training and Fine-tuning -- Amazon SageMaker AI -- SageMaker Training Jobs -- SageMaker HyperPod -- HyperPod Training Recipes -- Amazon Bedrock -- Data Processing -- Model Evaluation -- Model Evaluation for Foundation Models -- Evaluating Models for Business Objectives -- Summary -- Exam Essentials -- Review Questions -- Part IV: Bringing AI to Production -- Chapter 8: MLOps -- MLOps Phases -- Experimentation -- Repeatable Processes -- Building Scalable Systems -- Deploying to Production -- Model Monitoring -- MLOps Pipeline -- Data Collection -- Data Preprocessing -- Model Training -- Model Evaluation -- Model Deployment -- Model Monitoring -- Automating MLOps -- SageMaker Pipelines -- Fully Managed MLFlow on SageMaker -- AWS Step Functions for Orchestration -- Apache Airflow for Workflow Scheduling -- Continuous Integration and Continuous Delivery (CI/CD) -- Infrastructure as Code (IaC) for MLOps -- Integrating CI/CD and IaC with SageMaker Pipelines and Step Functions -- Reducing Technical Debt with CI/CD, Pipelines, and IaC -- SageMaker Inference -- Advanced Deployment Scenarios -- Inference Optimizations for Large Language Models -- Model Distillation -- Quantization -- Pruning -- Tensor and Expert Parallelism -- Speculative Decoding -- Flash Attention -- Paged Attention -- Summary -- Exam Essentials -- Review Questions -- Chapter 9: Implementing Responsible AI with AWS Services.
Key Principles of Responsible AI -- ML Governance with SageMaker AI -- Amazon SageMaker Model Cards -- Amazon SageMaker Model Registry -- Amazon SageMaker Pipelines and Experiments -- Amazon SageMaker Clarify -- Evaluating Foundation Models Using SageMaker Clarify -- Model Quality Checks -- Data Quality Checks -- SageMaker Model Monitoring -- Amazon Bedrock Guardrails -- The ApplyGuardrail API -- Guardrail Policies -- Amazon Bedrock Evaluations -- Automated Model Evaluation -- LLM-as-a-Judge Evaluations -- Bedrock Knowledge Bases and RAG Evaluations -- Summary -- Exam Essentials -- Review Questions -- Chapter 10: AI Security, Governance, and Compliance -- Security of AI Systems -- Adversarial Tactics Against AI Systems -- An Overview of Adversarial Techniques for AI -- Phase 1: Reconnaissance Techniques: Gathering Intel -- Phase 2: Resource Acquisition Techniques: Building the Toolkit -- Phase 3: Execution Techniques: Crafting and Launching Attacks -- Advanced Tactics: Exploiting Large Language Models (LLMs) -- Prompt Injection -- LLM Jailbreaking -- Defensive Strategies: Protecting Against Adversarial Techniques -- The Generative AI Security Scoping Matrix -- Scope 1: Consumer Applications Utilizing Public Generative AI Services -- Scope 2: Enterprise Applications Incorporating Generative AI Features -- Scope 3: Applications Built on Pre-Trained Models -- Scope 4: Fine-Tuned Models Customized with Organizational Data -- Scope 5: Self-Trained Models Developed from Scratch -- Data Governance Strategies -- Data Lifecycles in AI Systems -- Data Logging and Documentation -- Data Residency and Sovereignty Considerations -- Data Retention Policies for Generative AI Applications -- Understanding Data Retention in Generative AI -- Implementing Effective Data Retention Policies -- Data Retention on Amazon Bedrock -- Compliance and Regulatory Frameworks in AI.
Compliance Requirements for AI Systems on AWS.
Record no. UNINA-9911034867603321
Available at: Univ. Federico II
Azure Data Fundamentals Certification Companion : A Complete Guide to DP-900 Exam Success / by Naveen Kumar M
Author Kumar M Naveen
Edition [1st ed. 2025.]
Publication/distribution Berkeley, CA : Apress : Imprint: Apress, 2025
Physical description 1 online resource (173 pages)
Classification 006.3
Series Certification Study Companion Series
Topical subject Microsoft Azure (Computing platform) - Examinations
Artificial intelligence - Examinations
ISBN 979-88-6881-684-0
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Chapter 1: Exam Overview and Structure -- Chapter 2: Understanding Core Data Concepts -- Chapter 3: Working with Relational Data on Azure -- Chapter 4: Exploring Non-Relational Data on Azure -- Chapter 5: Analytics Workloads on Azure -- Chapter 6: Exam Preparation and Practice.
Record no. UNINA-9911020428603321
Available at: Univ. Federico II
IAPP AIGP Artificial Intelligence Governance Professional Study Guide
Author Gregory Peter H
Edition [1st ed.]
Publication/distribution Newark : John Wiley & Sons, Incorporated, 2026
Physical description 1 online resource (481 pages)
Series Sybex Study Guide Series
Topical subject Artificial intelligence - Examinations
Artificial intelligence - Law and legislation
Computer security - Examinations
Genre/form subject Study guides
ISBN 1-394-36395-8
1-394-36397-4
1-394-36396-6
Format Printed material
Bibliographic level Monograph
Publication language eng
Contents note Cover -- Half Title Page -- Title Page -- Copyright -- Acknowledgments -- About the Author -- About the Technical Editor -- Contents at a Glance -- Contents -- Introduction -- Assessment Test -- Answers to Assessment Test -- Part I: Foundations of AI and Governance -- Chapter 1: AI and AI Governance -- The Types of AI -- Definition of AI -- AI Capabilities -- AI Functionalities -- AI Techniques -- Risks and Harms Posed by AI -- Risks and Harms to Individuals -- Risks and Harms to Groups -- Risks and Harms to Organizations -- Risks and Harms to Society -- Characteristics of AI Requiring Governance -- Complexity -- Opacity -- Autonomy -- Speed and Scale -- Potential for Harm and Misuse -- Data Dependency -- Privacy -- Intellectual Property -- Probabilistic vs. Deterministic Outputs -- Principles of Responsible AI -- Ethics -- Challenges with Ethics -- Applications of Ethics -- Fairness -- Challenges with Fairness -- Applications of Fairness -- Safety and Reliability -- Challenges with Safety -- Applications of Safety and Reliability -- Privacy -- Challenges with Privacy -- Applications of Privacy -- Security -- Challenges with Security -- Applications of Security -- Transparency and Explainability -- Challenges with Transparency and Explainability -- Applications of Transparency and Explainability -- Accountability -- Challenges with Accountability -- Applications of Accountability -- Human-centricity -- Challenges with Human-centricity -- Applications of Human-centricity -- Summary -- Exam Essentials -- Review Questions -- Chapter 2: Organizational Readiness -- Roles and Responsibilities for AI Governance Stakeholders -- The Strategic Importance of Defined Governance Roles -- Stakeholder Categories and Their Governance Roles -- Executive Leadership -- AI Governance Board or Council -- Legal, Ethics, and Compliance Teams.
Technical and Engineering Teams -- Cybersecurity and Risk Management Teams -- Product and Business Unit Leaders -- End Users and Frontline Staff -- External Parties -- Assigning Responsibilities Across the AI Lifecycle -- Implementation Tools and Techniques -- Cross-functional Collaboration in the AI Governance Program -- Why Cross-functional Collaboration Is Critical -- Principles of Effective Cross-functional Governance -- Building the Collaborative Structure -- Working Groups and Advisory Boards -- Establish Cross-functional Use Case Reviews -- Collaboration Templates and Artifacts -- Fostering a Collaborative Culture -- External and Cross-organizational Collaboration -- Sustaining Collaboration Over Time -- Training and Awareness Program on AI Terminology, Strategy, and Governance -- Why Training and Awareness Are Foundational -- The Scope of the Training Program -- Curriculum Structure -- Example Structure and Content Areas -- Example Module Titles -- Delivering the Training -- Blended Learning -- Mandatory vs. Elective Content -- Regional and Cultural Considerations -- Maintaining Training Records -- Building Awareness Beyond Formal Training -- Tracking and Measuring Success -- Common Pitfalls and How to Avoid Them -- Sustaining the Program Over Time -- Tailoring AI Governance to Organizational Context -- Company Size: Scaling Governance to Organizational Footprint -- Small Companies and Startups (1-200 Employees) -- Mid-sized Organizations (200-5,000 Employees) -- Large Enterprises (5,000+ Employees) -- Organizational Maturity and AI Governance -- Maturity Models -- Low Maturity -- Moderate Maturity -- High Maturity -- Industry Sector-specific Governance Imperatives -- Regulated Industries -- Business- and Consumer-facing Technology -- Industrial and Manufacturing -- Public Sector and Nonprofits.
Products and Services: Governance Based on Use Case Impact -- Internal Use AI -- Decision Support AI -- Automated Decision-making -- Real-time/High-stakes AI -- Risk Tolerance: Calibrating Governance Based on Appetite for Uncertainty -- Low Risk Tolerance -- Medium Risk Tolerance -- High Risk Tolerance -- Business Alignment -- Innovation-driven Organizations -- Reputation-focused Organizations -- Cost-reduction or Efficiency-oriented Organizations -- Developers, Deployers, and Users in AI Governance -- Definitions and Role Boundaries -- AI Developers -- AI Deployers -- AI Users -- Responsibilities Across the AI Lifecycle -- Opportunities and Leverage Points -- Resource Needs and Expectations -- AI Developer -- AI Deployer -- AI User -- Governance Conflicts and Misalignments -- Developers vs. Users -- Developers vs. Deployers -- Users vs. Deployers -- Role-based Governance Controls -- Controls for Developers -- Controls for Deployers -- Controls for Users -- Role Coordination and Integration -- Summary -- Exam Essentials -- Review Questions -- Chapter 3: Updating Policies for AI -- Oversight in the Age of Autonomous Decision-making -- The Lifecycle Policy Model -- Why Lifecycle Oversight Matters -- Core Features of a Lifecycle-oriented Policy -- Use Case Assessment: Aligning Purpose and Risk -- Should AI Be Used at All? -- Risk Rating Frameworks -- Risk Management Tailored for AI -- Ethics by Design: From Principle to Practice -- Operationalizing AI Ethics -- Building Ethics into Technical Design -- Data Acquisition and Use -- Data Lineage and Documentation -- Consent, Privacy, and Synthetic Data -- Model Development: Guardrails for Creation -- Enforcing Reproducibility and Accountability -- Safe and Interpretable Model Architectures -- Training and Testing: Preparing for the Real World -- Rigorous Validation Standards.
Red Teaming and Adversarial Testing -- Test Results Are a Vital Gate -- Deployment and Monitoring: Guarding the Gate -- Pre-deployment Controls -- Post-deployment Monitoring -- Documentation and Reporting: Building Institutional Memory -- Living Documentation -- Internal and External Reporting -- Incident Management: Planning for Failure -- Defining and Escalating Incidents -- AI Playbooks Are Needed -- Root Cause and Remediation -- Evaluate and Update Existing Data Privacy and Security Policies for AI -- Why AI Disrupts Traditional Data Governance -- AI Is Data-hungry by Design -- Inference and Reidentification Risks -- Privacy Policy Gaps and AI-specific Threats -- Common Policy Shortfalls -- AI-specific Privacy Threats -- Key Policy Areas to Evaluate and Update -- Purpose Limitation in Dynamic Pipelines -- Enhanced Anonymization and Pseudonymization Standards -- Lifecycle-based Data Retention Controls -- Data Subject Rights in AI Contexts -- Right to Explanation and Access -- Right to Be Forgotten (Data Deletion) -- Security Policy Updates for AI Systems -- Expanding the Threat Model -- Securing the AI Supply Chain -- Practical Steps for Policy Revision -- Conduct a Gap Assessment -- Build Cross-functional Review Teams -- Policy Compliance Artifacts -- Policies to Manage Third-party Risk -- Why AI Multiplies Third-party Risk -- Beyond the Traditional Vendor Model -- Hidden Dependencies and Sub-tier Risks -- Core Policy Objectives for Third-party AI Risk Management -- Procurement Policy Controls for AI-enabled Solutions -- AI Risk Flagging at Procurement Intake -- AI-specific Due Diligence -- Vendor Risk Tiers -- Contracting and Legal Safeguards -- AI-specific Contractual Clauses -- Liability and Incident Handling -- AI Supply Chain Governance -- Mapping the AI Supply Chain -- Open-source AI Risk Policies -- Other Third-party AI Issues.
Contracting with Human Annotators and AI Workers -- Employee Use of Generative AI Tools -- Monitoring and Reviewing Third-party AI Risk -- Ongoing Vendor Oversight -- Triggers for Reassessment -- Summary -- Exam Essentials -- Review Questions -- Part II: Legal and Regulatory Obligations -- Chapter 4: Privacy and Data Protection Law -- Notice, Choice, Consent, and Purpose Limitation in AI -- Notice -- Unique Challenges in AI -- Choice and Consent -- Defining Consent and Its Variants -- Obstacles to Meaningful Consent in AI -- Purpose Limitation -- Purpose Creep and Reuse -- AI-specific Purpose Challenges -- Data Minimization and Privacy by Design in AI -- Data Minimization -- Data Minimization Across the AI Lifecycle -- Balancing Utility and Minimization -- Privacy by Design (PbD) -- Seven Foundational Principles of Privacy by Design -- Implementing Privacy by Design in AI Development -- Aligning Privacy by Design with Organizational Roles -- Practical Tools and Frameworks -- Practical Implications and Governance -- Operationalizing Privacy Principles in AI -- Governance Touchpoints Across the AI Lifecycle -- Privacy Roles and Responsibilities -- Documentation and Auditability -- Scaling Governance -- Data Controller Obligations in the AI Context -- Privacy Impact Assessments and Risk Management -- The Role of DPIAs in AI Development -- DPIA Lifecycle in AI Projects -- Common Pitfalls in AI DPIAs -- Aligning with Emerging Frameworks -- Using Third-party Processors in AI Projects -- Controller vs. Processor in the AI Context -- Contractual Requirements -- Processor Drift and Compliance Gaps -- Cross-border Data Transfers -- Global Data Flows in AI Systems -- AI-specific Data Transfer Challenges -- Transfer Impact Assessments (TIAs) -- Data Subject Rights and AI -- AI-specific Implementation Challenges -- Practical Solutions.
Incident Management and Breach Notification.
Record no. UNINA-9911058126103321
Available at: Univ. Federico II