Title: Approximate Bayesian Inference
Publication: Basel : MDPI - Multidisciplinary Digital Publishing Institute, 2022
Extent: 1 online resource (508 p.)
ISBN: 3-0365-3789-9; 3-0365-3790-2
Identifiers: (CKB)5720000000008426; (oapen)doab84560; (EXLCZ)995720000000008426
Open access: https://directory.doabooks.org/handle/20.500.12854/84560
Language: English

Abstract: Extremely popular for statistical inference, Bayesian methods are also becoming popular in machine learning and artificial intelligence. Bayesian estimators are often implemented by Monte Carlo methods, such as the Metropolis-Hastings algorithm or the Gibbs sampler. These algorithms target the exact posterior distribution. However, many modern models in statistics are simply too complex for such methodologies. In machine learning, the volume of data used in practice makes Monte Carlo methods too slow to be useful. On the other hand, these applications often do not require exact knowledge of the posterior. This has motivated the development of a new generation of algorithms that are fast enough to handle huge datasets but that often target only an approximation of the posterior. This book gathers 18 research papers written by specialists in Approximate Bayesian Inference and provides an overview of recent advances in these algorithms. This includes optimization-based methods (such as variational approximations) and simulation-based methods (such as ABC or Monte Carlo algorithms). The theoretical aspects of Approximate Bayesian Inference are covered, specifically PAC-Bayes bounds and regret analysis.
Applications to challenging computational problems in astrophysics, finance, medical data analysis, and computer vision are also presented.

Subjects (BIC): Mathematics and Science; Research and information: general
Keywords: approximate Bayesian computation (ABC); Bayesian inference; Bayesian sampling; Bayesian statistics; Bethe free energy; bifurcation; complex systems; control variates; data imputation; data streams; deep learning; differential evolution; differential privacy (DP); discrete state space; dynamical systems; Edward-Sokal coupling; entropy; ergodicity; expectation-propagation; factor graphs; fixed-form variational Bayes; Gaussian; generalisation bounds; Gibbs posterior; gradient descent; greedy algorithm; Hamiltonian Monte Carlo; hyperparameters; integrated nested Laplace approximation; Kullback-Leibler divergence; Langevin dynamics; Langevin Monte Carlo; Laplace approximations; machine learning; Markov chain; Markov chain Monte Carlo; Markov kernels; MCMC; MCMC-SAEM; mean-field; message passing; meta-learning; Monte Carlo integration; network modeling; network variability; neural networks; no free lunch theorems; non-reversible dynamics; online learning; online optimization; PAC-Bayes; PAC-Bayes theory; particle flow; principal curves; priors; probably approximately correct; regret bounds; Riemann Manifold Hamiltonian Monte Carlo; robustness; sequential learning; sequential Monte Carlo; sleeping experts; sparse vector technique (SVT); statistical learning theory; statistical mechanics; Stiefel manifold; stochastic gradients; stochastic volatility; thinning; variable flow; variational approximations; variational Bayes; variational free energy; variational inference; variational message passing
Editor: Alquier, Pierre
Record ID: 9910576874903321 (UNINA)
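The abstract contrasts exact Monte Carlo methods with faster approximate ones. As a concrete illustration of the first kind, here is a minimal random-walk Metropolis-Hastings sampler in Python, applied to a toy standard-normal posterior. This sketch is not taken from the book; the function name `metropolis_hastings` and its parameters are our own for illustration.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=2.0, rng=None):
    """Random-walk Metropolis-Hastings targeting the density exp(log_post).

    Proposes Gaussian perturbations of the current state and accepts them
    with probability min(1, pi(x_new) / pi(x)), so the Markov chain's
    stationary distribution is the exact posterior.
    """
    rng = rng or np.random.default_rng(0)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + step * rng.standard_normal()
        lp_new = log_post(x_new)
        # Accept/reject on the log scale to avoid underflow.
        if np.log(rng.random()) < lp_new - lp:
            x, lp = x_new, lp_new
        samples.append(x)
    return np.array(samples)

# Toy posterior: standard normal, log pi(x) = -x^2/2 up to a constant.
draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=20000)
```

After discarding an initial burn-in, the empirical mean and standard deviation of `draws` approach 0 and 1, the moments of the target. The cost per step (one likelihood evaluation per proposal) is exactly what makes such samplers slow on large datasets, as the abstract notes.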
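Among the simulation-based approximate methods the book surveys, approximate Bayesian computation (ABC) replaces likelihood evaluation entirely: one keeps prior draws whose simulated data fall within a tolerance of the observed data. A hedged sketch with a toy Gaussian model; `abc_rejection` and its arguments are invented names, not the book's notation.

```python
import numpy as np

def abc_rejection(simulate, observed, prior_sample, eps, n_draws, rng=None):
    """Rejection ABC: draw theta from the prior, simulate data under theta,
    and accept theta when the simulated summary statistic lies within eps
    of the observed one. Targets an approximation of the posterior that
    improves as eps shrinks (at the cost of more rejections)."""
    rng = rng or np.random.default_rng(1)
    accepted = []
    while len(accepted) < n_draws:
        theta = prior_sample(rng)
        if abs(simulate(theta, rng) - observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy model: the summary statistic is the mean of 100 Normal(theta, 1)
# observations; the prior on theta is Uniform(-3, 3).
post = abc_rejection(
    simulate=lambda th, rng: rng.normal(th, 1.0, size=100).mean(),
    observed=0.3,
    prior_sample=lambda rng: rng.uniform(-3.0, 3.0),
    eps=0.05,
    n_draws=500,
)
```

The accepted draws concentrate around the observed value 0.3, but their spread is inflated by the tolerance `eps`: the sampler never touches the exact posterior, which is the trade-off the abstract describes.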