Approximate Bayesian Inference
Basel: MDPI - Multidisciplinary Digital Publishing Institute, 2022. 1 electronic resource (508 p.)
ISBN: 3-0365-3789-9; 3-0365-3790-2
https://directory.doabooks.org/handle/20.500.12854/84560

Extremely popular for statistical inference, Bayesian methods are also becoming popular in machine learning and artificial intelligence problems. Bayesian estimators are often implemented by Monte Carlo methods, such as the Metropolis–Hastings algorithm or the Gibbs sampler. These algorithms target the exact posterior distribution. However, many modern models in statistics are simply too complex to use such methodologies. In machine learning, the volume of the data used in practice makes Monte Carlo methods too slow to be useful. On the other hand, these applications often do not require exact knowledge of the posterior. This has motivated the development of a new generation of algorithms that are fast enough to handle huge datasets but that often target only an approximation of the posterior. This book gathers 18 research papers written by Approximate Bayesian Inference specialists and provides an overview of recent advances in these algorithms. This includes optimization-based methods (such as variational approximations) and simulation-based methods (such as ABC or Monte Carlo algorithms). The theoretical aspects of Approximate Bayesian Inference are covered, specifically the PAC–Bayes bounds and regret analysis.
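To make the distinction concrete: the "exact" Monte Carlo methods the description mentions, such as Metropolis–Hastings, draw samples whose stationary distribution is the posterior itself. A minimal illustrative sketch (not taken from the book; the toy standard-normal target and all function names are assumptions for illustration):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings targeting the exact (unnormalised)
    posterior whose log-density is log_post."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)      # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, pi(prop) / pi(x))
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior: standard normal, known only up to a constant.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(draws) / len(draws)  # sample mean approaches the posterior mean, 0
```

As the blurb notes, each step here evaluates the full posterior density, which is what becomes prohibitive for complex models or very large datasets.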
Applications to challenging computational problems in astrophysics, finance, medical data analysis, and computer vision are also presented.

Subjects: Research & information: general (bicssc); Mathematics & science (bicssc)

Keywords: bifurcation; dynamical systems; Edward–Sokal coupling; mean-field; Kullback–Leibler divergence; variational inference; Bayesian statistics; machine learning; variational approximations; PAC-Bayes; expectation-propagation; Markov chain Monte Carlo; Langevin Monte Carlo; sequential Monte Carlo; Laplace approximations; approximate Bayesian computation; Gibbs posterior; MCMC; stochastic gradients; neural networks; Approximate Bayesian Computation; differential evolution; Markov kernels; discrete state space; ergodicity; Markov chain; probably approximately correct; variational Bayes; Bayesian inference; Markov Chain Monte Carlo; Sequential Monte Carlo; Riemann Manifold Hamiltonian Monte Carlo; integrated nested Laplace approximation; fixed-form variational Bayes; stochastic volatility; network modeling; network variability; Stiefel manifold; MCMC-SAEM; data imputation; Bethe free energy; factor graphs; message passing; variational free energy; variational message passing; approximate Bayesian computation (ABC); differential privacy (DP); sparse vector technique (SVT); Gaussian; particle flow; variable flow; Langevin dynamics; Hamilton Monte Carlo; non-reversible dynamics; control variates; thinning; meta-learning; hyperparameters; priors; online learning; online optimization; gradient descent; statistical learning theory; PAC–Bayes theory; deep learning; generalisation bounds; Bayesian sampling; Monte Carlo integration; PAC-Bayes theory; no free lunch theorems; sequential learning; principal curves; data streams; regret bounds; greedy algorithms; sleeping experts; entropy; robustness; statistical mechanics; complex systems

Editor: Alquier, Pierre
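Among the simulation-based approximate methods listed in the keywords, approximate Bayesian computation (ABC) is the simplest to state: draw parameters from the prior, simulate data, and keep only draws whose simulated summary statistic falls within a tolerance of the observed one. A minimal rejection-ABC sketch (an illustration under assumed toy choices of prior, model, and summary, not an algorithm reproduced from the book):

```python
import random

def abc_rejection(observed, prior_sample, simulate, eps, n_accept, seed=1):
    """Rejection ABC: accepted prior draws approximate the posterior
    p(theta | summary within eps of the observed summary)."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)        # draw a parameter from the prior
        sim = simulate(theta, rng)       # simulate a data summary given theta
        if abs(sim - observed) < eps:    # compare summary statistics
            accepted.append(theta)
    return accepted

# Toy model: theta ~ N(0, 5^2); summary is the mean of 50 draws from N(theta, 1).
def prior(rng):
    return rng.gauss(0.0, 5.0)

def simulate(theta, rng):
    return sum(rng.gauss(theta, 1.0) for _ in range(50)) / 50

post = abc_rejection(observed=2.0, prior_sample=prior,
                     simulate=simulate, eps=0.2, n_accept=200)
est = sum(post) / len(post)  # concentrates near the observed mean, 2.0
```

Note that the likelihood is never evaluated, only simulated from, which is exactly why ABC targets an approximation of the posterior (controlled by eps) rather than the posterior itself.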