LEADER 05293nam 2200601 a 450
001 9910484279003321
005 20200520144314.0
010 $a3-540-35296-1
024 7 $a10.1007/11776420
035 $a(CKB)1000000000283911
035 $a(SSID)ssj0000318645
035 $a(PQKBManifestationID)11249848
035 $a(PQKBTitleCode)TC0000318645
035 $a(PQKBWorkID)10310455
035 $a(PQKB)11065307
035 $a(DE-He213)978-3-540-35296-9
035 $a(MiAaPQ)EBC3068083
035 $a(PPN)123135974
035 $a(EXLCZ)991000000000283911
100 $a20060510d2006 uy 0
101 0 $aeng
135 $aurnn|008mamaa
181 $ctxt
182 $cc
183 $acr
200 10$aLearning theory $e19th Annual Conference on Learning Theory, COLT 2006, Pittsburgh, PA, USA, June 22-25, 2006 : proceedings /$fGabor Lugosi, Hans Ulrich Simon (eds.)
205 $a1st ed. 2006.
210 $aBerlin ;$aNew York $cSpringer$dc2006
215 $a1 online resource (XII, 660 p.)
225 1 $aLecture notes in computer science. Lecture notes in artificial intelligence,$x0302-9743 ;$v4005
225 1 $aLNCS sublibrary. SL 7, Artificial intelligence
300 $aBibliographic Level Mode of Issuance: Monograph
311 $a3-540-35294-5
320 $aIncludes bibliographical references and index.
327 $aInvited Presentations -- Random Multivariate Search Trees -- On Learning and Logic -- Predictions as Statements and Decisions -- Clustering, Un-, and Semisupervised Learning -- A Sober Look at Clustering Stability -- PAC Learning Axis-Aligned Mixtures of Gaussians with No Separation Assumption -- Stable Transductive Learning -- Uniform Convergence of Adaptive Graph-Based Regularization -- Statistical Learning Theory -- The Rademacher Complexity of Linear Transformation Classes -- Function Classes That Approximate the Bayes Risk -- Functional Classification with Margin Conditions -- Significance and Recovery of Block Structures in Binary Matrices with Noise -- Regularized Learning and Kernel Methods -- Maximum Entropy Distribution Estimation with Generalized Regularization -- Unifying Divergence Minimization and Statistical Inference Via Convex Duality -- Mercer's Theorem, Feature Maps, and Smoothing -- Learning Bounds for Support Vector Machines with Learned Kernels -- Query Learning and Teaching -- On Optimal Learning Algorithms for Multiplicity Automata -- Exact Learning Composed Classes with a Small Number of Mistakes -- DNF Are Teachable in the Average Case -- Teaching Randomized Learners -- Inductive Inference -- Memory-Limited U-Shaped Learning -- On Learning Languages from Positive Data and a Limited Number of Short Counterexamples -- Learning Rational Stochastic Languages -- Parent Assignment Is Hard for the MDL, AIC, and NML Costs -- Learning Algorithms and Limitations on Learning -- Uniform-Distribution Learnability of Noisy Linear Threshold Functions with Restricted Focus of Attention -- Discriminative Learning Can Succeed Where Generative Learning Fails -- Improved Lower Bounds for Learning Intersections of Halfspaces -- Efficient Learning Algorithms Yield Circuit Lower Bounds -- Online Aggregation -- Optimal Oracle Inequality for Aggregation of Classifiers Under Low Noise Condition -- Aggregation and Sparsity Via ℓ1 Penalized Least Squares -- A Randomized Online Learning Algorithm for Better Variance Control -- Online Prediction and Reinforcement Learning I -- Online Learning with Variable Stage Duration -- Online Learning Meets Optimization in the Dual -- Online Tracking of Linear Subspaces -- Online Multitask Learning -- Online Prediction and Reinforcement Learning II -- The Shortest Path Problem Under Partial Monitoring -- Tracking the Best Hyperplane with a Simple Budget Perceptron -- Logarithmic Regret Algorithms for Online Convex Optimization -- Online Variance Minimization -- Online Prediction and Reinforcement Learning III -- Online Learning with Constraints -- Continuous Experts and the Binning Algorithm -- Competing with Wild Prediction Rules -- Learning Near-Optimal Policies with Bellman-Residual Minimization Based Fitted Policy Iteration and a Single Sample Path -- Other Approaches -- Ranking with a P-Norm Push -- Subset Ranking Using Regression -- Active Sampling for Multiple Output Identification -- Improving Random Projections Using Marginal Information -- Open Problems -- Efficient Algorithms for General Active Learning -- Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints?
410 0$aLecture notes in computer science.$pLecture notes in artificial intelligence ;$v4005.
410 0$aLNCS sublibrary.$nSL 7,$pArtificial intelligence.
517 3 $aNineteenth Annual Conference on Learning Theory
517 3 $aAnnual Conference on Learning Theory
517 3 $aCOLT 2006
606 $aMachine learning$vCongresses
615 0$aMachine learning
676 $a006.3/1
701 $aLugosi$b Gabor$0441761
701 $aSimon$b Hans-Ulrich$0394314
712 12$aConference on Learning Theory
801 0$bMiAaPQ
801 1$bMiAaPQ
801 2$bMiAaPQ
906 $aBOOK
912 $a9910484279003321
996 $aLearning theory$94186437
997 $aUNINA