
Learning Theory [electronic resource] : 19th Annual Conference on Learning Theory, COLT 2006, Pittsburgh, PA, USA, June 22-25, 2006, Proceedings / edited by Hans Ulrich Simon, Gábor Lugosi




Title: Learning Theory [electronic resource] : 19th Annual Conference on Learning Theory, COLT 2006, Pittsburgh, PA, USA, June 22-25, 2006, Proceedings / edited by Hans Ulrich Simon, Gábor Lugosi
Publication: Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2006
Edition: 1st ed. 2006.
Physical description: 1 online resource (XII, 660 p.)
Discipline: 006.3/1
Topical subjects: Artificial intelligence
Computers
Algorithms
Mathematical logic
Artificial Intelligence
Computation by Abstract Devices
Algorithm Analysis and Problem Complexity
Mathematical Logic and Formal Languages
Secondary responsibility (person): Simon, Hans Ulrich
Lugosi, Gábor
General notes: Bibliographic Level Mode of Issuance: Monograph
Bibliography note: Includes bibliographical references and index.
Contents note: Invited Presentations -- Random Multivariate Search Trees -- On Learning and Logic -- Predictions as Statements and Decisions -- Clustering, Un-, and Semisupervised Learning -- A Sober Look at Clustering Stability -- PAC Learning Axis-Aligned Mixtures of Gaussians with No Separation Assumption -- Stable Transductive Learning -- Uniform Convergence of Adaptive Graph-Based Regularization -- Statistical Learning Theory -- The Rademacher Complexity of Linear Transformation Classes -- Function Classes That Approximate the Bayes Risk -- Functional Classification with Margin Conditions -- Significance and Recovery of Block Structures in Binary Matrices with Noise -- Regularized Learning and Kernel Methods -- Maximum Entropy Distribution Estimation with Generalized Regularization -- Unifying Divergence Minimization and Statistical Inference Via Convex Duality -- Mercer's Theorem, Feature Maps, and Smoothing -- Learning Bounds for Support Vector Machines with Learned Kernels -- Query Learning and Teaching -- On Optimal Learning Algorithms for Multiplicity Automata -- Exact Learning Composed Classes with a Small Number of Mistakes -- DNF Are Teachable in the Average Case -- Teaching Randomized Learners -- Inductive Inference -- Memory-Limited U-Shaped Learning -- On Learning Languages from Positive Data and a Limited Number of Short Counterexamples -- Learning Rational Stochastic Languages -- Parent Assignment Is Hard for the MDL, AIC, and NML Costs -- Learning Algorithms and Limitations on Learning -- Uniform-Distribution Learnability of Noisy Linear Threshold Functions with Restricted Focus of Attention -- Discriminative Learning Can Succeed Where Generative Learning Fails -- Improved Lower Bounds for Learning Intersections of Halfspaces -- Efficient Learning Algorithms Yield Circuit Lower Bounds -- Online Aggregation -- Optimal Oracle Inequality for Aggregation of Classifiers Under Low Noise Condition -- Aggregation and Sparsity Via ℓ1 Penalized Least Squares -- A Randomized Online Learning Algorithm for Better Variance Control -- Online Prediction and Reinforcement Learning I -- Online Learning with Variable Stage Duration -- Online Learning Meets Optimization in the Dual -- Online Tracking of Linear Subspaces -- Online Multitask Learning -- Online Prediction and Reinforcement Learning II -- The Shortest Path Problem Under Partial Monitoring -- Tracking the Best Hyperplane with a Simple Budget Perceptron -- Logarithmic Regret Algorithms for Online Convex Optimization -- Online Variance Minimization -- Online Prediction and Reinforcement Learning III -- Online Learning with Constraints -- Continuous Experts and the Binning Algorithm -- Competing with Wild Prediction Rules -- Learning Near-Optimal Policies with Bellman-Residual Minimization Based Fitted Policy Iteration and a Single Sample Path -- Other Approaches -- Ranking with a P-Norm Push -- Subset Ranking Using Regression -- Active Sampling for Multiple Output Identification -- Improving Random Projections Using Marginal Information -- Open Problems -- Efficient Algorithms for General Active Learning -- Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints.
Authorized title: Learning Theory
ISBN: 3-540-35296-1
Format: Print material
Bibliographic level: Monograph
Language of publication: English
Record no.: 996466132703316
Find it here: Univ. di Salerno
Series: Lecture Notes in Artificial Intelligence ; 4005