Record No.
UNINA9910767552603321

Title
Learning theory : 20th Annual Conference on Learning Theory, COLT 2007, San Diego, CA, USA, June 13-15, 2007 : proceedings / Nader H. Bshouty, Claudio Gentile (editors)

Publication/distribution/printing
Berlin, Germany : Springer, [2007]
©2007

ISBN
1-280-94078-6
9786610940783
3-540-72927-5

Edition
[1st ed. 2007.]

Physical description
1 online resource (644 p.)

Series
Lecture Notes in Artificial Intelligence ; 4539

Discipline

Subjects

Language of publication

Format
Printed material

Bibliographic level
Monograph

General notes
Description based upon print version of record.

Bibliography note
Includes bibliographical references and index.

Contents note
Invited Presentations -- Property Testing: A Learning Theory Perspective -- Spectral Algorithms for Learning and Clustering -- Unsupervised, Semisupervised and Active Learning I -- Minimax Bounds for Active Learning -- Stability of k-Means Clustering -- Margin Based Active Learning -- Unsupervised, Semisupervised and Active Learning II -- Learning Large-Alphabet and Analog Circuits with Value Injection Queries -- Teaching Dimension and the Complexity of Active Learning -- Multi-view Regression Via Canonical Correlation Analysis -- Statistical Learning Theory -- Aggregation by Exponential Weighting and Sharp Oracle Inequalities -- Occam’s Hammer -- Resampling-Based Confidence Regions and Multiple Tests for a Correlated Random Vector -- Suboptimality of Penalized Empirical Risk Minimization in Classification -- Transductive Rademacher Complexity and Its Applications -- Inductive Inference -- U-Shaped, Iterative, and Iterative-with-Counter Learning -- Mind Change Optimal Learning of Bayes Net Structure -- Learning Correction Grammars -- Mitotic Classes -- Online and Reinforcement Learning I -- Regret to the Best vs. Regret to the Average -- Strategies for Prediction Under Imperfect Monitoring -- Bounded Parameter Markov Decision Processes with Average Reward Criterion -- Online and Reinforcement Learning II -- On-Line Estimation with the Multivariate Gaussian Distribution -- Generalised Entropy and Asymptotic Complexities of Languages -- Q-Learning with Linear Function Approximation -- Regularized Learning, Kernel Methods, SVM -- How Good Is a Kernel When Used as a Similarity Measure? -- Gaps in Support Vector Optimization -- Learning Languages with Rational Kernels -- Generalized SMO-Style Decomposition Algorithms -- Learning Algorithms and Limitations on Learning -- Learning Nested Halfspaces and Uphill Decision Trees -- An Efficient Re-scaled Perceptron Algorithm for Conic Systems -- A Lower Bound for Agnostically Learning Disjunctions -- Sketching Information Divergences -- Competing with Stationary Prediction Strategies -- Online and Reinforcement Learning III -- Improved Rates for the Stochastic Continuum-Armed Bandit Problem -- Learning Permutations with Exponential Weights -- Online and Reinforcement Learning IV -- Multitask Learning with Expert Advice -- Online Learning with Prior Knowledge -- Dimensionality Reduction -- Nonlinear Estimators and Tail Bounds for Dimension Reduction in ℓ1 Using Cauchy Random Projections -- Sparse Density Estimation with ℓ1 Penalties -- ℓ1 Regularization in Infinite Dimensional Feature Spaces -- Prediction by Categorical Features: Generalization Properties and Application to Feature Ranking -- Other Approaches -- Observational Learning in Random Networks -- The Loss Rank Principle for Model Selection -- Robust Reductions from Ranking to Classification -- Open Problems -- Rademacher Margin Complexity -- Open Problems in Efficient Semi-supervised PAC Learning -- Resource-Bounded Information Gathering for Correlation Clustering -- Are There Local Maxima in the Infinite-Sample Likelihood of Gaussian Mixture Estimation? -- When Is There a Free Matrix Lunch?.