999 results for Empirical minimization


Relevance:

100.00%

Publisher:

Abstract:

We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure on the class with the original one, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand’s concentration inequality for empirical processes.
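
To fix notation (ours, not the paper's): write $Pf = \mathbb{E} f(X)$ for the true mean of a function $f$ in the class $F$ and $P_n f = \frac{1}{n}\sum_{i=1}^n f(X_i)$ for its empirical mean over a sample of size $n$; the empirical minimization algorithm returns

\[
\hat{f}_n \in \operatorname*{argmin}_{f \in F} P_n f .
\]

An additive comparison of the empirical and original structures controls $\sup_{f \in F} |P_n f - Pf|$ via the uniform law of large numbers, while a multiplicative (isomorphic) comparison asserts that, with high probability, $\tfrac{1}{2} Pf \le P_n f \le 2 Pf$ (say) for every $f \in F$ with $Pf$ above a threshold $\rho_n$.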

Relevance:

70.00%

Publisher:

Abstract:

We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data-dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) with sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example showing that a sharp bound cannot be universally recovered from empirical data.
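
In the same notation as above (ours, for orientation), the expectation of the empirical minimizer is

\[
\mathbb{E}\big[ P \hat{f}_n \big], \qquad \hat{f}_n \in \operatorname*{argmin}_{f \in F} P_n f ,
\]

the expected true mean of the function returned by the algorithm; the second result exhibits a class for which this quantity decreases as $O(1/n)$ while any upper bound assembled from the structural, sample-average quantities alone remains $\Omega(1)$.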

Relevance:

60.00%

Publisher:

Abstract:

We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimum, over the player's actions, of the expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
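
A schematic rendering of the identity described above, in our notation and with the player's action set written as $A$ and the loss as $\ell$, is

\[
\mathcal{R}_n = \sup_{P}\; \mathbb{E}_{(x_1,\dots,x_n) \sim P} \left[ \sum_{t=1}^{n} \inf_{a \in A} \mathbb{E}\big[\ell(a, x_t) \mid x_1,\dots,x_{t-1}\big] \;-\; \inf_{a \in A} \sum_{t=1}^{n} \ell(a, x_t) \right],
\]

where the supremum runs over joint distributions $P$ of the adversary's action sequence: the first term is the sum of minimal conditional expected losses and the second is the minimal empirical loss.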

Relevance:

60.00%

Publisher:

Abstract:

A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. In this paper we tackle the problem of adaptivity to this condition in the context of model selection, in a general learning framework. In fact, we consider a weaker version of this condition that takes into account that learning within a small model can be much easier than learning within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that depend only on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it does not exhibit strong margin adaptivity.
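
For binary classification with regression function $\eta(x) = \mathbb{P}(Y = 1 \mid X = x)$, one standard formulation of the Mammen-Tsybakov margin condition (stated here for orientation; the paper works in a more general framework) is

\[
\mathbb{P}\big( 0 < |\eta(X) - \tfrac{1}{2}| \le t \big) \le C\, t^{\alpha} \qquad \text{for all } t > 0,
\]

for constants $C > 0$ and $\alpha \ge 0$: the larger $\alpha$ is, the less probability mass sits near the decision boundary, and the faster the achievable learning rates.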

Relevance:

30.00%

Publisher:

Abstract:

In many applications, the training data from which one needs to learn a classifier is corrupted with label noise. Many standard algorithms, such as the SVM, perform poorly in the presence of label noise. In this paper we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for risk minimization under that loss to be tolerant to uniform label noise. We show that the 0-1 loss, sigmoid loss, ramp loss and probit loss satisfy this condition, though none of the standard convex loss functions does. We also prove that, by choosing a sufficiently large value of a parameter in the loss function, the sigmoid loss, ramp loss and probit loss can also be made tolerant to non-uniform label noise, provided the classes are separable under the noise-free data distribution. Through extensive empirical studies, we show that risk minimization under the 0-1 loss, the sigmoid loss and the ramp loss is much more robust to label noise than the SVM algorithm.
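
The sufficient condition in question is, up to notation, a symmetry property of the loss; a commonly stated form (see the paper for the precise constants and assumptions) is

\[
\ell(f(x), 1) + \ell(f(x), -1) = K \qquad \text{for all } x \text{ and all classifiers } f,
\]

for some constant $K$. With the usual normalizations, the 0-1, sigmoid, ramp and probit losses satisfy such a symmetry, whereas convex surrogates such as the hinge, logistic and exponential losses do not.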

Relevance:

30.00%

Publisher:

Abstract:

How to refine a near-native structure to make it closer to its native conformation is an unsolved problem in protein-structure and protein-protein complex-structure prediction. In this article, we first test several scoring functions for selecting locally resampled near-native protein-protein docking conformations and then propose a computationally efficient protocol for structure refinement via local resampling and energy minimization. The proposed method employs a statistical energy function based on a Distance-scaled Ideal-gas REference state (DFIRE) as an initial filter and an empirical energy function, EMPIRE (EMpirical Protein-InteRaction Energy), for optimization and re-ranking. A significant improvement of the final top-1 ranked structures over the initial near-native structures is observed in the ZDOCK 2.3 decoy set for Benchmark 1.0 (the global RMSD of 74% of the structures is reduced by 0.5 angstrom or more, while only 7% increase by 0.5 angstrom or more). The improvement is less pronounced for Benchmark 2.0 (38% versus 33%). Possible reasons are discussed.
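
The two-stage protocol lends itself to a short pseudocode summary. The sketch below is purely illustrative: the callables statistical_filter (DFIRE-like), empirical_energy (EMPIRE-like) and minimize are hypothetical placeholders, not an API from the paper or from any existing package.

```python
def refine_near_native(candidates, statistical_filter, empirical_energy,
                       minimize, n_keep=100):
    """Illustrative two-stage refinement of locally resampled conformations.

    candidates         -- locally resampled near-native conformations
    statistical_filter -- DFIRE-like statistical energy used as initial filter
    empirical_energy   -- EMPIRE-like empirical interaction energy
    minimize           -- routine that relaxes a conformation under an energy
    """
    # Stage 1: rank all candidates by the statistical potential and keep
    # only the best-scoring ones as an initial filter.
    filtered = sorted(candidates, key=statistical_filter)[:n_keep]

    # Stage 2: minimize the empirical interaction energy for each survivor,
    # then re-rank by the final energy and return the top-1 structure.
    refined = [minimize(conf, empirical_energy) for conf in filtered]
    return min(refined, key=empirical_energy)
```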

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a discrete mixture model which assigns individuals, up to a probability, to either a class of random utility (RU) maximizers or a class of random regret (RR) minimizers, on the basis of their sequence of observed choices. Our proposed model advances the state of the art of RU-RR mixture models by (i) adding, and simultaneously estimating, a membership model which predicts the probability of belonging to the RU or RR class; (ii) adding a layer of random taste heterogeneity within each behavioural class; and (iii) deriving a welfare measure associated with the RU-RR mixture model that is consistent with referendum voting, the adequate provision mechanism for such local public goods. The context of our empirical application is a stated choice experiment concerning traffic calming schemes. We find that the random parameter RU-RR mixture model not only outperforms its fixed-coefficient counterpart in terms of fit, as expected, but also in terms of the plausibility of the membership determinants of behavioural class. In line with psychological theories of regret, we find that, compared to respondents who are familiar with the choice context (i.e. the traffic calming scheme), unfamiliar respondents are more likely to be regret minimizers than utility maximizers.
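
In schematic notation (ours, not the authors' exact specification), the probability of an individual's observed choice sequence $y_1,\dots,y_T$ under the proposed model is

\[
P(y_1,\dots,y_T \mid z) = \pi_{RU}(z) \int \prod_{t=1}^{T} P_{RU}(y_t \mid \beta)\, dF_{RU}(\beta) \;+\; \big(1 - \pi_{RU}(z)\big) \int \prod_{t=1}^{T} P_{RR}(y_t \mid \beta)\, dF_{RR}(\beta),
\]

where $\pi_{RU}(z)$ is the class-membership probability predicted from covariates $z$ by the membership model, $P_{RU}$ and $P_{RR}$ are the choice probabilities of the utility-maximizing and regret-minimizing classes, and the mixing distributions $F_{RU}$, $F_{RR}$ carry the random taste heterogeneity within each class.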

Relevance:

30.00%

Publisher:

Abstract:

The emergence of the global ecological crisis is presenting unique opportunities for the coordination of ethical thinking across cultural boundaries. Harm minimization as an ethical imperative operates as the ‘modus operandi’ behind both Ecologically Sustainable Design (ESD) and Buddhist practice. The architectural response to ESD is founded upon the ‘Declaration of Interdependence for a Sustainable Future’ adopted in 1993 by the International Union of Architects, of which the RAIA is a member.

Buddhism is a response to existential concerns universal to humanity. It developed as a set of principles for personal transformation, known as the Four Noble Truths, elucidated two and a half thousand years ago. Buddhist meditation practice ‘interrupts automatic patterns of conditioned behaviour’, recognised as the major obstacle to be overcome in any programme for change. Unsustainable egocentric behaviour is considered fundamental to our global ecological crisis, and calls for radical behavioural change are increasingly being heard at the professional as well as the personal level. Emerging synergies between the Western cognitive sciences and the Buddhist study of the mind increasingly validate the Tibetan Buddhist phenomenon of mind development. Buddhists argue that their programme for enhancing ethical behaviour through mind development is a step-by-step process of observation and analysis built upon empirical observation, a fundamental prerequisite of any ‘scientific’ enquiry. Collaborative research programmes currently underway attempt to re-interpret Buddhist meditation techniques within a framework acceptable to Western scientific understanding. A truly holistic approach to harm minimization requires consideration of this Buddhist perspective.

Relevance:

30.00%

Publisher:

Abstract:

To be diagnostically effective, structural magnetic resonance imaging (sMRI) must reliably distinguish a depressed individual from a healthy individual at the level of individual scans. One of the tasks in the automated diagnosis of depression from brain sMRI is classification: determining the class to which a sample belongs (i.e., depressed/not depressed, remitted/not-remitted depression) based on the values of its features. Thus far, very little work has been reported on identifying a suitable classification algorithm for depression detection. In this paper, different types of classification algorithms are compared for effective diagnosis of depression. Ten independent classification schemes are applied and a comparative study is carried out. The algorithms are: Naïve Bayes, Support Vector Machines (SVM) with Radial Basis Function (RBF) kernel, SVM with sigmoid kernel, J48, Random Forest, Random Tree, Voting Feature Intervals (VFI), LogitBoost, Simple KMeans Classification Via Clustering (KMeans), and Classification Via Clustering with Expectation Maximization (EM). The performance of the algorithms is determined through a set of experiments on sMRI brain scans. An experimental procedure is developed to measure the performance of the tested algorithms, and a classification accuracy evaluation method is employed to evaluate and compare the performance of the examined classifiers.
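
As an illustration only: the algorithm names (J48, VFI, LogitBoost) suggest WEKA implementations, so the scikit-learn sketch below is an analogous comparison on placeholder data, not the authors' actual pipeline or feature set.

```python
# Illustrative comparison of classifiers on sMRI-derived features
# (analogous to, not identical with, the algorithms compared in the paper).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# X: one row of features per scan; y: 1 = depressed, 0 = healthy (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "SVM (RBF)": SVC(kernel="rbf"),
    "SVM (sigmoid)": SVC(kernel="sigmoid"),
    "Decision tree (C4.5-like)": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(n_estimators=100),
}

# Five-fold cross-validated accuracy for each classifier.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```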

Relevance:

20.00%

Publisher: