36 results for Penalization
Abstract:
A new approach that can easily incorporate any generic penalty function into diffuse optical tomographic image reconstruction is introduced to show the utility of nonquadratic penalty functions. The penalty functions used include quadratic (ℓ2), absolute (ℓ1), Cauchy, and Geman-McClure. The regularization parameter in each case was obtained automatically using the generalized cross-validation method. The reconstruction results were systematically compared with each other via quantitative metrics such as relative error and Pearson correlation. The results indicate that, while the quadratic penalty may provide better separation between two closely spaced targets, its contrast recovery capability is limited, and the sparseness-promoting penalties, such as ℓ1, Cauchy, and Geman-McClure, have better utility in reconstructing high-contrast and complex-shaped targets, with the Geman-McClure penalty performing best. (C) 2013 Optical Society of America
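For concreteness, the four penalties can be written as scalar functions of a residual or coefficient value x. The sketch below shows standard textbook forms; the scale parameter sigma used for the Cauchy and Geman-McClure penalties is an assumed tuning constant, not a value taken from the paper.

```python
import numpy as np

def penalty_l2(x):
    """Quadratic (l2) penalty: smooth, but penalizes large values heavily."""
    return x ** 2

def penalty_l1(x):
    """Absolute (l1) penalty: sparseness-promoting."""
    return np.abs(x)

def penalty_cauchy(x, sigma=1.0):
    """Cauchy penalty: grows only logarithmically for large |x|."""
    return np.log(1.0 + (x / sigma) ** 2)

def penalty_geman_mcclure(x, sigma=1.0):
    """Geman-McClure penalty: bounded above by 1, strongly nonconvex."""
    return x ** 2 / (sigma ** 2 + x ** 2)
```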
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
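The label-flipping computation can be made concrete with a short sketch, assuming ±1 labels, 0-1 loss, and a hypothetical user-supplied routine fit_erm that performs empirical risk minimization over the model class:

```python
import numpy as np

def maximal_discrepancy_penalty(X, y, fit_erm):
    """Estimate the maximal-discrepancy penalty: the largest difference
    between the errors a function in the class makes on the two halves
    of the training data. Flipping the labels of the second half turns
    this maximization into ordinary empirical risk minimization."""
    n = len(y) // 2
    y_flipped = np.concatenate([y[:n], -y[n:]])
    f = fit_erm(X[:2 * n], y_flipped)          # ERM on the flipped sample
    preds = f.predict(X[:2 * n])
    err_first = np.mean(preds[:n] != y[:n])    # errors w.r.t. original labels
    err_second = np.mean(preds[n:] != y[n:])
    return err_second - err_first              # the maximized discrepancy
```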
Abstract:
We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
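The resulting selection rule is easy to state in code. A minimal sketch, assuming a list of models ordered by inclusion and user-supplied empirical_risk and penalty functions (the latter a tight upper bound on the estimation error within each model):

```python
def select_model(models, empirical_risk, penalty):
    """Complexity-penalized model selection over a nested sequence:
    pick the model minimizing empirical risk plus complexity penalty."""
    scores = [empirical_risk(m) + penalty(m) for m in models]
    return models[scores.index(min(scores))]
```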
Abstract:
A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. In this paper we tackle the problem of adaptivity to this condition in the context of model selection, in a general learning framework. In fact, we consider a weaker version of this condition, which takes into account that learning within a small model can be much easier than within a large one. Requiring this “strong margin adaptivity” makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that depend only on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it fails to achieve strong margin adaptivity.
Abstract:
In this paper, we study the behaviour of the slotted Aloha multiple access scheme with a finite number of users under different traffic loads and optimize the retransmission probability q_r for various settings, cost objectives and policies. First, we formulate the problem as a parameter optimization problem and use certain efficient smoothed functional algorithms for finding the optimal retransmission probability parameter. Next, we propose two classes of multi-level closed-loop feedback policies (for finding in each case the retransmission probability q_r that now depends on the current system state) and apply the above algorithms for finding an optimal policy within each class of policies. While one of the policy classes depends on the number of backlogged nodes in the system, the other depends on the number of time slots since the last successful transmission. The latter policies are more realistic, as it is difficult to keep track of the number of backlogged nodes at each instant. We investigate the effect of increasing the number of levels in the feedback policies. We also investigate the effect of using different cost functions (with and without penalization) in our algorithms and the corresponding change in throughput and delay. Both of our algorithms use two-timescale stochastic approximation. One of the algorithms uses one simulation while the other uses two simulations of the system. The two-simulation algorithm is seen to perform better than the other algorithm. Optimal multi-level closed-loop policies are seen to perform better than optimal open-loop policies. The performance further improves when more levels are used in the feedback policies.
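As a point of reference for the open-loop case, the following sketch simulates finite-user slotted Aloha with a fixed retransmission probability q_r; the arrival model and all parameter values are illustrative assumptions, and the smoothed functional optimization itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def slotted_aloha_throughput(n_users=10, p_arrival=0.05, q_r=0.1,
                             n_slots=100_000):
    """Estimate throughput under a fixed (open-loop) retransmission
    probability q_r. A slot succeeds iff exactly one node transmits."""
    backlogged = np.zeros(n_users, dtype=bool)
    successes = 0
    for _ in range(n_slots):
        # Idle nodes transmit new packets on arrival; backlogged nodes
        # retransmit with probability q_r.
        new = ~backlogged & (rng.random(n_users) < p_arrival)
        retx = backlogged & (rng.random(n_users) < q_r)
        tx = new | retx
        if tx.sum() == 1:
            successes += 1
            backlogged[tx] = False        # successful transmission
        elif tx.sum() > 1:
            backlogged[tx] = True         # collision: all become backlogged
    return successes / n_slots
```

Sweeping q_r in such a simulation exhibits the throughput/delay trade-off that the two-timescale algorithms optimize.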
Abstract:
The text addresses the issue of information security as exemplified by clandestine collaboration and the influence exerted by Internal Security Agency (ABW) officers upon journalists. The text analyzes the de lege lata regulations as well as the de lege ferenda ones. As for the former, the penal provisions of the Act, that is, Articles 153b–153d (Chapter 10a), are applicable, whereas for the latter, the applicable regulations are Articles 197–199 (Chapter 10) of the 2013 Bill. In both the 2002 Act on the Internal Security Agency and Foreign Intelligence Agency and the 2013 draft Bill on the Internal Security Agency, the legislator penalizes officers' use of information acquired while fulfilling, or in connection with, official duties for the purpose of affecting the operation of public authority bodies, entrepreneurs, or broadcasters, editors-in-chief, journalists and persons conducting publishing activity. The text also analyzes regulations concerned with the penalization of clandestine collaboration engaged in by ABW officers with a broadcaster, an editor-in-chief, a journalist or a person conducting publishing activity.
Abstract:
Optimization methods have been used in many areas of knowledge, such as Engineering, Statistics and Chemistry, to solve optimization problems. In many cases it is not possible to use derivative-based methods, due to the characteristics of the problem to be solved and/or its constraints, for example when the functions involved are non-smooth and/or their derivatives are not known. To solve this type of problem, a Java-based API has been implemented which includes only derivative-free optimization methods and can be used to solve both constrained and unconstrained problems. For solving constrained problems, the classic Penalty and Barrier functions were included in the API. In this paper a new approach to Penalty and Barrier functions, based on Fuzzy Logic, is proposed. Two penalty functions that impose a progressive penalization on solutions violating the constraints are discussed: the implemented functions impose a low penalization when the violation of the constraints is low and a heavy penalty when the violation is high. Numerical results obtained on twenty-eight test problems, comparing the proposed Fuzzy Logic based functions to six of the classic Penalty and Barrier functions, are presented. Considering the achieved results, it can be concluded that the proposed penalty functions, besides being very robust, also perform very well.
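The paper's exact Fuzzy Logic penalty functions are not reproduced here; the sketch below only illustrates the progressive-penalization idea, using a hypothetical sigmoid membership function for "the violation is high":

```python
import numpy as np

def progressive_penalty(violations, threshold=1.0, steepness=4.0,
                        weight=1e3):
    """Illustrative progressive penalty: near-zero for small constraint
    violations, ramping up steeply once the total violation exceeds
    `threshold`. Not the paper's exact fuzzy functions."""
    v = np.sum(np.maximum(violations, 0.0))   # total violation of g(x) <= 0
    high = 1.0 / (1.0 + np.exp(-steepness * (v - threshold)))  # membership
    return weight * high * v

def penalized_objective(f, constraints, x):
    """Objective handed to a derivative-free solver: f(x) plus the
    progressive penalty on the constraint violations."""
    violations = np.array([g(x) for g in constraints])
    return f(x) + progressive_penalty(violations)
```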
Abstract:
This research examines the perceptions of homeless people regarding their judiciarization and incarcerations. Judiciarization occurs through the issuing of statements of offence under the city's municipal by-laws, those of the Société des Transports de Montréal, and the Code de Sécurité Routière. It therefore falls under penal procedure, as opposed to the criminal code, and concerns minor offences, often related to incivilities. Ultimately, judiciarization leads to imprisonment for non-payment of fines. The objective of this research is to better understand these perceptions through an understanding of the material effects, the relationships maintained with the various socio-judicial actors, and the view these people take of the justice system based on their experience. Anchored in a theoretical framework based on recognition (Honneth, 2000), the experience of judiciarization and incarceration is conceived as revealing a relationship between the homeless person and the justice system. Two complementary methodologies were used to carry out this study. The first draws on 29 interviews conducted with homeless people about their experiences of judiciarization and of street life. The second consisted of a descriptive statistical analysis of the judicial records of the 29 individuals, records comprising all the offences charged (criminal and penal) as well as the judicial process followed in each case.
Abstract:
One of the unsupervised learning models generating the most active research is the Boltzmann machine, in particular the restricted Boltzmann machine, or RBM. An important aspect of both training and using such a model is drawing samples. Two recent developments, fast persistent contrastive divergence (FPCD) and herding, aim to improve this aspect, focusing mainly on the learning process itself. Notably, herding forgoes obtaining a precise estimate of the RBM's parameters, instead defining a distribution through a dynamical system driven by the training examples. We generalize these ideas to obtain algorithms for exploiting the probability distribution defined by a pre-trained RBM, by drawing representative samples from it, without requiring the training set. We present three methods: sample penalization (based on a theoretical intuition), as well as FPCD and herding using constant statistics for the positive phase. These methods define dynamical systems that produce samples with the desired statistics, and we evaluate them using a non-parametric density estimation method. We show that these methods mix substantially better than the conventional method, Gibbs sampling.
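For reference, the conventional baseline, block Gibbs sampling from a trained binary RBM, can be sketched as follows (shapes, the sampler seed and the step count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample_rbm(W, b_v, b_h, v, n_steps=1000):
    """Alternate sampling of hidden and visible units of a binary RBM.
    W: (n_visible, n_hidden) weights; b_v, b_h: biases; v: initial
    visible configuration. Returns the visible state after n_steps."""
    for _ in range(n_steps):
        h = (rng.random(b_h.shape) < sigmoid(v @ W + b_h)).astype(float)
        v = (rng.random(b_v.shape) < sigmoid(h @ W.T + b_v)).astype(float)
    return v
```

It is this sampler's slow mixing between modes that the proposed dynamical-system methods aim to improve on.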
Abstract:
Independent component analysis (ICA) is a statistical method that expresses observed data (source mixtures) as a linear transformation of latent variables (sources) assumed to be non-Gaussian and mutually independent. In some applications, the source mixtures are assumed to be grouped so that those belonging to the same group are functions of the same sources. This implies that the coefficients in each column of the mixing matrix can be clustered according to these same groups and that all the coefficients of some of these groups are zero. In other words, the mixing matrix is assumed to be group-sparse. This assumption eases interpretation and improves the accuracy of the ICA model. With this in mind, we propose to solve the ICA problem with a group-sparse mixing matrix using a method based on the adaptive group LASSO, which penalizes the ℓ1 norm of the groups of coefficients with adaptive weights. In this thesis, we highlight the usefulness of our method in brain imaging applications, more precisely in magnetic resonance imaging. In simulations, we illustrate with an example our method's effectiveness at shrinking the non-significant groups of coefficients of the mixing matrix to zero. We also show that the accuracy of the proposed method is higher than that of the maximum likelihood estimator penalized by the adaptive LASSO in the case where the mixing matrix is group-sparse.
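A common form of the adaptive group-LASSO penalty sums weighted ℓ2 norms of the coefficient groups, which acts as an ℓ1-type penalty across groups. A minimal sketch; the weight construction from an initial estimate and the tuning parameters gamma and lam are standard choices, not taken from the thesis:

```python
import numpy as np

def adaptive_group_lasso_penalty(A, groups, A_init, gamma=1.0, lam=0.1):
    """Penalty on a mixing matrix A whose rows are partitioned into
    groups. groups: list of row-index arrays; A_init: an initial
    (unpenalized) estimate used to build the adaptive weights, so that
    groups that look small initially are shrunk more aggressively."""
    total = 0.0
    for j in range(A.shape[1]):                  # one column per source
        for g in groups:
            w = 1.0 / (np.linalg.norm(A_init[g, j]) ** gamma + 1e-12)
            total += w * np.linalg.norm(A[g, j])  # group l2 norm
    return lam * total
```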
Abstract:
This thesis presents the Kou model, a double-exponential jump-diffusion, for the valuation of European-style call options on oil prices as the underlying asset. Numerical computations are shown for the formulation of analytical expressions, which are solved through the implementation of efficient numerical algorithms that lead to the theoretical prices of the evaluated options. The advantages of using methods such as the Fourier transform are then discussed, given the relative simplicity of their implementation compared with other numerical techniques. This method is used together with a non-parametric regularized calibration exercise which, by minimizing the squared errors subject to a penalization based on the concept of relative entropy, yields prices for the call options on oil, giving the model a better capacity to assign fair prices relative to those traded in the market.
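The thesis obtains prices through the Fourier transform; as a simple cross-check of the Kou dynamics themselves, a European call can also be priced by plain Monte Carlo. A minimal sketch, with all parameter values as placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def kou_call_mc(S0=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2,
                lam=1.0, p=0.4, eta1=10.0, eta2=5.0, n_paths=100_000):
    """European call under Kou's double-exponential jump diffusion.
    Log jump sizes are +Exp(eta1) with prob p and -Exp(eta2) with
    prob 1-p; eta1 > 1 is required for a finite jump compensator."""
    q = 1.0 - p
    zeta = p * eta1 / (eta1 - 1.0) + q * eta2 / (eta2 + 1.0) - 1.0
    drift = (r - 0.5 * sigma ** 2 - lam * zeta) * T   # risk-neutral drift
    n_jumps = rng.poisson(lam * T, size=n_paths)
    jump_sum = np.zeros(n_paths)
    for i in range(n_paths):
        if n_jumps[i]:
            up = rng.random(n_jumps[i]) < p
            y = np.where(up, rng.exponential(1.0 / eta1, n_jumps[i]),
                         -rng.exponential(1.0 / eta2, n_jumps[i]))
            jump_sum[i] = y.sum()
    ST = S0 * np.exp(drift + sigma * np.sqrt(T) * rng.standard_normal(n_paths)
                     + jump_sum)
    return np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
```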
Abstract:
Recent statistical data confirm that domestic violence is a structural problem of exceptional gravity. We analyze the frequent legislative changes in Brazil since 2000 that have resulted from social pressure for the protection of abused women. Only Law 11.340 of 2006 was well received by lawyers, judges and public opinion. We present the innovations and peculiarities of this statute and the allegations of unconstitutionality raised against it. We discuss cases of judicial review of this law and reject the arguments of unconstitutionality. That notwithstanding, we argue that decisions favoring penalization take the wrong path from a criminological point of view, because they do not take into consideration the desires and needs of the victims.
Abstract:
Smart microgrids offer a new and challenging domain for power theories and metering techniques because they include a variety of intermittent power sources which positively impact power flow and distribution losses but may cause voltage asymmetry and frequency variation. In smart microgrids, the voltage distortion and asymmetry in the presence of poly-phase nonlinear loads can also be greater than in usual distribution lines fed by the utility, thus affecting measurement accuracy and possibly causing tripping of protections. In such a context, a reconsideration of power theories is required, since they form the basis for supply and load characterization. A revision of revenue metering techniques is also suggested to ensure a correct penalization of the loads for their responsibility in generating reactive power, voltage asymmetry, and distortion. This paper shows that the conservative power theory provides a suitable background to cope with smart grid characterization and metering needs. Simulation and experimental results show the properties of the proposed approach.