67 results for Error probability


Relevance:

20.00%

Publisher:

Abstract:

The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P) with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, very elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating or the combination of likelihood and robust M-estimation functions, are simple additions/perturbations in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P): regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
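As a minimal illustration of the additive structure described above (the symbols π, L, and θ are our notation, not the abstract's), Bayesian updating in a log-ratio geometry is literally vector addition up to the closing constant:

```latex
% Bayesian updating as a perturbation (vector addition) in A2(P_prior):
% posterior = prior + likelihood, up to the closing constant, which the
% abstract identifies with a generalized cumulant function.
\log \pi(\theta \mid x)
  = \log \pi(\theta) + \log L(\theta ; x)
    - \underbrace{\log \int \pi(\theta')\, L(\theta' ; x)\, d\theta'}_{\text{closing constant}}
```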

Relevance:

20.00%

Publisher:

Abstract:

This paper explores three aspects of strategic uncertainty: its relation to risk, the predictability of behavior, and the subjective beliefs of players. In a laboratory experiment we measure subjects' certainty equivalents for three coordination games and one lottery. Behavior in coordination games is related to risk aversion, experience seeking, and age. From the distribution of certainty equivalents we estimate probabilities of successful coordination in a wide range of games. For many games, the success of coordination is predictable with a reasonable error rate. The best response to observed behavior is close to the global-game solution. Comparing choices in coordination games with revealed risk aversion, we estimate subjective probabilities of successful coordination. In games with a low coordination requirement, most subjects underestimate the probability of success; in games with a high coordination requirement, most subjects overestimate it. Estimating probabilistic decision models, we show that the quality of predictions can be improved when individual characteristics are taken into account. Subjects' behavior is consistent with probabilistic beliefs about the aggregate outcome, but inconsistent with probabilistic beliefs about individual behavior.

Relevance:

20.00%

Publisher:

Abstract:

There is a controversial debate about the effects of permanent disability benefits on labor market behavior. In this paper we estimate equations for deserving and for receiving disability benefits, in order to evaluate the award error as the difference between the probability of receiving and the probability of deserving, using survey data from Spain. Our results indicate that individuals aged between 55 and 59, the self-employed, and those working in the agricultural sector have a significantly higher probability of receiving a benefit without deserving it than the rest of individuals. We also find evidence of gender discrimination, since males have a significantly higher probability of receiving a benefit without deserving it. This seems to confirm that disability benefits are being used as an instrument for exiting the labor market by some individuals approaching early retirement age, or by those who do not have the right to retire early. Taking into account that the awarding process depends on the Social Security Provincial Departments, this means that some departments are applying the disability requirements for granting benefits loosely.
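A minimal sketch of the award-error computation described above, assuming hypothetical survey indicators receives and deserves and illustrative covariates (none of these names, nor the synthetic data, come from the paper):

```python
# Sketch of the award-error idea: fit separate probability models for
# "receiving" and "deserving" a benefit, then take the difference.
# All variable names and the synthetic data below are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age55_59":      rng.integers(0, 2, n),
    "self_employed": rng.integers(0, 2, n),
    "male":          rng.integers(0, 2, n),
})
df["receives"] = rng.integers(0, 2, n)   # placeholder outcomes
df["deserves"] = rng.integers(0, 2, n)

X = sm.add_constant(df[["age55_59", "self_employed", "male"]])
p_receive = sm.Probit(df["receives"], X).fit(disp=0).predict(X)
p_deserve = sm.Probit(df["deserves"], X).fit(disp=0).predict(X)

award_error = p_receive - p_deserve      # >0: receiving without deserving
print(award_error.groupby(df["male"]).mean())
```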

Relevance:

20.00%

Publisher:

Abstract:

We propose a new econometric estimation method for analyzing the probability of leaving unemployment using uncompleted spells from repeated cross-section data, which can be especially useful when panel data are not available. The proposed method-of-moments-based estimator has two important features: (1) it estimates the exit probability at the individual level, and (2) it does not rely on the stationarity assumption for the inflow composition. We illustrate and gauge the performance of the proposed estimator using data from the Spanish Labor Force Survey, and analyze the changes in the distribution of unemployment between the 1980s and 1990s, a period of labor market reform. We find that the relative probability of leaving unemployment of the short-term unemployed versus the long-term unemployed becomes significantly higher in the 1990s.
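A minimal sketch of the aggregate intuition behind exit probabilities from repeated cross-sections: a duration-d cohort in survey t is the same inflow cohort as the duration-(d+1) stock in survey t+1. The column names and toy data are hypothetical, and this aggregate comparison is only the intuition; the paper's estimator works at the individual level via method of moments.

```python
import pandas as pd

def exit_probability(df: pd.DataFrame, t: int, d: int) -> float:
    """Share of spells of duration d in survey t that have left by t+1."""
    n_now   = ((df["survey"] == t)     & (df["duration"] == d)).sum()
    n_later = ((df["survey"] == t + 1) & (df["duration"] == d + 1)).sum()
    return 1.0 - n_later / n_now

# Toy example: 100 spells of duration 3 in survey 1; 60 remain (duration 4)
# in survey 2, so the estimated exit probability is 0.4.
toy = pd.DataFrame({"survey":   [1] * 100 + [2] * 60,
                    "duration": [3] * 100 + [4] * 60})
print(exit_probability(toy, t=1, d=3))   # 0.4
```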

Relevance:

20.00%

Publisher:

Abstract:

Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
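For reference, a minimal sketch of Holm's (1979) stepdown procedure mentioned above (the classical baseline, not the paper's resampling construction); note how the early break implements the stepdown monotonicity:

```python
# Holm's (1979) stepdown procedure: sort p-values, compare the i-th smallest
# against alpha/(k - i), and stop at the first failure. Controls the FWE at
# level alpha without any dependence assumptions.
import numpy as np

def holm_stepdown(pvals, alpha=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)
    k = len(p)
    reject = np.zeros(k, dtype=bool)
    for i, idx in enumerate(order):
        if p[idx] <= alpha / (k - i):
            reject[idx] = True
        else:
            break  # monotonicity: once one test fails, all later ones fail
    return reject

print(holm_stepdown([0.001, 0.010, 0.030, 0.400]))  # [ True  True False False]
```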

Relevance:

20.00%

Publisher:

Abstract:

The economic literature on crime and punishment focuses on the trade-off between the probability and the severity of punishment, and suggests that detection probability and fines are substitutes. In this paper it is shown that, in the presence of substantial underdeterrence caused by costly detection and punishment, these instruments may become complements. When offenders are poor, the deterrent value of monetary sanctions is low; thus, the government does not invest much in detection. If offenders are rich, however, the deterrent value of monetary sanctions is high, so it is more profitable to prosecute them.
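A back-of-the-envelope version of the argument, in our own notation (illicit gain g, detection probability p, fine f, offender wealth w; none of these symbols are the paper's):

```latex
% An offender offends iff the gain exceeds the expected sanction, g > p f,
% so (p, f) pairs with equal product deter the same offenses (substitutes).
% If fines are capped by wealth, f <= w, the strongest attainable expected
% sanction is p w: low wealth w lowers the return to spending on detection p,
% which is the complementarity channel the abstract describes.
g > p\,f \quad\text{(offend)}, \qquad f \le w \;\Rightarrow\; \max_{f} \, p\,f = p\,w
```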

Relevance:

20.00%

Publisher:

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and that on the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
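A minimal sketch of the label-flipping equivalence for 0/1 labels (the erm argument is our placeholder for any learner that approximately minimizes training error over the class; the toy constant-classifier ERM is for illustration only):

```python
# Minimizing error on the flipped sample equals minimizing
# (err1 + (1 - err2))/2, i.e. maximizing err2 - err1 over the class,
# which is exactly the maximal discrepancy between the two halves.
import numpy as np

def maximal_discrepancy(X, y, erm):
    n = len(y) // 2
    y_flip = np.concatenate([y[:n], 1 - y[n:]])   # flip second-half labels
    h = erm(X, y_flip)                            # ordinary ERM on flipped data
    pred = h(X)
    err1 = np.mean(pred[:n] != y[:n])             # error on first half
    err2 = np.mean(pred[n:] != y[n:])             # error on second half
    return err2 - err1

# Toy ERM over the two constant classifiers {always 0, always 1}:
def erm_constant(X, y):
    c = int(np.mean(y) >= 0.5)
    return lambda X: np.full(len(X), c)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 2)), rng.integers(0, 2, 200)
print(maximal_discrepancy(X, y, erm_constant))
```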

Relevance:

20.00%

Publisher:

Abstract:

The speed and width of front solutions to reaction-dispersal models are analyzed both analytically and numerically. We perform our analysis for Laplace and Gaussian distribution kernels, for both delayed and non-delayed models. The results are discussed in terms of the characteristic parameters of the models.

Relevance:

20.00%

Publisher:

Abstract:

The weight evolution of 583 piglets during the lactation and transition periods was studied by means of a statistical analysis, evaluating the effect of supplementation with medium-chain fatty acids (MCFA) in piglets with low birth weight. Of the 375 piglets born with a birth weight (BW) <1250 g, 188 received 3 mL of MCFA every 24 h during the first 3 days of life; their mean weight at weaning (day 28) was lower than that of the control group (non-supplemented piglets) (-114.17 g). However, 106 of the 180 piglets born with a BW <1000 g were supplemented, and their mean weight at weaning and at the end of the transition period (day 63) was higher than that of the control group (weaning: +315.16 g; day 63: +775.47 g). Finally, the supplemented piglets with a BW <800 g had the worst results: their mean weight difference at weaning was -177.58 g with respect to the control group. Therefore, this trial focused on the piglets with a BW between 800 and 999 g, because at weaning the supplemented group showed a considerable mean weight difference with respect to the control group: +511.58 g. Nevertheless, at an error probability below 0.05, there were no significant differences in any of the BW categories analyzed. Even so, it is worth highlighting the high degree of significance of MCFA supplementation in piglets with a BW between 800 and 999 g (P=0.059). On the other hand, the BW of the supplemented group with BW <1000 g was lower than that of the non-supplemented group with BW <1000 g; this BW difference was significant (P=0.004), and as a consequence the significance of MCFA supplementation in piglets with a BW between 800 and 999 g was lower than expected. In addition, some general results and a simple survival analysis were also included in this trial, although they were not the main objective.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new respiratory impedance estimator that minimizes the error due to breathing. Its practical reliability was evaluated in a simulation using realistic signals. These signals were generated by superposing pressure and flow records obtained under two conditions: (1) when applying forced oscillation to a resistance-inertance-elastance (RIE) mechanical model; and (2) when healthy subjects breathed through the unexcited forced oscillation generator. Impedances computed (4-32 Hz) from the simulated signals with the new estimator resulted in a mean value that was scarcely biased by the added breathing (errors of less than 1 percent in the mean R, I, and E) and had a small variability (coefficients of variation of R, I, and E of 1.3, 3.5, and 9.6 percent, respectively). Our results suggest that the proposed estimator reduces the error in the measurement of respiratory impedance without appreciable extra computational cost.
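For context, a minimal sketch of the conventional cross-spectral impedance estimate, Z(f) = S_qp(f)/S_qq(f) for pressure p and flow q. This is the standard baseline against which such work is framed, not the paper's breathing-robust estimator, and the signal names are ours:

```python
import numpy as np
from scipy.signal import csd, welch

def impedance(pressure, flow, fs, nperseg=1024):
    f, S_qp = csd(flow, pressure, fs=fs, nperseg=nperseg)  # cross-spectrum
    _, S_qq = welch(flow, fs=fs, nperseg=nperseg)          # flow auto-spectrum
    return f, S_qp / S_qq                                  # complex Z(f)

# Toy check: a pure resistance R = 5 is recovered at all frequencies.
fs = 256.0
t = np.arange(0, 16, 1 / fs)
flow = np.sin(2 * np.pi * 8 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
pressure = 5.0 * flow
f, Z = impedance(pressure, flow, fs)
print(np.round(np.abs(Z[(f >= 4) & (f <= 32)]).mean(), 2))  # ~5.0
```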

Relevance:

20.00%

Publisher:

Abstract:

We study the motion of an unbound particle under the influence of a random force modeled as Gaussian colored noise with an arbitrary correlation function. We derive exact equations for the joint and marginal probability density functions and find the associated solutions. We analyze in detail anomalous diffusion behaviors along with the fractal structure of the trajectories of the particle and explore possible connections between dynamical exponents of the variance and the fractal dimension of the trajectories.
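One step of the kind of calculation involved here, in our own notation (the abstract treats an arbitrary correlation function, written C below):

```latex
% If the particle obeys m \dot{v} = F(t) with Gaussian colored noise
% satisfying <F(t) F(t')> = C(t - t'), a single integration gives
\operatorname{Var}[v(t)]
  = \frac{1}{m^{2}} \int_{0}^{t}\!\!\int_{0}^{t} C(t_{1} - t_{2})\, dt_{1}\, dt_{2},
% so the decay of C controls the dynamical exponent of the variance:
% a rapidly decaying C gives ordinary diffusive growth, while long-ranged
% correlations give anomalous scaling.
```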

Relevance:

20.00%

Publisher:

Abstract:

We study the motion of a particle governed by a generalized Langevin equation. We show that, when no fluctuation-dissipation relation holds, the long-time behavior of the particle may range from stationary to superdiffusive, passing through subdiffusive and diffusive regimes. When the random force is Gaussian, we derive exact equations for the joint and marginal probability density functions of the position and velocity of the particle and find their solutions.
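For concreteness, the standard form of the generalized Langevin equation (our notation; γ is the memory kernel and ξ the random force):

```latex
% Generalized Langevin equation with memory kernel \gamma and random force \xi.
% A fluctuation-dissipation relation would tie <\xi(t)\xi(t')> to \gamma(t-t');
% the abstract studies precisely the case where no such relation is imposed.
m\,\dot{v}(t) = -\int_{0}^{t} \gamma(t - t')\, v(t')\, dt' + \xi(t)
```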

Relevance:

20.00%

Publisher:

Abstract:

We present a heuristic method for learning error-correcting output code (ECOC) matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, optimal codeword separation is sacrificed in favor of maximum class discrimination within the partitions. The hierarchical partition set is created using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated on the UCI database and applied to a real problem, the classification of traffic sign images.

Relevance:

20.00%

Publisher:

Abstract:

A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions, and nonlinear classifiers also fail to handle some types of surfaces. In this paper, we present a novel strategy for modeling multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when the class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when one has a sufficiently large training size.
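A minimal sketch of the ECOC decoding step described above, using an illustrative one-vs-all code matrix for three classes rather than the paper's problem-dependent subclass design:

```python
# ECOC decoding: each class gets a codeword of binary-problem memberships;
# a new sample is labeled with the class whose codeword is closest (in
# Hamming distance) to the vector of base-classifier predictions.
import numpy as np

codes = np.array([                 # rows: classes, cols: binary problems
    [1, 0, 0],                     # class 0
    [0, 1, 0],                     # class 1
    [0, 0, 1],                     # class 2
])

def decode(binary_preds):
    # binary_preds: 0/1 outputs of the three base classifiers for one sample
    dists = np.sum(codes != np.asarray(binary_preds), axis=1)  # Hamming
    return int(np.argmin(dists))   # label of the closest codeword

print(decode([1, 0, 1]))           # ties broken toward the lowest label -> 0
```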