971 results for error probability
Abstract:
Using the independent particle model as our basis, we present a scheme to reduce the complexity and computational effort of calculating inclusive probabilities in many-electron collision systems. As an example we present an application to K-K charge transfer in collisions of 2.6 MeV Ne⁹⁺ on Ne. We are able to give impact-parameter-dependent probabilities for the many-particle states that could lead to KLL Auger electrons after the collision, and we compare them with experimental values.
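The abstract takes the inclusive-probability machinery for granted; as a reminder (the standard independent-particle-model result, written in our notation, not necessarily the paper's), the q-fold inclusive occupation probabilities follow from the one-particle density matrix built out of the single-particle amplitudes:

```latex
% Standard IPM result (our notation): a_{fk} is the amplitude for the
% electron starting in occupied orbital k to end in final state f.
\gamma_{ff'} = \sum_{k \in \mathrm{occ}} a_{fk}\, a^{*}_{f'k},
\qquad
P^{\mathrm{incl}}(f_1,\dots,f_q) = \det\left(\gamma_{f_i f_j}\right)_{i,j=1}^{q}
```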
Abstract:
Using the single-particle amplitudes from a 20-level coupled-channel calculation with ab initio relativistic self-consistent LCAO-MO Dirac-Fock-Slater energy eigenvalues and matrix elements, we calculate impact-parameter-dependent K-hole transfer probabilities within the framework of the inclusive probability formalism. As an example we show results for the heavy asymmetric collision system S¹⁵⁺ on Ar for impact energies from 4.7 to 16 MeV. The inclusive probability formalism, which reinstates the many-particle aspect of the collision system, permits a qualitative and quantitative agreement with experiment that is not achieved by the single-particle picture.
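For readers who want to experiment with the formalism, here is a minimal numerical sketch (ours, with a random unitary stand-in for the paper's 20-level coupled-channel amplitudes): build the one-particle density matrix and read off inclusive occupation probabilities as subdeterminants.

```python
import numpy as np

# Toy inclusive probabilities in the independent particle model.
# The amplitude matrix below is a random unitary stand-in for real
# coupled-channel single-particle amplitudes.
rng = np.random.default_rng(0)
n_states, n_electrons = 6, 3

z = rng.normal(size=(n_states, n_states)) + 1j * rng.normal(size=(n_states, n_states))
a, _ = np.linalg.qr(z)                 # unitary amplitude matrix a[f, k]
occ = a[:, :n_electrons]               # columns of initially occupied orbitals

gamma = occ @ occ.conj().T             # one-particle density matrix

def inclusive_probability(final_states):
    """Probability that all listed final states are occupied (inclusive)."""
    sub = gamma[np.ix_(final_states, final_states)]
    return np.linalg.det(sub).real

print(inclusive_probability([0]))      # single-state occupation
print(inclusive_probability([0, 1]))   # two states occupied, rest summed over
```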
Abstract:
The aim of this paper is the numerical treatment of a boundary value problem for the system of Stokes equations. To this end we extend the method of approximate approximations to boundary value problems. This method was introduced by V. Maz'ya in 1991 and has until now been used for the approximation of smooth functions defined on the whole space and for the approximation of volume potentials. In the present paper we develop an approximation procedure for the solution of the interior Dirichlet problem for the system of Stokes equations in two dimensions. The procedure is based on potential-theoretic considerations in connection with a boundary integral equation method and consists of three approximation steps. In the first step the unknown source density in the potential representation of the solution is replaced by approximate approximations. In the second step the decay behavior of the generating functions is used to obtain a suitable approximation of the potential kernel, and in the third step Nyström's method leads to a linear algebraic system for the approximate source density. For every step a convergence analysis is established and corresponding error estimates are given.
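The abstract names the method without stating it; the quasi-interpolation formula behind Maz'ya's approximate approximations is short enough to sketch. The following 1-D illustration with a Gaussian generating function is ours, not the paper's Stokes boundary-integral procedure:

```python
import numpy as np

# Approximate approximation (Maz'ya quasi-interpolation) in 1-D:
# u_h(x) = (pi*D)^(-1/2) * sum_m u(h*m) * exp(-(x - h*m)^2 / (D*h^2)).
# The error does not vanish as h -> 0 but saturates at a level that can
# be pushed far below any practical tolerance by choosing D large enough.
def quasi_interpolate(u, x, h=0.05, D=2.0):
    nodes = h * np.arange(-200, 201)          # lattice nodes h*m
    weights = u(nodes)                        # samples of the function
    kernel = np.exp(-(x[:, None] - nodes) ** 2 / (D * h ** 2))
    return kernel @ weights / np.sqrt(np.pi * D)

u = np.sin
x = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(quasi_interpolate(u, x) - u(x)))
print(f"max error: {err:.2e}")                # small, but saturating
```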
Abstract:
Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas, including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.
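As a concrete anchor for the F-B discussion, here is a minimal forward-backward sketch for a discrete HMM (the standard textbook recursions, to which PIN inference reduces on a chain-structured graph; the toy parameters are ours):

```python
import numpy as np

# Forward-backward for a discrete HMM: pi (initial), A (transition),
# B (emission), and an observation sequence obs. Returns the posterior
# state marginals p(state_t | obs), the chain-graph special case of
# general PIN inference.
def forward_backward(pi, A, B, obs):
    T, n = len(obs), len(pi)
    alpha = np.zeros((T, n))
    beta = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                       # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_backward(pi, A, B, obs=[0, 0, 1]))
```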
Abstract:
Support Vector Machines Regression (SVMR) is a regression technique recently introduced by V. Vapnik and his collaborators (Vapnik, 1995; Vapnik, Golowich and Smola, 1996). In SVMR the goodness of fit is measured not by the usual quadratic loss function (the mean square error) but by a different loss function, Vapnik's ε-insensitive loss function, which is similar to the "robust" loss functions introduced by Huber (Huber, 1981). The quadratic loss function is well justified under the assumption of Gaussian additive noise. However, the noise model underlying the choice of Vapnik's loss function is less clear. In this paper the use of Vapnik's loss function is shown to be equivalent to a model of additive Gaussian noise in which the variance and mean of the Gaussian are random variables. The probability distributions of the variance and mean are stated explicitly. While this work is presented in the framework of SVMR, it can be extended to justify non-quadratic loss functions in any Maximum Likelihood or Maximum A Posteriori approach. It applies not only to Vapnik's loss function but to a much broader class of loss functions.
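Vapnik's ε-insensitive loss itself is one line; here is a minimal sketch (ours) contrasting it with the quadratic loss:

```python
import numpy as np

# Vapnik's epsilon-insensitive loss: residuals inside the epsilon-tube
# cost nothing; outside it, the cost grows linearly (cf. Huber's robust
# losses). The quadratic loss is shown for contrast.
def eps_insensitive(residual, eps=0.1):
    return np.maximum(0.0, np.abs(residual) - eps)

def quadratic(residual):
    return residual ** 2

r = np.linspace(-1, 1, 5)
print(eps_insensitive(r))   # flat at 0 inside the tube, linear outside
print(quadratic(r))         # strictly convex everywhere
```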
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial least squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
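The core of disattenuated regression is the classical correction for attenuation; a minimal bivariate sketch (our illustration, with made-up reliability values):

```python
# Classical correction for attenuation: an observed correlation r_xy is
# divided by the square root of the product of the reliabilities of the
# two measures. The reliabilities here are made-up illustrative values.
def disattenuate(r_xy, rel_x, rel_y):
    return r_xy / (rel_x * rel_y) ** 0.5

r_obs = 0.30            # observed (attenuated) correlation
print(disattenuate(r_obs, rel_x=0.80, rel_y=0.70))  # approx 0.40
```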
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined, so that the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
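To make the simplex geometry concrete, here is a minimal sketch (ours) of the centered log-ratio transform and the Aitchison distance between two compositions:

```python
import numpy as np

# Centered log-ratio (clr) transform: the log of each part over the
# geometric mean of all parts. The Aitchison distance between two
# compositions is the Euclidean distance between their clr images.
def clr(x):
    x = np.asarray(x, dtype=float)
    return np.log(x) - np.log(x).mean()

def aitchison_distance(x, y):
    return np.linalg.norm(clr(x) - clr(y))

x = np.array([0.2, 0.3, 0.5])       # a 3-part composition
y = np.array([0.1, 0.4, 0.5])
print(aitchison_distance(x, y))
```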
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P) with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, rather elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating or the combination of likelihood and robust M-estimation functions, are simple additions/perturbations in A2(P_prior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(P_prior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimating functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
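The claim that Bayesian updating is a "simple addition" in this geometry can be checked on a finite partition with the discrete perturbation operation; a minimal sketch (ours):

```python
import numpy as np

# Perturbation in the simplex: componentwise product followed by
# renormalization (closure). Bayes' rule is exactly a perturbation of
# the prior by the likelihood, i.e. an "addition" in the Aitchison
# geometry.
def perturb(p, q):
    r = np.asarray(p) * np.asarray(q)
    return r / r.sum()

prior = np.array([0.5, 0.3, 0.2])
likelihood = np.array([0.1, 0.6, 0.3])   # p(data | state); need not be normalized
posterior = perturb(prior, likelihood)
print(posterior)                          # Bayes update as perturbation
```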
Abstract:
Several methods have been suggested for estimating non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias but require large samples. Ordinary least squares regression on summated scales, regression on factor scores, and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results depend strongly on the choice of instrumental variables. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly to one another and considerably worse than disattenuated regression.
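One generic way to carry the correction over to a regression with several predictors (including a product term, once a reliability estimate for it is available) is to disattenuate the whole correlation matrix before solving the normal equations. The sketch below is ours, with made-up reliabilities, not the paper's procedure:

```python
import numpy as np

# Disattenuate an observed correlation matrix by dividing each
# off-diagonal entry by the square root of the product of the two
# variables' reliabilities, then solve the normal equations on the
# corrected matrix. Reliabilities are illustrative, not from the paper.
def disattenuated_betas(R, rel, y_index=-1):
    rel = np.asarray(rel, dtype=float)
    Rc = R / np.sqrt(np.outer(rel, rel))
    np.fill_diagonal(Rc, 1.0)
    keep = [i for i in range(len(rel)) if i != y_index % len(rel)]
    Rxx = Rc[np.ix_(keep, keep)]
    rxy = Rc[keep, y_index]
    return np.linalg.solve(Rxx, rxy)         # standardized coefficients

R = np.array([[1.00, 0.30, 0.40],            # x1, x2 (e.g. product term), y
              [0.30, 1.00, 0.35],
              [0.40, 0.35, 1.00]])
print(disattenuated_betas(R, rel=[0.80, 0.70, 0.90]))
```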
Abstract:
In this paper we define a new scheme to develop and evaluate protection strategies for building reliable GMPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure in terms of packet loss and recovery time. Having formulated these components mathematically, experimental results demonstrate the benefits of using the NPD to enhance some current QoS routing algorithms so as to offer a certain degree of protection.
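The abstract does not reproduce the mathematical formulation of the NPD, so the following is only a hypothetical sketch of the idea (the weights, normalization, and names are ours, not the paper's): an a priori failure-probability term and an a posteriori impact term combined into one protection score per path.

```python
# Hypothetical sketch of an NPD-style protection score; the weighting
# and the 100 ms normalization below are ours for illustration, not the
# paper's formulas.
def protection_degree(failure_prob, packet_loss, recovery_time_ms,
                      w_fsd=0.5, w_loss=0.5):
    fsd = failure_prob                                    # a priori: FSD-like term
    fid = (w_loss * packet_loss                           # a posteriori: FID-like term
           + (1 - w_loss) * min(recovery_time_ms / 100.0, 1.0))
    return w_fsd * fsd + (1 - w_fsd) * fid                # lower = better protected

print(protection_degree(failure_prob=0.02, packet_loss=0.10, recovery_time_ms=50))
```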
Abstract:
Abstract taken from the publication.
Abstract:
Abstract taken from the publication.
Abstract:
Introduction: the gold standard for refractive error is retinoscopy. When evaluating students, optometry instructors accept a difference of ±0.50 D in the refraction, but whether this criterion is adequate for low and high ametropias has not been assessed statistically. The objective was to quantify the degree of inter-observer agreement in static retinoscopy between instructors and students for high and low ametropias. Methods: agreement study among 4 observers on 40 eyes, 20 with high ametropias and 20 with low; non-probabilistic convenience sampling. Statistical analysis with the intraclass correlation coefficient, 95% confidence, 90% power, and with the graphical method of 95% limits of agreement. Results: agreement for the spherical equivalent was 0.96 between instructors and 0.56 between students. Among students, agreement was 0.89 for low refractive errors; among instructors, 0.96 for high errors. Agreement among the four examiners was 0.78; 0.86 for low errors and 0.67 for high errors. The margin of error was ±0.87 D between instructors and ±3.15 D between students; for low errors it was ±0.61 D for instructors and ±0.80 D for students, and for high errors ±1.10 D and ±4.22 D, respectively. Discussion: reliability in retinoscopy was higher among experienced professionals. Because agreement was compared between instructors and students, it may be lower than that reported by other studies comparing professionals, even though the students had been chosen for their good grades. Teaching strategies should be formulated to reduce the margins of error obtained and to improve agreement between instructors and students.
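The 95% limits of agreement reported above follow the standard Bland-Altman recipe (mean difference ± 1.96 times the standard deviation of the differences); a minimal sketch with made-up readings:

```python
import numpy as np

# 95% limits of agreement between two examiners' spherical-equivalent
# readings (Bland-Altman): mean difference +/- 1.96 * SD of differences.
# The readings below are made-up illustrative values in diopters.
a = np.array([-1.00, -2.25, 0.50, -4.75, 1.25, -0.50])
b = np.array([-1.25, -2.00, 0.75, -5.50, 1.00, -0.75])

d = a - b
loa = (d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1))
print(f"limits of agreement: {loa[0]:+.2f} D to {loa[1]:+.2f} D")
```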
Abstract:
The complexity of studying the patrimonial liability of the State in the field of medical care makes it necessary to pay attention to certain topics that are especially relevant and that have been settled in the case law of the Honorable Consejo de Estado. This paper therefore develops salient and novel topics in the matter of imputability, such as proof of medical fault through the "res ipsa loquitur" doctrine, and proof of the causal link through circumstantial evidence and the theory of preponderant probability. It also studies the various types of unlawful harm that may arise in medical care provided by the State, highlighting in particular the injury to the right to receive timely and effective care, and the loss of an opportunity due to the failure to obtain the patient's informed consent, which in turn entails curtailing the patient's right to choose whether or not to undergo a given treatment after weighing the pros and cons of the therapy suggested by the physician (principle of non-aggravation). Likewise, it analyzes hypotheses of unlawful harm arising from diagnostic error, fault by omission on the part of oversight and control entities, and fault in obstetrics and gynecology, as well as hypotheses of strict State liability for retained surgical items (óblito quirúrgico), and finally addresses the novel topic of therapeutic risk (alea terapéutica), with its particular characteristics and its possible applicability in the Colombian legal system.
Abstract:
Abstract taken from the journal.