22 results for prior probabilities
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
A comparative performance analysis of four geolocation methods in terms of their theoretical root mean square positioning errors is provided. Comparison is established in two different ways: strict and average. In the strict type, methods are examined for a particular geometric configuration of base stations (BSs) with respect to mobile position, which determines a given noise profile affecting the respective time-of-arrival (TOA) or time-difference-of-arrival (TDOA) estimates. In the average type, methods are evaluated in terms of the expected covariance matrix of the position error over an ensemble of random geometries, so that comparison is geometry independent. Exact semianalytical equations and associated lower bounds (depending solely on the noise profile) are obtained for the average covariance matrix of the position error in terms of the so-called information matrix specific to each geolocation method. Statistical channel models inferred from field trials are used to define realistic prior probabilities for the random geometries. A final evaluation provides extensive results relating the expected position error to channel model parameters and the number of base stations.
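The strict comparison above rests on the information matrix of the range estimates for a fixed geometry. As a minimal illustration (not the paper's own derivation), the standard TOA Fisher information matrix and its inverse, the Cramér-Rao bound on the position-error covariance, can be computed as follows; the station layout and noise levels are invented:

```python
import numpy as np

def toa_information_matrix(bs, mobile, sigma):
    """Standard TOA Fisher information matrix for a 2-D position.

    bs: (n, 2) base-station positions; mobile: (2,) mobile position;
    sigma: (n,) standard deviations of the range (TOA times c) estimates.
    """
    J = np.zeros((2, 2))
    for p, s in zip(bs, sigma):
        d = mobile - p
        u = d / np.linalg.norm(d)       # unit vector from BS to mobile
        J += np.outer(u, u) / s**2
    return J

# Illustrative geometry: three BSs and a mobile, 30 m range noise
bs = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
mobile = np.array([300.0, 400.0])
sigma = np.array([30.0, 30.0, 30.0])

J = toa_information_matrix(bs, mobile, sigma)
cov = np.linalg.inv(J)                  # CRLB on the position-error covariance
rmse = np.sqrt(np.trace(cov))           # theoretical RMS positioning error
```

Averaging `cov` over many random `bs`/`mobile` draws gives the geometry-independent, average-type comparison the abstract describes.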
Abstract:
This paper analyzes the role of traders' priors (proper versus improper) in the implications of market transparency by comparing a pre-trade transparent market with an opaque market in a set-up based on Madhavan (1996). We show that prices may be more informative in the opaque market, regardless of how priors are modelled. In contrast, the comparison of market liquidity and volatility in the two market structures is affected by the prior specification. Key words: Market microstructure, Transparency, Prior information
Abstract:
The purpose of this paper is to present an approach for students to have non-traditional learning assessed for credit and to introduce a tool that facilitates this process. The OCW Backpack system can connect self-learners with KNEXT assessment services to obtain college credit for prior learning. An ex post facto study based on historical data collected over the past two years at Kaplan University (KU) is presented to validate the portfolio assessment process. Cumulative GPA was compared for students who received experiential credit for learning derived from personal or professional experience with a matched sample of students with no experiential learning credits. The study found that students who received experiential credits performed better on GPA than the matched-sample students. The findings validate the KU portfolio assessment process. Additionally, the results support the capability of the OCW Backpack to capture the critical information necessary to evaluate non-traditional learning for university credit.
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and all probabilities are non-null. This kind of table can be seen as an element in the simplex of n · m parts. In this context, the marginals are identified as compositional amalgams, and conditionals (rows or columns) as subcompositions. Also, simplicial perturbation appears as Bayes' theorem. However, the Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: (a) given a table of probabilities, which is the nearest independent table to the initial one? (b) which is the largest orthogonal projection of a row onto a column? Or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal, (2) by rows and a column-wise geometric marginal, (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models.
Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
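The key result, that the nearest independent table is the closed product of the row- and column-wise geometric marginals, can be checked numerically. A minimal sketch (the table values are invented):

```python
import numpy as np

def nearest_independent_table(P):
    """Closest independent table in the Aitchison geometry:
    the closed product of the row- and column-wise geometric marginals."""
    r = np.exp(np.log(P).mean(axis=1))   # row-wise geometric marginal
    c = np.exp(np.log(P).mean(axis=0))   # column-wise geometric marginal
    T = np.outer(r, c)
    return T / T.sum()                   # closure: entries sum to one

# A 2x3 table of strictly positive probabilities adding to one
P = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.30, 0.20]])
T = nearest_independent_table(P)
# T is independent: every 2x2 log odds-ratio in T vanishes
```

Being a rank-one (outer-product) table, `T` satisfies exact row-column independence, which is what the vanishing odds ratios express.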
Abstract:
We study the zero set of random analytic functions generated by a sum of the cardinal sine functions which form an orthogonal basis for the Paley-Wiener space. As a model case, we consider real-valued Gaussian coefficients. It is shown that the asymptotic probability that there is no zero in a bounded interval decays exponentially as a function of the length.
Abstract:
The log-ratio methodology makes available powerful tools for analyzing compositional data. Nevertheless, the use of this methodology is only possible for those data sets without null values. Consequently, in those data sets where zeros are present, a previous treatment becomes necessary. Recent advances in the treatment of compositional zeros have centered especially on zeros of a structural nature and on rounded zeros. These tools do not contemplate the particular case of count compositional data sets with null values. In this work we deal with "count zeros" and we introduce a treatment based on a mixed Bayesian-multiplicative estimation. We use the Dirichlet probability distribution as a prior and we estimate the posterior probabilities. Then we apply a multiplicative modification to the non-zero values. We present a case study where this new methodology is applied.
Key words: count data, multiplicative replacement, composition, log-ratio analysis
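A minimal sketch of such a Bayesian-multiplicative replacement, assuming a uniform Dirichlet prior with total strength s (the value s = 0.5 and the counts below are illustrative, not the paper's case study):

```python
import numpy as np

def bayes_mult_replace(counts, s=0.5):
    """Bayesian-multiplicative treatment of count zeros (sketch).

    A Dirichlet(s * t) prior with uniform t gives each zero part its
    posterior mean; the non-zero parts are then rescaled multiplicatively
    so that the composition still sums to one.
    """
    counts = np.asarray(counts, dtype=float)
    N, D = counts.sum(), counts.size
    t = np.full(D, 1.0 / D)                 # uniform prior proportions
    zeros = counts == 0
    x = counts / N                          # observed proportions
    out = x.copy()
    out[zeros] = s * t[zeros] / (N + s)     # posterior mean for zero parts
    out[~zeros] *= 1.0 - out[zeros].sum()   # multiplicative adjustment
    return out

comp = bayes_mult_replace([0, 3, 7])
# all parts strictly positive; ratios between non-zero parts are preserved
```

Preserving the ratios of the non-zero parts is the point of the multiplicative step: log-ratio analyses applied afterwards are not distorted by the replacement.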
Abstract:
In this paper we study the disability transition probabilities (as well as the mortality probabilities) due to factors concurrent with age, such as income, gender and education. Although it is well known that ageing and socioeconomic status influence the probability of causing functional disorders, surprisingly little attention has been paid to the combined effect of those factors along individuals' lives and how this affects the transition from one degree of disability to another. The assumption that tomorrow's disability state is only a function of today's state is very strong, since disability is a complex variable that depends on several elements other than time. This paper contributes to the field in two ways: (1) by attending to the distinction between the initial disability level and the process that leads to its course; (2) by addressing whether and how education, age and income differentially affect the disability transitions. Using a discrete Markov chain model and a survival analysis, we estimate, by year and individual characteristics, the probability of a change in the state of disability and the duration its progression takes in each case. We find that people with an initial state of disability have a higher propensity to change and take less time to transit between different stages. Men do so more frequently than women. Education and income have negative effects on transition. Moreover, we consider the disability benefits associated with those changes along different stages of disability, and therefore we offer some clues on the potential savings of preventive actions that may delay or avoid those transitions. On pure cost considerations, preventive programs for improvement show higher benefits than those for preventing deterioration, and in general terms, those focusing on individuals below 65 should go first. Finally, the trend of disability in Spain seems not to change over the years, and regional differences are not found.
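The transition-probability side of such a Markov chain model can be sketched with a maximum-likelihood estimate from observed state sequences; the three disability states and the yearly sequences below are invented for illustration:

```python
import numpy as np

# Hypothetical disability states: 0 = none, 1 = moderate, 2 = severe.
# Yearly state sequences for three individuals (illustrative data only).
sequences = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [0, 0, 0, 1, 1]]

n_states = 3
counts = np.zeros((n_states, n_states))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):   # consecutive yearly pairs
        counts[a, b] += 1.0

# Maximum-likelihood transition matrix: row-normalized transition counts
P = counts / counts.sum(axis=1, keepdims=True)
```

In the paper's setting the rows would additionally be modelled as functions of education, age and income; this sketch shows only the unconditional estimate.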
Abstract:
The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. it is an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
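The FMAPE update itself is not reproduced here, but the underlying maximum-likelihood iteration, started from a uniform image as the abstract prescribes in the absence of prior knowledge, can be sketched on a toy system (the system matrix and data are invented):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Plain maximum-likelihood EM iteration for emission tomography.

    A: system matrix (detector bins x image pixels); y: measured counts.
    The image is initialized to a uniform field.
    """
    x = np.ones(A.shape[1])                  # uniform initial image
    sens = A.sum(axis=0)                     # per-pixel sensitivity
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

# Toy problem: 3 detector bins, 2 pixels, noise-free data
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_img = np.array([2.0, 5.0])
x = mlem(A, A @ true_img)
```

With consistent, noise-free data the iterates approach an image whose forward projection reproduces the data, i.e. a feasible image in the abstract's sense; FMAPE adds the entropy prior and acceleration on top of this scheme.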
Abstract:
Nanocrystalline silicon layers have been obtained by thermal annealing of films sputtered in various hydrogen partial pressures. The as-deposited and crystallized films were investigated by infrared, Raman, x-ray diffraction, electron microscopy, and optical absorption techniques. The obtained data show evidence of a close correlation between the microstructure and properties of the processed material, and the hydrogen content in the as-grown deposit. The minimum stress deduced from Raman was found to correspond to the widest band gap and to a maximum hydrogen content in the basic unannealed sample. Such a structure relaxation seems to originate from the so-called "chemical annealing" thought to be due to Si-H2 species, as identified by infrared spectroscopy. The variation of the band gap has been interpreted in terms of the changes in the band tails associated with the disorder which would be induced by stress. Finally, the layers originally deposited at the highest hydrogen pressure show the lowest stress, which does not correlate with the hydrogen content and the optical band gap, and some texturing. These features are likely related to the presence in these layers of a significant crystalline fraction already before annealing.
Abstract:
[spa] In a compound Poisson model, we define a threshold proportional reinsurance strategy: a retention level k1 is applied whenever the reserves are below a given threshold b, and a retention level k2 otherwise. We obtain the integro-differential equation for the Gerber-Shiu function, defined in Gerber and Shiu (1998), in this model, which allows us to obtain expressions for the ruin probability and for the Laplace transform of the time of ruin for different distributions of the individual claim amount. Finally, we present some numerical results.
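The ruin probability under such a threshold strategy can be approximated by simulation. A rough Monte Carlo sketch with exponential claims and illustrative parameters (premium income is scaled by the current retention level, and the level is only re-checked at claim instants, which is a simplification of the continuous dynamics):

```python
import random

def ruin_probability(u0, b, k1, k2, lam=1.0, c=1.5,
                     horizon=100.0, n_sim=5000, seed=1):
    """Finite-horizon ruin probability under threshold proportional
    reinsurance: retention k1 while reserves are below b, k2 otherwise.
    Poisson(lam) claim arrivals, exponential(1) claim sizes."""
    random.seed(seed)
    ruins = 0
    for _ in range(n_sim):
        u, t = u0, 0.0
        while t < horizon:
            w = random.expovariate(lam)       # waiting time to next claim
            t += w
            if t >= horizon:
                break
            k = k1 if u < b else k2           # retention at interval start
            u += k * c * w                    # retained premium income
            u -= k * random.expovariate(1.0)  # retained claim amount
            if u < 0:
                ruins += 1
                break
    return ruins / n_sim

psi = ruin_probability(u0=2.0, b=5.0, k1=0.6, k2=0.9)
```

The paper obtains these quantities analytically through the Gerber-Shiu function; the simulation is only a numerical cross-check one could run against such expressions.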
Abstract:
Rigorous quantum dynamics calculations of reaction rates and initial state-selected reaction probabilities of polyatomic reactions can be efficiently performed within the quantum transition state concept employing flux correlation functions and wave packet propagation utilizing the multi-configurational time-dependent Hartree approach. Here, analytical formulas and a numerical scheme extending this approach to the calculation of state-to-state reaction probabilities are presented. The formulas derived facilitate the use of three different dividing surfaces: two dividing surfaces located in the product and reactant asymptotic region facilitate full state resolution while a third dividing surface placed in the transition state region can be used to define an additional flux operator. The eigenstates of the corresponding thermal flux operator then correspond to vibrational states of the activated complex. Transforming these states to reactant and product coordinates and propagating them into the respective asymptotic region, the full scattering matrix can be obtained. To illustrate the new approach, test calculations study the D + H2(ν, j) → HD(ν′, j′) + H reaction for J = 0.
Abstract:
We develop several results on hitting probabilities of random fields which highlight the role of the dimension of the parameter space. This yields upper and lower bounds in terms of Hausdorff measure and Bessel-Riesz capacity, respectively. We apply these results to a system of stochastic wave equations in spatial dimension k ≥ 1 driven by a d-dimensional spatially homogeneous additive Gaussian noise that is white in time and colored in space.
Abstract:
Standard practice in wave-height hazard analysis often pays little attention to the uncertainty of the assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where recent years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. The time-occurrence of events is assumed Poisson distributed. The wave height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (the Poisson rate and the shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows one to obtain posterior distributions of other derived parameters, like occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
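Given point estimates (or posterior draws) of the Poisson rate and the GPD parameters, derived quantities such as return periods follow directly. A minimal sketch with invented parameter values (not the paper's fitted ones):

```python
import math

def return_period(z, threshold, lam, xi, sigma):
    """Return period (in years) of wave height z under a Poisson-GPD
    model: lam events per year exceed the threshold, and the excesses
    follow a GPD with shape xi and scale sigma."""
    y = (z - threshold) / sigma
    if xi == 0.0:
        tail = math.exp(-y)                          # exponential limit
    else:
        tail = max(0.0, 1.0 + xi * y) ** (-1.0 / xi)
    if tail == 0.0:
        return math.inf                              # beyond the upper endpoint
    return 1.0 / (lam * tail)

# Illustrative values only: 5 threshold exceedances per year,
# bounded-tail GPD (negative shape), 1 m scale, 3 m threshold.
T = return_period(z=7.0, threshold=3.0, lam=5.0, xi=-0.1, sigma=1.0)
```

Evaluating this function over posterior draws of (lam, xi, sigma) yields the posterior distribution of the return period, which is how the Bayesian analysis propagates parameter uncertainty into the hazard estimates.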