969 results for Probability distribution functions
Abstract:
The present work describes a new tool that helps bidders improve their competitive bidding strategies. It is an easy-to-use graphical tool that makes more complex decision-analysis techniques usable in the field of competitive bidding. The graphical tool described here moves away from previous bidding models, which attempt to describe the result of an auction or tender process by modelling each possible bidder with probability density functions. As an illustration, the tool is applied to three practical cases. Theoretical and practical conclusions on the potentially broad range of application of the tool are also presented.
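For readers unfamiliar with that classical approach, the sketch below (not taken from the paper, with invented competitor parameters) shows how modelling each rival's bid with a probability density function yields a win probability and an expected-profit curve in a lowest-price tender.

# Illustrative sketch of the classical competitive-bidding approach the abstract
# contrasts with: each competitor's bid is modelled with an assumed probability
# density, and the probability of winning with bid b is the product of the
# probabilities that every competitor bids above b.
import numpy as np
from scipy.stats import norm

competitors = [norm(loc=1.05e6, scale=6e4), norm(loc=1.10e6, scale=8e4)]  # assumed bid PDFs
cost = 0.95e6                                                             # our estimated cost

def expected_profit(b):
    p_win = np.prod([1.0 - dist.cdf(b) for dist in competitors])  # all rivals bid higher than b
    return (b - cost) * p_win

bids = np.linspace(cost, 1.3e6, 200)
best = bids[np.argmax([expected_profit(b) for b in bids])]
print(f"bid maximising expected profit: {best:,.0f}")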
Abstract:
This paper describes a methodology for providing multiprobability predictions for proteomic mass spectrometry data. The methodology is based on a recently developed machine learning framework called Venn machines, which outputs a valid probability interval, and it is designed for mass spectrometry data. For demonstration purposes, we applied the methodology to MALDI-TOF data sets in order to predict the diagnosis of heart disease and the early diagnosis of ovarian cancer and breast cancer. The experiments showed that the probability intervals are narrow, that is, the output of the multiprobability predictor is close to a single probability distribution. In addition, the probability intervals produced for the heart disease and ovarian cancer data were more accurate than the output of the corresponding probability predictor. When Venn machines were forced to make point predictions, the accuracy of those predictions was, for most data sets, better than the accuracy of the underlying algorithm that outputs a single probability distribution over labels. Applying this methodology to MALDI-TOF data sets empirically demonstrates its validity. The accuracy of the proposed method on the ovarian cancer data rises from 66.7% eleven months before the moment of diagnosis to 90.2% at the moment of diagnosis. The same approach was applied to the heart disease data, which has no time dependency, although the achieved accuracy was not as high (up to 69.9%). The methodology also allowed us to confirm mass spectrometry peaks previously identified as carrying statistically significant information for discrimination between controls and cases.
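A minimal sketch of the Venn-machine idea follows, assuming a simple nearest-centroid taxonomy and synthetic data rather than MALDI-TOF spectra: each hypothetical label for the test object is tried in turn, and the empirical label frequencies in its taxonomy category give the lower and upper bounds of the probability interval.

# A minimal sketch of a Venn predictor for binary labels, using a nearest-centroid
# taxonomy; X_cal, y_cal and x_new are placeholder data, not the study's spectra.
import numpy as np

def venn_predict(X_cal, y_cal, x_new):
    """Return a probability interval (lo, hi) for label 1 of x_new."""
    lo, hi = 1.0, 0.0
    for hypothetical in (0, 1):                       # try each possible label for the test object
        X = np.vstack([X_cal, x_new])
        y = np.append(y_cal, hypothetical)
        centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}   # taxonomy: nearest class centroid
        category = lambda x: min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
        cats = np.array([category(x) for x in X])
        same_cat = cats == cats[-1]                   # examples falling in the test object's category
        p1 = y[same_cat].mean()                       # empirical frequency of label 1 in that category
        lo, hi = min(lo, p1), max(hi, p1)
    return lo, hi

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(100, 5)) + np.outer(rng.integers(0, 2, 100), np.ones(5))
y_cal = (X_cal.mean(axis=1) > 0.5).astype(int)
print(venn_predict(X_cal, y_cal, rng.normal(size=5)))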
Abstract:
The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable, L = -log10(1 - ρhv), is defined, which does have Gaussian error statistics and a standard deviation depending only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) in the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application; otherwise there could be biases in the retrieved µ of up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, rain rate would be overestimated by up to 50% if a simple exponential DSD were assumed.
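The transform can be illustrated with a short sketch: compute L from an estimate of ρhv, build a symmetric Gaussian interval in L, and map it back to an asymmetric interval in ρhv. The value of sigma_L below is a placeholder; in the paper it follows from the number of independent radar pulses.

# Work in L = -log10(1 - rho_hv), which the abstract reports has Gaussian errors,
# then map a symmetric confidence interval in L back to rho_hv.
import numpy as np

def rhohv_confidence_interval(rho_hat, sigma_L, n_sigma=1.96):
    L_hat = -np.log10(1.0 - rho_hat)                        # forward transform
    L_lo, L_hi = L_hat - n_sigma * sigma_L, L_hat + n_sigma * sigma_L
    # Back-transform: the interval becomes asymmetric in rho_hv.
    return 1.0 - 10.0 ** (-L_lo), 1.0 - 10.0 ** (-L_hi)

print(rhohv_confidence_interval(rho_hat=0.995, sigma_L=0.05))   # illustrative sigma_L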
Abstract:
Data from 58 strong-lensing events surveyed by the Sloan Lens ACS Survey are used to estimate the projected galaxy mass inside their Einstein radii by two independent methods: stellar dynamics and strong gravitational lensing. We perform a joint analysis of these two estimates within models with up to three degrees of freedom with respect to the lens density profile, stellar velocity anisotropy, and line-of-sight (LOS) external convergence, which incorporates the effect of the large-scale structure on strong lensing. A Bayesian analysis is employed to estimate the model parameters, evaluate their significance, and compare models. We find that the data favor Jaffe's light profile over Hernquist's, but that any particular choice between these two does not change the qualitative conclusions with respect to the features of the system that we investigate. The density profile is compatible with an isothermal one, being slightly steeper and having an uncertainty in the logarithmic slope of the order of 5% in models that take into account a prior ignorance on anisotropy and external convergence. We identify a considerable degeneracy between the density profile slope and the anisotropy parameter, which largely increases the uncertainties in the estimates of these parameters, but we find no evidence in favor of an anisotropic velocity distribution on average for the whole sample. An LOS external convergence following a prior probability distribution given by cosmology has a small effect on the estimation of the lens density profile, but can increase the dispersion of its value by nearly 40%.
Abstract:
Some observations of galaxies, and in particular dwarf galaxies, indicate the presence of cored density profiles, in apparent contradiction with the cusp profiles predicted by dark matter N-body simulations. We constructed an analytical model, using particle distribution functions (DFs), to show how a supernova (SN) explosion can transform a cusp density profile in a small-mass dark matter halo into a cored one. Considering that an SN efficiently removes matter from the centre of the first haloes, we study the effect of mass removal as an SN-induced perturbation of the DFs. We find that the transformation from a cusp into a cored profile occurs even for changes as small as 0.5 per cent of the total energy of the halo, which can be produced by the expulsion of matter caused by a single SN explosion.
Abstract:
We expect to observe parton saturation in a future electron-ion collider. In this Letter we discuss this expectation in more detail, considering two different models which are in good agreement with the existing experimental data on nuclear structure functions. In particular, we study the predictions of saturation effects in electron-ion collisions at high energies, using a generalization for nuclear targets of the b-CGC model, which describes the ep HERA data quite well. We estimate the total, longitudinal and charm structure functions in the dipole picture and compare them with the predictions obtained using collinear factorization and modern sets of nuclear parton distributions. Our results show that inclusive observables are not very useful in the search for saturation effects. In the small-x region they are very difficult to disentangle from the predictions of the collinear approaches. This happens mainly because of the large uncertainties in the determination of the nuclear parton distribution functions. On the other hand, our results indicate that the contribution of diffractive processes to the total cross section is about 20% at large A and small Q(2), allowing for a detailed study of diffractive observables. The study of diffractive processes thus becomes essential to observe parton saturation. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
In this paper, we present a study on a deterministic partially self-avoiding walk (tourist walk), which provides a novel method for texture feature extraction. The method is able to explore an image on all scales simultaneously. Experiments were conducted using different dynamics of the tourist walk. A new strategy, based on histograms, to extract information from its joint probability distribution is presented. The promising results are discussed and compared to the best-known methods for texture description reported in the literature. (C) 2009 Elsevier Ltd. All rights reserved.
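A minimal sketch of a deterministic tourist walk on a grayscale image is given below; the memory size, the minimum-difference movement rule, the simplified attractor detection and the (transient length, attractor period) histogram are illustrative choices rather than the exact configuration used in the paper.

# Sketch of a deterministic tourist walk on a grayscale image stored as a 2-D numpy array.
import numpy as np
from collections import Counter

def tourist_walk_histogram(img, mu=2, max_steps=200):
    h, w = img.shape
    neigh = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    hist = Counter()
    for start in np.ndindex(h, w):
        path, pos = [start], start
        for _ in range(max_steps):
            recent = set(path[-mu:])      # sites visited in the last mu steps are forbidden
            candidates = [(abs(int(img[pos]) - int(img[p])), p)
                          for p in ((pos[0] + di, pos[1] + dj) for di, dj in neigh)
                          if 0 <= p[0] < h and 0 <= p[1] < w and p not in recent]
            if not candidates:
                break
            pos = min(candidates)[1]      # step to the neighbour with the smallest intensity difference
            if pos in path:               # walker re-entered an old site: treat it as the attractor
                transient = path.index(pos)
                hist[(transient, len(path) - transient)] += 1
                break
            path.append(pos)
    return hist                           # joint distribution of (transient length, attractor period)

img = np.arange(64).reshape(8, 8) % 7     # tiny synthetic texture for demonstration
print(sorted(tourist_walk_histogram(img).items())[:5])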
Abstract:
The reconstruction of Extensive Air Showers (EAS) observed by particle detectors at the ground is based on the characteristics of observables like the lateral particle density and the arrival times. The lateral densities, inferred for different EAS components from detector data, are usually parameterised by applying various lateral distribution functions (LDFs). The LDFs are used in turn for evaluating quantities like the total number of particles or the density at particular radial distances. Typical expressions for LDFs anticipate azimuthal symmetry of the density around the shower axis. The deviations of the lateral particle density from this assumption, arising from various reasons, are smoothed out in the case of compact arrays like KASCADE, but not in the case of arrays like Grande, which only sample a smaller part of the azimuthal variation. KASCADE-Grande, an extension of the former KASCADE experiment, is a multi-component EAS experiment located at the Karlsruhe Institute of Technology (Campus North), Germany. The lateral distributions of charged particles are deduced from the basic information provided by the Grande scintillators, the energy deposits, first in the observation plane, then in the intrinsic shower plane. In all steps, azimuthal dependences should be taken into account. As the energy deposit in the scintillators is dependent on the angles of incidence of the particles, azimuthal dependences are already involved in the first step: the conversion from the energy deposits to the charged particle density. This is done by using the Lateral Energy Correction Function (LECF), which evaluates the mean energy deposited by a charged particle taking into account the contribution of other particles (e.g. photons) to the energy deposit. By using a very fast procedure for the evaluation of the energy deposited by various particles, we prepared realistic LECFs depending on the angle of incidence of the shower and on the radial and azimuthal coordinates of the location of the detector. Mapping the lateral density from the observation plane onto the intrinsic shower plane does not remove the azimuthal dependences arising from geometric and attenuation effects, in particular for inclined showers. Realistic procedures for applying correction factors are developed. Specific examples of the bias due to neglecting the azimuthal asymmetries in the conversion from the energy deposit in the Grande detectors to the lateral density of charged particles in the intrinsic shower plane are given. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
We have studied the molecular dynamics of one of the major macromolecules in articular cartilage, chondroitin sulfate. Applying (13)C high-resolution magic-angle spinning NMR techniques, the NMR signals of all rigid macromolecules in cartilage can be suppressed, allowing the exclusive detection of the highly mobile chondroitin sulfate. The technique is also used to detect the chondroitin sulfate in artificial tissue-engineered cartilage. The tissue-engineered material, which is based on matrix-producing chondrocytes cultured in a collagen gel, should provide properties as close as possible to those of the natural cartilage. Nuclear relaxation times of the chondroitin sulfate were determined for both tissues. Although the T(1) relaxation times are rather similar, the T(2) relaxation in tissue-engineered cartilage is significantly shorter. This suggests that the motions of chondroitin sulfate in natural and artificial cartilage are different. The nuclear relaxation times of chondroitin sulfate in natural and tissue-engineered cartilage were modeled using a broad distribution function for the motional correlation times. Although the description of the microscopic molecular dynamics of the chondroitin sulfate in natural and artificial cartilage required identical broad distribution functions for the correlation times of motion, significant differences in the correlation times extracted from the model indicate that the artificial tissue does not fully meet the standards of the natural ideal. This could also be confirmed by macroscopic biomechanical elasticity measurements. Nevertheless, these results suggest that NMR is a useful tool for the investigation of the quality of artificially engineered tissue. (C) 2010 Wiley Periodicals, Inc. Biopolymers 93: 520-532, 2010.
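As a generic illustration of modelling motion with a broad distribution of correlation times (not the authors' specific fit), the sketch below averages the single-correlation-time Lorentzian spectral density over a log-Gaussian distribution; relaxation rates such as 1/T1 and 1/T2 are then linear combinations of this spectral density evaluated at a few frequencies.

# Average the Lorentzian spectral density J(omega, tau) = tau / (1 + (omega*tau)^2)
# over an assumed log-Gaussian distribution of correlation times.
import numpy as np

def averaged_spectral_density(omega, tau_center, sigma_log, n=2000):
    log_tau = np.linspace(np.log(tau_center) - 5 * sigma_log,
                          np.log(tau_center) + 5 * sigma_log, n)
    weights = np.exp(-0.5 * ((log_tau - np.log(tau_center)) / sigma_log) ** 2)
    weights /= np.trapz(weights, log_tau)            # normalised log-Gaussian distribution
    tau = np.exp(log_tau)
    J = tau / (1.0 + (omega * tau) ** 2)             # Lorentzian spectral density for each tau
    return np.trapz(weights * J, log_tau)

omega_larmor = 2 * np.pi * 125e6                     # roughly the 13C Larmor frequency at 11.7 T
print(averaged_spectral_density(omega_larmor, tau_center=1e-9, sigma_log=2.0))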
Abstract:
Canalizing genes possess such broad regulatory power, and their action sweeps across such a wide swath of processes, that the full set of affected genes is not highly correlated under normal conditions. When not active, the controlling gene will not be predictable to any significant degree by its subject genes, either alone or in groups, since their behavior will be highly varied relative to the inactive controlling gene. When the controlling gene is active, its behavior is not well predicted by any one of its targets, but can be very well predicted by groups of genes under its control. To investigate this question, we introduce in this paper the concept of intrinsically multivariate predictive (IMP) genes, and present a mathematical study of IMP in the context of binary genes with respect to the coefficient of determination (CoD), which measures the predictive power of a set of genes with respect to a target gene. A set of predictor genes is said to be IMP for a target gene if all properly contained subsets of the predictor set are bad predictors of the target but the full predictor set predicts the target with great accuracy. We show that the logic of prediction, the predictive power, the covariance between predictors, and the entropy of the joint probability distribution of the predictors jointly affect the appearance of IMP genes. In particular, we show that high predictive power, small covariance among predictors, a large entropy of the joint probability distribution of the predictors, and certain logics, such as XOR in the 2-predictor case, are factors that favor the appearance of IMP. The IMP concept is applied to characterize the behavior of the gene DUSP1, which exhibits control over a central, process-integrating signaling pathway, thereby providing preliminary evidence that IMP can be used as a criterion for the discovery of canalizing genes.
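A small numerical illustration of the IMP idea, not taken from the paper, is the noisy XOR target below: each predictor alone has a coefficient of determination near zero, while the pair predicts the target almost perfectly.

# CoD = (e0 - e) / e0, where e0 is the error of the best constant predictor and e the
# error of the optimal predictor given the inputs. Target: noisy XOR of two binary genes.
import itertools
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.integers(0, 2, 10000), rng.integers(0, 2, 10000)
noise = (rng.random(10000) < 0.05).astype(int)       # 5% label noise
y = (x1 ^ x2) ^ noise

def cod(predictors, target):
    e0 = min(np.mean(target == 0), np.mean(target == 1))   # error of the best constant guess
    cols = np.column_stack(predictors)
    err = 0.0
    for combo in itertools.product((0, 1), repeat=len(predictors)):
        mask = np.all(cols == combo, axis=1)
        if mask.any():
            p1 = target[mask].mean()
            err += mask.mean() * min(p1, 1 - p1)            # Bayes error within this predictor cell
    return (e0 - err) / e0

print(cod([x1], y), cod([x2], y), cod([x1, x2], y))   # ~0, ~0, ~0.9: IMP behaviour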
Abstract:
Relevant results for (sub-)distribution functions related to parallel systems are discussed. The reverse hazard rate is defined using the product integral. Consequently, the restriction of absolute continuity for the involved distributions can be relaxed. The only restriction is that the sets of discontinuity points of the parallel distributions have to be disjoint. Nonparametric Bayesian estimators of all survival (sub-)distribution functions are derived. Dual to series systems, which use the minimum lifetimes as observations, parallel systems record the maximum lifetimes. Dirichlet multivariate processes forming a class of prior distributions are considered for the nonparametric Bayesian estimation of the component distribution functions and the system reliability. For illustration, two striking numerical examples are presented.
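As a simple illustration of sub-distribution functions for a parallel system (a simulation, not the paper's Bayesian estimator), the sketch below records the system lifetime as the maximum of two assumed exponential component lifetimes and estimates F_j(t) = P(T <= t, failing component = j) empirically.

# Empirical sub-distribution functions for a two-component parallel system.
import numpy as np

rng = np.random.default_rng(2)
n = 50000
t1 = rng.exponential(1.0, n)          # component 1 lifetimes (assumed exponential)
t2 = rng.exponential(2.0, n)          # component 2 lifetimes
T = np.maximum(t1, t2)                # a parallel system fails when the last component fails
cause = np.where(t1 > t2, 1, 2)       # which component determined the system lifetime

t = 2.0
F1 = np.mean((T <= t) & (cause == 1))    # empirical sub-distribution for component 1
F2 = np.mean((T <= t) & (cause == 2))
print(F1, F2, F1 + F2, np.mean(T <= t))  # the sub-distributions sum to the system CDF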
Abstract:
Irradiation distribution functions based on the yearly collectible energy have been derived for two locations: Sydney, Australia, which represents a mid-latitude site, and Stockholm, Sweden, which represents a high-latitude site. The strong skewing of collectible energy toward the summer solstice at high latitudes dictates optimal collector tilt angles considerably below the polar mount. The lack of winter radiation at high latitudes indicates that the optimal acceptance angle for a stationary EW-aligned concentrator decreases as latitude increases. Furthermore, concentrator design should be highly asymmetric at high latitudes.
Abstract:
We apply the concept of exchangeable random variables to the case of non-additive probability distributions exhibiting uncertainty aversion, in the class generated by a convex core (convex non-additive probabilities with a convex core). We are able to prove two versions of the law of large numbers (de Finetti's theorems). By making use of two definitions of independence, we prove two versions of the strong law of large numbers. It turns out that we cannot assure the convergence of the sample averages to a constant. We then model the case in which there is a "true" probability distribution behind the successive realizations of the uncertain random variable. In this case convergence occurs. This result is important because it renders true the intuition that it is possible "to learn" the "true" additive distribution behind an uncertain event if one repeatedly observes it (a sufficiently large number of times). We also provide a conjecture regarding the "learning" (or updating) process above, and prove a partial result for the case of the Dempster-Shafer updating rule and binomial trials.
Abstract:
In this paper we prove convergence to chaotic sunspot equilibrium through two learning rules used in the bounded rationality literature. The first one shows the convergence of the actual dynamics generated by simple adaptive learning rules to a probability distribution that is close to the stationary measure of the sunspot equilibrium; since this stationary measure is absolutely continuous, this results in a robust convergence to the stochastic equilibrium. The second one is based on the E-stability criterion for testing the stability of rational expectations equilibria; we show that the conditional probability distribution defined by the sunspot equilibrium is expectationally stable under a reasonable updating rule for this parameter. We also report some numerical simulations of the proposed processes.
Abstract:
This work explores an important concept developed by Breeden & Litzenberger to extract the information contained in interest rate options in the Brazilian market (Options on IDI), traded at the São Paulo Securities, Commodities and Futures Exchange (BM&FBOVESPA), in the days before and after the COPOM decision on the Selic rate. The method consists of determining the probability distribution from the prices of the IDI options, after computing the implied volatility surface, using two techniques widely used in the market: cubic spline interpolation and the Black (1976) model. The first four moments of the distribution are analysed: expected value, variance, skewness and kurtosis, as well as their respective variations.
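A brief sketch of the Breeden & Litzenberger relation referenced above: the risk-neutral density is the discounted second derivative of the call price with respect to the strike, f(K) = exp(rT) d^2C/dK^2. In the sketch, call prices come from the Black (1976) formula with an assumed flat volatility and illustrative parameters, whereas in the study they come from an implied volatility surface fitted to IDI option prices.

# Recover a risk-neutral density from call prices on a strike grid (Breeden-Litzenberger).
import numpy as np
from scipy.stats import norm

def black76_call(F, K, T, r, sigma):
    d1 = (np.log(F / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return np.exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))

F, T, r, sigma = 100.0, 0.25, 0.10, 0.20          # illustrative parameters, not market data
K = np.linspace(60, 140, 401)
C = black76_call(F, K, T, r, sigma)

dK = K[1] - K[0]
density = np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)   # second derivative in strike
print(np.trapz(density, K))                        # should be close to 1 over a wide strike grid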