917 results for Probability distribution functions


Relevance:

100.00%

Publisher:

Abstract:

Birnbaum and Saunders (1969a) introduced a probability distribution which is commonly used in reliability studies. For the first time, based on this distribution, the so-called beta-Birnbaum-Saunders distribution is proposed for fatigue life modeling. Various properties of the new model, including expansions for the moments, moment generating function, mean deviations, density function of the order statistics and their moments, are derived. We discuss maximum likelihood estimation of the model's parameters. The superiority of the new model is illustrated by means of three real failure data sets. (C) 2010 Elsevier B.V. All rights reserved.
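The beta-G construction behind the proposed model can be illustrated numerically. Below is a minimal sketch, assuming the standard beta-G recipe (the baseline Birnbaum-Saunders density g and CDF G combined through a beta weight); parameter names and values are illustrative, not taken from the paper:

```python
import math

def bs_cdf(t, alpha, beta):
    """CDF of the baseline Birnbaum-Saunders distribution."""
    xi = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return 0.5 * (1.0 + math.erf(xi / math.sqrt(2.0)))

def bs_pdf(t, alpha, beta):
    """PDF of the baseline Birnbaum-Saunders distribution."""
    xi = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    dxi = (1.0 / math.sqrt(t * beta) + math.sqrt(beta) / t ** 1.5) / (2.0 * alpha)
    return math.exp(-0.5 * xi * xi) / math.sqrt(2.0 * math.pi) * dxi

def beta_bs_pdf(t, a, b, alpha, beta):
    """Beta-Birnbaum-Saunders density via the beta-G construction:
    f(t) = g(t) * G(t)**(a-1) * (1 - G(t))**(b-1) / B(a, b)."""
    G = bs_cdf(t, alpha, beta)
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return bs_pdf(t, alpha, beta) * G ** (a - 1) * (1.0 - G) ** (b - 1) / B
```

With a = b = 1 the density reduces to the baseline Birnbaum-Saunders law, which is a quick sanity check.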

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. © 2010 Elsevier B.V. All rights reserved.
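The latent complementary risk setup can be made concrete by simulation: the observed lifetime is the maximum of an unobserved number of competing risk lifetimes. The sketch below assumes exponential risks and a zero-truncated Poisson count of latent causes; the paper's actual latent law may differ:

```python
import math
import random

def zt_poisson(theta, rng):
    """Zero-truncated Poisson draw: Knuth inversion, rejecting zeros."""
    while True:
        limit, k, p = math.exp(-theta), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        if k > 0:
            return k

def complementary_risk_lifetime(rate, theta, rng):
    """Observed lifetime = maximum of M latent exponential risk times,
    M drawn from a zero-truncated Poisson(theta)."""
    m = zt_poisson(theta, rng)
    return max(rng.expovariate(rate) for _ in range(m))

rng = random.Random(42)
sample = [complementary_risk_lifetime(1.0, 2.0, rng) for _ in range(20000)]
```

Because the observed time is a maximum over latent causes, the resulting failure rate is increasing, matching the property claimed in the abstract.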

Relevance:

100.00%

Publisher:

Abstract:

We describe several simulation algorithms that yield random probability distributions with given values of risk measures. In the case of vanilla risk measures, the algorithms involve combining and transforming random cumulative distribution functions or random Lorenz curves obtained by simulating rather general random probability distributions on the unit interval. A new algorithm based on the simulation of a weighted barycentres array is suggested to generate random probability distributions with a given value of the spectral risk measure.
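One simple instance of the transformation idea is to shift a simulated random distribution so that a vanilla risk measure attains a prescribed value. The sketch below uses a discrete expected shortfall and a location shift; the paper's algorithms (random Lorenz curves, weighted barycentre arrays) are considerably more general:

```python
import random

def random_discrete_distribution(n, rng):
    """Random distribution on n grid points of [0, 1]:
    normalized exponential weights (a flat-Dirichlet-style draw)."""
    w = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(w)
    support = [i / (n - 1) for i in range(n)]
    return support, [wi / s for wi in w]

def expected_shortfall(support, probs, alpha):
    """Mean of the worst (lowest-value) alpha-tail, discrete version."""
    acc, tail = 0.0, 0.0
    for x, p in sorted(zip(support, probs)):
        take = min(p, alpha - acc)
        if take <= 0:
            break
        tail += x * take
        acc += take
    return tail / alpha

def shift_to_target_es(support, probs, alpha, target):
    """Location-shift the support so the distribution attains a given ES."""
    c = target - expected_shortfall(support, probs, alpha)
    return [x + c for x in support], probs
```

The shift works because expected shortfall is translation-equivariant; matching a spectral risk measure, as in the paper, requires the more elaborate barycentre construction.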

Relevance:

100.00%

Publisher:

Abstract:

The transverse momentum dependent parton distribution/fragmentation functions (TMDs) are essential in the factorization of a number of processes like Drell-Yan scattering, vector boson production, semi-inclusive deep inelastic scattering, etc. We provide a comprehensive study of unpolarized TMDs at next-to-next-to-leading order, which includes an explicit calculation of these TMDs and an extraction of their matching coefficients onto their integrated analogues, for all flavor combinations. The obtained matching coefficients are important for any kind of phenomenology involving TMDs. In the present study each individual TMD is calculated without any reference to a specific process. We recover the known results for parton distribution functions and provide new results for the fragmentation functions. The results for the gluon transverse momentum dependent fragmentation functions are presented for the first time at one and two loops. We also discuss the structure of singularities of TMD operators and TMD matrix elements, crossing relations between TMD parton distribution functions and TMD fragmentation functions, and renormalization group equations. In addition, we consider the behavior of the matching coefficients at threshold and make a conjecture on their structure to all orders in perturbation theory.

Relevance:

90.00%

Publisher:

Abstract:

In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work includes a literature review of current models followed by five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy by publication, and each of the five chapters therefore consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications of this research and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally the motion of nanoparticles on lattice surfaces in the presence of high heat gradients. We describe a number of new models for multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations, and these processes have been analysed using a number of mathematical methods.
The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of the particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that for large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculating the long-term effects of the standing acoustic field on the particles that are interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields. Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there have been a handful of successful experiments demonstrating the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface.
Typically, theoretical simulations of the effect can be rather time-consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant as a result of the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is far better than was previously achievable.
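The final step described above, solving the diffusion equation with the finite volume method, can be sketched in one dimension. This is a minimal illustration with an assumed constant diffusion coefficient, not the thesis's routine (which first derives an effective diffusion constant from the Fokker-Planck equation):

```python
def diffuse_fv(u, D, dx, dt, steps):
    """Explicit finite-volume update for 1-D diffusion du/dt = d/dx(D du/dx)
    with zero-flux boundaries; conserves total probability mass.
    Stability requires dt <= dx**2 / (2 * D)."""
    n = len(u)
    for _ in range(steps):
        # diffusive fluxes at the n-1 interior cell faces
        flux = [-D * (u[i + 1] - u[i]) / dx for i in range(n - 1)]
        new = u[:]
        for i in range(n):
            left = flux[i - 1] if i > 0 else 0.0    # zero flux at boundaries
            right = flux[i] if i < n - 1 else 0.0
            new[i] = u[i] - dt * (right - left) / dx
        u = new
    return u
```

Because the update moves mass only between neighbouring cells via fluxes, total probability is conserved exactly, which is the usual argument for preferring a finite-volume over a finite-difference discretization for probability densities.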

Relevance:

90.00%

Publisher:

Abstract:

Maintenance activities in a large-scale engineering system are usually scheduled according to the lifetimes of various components in order to ensure the overall reliability of the system. Lifetimes of components can be deduced from the corresponding probability distributions, with parameters estimated from past failure data. When failure data for the components are not readily available, however, engineers have to make do with only primitive information from the manufacturers, such as the mean and standard deviation of lifetime, to plan the maintenance activities. In this paper, the moment-based piecewise polynomial model (MPPM) is proposed to estimate the parameters of the reliability probability distribution of a product when only the mean and standard deviation of the product lifetime are known. The method employs a group of polynomial functions to estimate the two parameters of the Weibull distribution, according to the mathematical relationship between the shape parameter of the two-parameter Weibull distribution and the ratio of mean to standard deviation. Tests are carried out to evaluate the validity and accuracy of the proposed method, with discussion of its suitability for applications. The proposed method is particularly useful for reliability-critical systems, such as railway and power systems, in which maintenance activities are scheduled according to the expected lifetimes of the system components.
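The underlying moment relation can be sketched directly: the coefficient of variation of a two-parameter Weibull distribution determines its shape parameter, after which the scale follows from the mean. The sketch below solves the relation by bisection rather than by the paper's piecewise polynomial approximation:

```python
import math

def weibull_from_mean_std(mean, std):
    """Recover Weibull (shape k, scale lam) from mean and standard deviation,
    using Gamma(1 + 2/k) / Gamma(1 + 1/k)**2 = 1 + (std/mean)**2."""
    target = 1.0 + (std / mean) ** 2

    def ratio(k):
        return math.gamma(1.0 + 2.0 / k) / math.gamma(1.0 + 1.0 / k) ** 2

    lo, hi = 0.1, 50.0  # ratio is strictly decreasing in k on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ratio(mid) > target:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = mean / math.gamma(1.0 + 1.0 / k)
    return k, lam
```

The paper's MPPM replaces the bisection step with fitted polynomial segments, trading a little accuracy for a closed-form evaluation.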

Relevance:

90.00%

Publisher:

Abstract:

X-ray microtomography (micro-CT) with micron resolution enables new ways of characterizing microstructures and opens pathways for forward calculations of multiscale rock properties. A quantitative characterization of the microstructure is the first step in this challenge. We developed a new approach to extract scale-dependent characteristics of porosity, percolation, and anisotropic permeability from 3-D microstructural models of rocks. The Hoshen-Kopelman algorithm of percolation theory is employed for a standard percolation analysis. The anisotropy of permeability is calculated by means of the star volume distribution approach. The local porosity distribution and local percolation probability are obtained by using local porosity theory. Additionally, the local anisotropy distribution is defined and analyzed through two empirical probability density functions, the isotropy index and the elongation index. For such a high-resolution data set, the typical data sizes of the CT images are on the order of gigabytes to tens of gigabytes, so an extremely large number of calculations are required. To resolve this large-memory problem, parallelization in OpenMP was used to optimally harness the shared-memory infrastructure on cache-coherent Non-Uniform Memory Access architecture machines such as the iVEC SGI Altix 3700Bx2 supercomputer. We see adequate visualization of the results as an important element in this first pioneering study.
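The percolation question at the core of the analysis, whether the pore phase connects one face of the image to the opposite face, can be sketched with a simple connectivity search. The snippet below is a minimal 2-D, 4-neighbour stand-in for the 3-D Hoshen-Kopelman labelling used in the paper:

```python
from collections import deque

def percolates(grid):
    """Check whether the pore phase (True cells) of a 2-D binary image
    connects the top row to the bottom row (site percolation, 4-neighbour)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    q = deque((0, c) for c in range(cols) if grid[0][c])
    for _, c in q:
        seen[0][c] = True
    while q:
        r, c = q.popleft()
        if r == rows - 1:
            return True  # reached the opposite face
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and not seen[nr][nc]:
                seen[nr][nc] = True
                q.append((nr, nc))
    return False
```

Hoshen-Kopelman labels every cluster in a single raster pass with union-find, which is what makes the gigabyte-scale 3-D analysis in the paper feasible; the BFS above only answers the spanning question for one direction.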

Relevance:

90.00%

Publisher:

Abstract:

Both environmental economists and policy makers have shown a great deal of interest in the effect of pollution abatement on environmental efficiency. Despite the modern computational resources now available, however, little work in environmental economics has applied Markov chain Monte Carlo (MCMC), which enables simulation from the distribution of a Markov chain by running the chain until it approaches equilibrium. The resulting probability density functions gained prominence through their advantages over classical statistical methods, namely simultaneous inference and the incorporation of any prior information on all model parameters. This paper concentrates on this point by applying MCMC to data from China, the largest developing country, which has experienced rapid economic growth and serious environmental pollution in recent years. The variables cover economic output and pollution abatement cost from 1992 to 2003. We test the causal direction between pollution abatement cost and environmental efficiency with MCMC simulation. We find that pollution abatement cost causes an increase in environmental efficiency, which suggests that environmental policy makers should take more substantial measures to reduce pollution in the near future.
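The MCMC idea invoked here, simulating a Markov chain until it approaches its equilibrium distribution, can be sketched with a random-walk Metropolis sampler. The target below is a standard normal stand-in for a model-parameter posterior; nothing here reproduces the paper's actual model:

```python
import math
import random

def metropolis(log_density, x0, step, n, rng):
    """Random-walk Metropolis: simulate a Markov chain whose equilibrium
    distribution has the given (unnormalized) log density."""
    chain, x = [], x0
    lp = log_density(x)
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_density(prop)
        # accept with probability min(1, density ratio)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

rng = random.Random(1)
# target: standard normal log density (illustrative posterior)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 20000, rng)
```

Discarding an initial burn-in segment and summarizing the remainder of the chain is the usual way such simulated posteriors are turned into inferences.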

Relevance:

90.00%

Publisher:

Abstract:

We present a detailed direct numerical simulation (DNS) of the two-dimensional Navier-Stokes equation with the incompressibility constraint and air-drag-induced Ekman friction; our DNS has been designed to investigate the combined effects of walls and such a friction on turbulence in forced thin films. We concentrate on the forward-cascade regime and show how to extract the isotropic parts of velocity and vorticity structure functions and hence the ratios of multiscaling exponents. We find that velocity structure functions display simple scaling, whereas their vorticity counterparts show multiscaling, and the probability distribution function of the Weiss parameter Λ, which distinguishes between regions with centers and saddles, is in quantitative agreement with experiments.
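The structure functions referred to above are moments of velocity increments. A minimal 1-D estimator is sketched below; the DNS works with 2-D fields and extracts isotropic parts, which is not attempted here:

```python
def structure_functions(v, orders, max_r=None):
    """Structure functions S_p(r) = <|v(x + r) - v(x)|^p> for a
    1-D velocity signal sampled on a uniform grid, r in grid units."""
    n = len(v)
    max_r = max_r or n // 2
    out = {p: [] for p in orders}
    for r in range(1, max_r + 1):
        diffs = [abs(v[i + r] - v[i]) for i in range(n - r)]
        for p in orders:
            out[p].append(sum(d ** p for d in diffs) / len(diffs))
    return out
```

Simple scaling means S_p(r) ~ r^(zeta_p) with zeta_p linear in p; multiscaling, as found for the vorticity, means zeta_p bends away from a straight line.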

Relevance:

90.00%

Publisher:

Abstract:

We study the probability distribution of the angle by which the tangent to the trajectory rotates in the course of a plane random walk. It is shown that the determination of this distribution function can be reduced to an integral equation, which can be rigorously transformed into a differential equation of Hill's type. We derive the asymptotic distribution for very long walks.
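The quantity under study can be approximated discretely: for a walk with independent isotropic headings, the tangent rotation is the sum of signed turning angles between successive steps. A simulation sketch under that assumption (the paper treats the continuous problem analytically):

```python
import math
import random

def tangent_rotation(n_steps, rng):
    """Total angle by which the tangent rotates along a plane random walk
    with independent isotropic headings: sum of signed turning angles."""
    total = 0.0
    prev = rng.uniform(-math.pi, math.pi)
    for _ in range(n_steps - 1):
        cur = rng.uniform(-math.pi, math.pi)
        # wrap the heading difference to (-pi, pi]
        turn = (cur - prev + math.pi) % (2.0 * math.pi) - math.pi
        total += turn
        prev = cur
    return total

rng = random.Random(3)
angles = [tangent_rotation(100, rng) for _ in range(2000)]
```

For independent uniform turning angles the total rotation has mean zero and variance growing linearly with the number of steps, consistent with a broad asymptotic distribution for very long walks.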

Relevance:

90.00%

Publisher:

Abstract:

Multiaction learning automata which update their action probabilities on the basis of the responses they get from an environment are considered in this paper. The automata update the probabilities according to whether the environment responds with a reward or a penalty. Learning automata are said to possess ergodicity of the mean if the mean action probability is the state probability (or unconditional probability) of an ergodic Markov chain. In an earlier paper [11] we considered the problem of a two-action learning automaton being ergodic in the mean (EM). The family of such automata was characterized completely by proving the necessary and sufficient conditions for automata to be EM. In this paper, we generalize the results of [11] and obtain necessary and sufficient conditions for the multiaction learning automaton to be EM. These conditions involve two families of probability updating functions. It is shown that for the automaton to be EM the two families must be linearly dependent. The vector defining the linear dependence is the only vector parameter which controls the rate of convergence of the automaton. Further, the technique for reducing the variance of the limiting distribution is discussed. Just as in the two-action case, it is shown that the set of absolutely expedient schemes and the set of schemes which possess ergodicity of the mean are mutually disjoint.
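As a concrete example of a probability-updating scheme of the kind discussed, here is the standard linear reward-inaction (L_RI) update; note that, by the disjointness result above, absolutely expedient schemes such as L_RI do not possess ergodicity of the mean:

```python
def lri_update(probs, chosen, reward, a=0.1):
    """Linear reward-inaction (L_RI) update for a multiaction automaton:
    on reward, move probability mass toward the chosen action;
    on penalty, leave the action probabilities unchanged."""
    if not reward:
        return probs[:]
    return [p + a * (1.0 - p) if i == chosen else p * (1.0 - a)
            for i, p in enumerate(probs)]
```

The update preserves the probability simplex: the chosen action gains a·(1 − p) while every other action loses the fraction a of its mass, so the total remains one.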

Relevance:

90.00%

Publisher:

Abstract:

Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
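The maximum a posteriori decoding step mentioned above can be sketched with the Viterbi recursion for a linear chain. Feature extraction and L-BFGS training are omitted; the log scores below are toy values, with state 0 standing for a dry day and state 1 for a wet day:

```python
def viterbi(obs_scores, trans):
    """Most likely state sequence for a linear-chain model.
    obs_scores[t][s]: log score of state s at step t given the inputs;
    trans[s][s2]: log transition score from state s to state s2."""
    n_states = len(obs_scores[0])
    best = list(obs_scores[0])
    back = []
    for t in range(1, len(obs_scores)):
        ptr, new = [], []
        for s2 in range(n_states):
            scores = [best[s] + trans[s][s2] for s in range(n_states)]
            s_star = max(range(n_states), key=lambda s: scores[s])
            ptr.append(s_star)
            new.append(scores[s_star] + obs_scores[t][s2])
        back.append(ptr)
        best = new
    # trace the best path backwards through the stored pointers
    path = [max(range(n_states), key=lambda s: best[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```

The modified Viterbi algorithm of the paper extends this recursion to return the n most likely sequences rather than only the single best one.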

Relevance:

90.00%

Publisher:

Abstract:

We present a measurement of the top quark mass with t-tbar dilepton events produced in p-pbar collisions at the Fermilab Tevatron $\sqrt{s}$=1.96 TeV and collected by the CDF II detector. A sample of 328 events with a charged electron or muon and an isolated track, corresponding to an integrated luminosity of 2.9 fb$^{-1}$, are selected as t-tbar candidates. To account for the unconstrained event kinematics, we scan over the phase space of the azimuthal angles ($\phi_{\nu_1},\phi_{\nu_2}$) of neutrinos and reconstruct the top quark mass for each $\phi_{\nu_1},\phi_{\nu_2}$ pair by minimizing a $\chi^2$ function in the t-tbar dilepton hypothesis. We assign $\chi^2$-dependent weights to the solutions in order to build a preferred mass for each event. Preferred mass distributions (templates) are built from simulated t-tbar and background events, and parameterized in order to provide continuous probability density functions. A likelihood fit to the mass distribution in data as a weighted sum of signal and background probability density functions gives a top quark mass of $165.5^{+{3.4}}_{-{3.3}}$(stat.)$\pm 3.1$(syst.) GeV/$c^2$.

Relevance:

90.00%

Publisher:

Abstract:

The method of Wigner distribution functions, and the Weyl correspondence between quantum and classical variables, are extended from the usual kind of canonically conjugate position and momentum operators to the case of an angle and angular momentum operator pair. The sense in which one has a description of quantum mechanics using classical phase‐space language is much clarified by this extension.

Relevance:

90.00%

Publisher:

Abstract:

Various geometrical and energetic distribution functions and other properties connected with the cage-to-cage diffusion of xenon in sodium Y zeolite have been obtained from long molecular dynamics calculations. Analysis of diffusion pathways reveals two interesting mechanisms: surface-mediated and centralized modes for cage-to-cage diffusion. The surface-mediated mode of diffusion exhibits a small positive barrier, while the centralized diffusion exhibits a negative barrier for the sorbate to diffuse across the 12-ring window. In both modes, however, the sorbate has to be activated from the adsorption site to enable it to gain mobility. The centralized diffusion additionally requires the sorbate to be free of the influence of the surface of the cage as well. The overall rate for cage-to-cage diffusion shows an Arrhenius temperature dependence with E_a = 3 kJ/mol. It is found that the decay in the dynamical correction factor occurs on a time scale comparable to the cage residence time. The distributions of barrier heights have been calculated. Functions reflecting the distribution of the sorbate-zeolite interaction at the window and the variations of the distance between the sorbate and the centers of the parent and daughter cages are presented.
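The Arrhenius temperature dependence reported for the cage-to-cage rate can be illustrated with a two-temperature extraction of the activation energy. The prefactor and temperatures below are illustrative; only E_a = 3 kJ/mol is taken from the abstract:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate(A, Ea, T):
    """Arrhenius rate law k = A * exp(-Ea / (R T))."""
    return A * math.exp(-Ea / (R * T))

def arrhenius_ea(k1, T1, k2, T2):
    """Activation energy from rates at two temperatures, assuming
    the Arrhenius form: Ea = R * ln(k1/k2) / (1/T2 - 1/T1)."""
    return R * math.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)
```

In practice one fits ln k against 1/T over many temperatures; the two-point formula is the degenerate case of that linear fit.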