155 results for Probability generating function
at University of Queensland eSpace - Australia
Abstract:
We consider the statistical properties of the local density of states of a one-dimensional Dirac equation in the presence of various types of disorder with Gaussian white-noise distribution. It is shown how either the replica trick or supersymmetry can be used to calculate exactly all the moments of the local density of states. Careful attention is paid to how the results change if the local density of states is averaged over atomic length scales. For both the replica trick and supersymmetry the problem is reduced to finding the ground state of a zero-dimensional Hamiltonian which is written solely in terms of a pair of coupled spins which are elements of u(1, 1). This ground state is explicitly found for the particular case of the Dirac equation corresponding to an infinite metallic quantum wire with a single conduction channel. The calculated moments of the local density of states agree with those found previously by Al'tshuler and Prigodin [Sov. Phys. JETP 68 (1989) 198] using a technique based on recursion relations for Feynman diagrams. (C) 2001 Elsevier Science B.V. All rights reserved.
Cavity QED analog of the harmonic-oscillator probability distribution function and quantum collapses
Abstract:
We establish a connection between the simple harmonic oscillator and a two-level atom interacting with resonant, quantized cavity and strong driving fields, which suggests an experiment to measure the harmonic oscillator's probability distribution function. To achieve this, we calculate the Autler-Townes spectrum by coupling the system to a third level. We find that there are two different regions of the atomic dynamics depending on the ratio of the Rabi frequency Omega_c of the cavity field to the Rabi frequency Omega of the driving field. For Omega_c
Abstract:
Mitarai [Phys. Fluids 17, 047101 (2005)] compared turbulent combustion models against homogeneous direct numerical simulations with extinction/reignition phenomena. The recently suggested multiple mapping conditioning (MMC) was not considered and is simulated here for the same case with favorable results. Implementation issues crucial for successful MMC simulations are also discussed.
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
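As a minimal illustration of the ML side of the comparison above, the following sketch fits a probit model by maximum likelihood on simulated data. The data-generating process, coefficient values, and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate a binary choice (loosely: fixed vs variable rate mortgage)
# from a probit data-generating process; all values are illustrative.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
true_beta = np.array([0.5, -1.0])              # intercept, slope (assumed)
latent = true_beta[0] + true_beta[1] * x + rng.normal(size=n)
y = (latent > 0).astype(float)

def neg_loglik(beta):
    """Negative probit log-likelihood: y* = Xb + e, y = 1[y* > 0]."""
    p = norm.cdf(beta[0] + beta[1] * x)
    p = np.clip(p, 1e-10, 1 - 1e-10)           # guard the logs
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

mle = minimize(neg_loglik, x0=np.zeros(2), method="BFGS").x
```

A sign-restricted Bayesian analysis, as in the abstract, would instead sample the posterior under a prior truncated to the known coefficient signs; the likelihood above is unchanged.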
Abstract:
A new modeling approach-multiple mapping conditioning (MMC)-is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and the conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by a comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
Abstract:
The focus of the present work is a well-known feature of the probability density function (PDF) transport equations in turbulent flows: the inverse parabolicity of the equations. While it is quite common in fluid mechanics to interpret equations with direct (forward-time) parabolicity as diffusive (or as a combination of diffusion, convection and reaction), the possibility of a similar interpretation for equations with inverse parabolicity is not clear. According to Einstein's point of view, a diffusion process is associated with the random walk of some physical or imaginary particles, which can be modelled by a Markov diffusion process. In the present paper it is shown that the Markov diffusion process directly associated with the PDF equation represents a reasonable model for dealing with the PDFs of scalars but it significantly underestimates the diffusion rate required to simulate turbulent dispersion when the velocity components are considered.
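Einstein's correspondence invoked above, that a diffusion process is the continuum limit of a particle random walk, can be checked in a few lines: the variance of a Gaussian random walk grows linearly in time, var(t) = 2Dt with D = sigma^2/2 per unit step. The particle count and step size below are illustrative.

```python
import numpy as np

# Random-walk ensemble as a model of diffusion: after n_steps steps of
# std sigma, the position variance should equal n_steps * sigma**2.
rng = np.random.default_rng(3)
n_particles, n_steps, sigma = 20_000, 100, 0.1
pos = np.zeros(n_particles)
for _ in range(n_steps):
    pos += rng.normal(scale=sigma, size=n_particles)

empirical_var = pos.var()
theoretical_var = n_steps * sigma**2   # = 2 D t with D = sigma**2 / 2
```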
Abstract:
Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out to verify the statistical robustness of previous studies using the Lattice Solid Model (MORA et al., 2002b). In each case 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T and tectonic loading rate k) but different particle arrangements are employed. Results of uni-axial compression experiments show that before the normalized time of catastrophic failure, the ensemble average LURR value rises significantly, in agreement with observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. The first, A/(kT), controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress; as this parameter increases, the phase shift decreases. The second, AT/k, controls the height of the probability density function (PDF) of modeled seismicity; as this parameter increases, the PDF becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in shearing tests also suggest that except in strong triggering cases, where LURR cannot be calculated due to poor data in unloading cycles, larger events are more likely than smaller ones to occur in high-LURR periods, supporting the LURR hypothesis.
Abstract:
The rate of generation of fluctuations with respect to the scalar values conditioned on the mixture fraction, which significantly affects turbulent nonpremixed combustion processes, is examined. Simulation of the rate in a major mixing model is investigated and the derived equations can assist in selecting the model parameters so that the level of conditional fluctuations is better reproduced by the models. A more general formulation of the multiple mapping conditioning (MMC) model that distinguishes the reference and conditioning variables is suggested. This formulation can be viewed as a methodology of enforcing certain desired conditional properties onto conventional mixing models. Examples of constructing consistent MMC models with dissipation and velocity conditioning as well as of combining MMC with large eddy simulations (LES) are also provided. (c) 2005 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
Extracting human postural information from video sequences has proved a difficult research question. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach that is commonly used in image segmentation problems but is not yet widely used in tracking applications.
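The particle-filtering idea the abstract builds on can be sketched on a toy problem: a bootstrap filter tracking a one-dimensional random-walk state from noisy observations. This is a stand-in for the (much higher-dimensional) postural tracking problem; the noise scales and particle count are illustrative.

```python
import numpy as np

# Bootstrap particle filter: propagate particles through the motion
# model, weight by the observation likelihood, estimate, then resample.
rng = np.random.default_rng(1)
T, n_particles = 50, 500
true_x = np.cumsum(rng.normal(scale=0.5, size=T))   # hidden state
obs = true_x + rng.normal(scale=1.0, size=T)        # noisy observations

particles = np.zeros(n_particles)
estimates = []
for z in obs:
    particles += rng.normal(scale=0.5, size=n_particles)  # motion model
    w = np.exp(-0.5 * (z - particles) ** 2)               # obs likelihood
    w /= w.sum()
    estimates.append(np.sum(w * particles))               # posterior mean
    idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
```

The filtered estimate should track the state more closely than the raw observations, whose error standard deviation here is 1.0.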
Abstract:
Background and aim of the study: Results of valve re-replacement (reoperation) in 898 patients undergoing aortic valve replacement with cryopreserved homograft valves between 1975 and 1998 are reported. The study aim was to provide estimates of unconditional probability of valve reoperation and cumulative incidence function (actual risk) of reoperation. Methods: Valves were implanted by subcoronary insertion (n = 500), inclusion cylinder (n = 46), and aortic root replacement (n = 352). Probability of reoperation was estimated by adopting a mixture model framework within which estimates were adjusted for two risk factors: patient age at initial replacement, and implantation technique. Results: For a patient aged 50 years, the probability of reoperation in his/her lifetime was estimated as 44% and 56% for non-root and root replacement techniques, respectively. For a patient aged 70 years, estimated probability of reoperation was 16% and 25%, respectively. Given that a reoperation is required, patients with non-root replacement have a higher hazard rate than those with root replacement (hazards ratio = 1.4), indicating that non-root replacement patients tend to undergo reoperation sooner than root replacement patients. Conclusion: Younger patient age and root versus non-root replacement are risk factors for reoperation. Valve durability is much less in younger patients, while root replacement patients appear more likely to live longer and hence are more likely to require reoperation.
Abstract:
Acceptance-probability-controlled simulated annealing with an adaptive move-generation procedure, an optimization technique derived from the simulated annealing algorithm, is presented. The adaptive move-generation procedure was compared against the random move-generation procedure on seven multiminima test functions, as well as on synthetic data resembling the optical constants of a metal. In all cases the algorithm proved to have faster convergence and superior escaping from local minima. This algorithm was then applied to fit the model dielectric function to data for platinum and aluminum.
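A minimal sketch of the general idea, not the paper's specific scheme: simulated annealing on a one-dimensional multiminima test function where the move width adapts to the acceptance outcome (widening on acceptance, shrinking on rejection). The test function, cooling schedule, and adaptation constants are illustrative assumptions.

```python
import math
import random

# Simulated annealing with an adaptive move size: step width grows when
# proposals are accepted and shrinks when they are rejected, loosely
# echoing acceptance-probability control; constants are illustrative.
random.seed(0)

def f(x):
    # Rastrigin-like multiminima test function; global minimum 0 at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

x = best = 4.0
temp, step = 10.0, 1.0
for _ in range(5000):
    cand = x + random.uniform(-step, step)
    delta = f(cand) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand                            # accept the move
        if f(x) < f(best):
            best = x
        step = min(step * 1.05, 2.0)        # accepted: widen moves
    else:
        step = max(step * 0.95, 1e-3)       # rejected: narrow moves
    temp *= 0.999                           # geometric cooling
```

The adaptive step keeps the acceptance rate away from the extremes where plain fixed-step annealing either freezes or wanders.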
Abstract:
We use a stochastic patch occupancy model of invertebrates in the Mound Springs ecosystem of South Australia to assess the ability of incidence function models to detect environmental impacts on metapopulations. We assume that the probability of colonisation decreases with increasing isolation and the probability of extinction is constant across spring vents. We run the models to quasi-equilibrium, and then impose an impact by increasing the local extinction probability. We sample the output at various times pre- and postimpact, and examine the probability of detecting a significant change in population parameters. The incidence function model approach turns out to have little power to detect environmental impacts on metapopulations with small numbers of patches. (C) 2001 Elsevier Science Ltd. All rights reserved.
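The structure of such a stochastic patch occupancy experiment can be sketched as follows: colonisation probability decays with isolation, extinction probability is constant, and an "impact" raises the extinction rate partway through the run. The landscape, rates, and kernel below are illustrative assumptions, not the Mound Springs parameterisation.

```python
import numpy as np

# Toy stochastic patch occupancy model with an imposed impact.
rng = np.random.default_rng(2)
n_patches = 30
coords = rng.uniform(0, 10, size=(n_patches, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

def step(occ, p_ext):
    # Colonisation of empty patches, driven by occupied neighbours via an
    # exponential distance kernel; constant extinction on occupied patches.
    conn = (np.exp(-dist) * occ).sum(axis=1)
    p_col = 1.0 - np.exp(-conn)
    col = (rng.random(n_patches) < p_col) & ~occ
    ext = (rng.random(n_patches) < p_ext) & occ
    return (occ | col) & ~ext

occ = np.ones(n_patches, dtype=bool)
pre = [int((occ := step(occ, p_ext=0.1)).sum()) for _ in range(200)]
post = [int((occ := step(occ, p_ext=0.4)).sum()) for _ in range(200)]

pre_mean = np.mean(pre[100:])    # quasi-equilibrium occupancy, pre-impact
post_mean = np.mean(post[100:])  # occupancy after the imposed impact
```

Detecting the impact statistically from sampled occupancy snapshots, the question the abstract addresses, is then a matter of comparing pre- and post-impact samples.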
Abstract:
We compare the performance of two different low-storage filter diagonalisation (LSFD) strategies in the calculation of complex resonance energies of the HO2 radical. The first is carried out within a complex-symmetric Lanczos subspace representation [H. Zhang, S.C. Smith, Phys. Chem. Chem. Phys. 3 (2001) 2281]. The second involves harmonic inversion of a real autocorrelation function obtained via a damped Chebychev recursion [V.A. Mandelshtam, H.S. Taylor, J. Chem. Phys. 107 (1997) 6756]. We find that while the Chebychev approach has the advantage of utilizing real algebra in the time-consuming process of generating the vector recursion, the Lanczos method (using complex vectors) requires fewer iterations, especially for the low-energy part of the spectrum. The overall efficiency in calculating resonances for these two methods is comparable for this challenging system. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Background Diastolic dysfunction induced by ischemia may alter transmitral blood flow, but this reflects global ventricular function, and pseudonormalization may occur with increased preload. Tissue Doppler may assess regional diastolic function and is relatively load-independent, but limited data exist regarding its application to stress testing. We sought to examine the stress response of regional diastolic parameters to dobutamine echocardiography (DbE). Methods Sixty-three patients underwent study with DbE: 20 with low probability of coronary artery disease (CAD) and 43 with CAD who underwent angiography. A standard DbE protocol was used, and segments were categorized as ischemic, scar, or normal. Color tissue Doppler was acquired at baseline and peak stress, and waveforms in the basal and mid segments were used to measure early filling (Em), late filling (Am), and E deceleration time. Significant CAD was defined by stenoses >50% vessel diameter. Results Diastolic parameters had limited feasibility because of merging of Em and Am waves at high heart rates and limited reproducibility. Nonetheless, compared with normal segments, segments subtended by significant stenoses showed a lower Em velocity at rest (6.2 +/- 2.6 cm/s vs 4.8 +/- 2.2 cm/s, P < .0001) and peak (7.5 +/- 4.2 cm/s vs 5.1 +/- 3.6 cm/s, P < .0001). Abnormal segments also showed a shorter E deceleration time (51 +/- 27 ms vs 41 +/- 27 ms, P = .0001) at baseline and peak. No changes were documented in Am. The same pattern was seen with segments identified as ischemic by wall motion score. However, in the absence of ischemia, segments of patients with left ventricular hypertrophy showed a lower Em velocity, with blunted Em responses to stress. Conclusion Regional diastolic function is sensitive to ischemia. However, several practical limitations restrict the applicability of diastolic parameters for the quantification of stress echocardiography.