987 results for Probability generating function
Abstract:
PhD in Mathematics
Abstract:
This paper shows that the proposed Rician shadowed model for multi-antenna communications allows for the unification of a wide set of models, both for multiple-input multiple-output (MIMO) and single-input single-output (SISO) communications. The MIMO Rayleigh and MIMO Rician can be deduced from the MIMO Rician shadowed, as can their SISO counterparts. Other more general SISO models, besides the Rician shadowed, are included in the model, such as the κ-μ and its recent generalization, the κ-μ shadowed model. Moreover, the SISO η-μ and Nakagami-q models are also included in the MIMO Rician shadowed model. The literature already presents the probability density function (pdf) of the Rician shadowed Gram channel matrix in terms of the well-known gamma-Wishart distribution. We here derive its moment generating function in a tractable form. Closed-form expressions for the cumulative distribution function and the pdf of the maximum eigenvalue are also derived.
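To illustrate the kind of sanity check such closed-form moment generating functions enable, the sketch below estimates the MGF of the received power by Monte Carlo for the SISO Rician special case. The normalization, sample size, and parameter values are assumptions for the example, not taken from the paper; for K = 0 the model collapses to Rayleigh fading, whose power is exponential with unit mean and MGF 1/(1 - s).

```python
import math, random

def rician_power_sample(K, rng):
    """One sample of |h|^2 for normalized Rician fading with factor K, E[|h|^2] = 1."""
    mean = math.sqrt(K / (K + 1))
    sigma = math.sqrt(1.0 / (2.0 * (K + 1)))
    x = mean + rng.gauss(0.0, sigma)   # in-phase component (carries the LOS mean)
    y = rng.gauss(0.0, sigma)          # quadrature component
    return x * x + y * y

def mgf_estimate(s, K, n=200_000, seed=1):
    """Monte Carlo estimate of the power MGF, E[exp(s * |h|^2)]."""
    rng = random.Random(seed)
    return sum(math.exp(s * rician_power_sample(K, rng)) for _ in range(n)) / n
```

For K = 0 and s = -1 the estimate should be close to 1/(1 - (-1)) = 0.5, the exponential-power MGF.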
Abstract:
OBJECTIVE: To estimate reference values and a ranking function for Brazilian faculty in Public Health (Saúde Coletiva) through an analysis of the distribution of the h-index. METHODS: From the portal of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, 934 faculty members were identified in 2008, of whom 819 were analyzed. Each member's h-index was obtained from the Web of Science using search algorithms that controlled for homonyms and alternative name spellings. For each region, and for Brazil as a whole, an exponential probability density function was fitted, parameterized by the regional mean and decay rate. Position measures were identified and, using the complement of the cumulative probability function, a ranking function among authors according to h-index was built for each region. RESULTS: Of the faculty members, 29.8% had no citation record at all (h = 0). The mean h for the country was 3.1, with the highest mean in the South region (4.7). The median h for the country was 2.1, again highest in the South (3.2). Standardizing the author population to one hundred, the top-ranked authors for the country should have h = 16; stratifying by region, the first position demands higher values in the Northeast, Southeast, and South, reaching h = 24 in the latter. CONCLUSIONS: Judged by their Web of Science h-indices, most authors in Public Health do not exceed h = 5. There are differences among the regions, with the best performance in the South and similar values in the Southeast and Northeast.
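A minimal sketch of the ranking logic under the paper's exponential-density assumption: the h required of the top author among a standardized population of one hundred solves exp(-h/mean) = 1/100, i.e. the complement of the cumulative probability equals the top author's share. Using the reported national mean of 3.1, this continuous approximation gives h near 14, close to but not exactly the reported h = 16 (the paper's fitted rates and the discreteness of h differ from this toy calculation).

```python
import math

def top_rank_h(mean_h, population=100):
    """h-index needed for rank 1 among `population` authors, assuming
    h follows an exponential density with the given mean: solve
    exp(-h / mean_h) = 1 / population for h."""
    return mean_h * math.log(population)
```

For example, with the South region's mean of 4.7 the same formula gives roughly h = 22, in the neighborhood of the reported regional value of 24.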
Abstract:
The exact time-dependent solution for the stochastic equations governing the behavior of a binary self-regulating gene is presented. Using the generating function technique to rephrase the master equations in terms of partial differential equations, we show that the model is totally integrable and the analytical solutions are the celebrated confluent Heun functions. Self-regulation plays a major role in the control of gene expression, and it is remarkable that such a microscopic model is completely integrable in terms of well-known complex functions.
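The full self-regulating model solved in the paper requires confluent Heun functions, but the generating-function machinery it builds on is easy to check on the simpler constitutive (unregulated) birth-death gene, where the method predicts a Poisson steady state with mean k/d. The Gillespie simulation below is a generic check of that textbook special case, not an implementation of the paper's model; rates and horizon are assumptions for the example.

```python
import random

def gillespie_mean(k=10.0, d=1.0, t_end=2000.0, seed=2):
    """Stochastic simulation of constitutive expression: 0 -> n+1 at rate k,
    n -> n-1 at rate d*n.  Returns the time-averaged copy number, which should
    approach k/d, the mean of the Poisson steady state predicted by the
    generating-function solution of the master equation."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    while t < t_end:
        total = k + d * n                    # total event rate
        dt = rng.expovariate(total)          # waiting time to next event
        area += n * min(dt, t_end - t)       # accumulate time-weighted copy number
        t += dt
        if rng.random() < k / total:         # birth with probability k/total
            n += 1
        else:                                # otherwise degradation
            n -= 1
    return area / t_end
```

With k = 10 and d = 1 the time average settles near 10.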
Abstract:
Consider N sites randomly and uniformly distributed in a d-dimensional hypercube. A walker explores this disordered medium going to the nearest site, which has not been visited in the last mu (memory) steps. The walker trajectory is composed of a transient part and a periodic part (cycle). For one-dimensional systems, travelers can or cannot explore all available space, giving rise to a crossover between localized and extended regimes at the critical memory mu(1) = log(2) N. The deterministic rule can be softened to consider more realistic situations with the inclusion of a stochastic parameter T (temperature). In this case, the walker movement is driven by a probability density function parameterized by T and a cost function. The cost function increases as the distance between two sites and favors hops to closer sites. As the temperature increases, the walker can escape from cycles that are reminiscent of the deterministic nature and extend the exploration. Here, we report an analytical model and numerical studies of the influence of the temperature and the critical memory in the exploration of one-dimensional disordered systems.
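The deterministic (zero-temperature) rule described above can be sketched directly: drop N random sites on a line, repeatedly hop to the nearest site not visited in the last mu steps, and stop when the trajectory state repeats, which marks entry into the cycle. The site count, memory values, and seed below are assumptions for the example, not the paper's parameters.

```python
import random

def tourist_walk(n_sites=50, mu=1, seed=3):
    """Deterministic tourist walk on a 1-D disordered medium (mu >= 1).
    At each step move to the nearest site not among the last `mu` visited.
    Returns the number of distinct sites explored before a cycle repeats."""
    rng = random.Random(seed)
    sites = sorted(rng.random() for _ in range(n_sites))
    current = 0
    memory = [current]                 # last mu visited sites, current included
    seen_states = set()
    explored = {current}
    while True:
        state = (current, tuple(memory))
        if state in seen_states:       # deterministic dynamics entered its cycle
            return len(explored)
        seen_states.add(state)
        candidates = [i for i in range(n_sites) if i not in memory]
        current = min(candidates, key=lambda i: abs(sites[i] - sites[current]))
        explored.add(current)
        memory = (memory + [current])[-mu:]
```

Short memories trap the walker in small cycles; memories on the order of log2(N) let it extend the exploration, consistent with the crossover described above.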
Abstract:
Context. Observations in the cosmological domain are heavily dependent on the validity of the cosmic distance-duality (DD) relation, eta = D_L(z)(1+z)^2/D_A(z) = 1, an exact result required by the Etherington reciprocity theorem, where D_L(z) and D_A(z) are, respectively, the luminosity and angular diameter distances. In the limit of very small redshifts, D_A(z) = D_L(z) and this ratio is trivially satisfied. Measurements of the Sunyaev-Zeldovich effect (SZE) and X-rays combined with the DD relation have been used to determine D_A(z) from galaxy clusters. This combination offers the possibility of testing the validity of the DD relation, as well as determining which physical processes occur in galaxy clusters via their shapes. Aims. We use WMAP (7-year) results, fixing the conventional Lambda CDM model, to verify the consistency between the validity of the DD relation and different assumptions about galaxy cluster geometries usually adopted in the literature. Methods. We assume that eta is a function of the redshift, parametrized by two different relations: eta(z) = 1 + eta_0 z and eta(z) = 1 + eta_0 z/(1+z), where eta_0 is a constant parameter quantifying the possible departure from the strict validity of the DD relation. In order to determine the probability density function (PDF) of eta_0, we consider the angular diameter distances from galaxy clusters recently studied by two different groups, assuming elliptical (isothermal) and spherical (non-isothermal) beta models. The strict validity of the DD relation will occur only if the maximum value of the eta_0 PDF is centered on eta_0 = 0. Results. It was found that the elliptical beta model is in good agreement with the data, showing no violation of the DD relation (PDF peaked close to eta_0 = 0 at 1 sigma), while the spherical (non-isothermal) one is only marginally compatible at 3 sigma. Conclusions.
The present results, derived by combining the SZE and X-ray surface brightness data from galaxy clusters with the latest WMAP (7-year) results, favor the elliptical geometry for galaxy clusters. It is remarkable that a local property like the geometry of galaxy clusters might be constrained by a global argument provided by the cosmic DD relation.
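Under a Gaussian likelihood, the eta_0 estimate for either parametrization reduces to a one-parameter weighted least-squares fit, which can be sketched as below. The mock data in the test are illustrative, not the clusters used in the paper.

```python
def fit_eta0(z, eta_obs, sigma, parametrization="linear"):
    """Weighted least-squares estimate of eta_0 for eta(z) = 1 + eta_0 * x(z),
    with x(z) = z ('linear') or x(z) = z/(1+z) ('rational').
    Gaussian, independent errors are assumed; strict DD validity means eta_0 = 0."""
    xs = [zi if parametrization == "linear" else zi / (1.0 + zi) for zi in z]
    num = sum(x * (e - 1.0) / s**2 for x, e, s in zip(xs, eta_obs, sigma))
    den = sum(x * x / s**2 for x, s in zip(xs, sigma))
    return num / den
```

Feeding in noise-free mock measurements generated with eta_0 = 0.2 recovers that value exactly, which is a convenient self-test before applying the fit to real cluster distances.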
Abstract:
We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time tau(G_N) of a planar graph G_N of N vertices and maximal degree d is lower bounded by tau(G_N) >= C_d N (ln N)^2 with C_d = (d/4 pi) tan(pi/d), with equality holding for some geometries. We tested this conjecture on the regular honeycomb (d = 3), regular square (d = 4), regular elongated triangular (d = 5), and regular triangular (d = 6) lattices, as well as on the nonregular Union Jack lattice (d_min = 4, d_max = 8). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover time probability distribution functions in general.
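The conjectured bound and a toy cover-time experiment are easy to sketch. The code below evaluates C_d N (ln N)^2 and estimates the mean cover time of a simple random walk on a small square torus; the lattice size and trial count are assumptions for the example and are far smaller than the simulations reported above, so only rough agreement should be expected.

```python
import math, random

def cover_bound(N, d):
    """Conjectured lower bound C_d * N * (ln N)^2, with C_d = (d/(4*pi)) * tan(pi/d)."""
    C = (d / (4.0 * math.pi)) * math.tan(math.pi / d)
    return C * N * math.log(N) ** 2

def torus_cover_time(L=6, trials=50, seed=4):
    """Monte Carlo mean cover time of a simple random walk on an L x L square torus."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = y = 0
        visited = {(0, 0)}
        steps = 0
        while len(visited) < L * L:          # walk until every vertex is seen
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = (x + dx) % L, (y + dy) % L
            visited.add((x, y))
            steps += 1
        total += steps
    return total / trials
```

For d = 4 the constant reduces to 1/pi, since tan(pi/4) = 1.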
Abstract:
Void fraction sensors are important instruments not only for monitoring two-phase flow, but also for furnishing an important parameter for obtaining the flow pattern map and the two-phase flow heat transfer coefficient. This work presents the experimental results obtained with the analysis of two axially spaced multiple-electrode impedance sensors tested in an upward air-water two-phase flow in a vertical tube for void fraction measurements. An electronic circuit was developed for signal generation and post-treatment of each sensor signal. By phase shifting the electrodes supplying the signal, it was possible to establish a rotating electric field sweeping across the test section. The fundamental principle of using a multiple-electrode configuration is based on reducing signal sensitivity to the non-uniform cross-section void fraction distribution problem. Static calibration curves were obtained for both sensors, and dynamic signal analyses for bubbly, slug, and turbulent churn flows were carried out. Flow parameters such as Taylor bubble velocity and length were obtained by using cross-correlation techniques. As an application of the tested void fraction sensors, vertical flow pattern identification could be established by using the probability density function technique for void fractions ranging from 0% to nearly 70%.
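The cross-correlation step for the two axially spaced sensors can be sketched with synthetic signals: a pulse (a passing slug) is seen upstream and again downstream after a transit delay, and the lag maximizing the cross-correlation gives the velocity as spacing over delay. The electrode spacing (50 mm), sampling rate (1 kHz), and pulse shapes are assumptions for the example, not the paper's values.

```python
def cross_corr_lag(up, down):
    """Lag (in samples) at which the downstream signal best matches the
    upstream one, found by brute-force cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    n = len(up)
    for lag in range(n // 2):
        score = sum(up[i] * down[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# synthetic slug passage: a pulse seen upstream, then 15 samples later downstream
up = [1.0 if 20 <= i < 30 else 0.0 for i in range(200)]
down = [1.0 if 35 <= i < 45 else 0.0 for i in range(200)]
lag = cross_corr_lag(up, down)
velocity = 0.05 / (lag * 0.001)   # assumed 50 mm spacing, 1 kHz sampling -> m/s
```

Here the recovered lag is 15 samples, i.e. 15 ms, giving a Taylor-bubble velocity of about 3.3 m/s under the assumed geometry.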
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. (C) 2010 Elsevier B.V. All rights reserved.
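The relation between the density, reliability, and failure rate used above is generic: h(t) = f(t)/R(t) with R(t) = 1 - F(t). The sketch below computes it numerically for any density on [0, t]; it is a general-purpose check, not the paper's distribution, and the exponential test case (whose hazard is constant) is an assumption for the example.

```python
import math

def hazard(pdf, t, n=500):
    """Failure (hazard) rate h(t) = f(t) / R(t), where the reliability
    R(t) = 1 - F(t) is obtained by trapezoid-rule integration of the density."""
    xs = [i * t / n for i in range(n + 1)]
    ys = [pdf(x) for x in xs]
    F = sum((ys[i] + ys[i + 1]) * (t / n) / 2.0 for i in range(n))
    return pdf(t) / (1.0 - F)
```

An exponential density with rate 2 should return a hazard of 2 at any t, which makes a quick correctness check; an increasing-failure-rate density would instead return values growing with t.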
Abstract:
For the first time, we introduce and study some mathematical properties of the Kumaraswamy Weibull distribution, a quite flexible model for analyzing positive data. It contains as special sub-models the exponentiated Weibull, exponentiated Rayleigh, exponentiated exponential, Weibull and also the new Kumaraswamy exponential distribution. We provide explicit expressions for the moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and Renyi entropy. The moments of the order statistics are calculated. We also discuss the estimation of the parameters by maximum likelihood. We obtain the expected information matrix. We provide applications involving two real data sets on failure times. Finally, some multivariate generalizations of the Kumaraswamy Weibull distribution are discussed. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
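A minimal sketch of the distribution's CDF, assuming the usual Kumaraswamy-G construction F(x) = 1 - (1 - G(x)^a)^b with G the Weibull CDF; the particular parametrization of the Weibull baseline below (scale lam, shape c) is an assumption for the example. Setting a = b = 1 recovers the plain Weibull, one of the special sub-models listed above.

```python
import math

def kumaraswamy_weibull_cdf(x, a, b, c, lam):
    """CDF of the Kumaraswamy Weibull distribution under the Kumaraswamy-G
    construction: F(x) = 1 - (1 - G(x)^a)^b, with Weibull baseline
    G(x) = 1 - exp(-(lam*x)^c), for x >= 0."""
    G = 1.0 - math.exp(-((lam * x) ** c))
    return 1.0 - (1.0 - G ** a) ** b
```

With a = b = 1 the value at any x matches the baseline Weibull CDF exactly, which makes a convenient unit test for the implementation.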
Abstract:
Time-domain reflectometry (TDR) is an important technique to obtain series of soil water content measurements in the field. Diode-segmented probes represent an improvement in TDR applicability, allowing measurements of the soil water content profile with a single probe. In this paper we explore an extensive soil water content dataset obtained by tensiometry and TDR from internal drainage experiments in two consecutive years in a tropical soil in Brazil. Comparisons between the variation patterns of the water content estimated by both methods exhibited evidence of deterioration of the TDR system during this two-year period under field conditions. The results showed consistency in the variation pattern for the tensiometry data, whereas TDR estimates were inconsistent, with sensitivity decreasing over time. This suggests that difficulties may arise for the long-term use of this TDR system under tropical field conditions. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
THE PROBIT MODEL IS A POPULAR DEVICE for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
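The effect of a sign-restricted prior is easy to see in a stripped-down version of the problem: a single-coefficient probit posterior evaluated on a grid, once with a flat prior and once with the same prior truncated to positive values. The toy data and grid are assumptions for the example, not the mortgage dataset studied above.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def posterior_mean(xs, ys, grid, prior):
    """Posterior mean of a single probit coefficient on a discrete grid.
    `prior(b)` returns an (unnormalized) prior density value at b."""
    weights = []
    for b in grid:
        ll = 0.0
        for x, y in zip(xs, ys):
            p = min(max(phi(b * x), 1e-12), 1 - 1e-12)  # clip for log safety
            ll += math.log(p) if y == 1 else math.log(1.0 - p)
        weights.append(prior(b) * math.exp(ll))
    Z = sum(weights)
    return sum(b * w for b, w in zip(grid, weights)) / Z

# toy data in which larger x makes y = 1 more likely
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 1, 1, 1, 1]
grid = [i * 0.05 for i in range(-80, 81)]
flat = posterior_mean(xs, ys, grid, lambda b: 1.0)
trunc = posterior_mean(xs, ys, grid, lambda b: 1.0 if b > 0 else 0.0)
```

Truncating the prior to positive coefficients removes all posterior mass below zero, so the truncated posterior mean can never fall below the unrestricted one; this is the grid analogue of the inequality-restricted prior discussed above.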
Abstract:
A new modeling approach-multiple mapping conditioning (MMC)-is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and the conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by a comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
Abstract:
We investigate a mechanism that generates exact solutions of scalar field cosmologies in a unified way. The procedure investigated here permits the recovery of almost all known solutions, and allows one to derive new solutions as well. In particular, we derive and discuss one novel solution defined in terms of the Lambert function. The solutions are organised in a classification which depends on the choice of a generating function, denoted by x(phi), that reflects the underlying thermodynamics of the model. We also analyse and discuss the existence of form-invariance dualities between solutions. A general way of defining the latter in an appropriate fashion for scalar fields is put forward.
Abstract:
In recent decades, considerations about equipment availability have become an important issue, as has its dependence on component characteristics such as reliability and maintainability. This is particularly important for high-risk industrial equipment, where these factors play a fundamental role in risk management when safety or large economic values are at stake. As availability is a function of reliability, maintainability, and maintenance support activities, the main goal is to improve one or more of these factors. This paper intends to show how maintainability can influence availability and presents a methodology to select the most important attributes for maintainability using a partial Multi Criteria Decision Making (pMCDM) approach. Improvements in maintainability can be analyzed by treating it as a probability associated with a restore probability density function g(t).
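The availability-maintainability link above can be sketched with the standard steady-state formula A = MTBF / (MTBF + MTTR), where the mean time to repair is the mean of the restore density g(t); this generic relation is an illustration, not the paper's pMCDM methodology.

```python
def availability(mtbf, mttr):
    """Steady-state availability from the mean time between failures
    (reliability) and the mean time to repair (maintainability, i.e. the
    mean of the restore probability density function g(t))."""
    return mtbf / (mtbf + mttr)
```

Halving the mean repair time while holding reliability fixed raises availability, which is exactly the leverage of maintainability improvements discussed above.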