15 results for Poisson Representation

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance: 20.00%

Abstract:

We examine Weddell Sea deep water mass distributions with respect to the results from three different model runs using the oceanic component of the National Center for Atmospheric Research Community Climate System Model (NCAR-CCSM). One run is inter-annually forced by corrected NCAR/NCEP fluxes, while the other two are forced with the annual cycle obtained from the same climatology. One of the latter runs includes an interactive sea-ice model. Optimum Multiparameter analysis is applied to separate the deep water masses in the Greenwich Meridian section (into the Weddell Sea only) to measure the degree of realism obtained in the simulations. First, we describe the distribution of the simulated deep water masses using observed water type indices. Since, as expected, the observed indices do not provide an acceptable representation of the Weddell Sea deep water masses, they are specifically adjusted for each simulation. Differences among the water masses' representations in the three simulations are quantified through their root-mean-square differences. Results point out the need for better representation (and inclusion) of ice-related processes in order to improve the oceanic characteristics and variability of dense Southern Ocean water masses in the outputs of the NCAR-CCSM model, and probably in other ocean and climate models.
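The comparison metric named above is a root-mean-square difference between simulated water-mass representations; a minimal sketch of that computation, using hypothetical fraction fields rather than actual model output:

    import numpy as np

    # Hypothetical water-mass fraction fields from two simulations on the same
    # section grid (depth x latitude); values stand in for OMP-derived fractions.
    frac_run_a = np.random.default_rng(0).uniform(0, 1, size=(50, 80))
    frac_run_b = np.random.default_rng(1).uniform(0, 1, size=(50, 80))

    # Root-mean-square difference quantifying how much the two simulated
    # representations of a water mass disagree over the section.
    rms_diff = np.sqrt(np.mean((frac_run_a - frac_run_b) ** 2))
    print(f"RMS difference between the two representations: {rms_diff:.3f}")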

Relevance: 20.00%

Abstract:

The Conway-Maxwell-Poisson (COMP) distribution, an extension of the Poisson distribution, is a popular model for analyzing count data. We introduce, for the first time, a new three-parameter distribution, called the exponential-Conway-Maxwell-Poisson (ECOMP) distribution, which contains as sub-models the exponential-geometric and exponential-Poisson distributions proposed by Adamidis and Loukas (Stat Probab Lett 39:35-42, 1998) and Kuş (Comput Stat Data Anal 51:4497-4509, 2007), respectively. The new density function can be expressed as a mixture of exponential density functions. Expansions for the moments, the moment generating function and some statistical measures are provided. The density function of the order statistics can also be expressed as a mixture of exponential densities. We derive two formulae for the moments of order statistics. The elements of the observed information matrix are provided. Two applications illustrate the usefulness of the new distribution for analyzing positive data.
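The baseline COMP distribution that the ECOMP model builds on has pmf P(X = k) ∝ λ^k / (k!)^ν; a minimal sketch, with the normalizing series truncated (the truncation length is an arbitrary choice here, not taken from the paper):

    import math

    def comp_pmf(k, lam, nu, truncation=100):
        # Conway-Maxwell-Poisson pmf: P(X = k) = lam**k / ((k!)**nu * Z(lam, nu)),
        # where Z(lam, nu) = sum_j lam**j / (j!)**nu is approximated by truncation.
        # Work in log space to avoid overflowing the factorials.
        log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(truncation)]
        m = max(log_terms)
        log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
        return math.exp(k * math.log(lam) - nu * math.lgamma(k + 1) - log_z)

    # nu = 1 recovers the ordinary Poisson pmf.
    print(comp_pmf(3, lam=2.0, nu=1.0))                   # ~0.1804
    print(math.exp(-2.0) * 2.0 ** 3 / math.factorial(3))  # Poisson(2) at k = 3, same value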

Relevance: 20.00%

Abstract:

The purpose of this paper is to develop a Bayesian analysis for right-censored survival data when immune or cured individuals may be present in the population from which the data are taken. In our approach, the number of competing causes of the event of interest follows the Conway-Maxwell-Poisson distribution, which generalizes the Poisson distribution. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the proposed model. Model selection is also discussed, and an illustration with a real data set is presented.
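Under the usual latent-competing-causes formulation that such cure rate models share (an assumption here, since the abstract does not spell it out), the population survival function is the probability generating function of the latent count M evaluated at the baseline survival, and the cured proportion is the probability of zero causes:

    S_pop(t) = sum_{m >= 0} P(M = m) * S(t)^m = G_M(S(t)),        p_cure = P(M = 0),

with M following the Conway-Maxwell-Poisson distribution in this model.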

Relevance: 20.00%

Abstract:

We use computer algebra to study polynomial identities for the trilinear operation [a, b, c] = abc - acb - bac + bca + cab - cba in the free associative algebra. It is known that [a, b, c] satisfies the alternating property in degree 3, no new identities in degree 5, a multilinear identity in degree 7 which alternates in 6 arguments, and no new identities in degree 9. We use the representation theory of the symmetric group to demonstrate the existence of new identities in degree 11. The only irreducible representations of dimension < 400 with new identities correspond to the partitions 2^5 1 and 2^4 1^3 and have dimensions 132 and 165. We construct an explicit new multilinear identity for the partition 2^5 1, and we demonstrate the existence of a new non-multilinear identity in which the underlying variables are permutations of a^2 b^2 c^2 d^2 e^2 f.
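The degree-3 alternating property mentioned above is easy to check by machine; a minimal sketch using SymPy's noncommutative symbols (SymPy stands in for the computer algebra system actually used, which the abstract does not name):

    from sympy import expand, symbols

    a, b, c = symbols('a b c', commutative=False)

    def tri(x, y, z):
        # The trilinear operation [x, y, z] = xyz - xzy - yxz + yzx + zxy - zyx.
        return x*y*z - x*z*y - y*x*z + y*z*x + z*x*y - z*y*x

    # Alternating in degree 3: swapping any two arguments flips the sign,
    # so these sums expand to zero.
    print(expand(tri(a, b, c) + tri(b, a, c)))   # 0
    print(expand(tri(a, b, c) + tri(a, c, b)))   # 0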

Relevance: 20.00%

Abstract:

ACID-BASE REACTIONS: CONCEPT, REPRESENTATION AND GENERALIZATION FROM THE ENERGY INVOLVED IN TRANSFORMATIONS. Undergraduate students in the first year of chemistry courses are unfamiliar with the representation of acid-base reactions using the ionic equation H+ + OH- -> H2O. A chemistry class on acid-base reactions was proposed, using theory and the experimental evaluation of the heat of neutralization to discuss the energy involved when water is formed from H+ and OH- ions. The experiment is suggested using different pairs of strong acids and strong bases. Presenting the theme within a chemistry class for high school teachers increased the number of individuals who viewed the acid-base reaction from this perspective.
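The heat of neutralization in such an experiment is typically obtained from simple coffee-cup calorimetry; a minimal sketch with illustrative numbers (not data from the paper), assuming the mixed solution has the density and heat capacity of water:

    # Hypothetical calorimetry data for a strong acid / strong base neutralization.
    volume_acid_mL = 50.0      # 1.0 mol/L HCl
    volume_base_mL = 50.0      # 1.0 mol/L NaOH
    delta_T = 6.7              # temperature rise in deg C
    c_water = 4.18             # J g^-1 degC^-1, assuming the mixture behaves like water
    density = 1.00             # g mL^-1, same assumption

    mass = (volume_acid_mL + volume_base_mL) * density      # g of solution
    q_released = mass * c_water * delta_T                   # J absorbed by the solution
    moles_water = 0.050                                     # mol of H2O formed
    delta_H = -q_released / moles_water / 1000              # kJ per mol of H+ + OH- -> H2O

    print(f"Neutralization enthalpy ~ {delta_H:.1f} kJ/mol")  # close to the ~ -57 kJ/mol textbook value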

Relevance: 20.00%

Abstract:

Background: In the analysis of the effects of cell treatment, such as drug dosing, identifying changes in gene network structure between normal and treated cells is a key task. One possible way to identify the changes is to compare the structures of networks estimated separately from data on normal and treated cells. However, this approach usually fails to estimate accurate gene networks because of the limited length of the time series data and measurement noise. Approaches that identify changes in regulation by using the time series data from both conditions in an efficient manner are therefore needed.

Methods: We propose a new statistical approach that is based on the state space representation of the vector autoregressive model and estimates gene networks under two different conditions in order to identify changes in regulation between the conditions. In the mathematical model of our approach, hidden binary variables are newly introduced to indicate the presence of regulations under each condition. The use of the hidden binary variables enables efficient data usage: data from both conditions are used for commonly existing regulations, while for condition-specific regulations only the corresponding data are applied. In addition, the similarity of the networks under the two conditions is automatically taken into account through the design of the potential function for the hidden binary variables. For the estimation of the hidden binary variables, we derive a new variational annealing method that searches for the configuration of the binary variables maximizing the marginal likelihood.

Results: For the performance evaluation, we use time series data from two topologically similar synthetic networks and confirm that our proposed approach estimates commonly existing regulations, as well as changes in regulation, with higher coverage and precision than other existing approaches in almost all the experimental settings. As a real data application, the proposed approach is applied to time series data from normal human lung cells and from human lung cells treated by stimulating EGF receptors and dosing the anticancer drug gefitinib. In the treated lung cells, a cancer cell condition is simulated by the stimulation of EGF receptors, but the effect should be counteracted by the selective inhibition of EGF receptors by gefitinib. Gene expression profiles nevertheless differ between the conditions, and the genes related to the identified changes are considered possible off-targets of gefitinib.

Conclusions: On the synthetically generated time series data, our proposed approach identifies changes in regulation more accurately than existing methods. By applying the proposed approach to the time series data on normal and treated human lung cells, candidate off-target genes of gefitinib are found. According to published clinical information, one of these genes may be related to a factor in interstitial pneumonia, which is a known side effect of gefitinib.
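A minimal sketch of the kind of model structure described above: a first-order vector autoregression in which a binary indicator matrix switches individual regulations on or off per condition. All names, sizes and parameter values are illustrative assumptions, and the sketch omits the state space formulation, the potential function and the variational annealing estimation:

    import numpy as np

    rng = np.random.default_rng(42)
    n_genes, n_time = 5, 60

    # Hypothetical regulatory strengths A and condition-specific presence
    # indicators G (1 = regulation present, 0 = absent), echoing the hidden
    # binary variables of the model.
    A = rng.normal(0, 0.4, size=(n_genes, n_genes))
    G_normal = (rng.random((n_genes, n_genes)) < 0.3).astype(float)
    G_treated = G_normal.copy()
    G_treated[1, 3] = 1 - G_treated[1, 3]   # one regulation changes between conditions

    def simulate(G, A, n_time, noise=0.1):
        # VAR(1) dynamics x_t = (G * A) x_{t-1} + noise: only regulations
        # flagged by G contribute to the dynamics.
        x = np.zeros((n_time, A.shape[0]))
        x[0] = rng.normal(size=A.shape[0])
        for t in range(1, n_time):
            x[t] = (G * A) @ x[t - 1] + rng.normal(0, noise, size=A.shape[0])
        return x

    x_normal, x_treated = simulate(G_normal, A, n_time), simulate(G_treated, A, n_time)
    print("simulated shapes:", x_normal.shape, x_treated.shape)
    print("regulation that differs between conditions:", np.argwhere(G_normal != G_treated))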

Relevance: 20.00%

Abstract:

In this paper, a new family of survival distributions is presented. It is derived by assuming that the latent number of failure causes follows a Poisson distribution and that the time for these causes to be activated follows an exponential distribution. Three different activation schemes are also considered. Moreover, we propose the inclusion of covariates in the model formulation in order to study their effect on the expected number of causes and on the failure rate function. An inferential procedure based on the maximum likelihood method is discussed and evaluated via simulation. The developed methodology is illustrated on a real data set on ovarian cancer.
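A minimal numerical sketch of one of the activation schemes (the first-activation case, under the standard latent-causes argument; parameter values are illustrative, not estimates from the paper):

    import numpy as np

    # With M ~ Poisson(theta) latent causes and exponential(lam) activation times,
    # the first-activation population survival function is
    #   S_pop(t) = E[S(t)^M] = exp(-theta * (1 - exp(-lam * t))),
    # and the cured proportion is the limit exp(-theta).
    theta, lam = 1.5, 0.8
    t = np.linspace(0, 10, 6)
    s_pop = np.exp(-theta * (1 - np.exp(-lam * t)))
    print(np.round(s_pop, 3))
    print("cure fraction:", round(np.exp(-theta), 3))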

Relevance: 20.00%

Abstract:

Different representations for a control surface freeplay nonlinearity in a three degree of freedom aeroelastic system are assessed. These are the discontinuous, polynomial and hyperbolic tangent representations. The Duhamel formulation is used to model the aerodynamic loads. Assessment of the validity of these representations is performed through comparison with previous experimental observations. The results show that the instability and nonlinear response characteristics are accurately predicted when using the discontinuous and hyperbolic tangent representations. On the other hand, the polynomial representation fails to predict chaotic motions observed in the experiments.
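A minimal sketch of the three freeplay representations compared above, with illustrative dead-band half-width, stiffness and fitting choices (not the values or fits used in the study):

    import numpy as np

    delta, k = 0.01, 1.0
    alpha = np.linspace(-0.05, 0.05, 501)

    def discontinuous(a):
        # Exact piecewise-linear freeplay: zero restoring moment inside the dead band.
        return np.where(np.abs(a) <= delta, 0.0, k * (a - np.sign(a) * delta))

    def hyperbolic_tangent(a, eps=300.0):
        # Smooth approximation that approaches the discontinuous form as eps grows.
        return 0.5 * k * ((a - delta) * (1.0 + np.tanh(eps * (a - delta)))
                          + (a + delta) * (1.0 - np.tanh(eps * (a + delta))))

    # Polynomial representation: a low-order polynomial fitted to the exact curve.
    coeffs = np.polyfit(alpha, discontinuous(alpha), deg=5)
    polynomial = np.polyval(coeffs, alpha)

    for name, approx in [("tanh", hyperbolic_tangent(alpha)), ("polynomial", polynomial)]:
        err = np.max(np.abs(approx - discontinuous(alpha)))
        print(f"max deviation of {name} representation: {err:.4f}")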

Relevance: 20.00%

Abstract:

Let Ξ ⊆ ℝ^d be a set of centers chosen according to a Poisson point process in ℝ^d. Let ψ be an allocation of ℝ^d to Ξ in the sense of the Gale-Shapley marriage problem, with the additional feature that every center ξ ∈ Ξ has an appetite given by a nonnegative random variable α. Generalizing some previous results, we study large deviations for the distance of a typical point x ∈ ℝ^d to its center ψ(x) ∈ Ξ, subject to some restrictions on the moments of α.
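A sketch of the setting only, simulating a Poisson point process of centers and the distance from a typical point to its nearest center; the actual allocation ψ studied in the paper is the Gale-Shapley stable allocation with random appetites, which this sketch does not implement:

    import numpy as np

    rng = np.random.default_rng(0)
    intensity, d = 200, 2
    n_centers = rng.poisson(intensity)          # Poisson number of centers
    centers = rng.random((n_centers, d))        # uniformly placed in [0, 1]^2

    # Distances from "typical" points to their nearest center.
    typical_points = rng.random((10_000, d))
    dists = np.min(np.linalg.norm(typical_points[:, None, :] - centers[None, :, :], axis=2), axis=1)
    print("mean distance to nearest center:", round(dists.mean(), 4))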

Relevance: 20.00%

Abstract:

Objective: To identify and compare perceptions of pain, and how it is faced, between men and women with central post-stroke pain. Methods: The participants were 25 men and 25 women, aged at least 30 years and with at least four years of schooling, presenting central post-stroke pain for at least three months. The instruments used were: Mini-Mental State Examination; structured interview for the Brief Psychiatric Scale; Survey of Sociodemographic and Clinical Data; Visual Analogue Scale (VAS); Ways of Coping with Problems Scale (WCPS); Revised Illness Perception Questionnaire (IPQ-R); and Beck Depression Inventory (BDI). Results: A significantly greater number of women used the coping strategy "Turn to spiritual and religious activities" in the WCPS, and in the IPQ-R they associated their emotional state with the cause of pain. "Distraction of attention" was the strategy most used by the subjects overall. Conclusion: Women used spiritual and religious activities more as a coping strategy and perceived their emotional state as the cause of pain.

Relevance: 20.00%

Abstract:

In this paper, we propose a random intercept Poisson model in which the random effect is assumed to follow a generalized log-gamma (GLG) distribution. This random effect accommodates the overdispersion in the counts and induces within-cluster correlation. We derive the first two moments of the marginal distribution as well as the intraclass correlation. Even though numerical integration methods are, in general, required for deriving the marginal models, we obtain the multivariate negative binomial model from a particular parameter setting of the hierarchical model. An iterative process is derived for obtaining the maximum likelihood estimates of the parameters in the multivariate negative binomial model. Residual analysis is proposed, and two applications with real data are given for illustration.
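A minimal sketch of the familiar Poisson-gamma mixture whose marginal is negative binomial, illustrating the overdispersion and within-cluster correlation induced by a shared random effect; this is a simplified stand-in, not the paper's full GLG formulation, and all parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(7)
    n_clusters, n_per_cluster = 2000, 4
    mu, shape = 3.0, 2.0                    # marginal mean and gamma shape

    # Poisson counts with a shared multiplicative gamma random effect per cluster:
    # the classic mixture with a negative binomial marginal, so counts are
    # overdispersed relative to Poisson and correlated within clusters.
    u = rng.gamma(shape, 1.0 / shape, size=n_clusters)     # mean-1 gamma random effect
    y = rng.poisson(mu * u[:, None], size=(n_clusters, n_per_cluster))

    print("marginal mean:", y.mean().round(3))             # ~ mu
    print("marginal variance:", y.var().round(3))          # ~ mu + mu**2/shape
    print("expected NB variance:", mu + mu**2 / shape)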

Relevance: 20.00%

Abstract:

We prove that any two Poisson dependent elements in a free Poisson algebra and a free Poisson field of characteristic zero are algebraically dependent, thus answering positively a question from Makar-Limanov and Umirbaev (2007) [8]. We apply this result to give a new proof of the tameness of automorphisms for free Poisson algebras of rank two (see Makar-Limanov and Umirbaev (2011) [9], Makar-Limanov et al. (2009) [10]).

Relevance: 20.00%

Abstract:

In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic model of radiation carcinogenesis: latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP, Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest, as it includes a destructive process of tumour cells after an initial treatment, or the capacity of an individual exposed to irradiation to repair altered cells that result in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Model selection is also discussed, and an illustration with the cutaneous melanoma data set analysed by Rodrigues et al. (cited above) is presented.
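A minimal simulation sketch of the destructive mechanism described above: of N initiated/altered cells, only a binomially thinned portion D survives treatment or repair and can lead to the event. A plain Poisson is used for N here, instead of the paper's compound weighted Poisson, just to show the thinning structure; all parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(3)
    eta, p = 4.0, 0.35     # mean initial number of altered cells, survival probability

    N = rng.poisson(eta, size=100_000)     # latent initiated cells
    D = rng.binomial(N, p)                 # damaged cells remaining after the destructive process

    print("P(cured) = P(D = 0):", round((D == 0).mean(), 3))
    print("theoretical Poisson-thinning value exp(-eta*p):", round(np.exp(-eta * p), 3))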

Relevance: 20.00%

Abstract:

The electrical conductivity σ has been calculated for p-doped GaAs/Al0.3Ga0.7As and cubic GaN/Al0.3Ga0.7N thin superlattices (SLs). The calculations are done within a self-consistent approach to the k·p theory by means of a full six-band Luttinger-Kohn Hamiltonian, together with the Poisson equation in a plane wave representation, including exchange-correlation effects within the local density approximation. It is also assumed that transport in the SL occurs through extended miniband states for each carrier, and the conductivity is calculated at zero temperature and in the low-field ohmic limit by the quasi-chemical Boltzmann kinetic equation. It is shown that the particular miniband structure of the p-doped SLs leads to a plateau-like behavior of the conductivity as a function of the doping concentration and/or the Fermi level energy. In addition, it is shown that the Coulomb and exchange-correlation effects play an important role in these systems, since they determine the bending potential.
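Solving the Poisson equation in a plane-wave representation, as in the self-consistent scheme above, amounts to dividing the Fourier components of the charge density by eps0*eps_r*G^2. A minimal 1D sketch with an illustrative, charge-neutral toy density (parameters are assumptions, not those of the paper):

    import numpy as np

    eps0, eps_r = 8.8541878128e-12, 12.9       # vacuum permittivity, GaAs-like dielectric constant
    L, n = 50e-9, 256                          # SL period and number of plane waves
    z = np.linspace(0, L, n, endpoint=False)
    # Gaussian hole sheet minus a uniform background so the cell is charge neutral (C/m^3).
    rho = 1e24 * 1.602e-19 * (np.exp(-((z - L / 2) / 5e-9) ** 2) - np.sqrt(np.pi) * 5e-9 / L)

    G = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # reciprocal lattice vectors (rad/m)
    rho_G = np.fft.fft(rho)
    phi_G = np.zeros_like(rho_G)
    nonzero = G != 0
    # Poisson equation in reciprocal space: eps0 * eps_r * G^2 * phi(G) = rho(G).
    phi_G[nonzero] = rho_G[nonzero] / (eps0 * eps_r * G[nonzero] ** 2)
    phi = np.real(np.fft.ifft(phi_G))           # electrostatic potential, average set to zero
    print("potential range (V):", round(phi.max() - phi.min(), 4))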

Relevance: 20.00%

Abstract:

An important feature of computer systems developed for the agricultural sector is the ability to handle the heterogeneity of the data generated in different processes. Most problems related to this heterogeneity arise from the lack of a standard shared by the different computing solutions proposed. An efficient solution is to create a single standard for data exchange. The study of the actual process involved in cotton production was based on research developed by the Brazilian Agricultural Research Corporation (EMBRAPA), which reports all phases of the process as a compilation of several theoretical and practical studies related to the cotton crop. The proposal of a standard starts with the identification of the most important classes of data involved in the process and includes an ontology, that is, the systematization of the concepts related to the production of cotton fiber, resulting in a set of classes, relations, functions and instances. The results are used as a reference for the development of computational tools, transforming implicit knowledge into applications that support the knowledge described. This research is based on data from the Midwest of Brazil. The choice of the cotton process as a case study comes from the fact that Brazil is one of the major players in this market and several improvements are required for system integration in this segment.
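A purely hypothetical sketch of what "classes, relations and instances" for such an ontology could look like; none of these class or attribute names come from the EMBRAPA reference process or from the paper's actual ontology:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Field:
        name: str
        area_ha: float

    @dataclass
    class ManagementOperation:           # e.g. planting, pest control, harvest
        phase: str
        performed_on: Field              # relation: operation -> field

    @dataclass
    class CottonLot:
        source_field: Field              # relation: lot -> field of origin
        fiber_quality_grade: str
        operations: List[ManagementOperation] = field(default_factory=list)

    # Instances
    f1 = Field("Fazenda Exemplo - talhao 3", 120.0)
    lot = CottonLot(f1, "middling", [ManagementOperation("planting", f1)])
    print(lot.source_field.name, lot.fiber_quality_grade)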