915 results for Uncertainty in Illness Theory
Abstract:
In this paper we prove some connections between the growth of a function and its Mellin transform and apply these to study an explicit example in the theory of Beurling primes.
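For reference, the Mellin transform in question is the standard one (this normalization is an assumption on my part; the paper's conventions may differ):

\mathcal{M}f(s) \;=\; \int_0^{\infty} f(x)\, x^{s-1}\, \mathrm{d}x,

so that growth estimates on f(x) translate into statements about the region of convergence and boundary behavior of \mathcal{M}f(s), which is the kind of connection exploited in the theory of Beurling generalized primes.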
Abstract:
We calculate the spectra of produced thermal photons in Au + Au collisions taking into account the nonequilibrium contribution to photon production due to finite shear viscosity. The evolution of the fireball is modeled by second-order as well as by divergence-type 2 + 1 dissipative hydrodynamics, both with an ideal equation of state and with one based on lattice QCD that includes an analytical crossover. The spectrum calculated in the divergence-type theory is considerably enhanced with respect to the one calculated in the second-order theory, the difference being entirely due to differences in the viscous corrections to photon production. Our results show that the differences between hydrodynamic formalisms are an important source of uncertainty in the extraction of the value of η/s from measured photon spectra. The uncertainty in the value of η/s associated with the different hydrodynamic models used to compute thermal photon spectra is larger than that involved in matching hadron elliptic flow to RHIC data.
Abstract:
Random effect models have been widely applied in many fields of research. However, models with uncertain design matrices for random effects have received little attention. In some such applications, an expectation method has been used for simplicity; this method, however, discards the extra information carried by the uncertainty in the design matrix. A closed-form solution to this problem is generally difficult to attain. We therefore propose a two-step algorithm for estimating the parameters, especially the variance components of the model. The implementation is based on Monte Carlo approximation and a Newton-Raphson-based EM algorithm. As an example, a simulated genetics dataset was analyzed. The results showed that the proportion of the total variance explained by the random effects was accurately estimated, whereas the expectation method severely underestimated it. By introducing heuristic search and optimization methods, the algorithm could be further developed to infer the 'model-based' best design matrix and the corresponding best estimates.
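The abstract describes the algorithm only at a high level. As a rough illustration, the following Python sketch implements a scheme of the same flavor for a simple Gaussian mixed model y = Xβ + Zu + e with u ~ N(0, σ²_u I), e ~ N(0, σ²_e I), and a finite candidate set for the uncertain design matrix Z. The Gaussian model, the candidate-set formulation, and the closed-form EM updates are assumptions made here for illustration; the authors' actual implementation uses a Newton-Raphson-based EM algorithm.

import numpy as np
from scipy.stats import multivariate_normal

def mc_em(y, X, Z_candidates, n_iter=100):
    """Illustrative two-step EM for a mixed model with an uncertain Z.

    Step 1 weights each candidate design matrix by its marginal
    likelihood (a Monte Carlo-style E-step over Z); step 2 updates the
    fixed effects and variance components given those weights.
    """
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    s2u = s2e = 1.0
    for _ in range(n_iter):
        # Weight candidates: y | Z ~ N(X beta, s2u Z Z' + s2e I).
        logw = []
        for Z in Z_candidates:
            V = s2u * Z @ Z.T + s2e * np.eye(n)
            logw.append(multivariate_normal.logpdf(y, X @ beta, V))
        w = np.exp(np.array(logw) - max(logw))
        w /= w.sum()
        # E-step over u and M-step updates, averaged over candidates.
        A = np.zeros((p, p)); b = np.zeros(p)
        s2u_new = s2e_new = 0.0
        for wk, Z in zip(w, Z_candidates):
            q = Z.shape[1]
            V = s2u * Z @ Z.T + s2e * np.eye(n)
            Vi = np.linalg.inv(V)
            r = y - X @ beta
            u_hat = s2u * Z.T @ Vi @ r                       # E[u | y, Z]
            u_cov = s2u * np.eye(q) - s2u**2 * Z.T @ Vi @ Z  # Cov[u | y, Z]
            s2u_new += wk * (u_hat @ u_hat + np.trace(u_cov)) / q
            e_hat = r - Z @ u_hat
            s2e_new += wk * (e_hat @ e_hat + np.trace(Z @ u_cov @ Z.T)) / n
            A += wk * X.T @ Vi @ X
            b += wk * X.T @ Vi @ y
        beta = np.linalg.solve(A, b)
        s2u, s2e = s2u_new, s2e_new
    return beta, s2u, s2e

The candidate weights play the role the expectation method skips: rather than fixing a single averaged design matrix, every plausible Z contributes to the variance-component updates in proportion to how well it explains the data.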
Abstract:
This paper presents techniques of likelihood prediction for generalized linear mixed models. Methods of likelihood prediction are explained through a series of examples, from a classical one to more complicated ones. The examples show that, in simple cases, likelihood prediction (LP) coincides with established best frequentist practice such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson errors-in-variables generalized linear model, it is shown that in complicated cases LP produces better results than already known methods.
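For reference, one common formalization of likelihood prediction (not necessarily the exact variant developed in the paper) treats the unobserved future value z as an additional unknown and profiles the parameters out of the joint density:

L_p(z \mid y) \;=\; \sup_{\theta}\, f(y, z;\, \theta), \qquad \hat{z} \;=\; \arg\max_{z}\, L_p(z \mid y),

so that the predictor and its uncertainty are read off the predictive likelihood itself rather than from a plug-in distribution.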
Abstract:
This doctoral thesis is dedicated to the study of financial instability and dynamics in Monetary Theory. It is shown that bank runs are eliminated at no cost in the standard model of banking theory when the population is not small. An extension is proposed in which aggregate uncertainty is more severe and the cost of financial stability is relevant. Finally, the optimality of transitions in the distribution of money is established for economies in which trade opportunities are scarce and heterogeneous; in particular, the optimality of inflation depends on the dynamic incentives provided by such transitions. Chapter 1 establishes the costless-stability result for large economies by studying the effects of population size in the Peck & Shell analysis of bank runs. In Chapter 2, the optimality of dynamics is studied in the Kiyotaki & Wright monetary model when society is able to implement an inflationary policy. Although it adopts a mechanism-design approach, this chapter parallels the analysis of Sargent & Wallace (1981) by highlighting the effects of dynamic incentives on the interaction between monetary and fiscal policies. Chapter 3 returns to the theme of financial stability by quantifying the costs involved in the optimal design of a run-proof banking sector and by proposing an alternative informational structure that allows for insolvent banks. The first analysis shows that the optimal stability scheme exhibits high long-term interest rates; the second shows that imperfect monitoring can lead to bank runs with insolvency.
Abstract:
Many years ago Zel'dovich showed how the Lagrange condition in the theory of differential equations can be utilized in the perturbation theory of quantum mechanics. Zel'dovich's method enables us to circumvent the summation over intermediate states. As compared with other similar methods, in particular the logarithmic perturbation expansion method, we emphasize that this relatively unknown method of Zel'dovich has a remarkable advantage in dealing with excited states: the ground and excited states can all be treated in the same way, and the nodes of the unperturbed wavefunction do not give rise to any complication.
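For context, what such methods avoid is the spectral representation of the first-order correction. Assuming standard Rayleigh-Schrödinger notation (not taken from the paper itself), they solve the inhomogeneous equation

(H_0 - E_n^{(0)})\, \psi_n^{(1)} \;=\; \bigl(E_n^{(1)} - V\bigr)\, \psi_n^{(0)}, \qquad E_n^{(1)} \;=\; \langle \psi_n^{(0)} \mid V \mid \psi_n^{(0)} \rangle,

directly for \psi_n^{(1)}, instead of evaluating the sum over intermediate states \sum_{m \neq n} \langle m \mid V \mid n \rangle\, \psi_m^{(0)} / (E_n^{(0)} - E_m^{(0)}).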
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Predictability is related to the uncertainty in the outcome of future events during the evolution of the state of a system. Cluster weighted modeling (CWM) is interpreted as a tool to detect such uncertainty and is applied to spatially distributed systems. As such, a simple prediction algorithm in conjunction with CWM forms a powerful set of methods for relating predictability and dimension.
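In the usual formulation of cluster weighted modeling (taken from the general literature; the paper's notation may differ), the joint density is expanded over M local models c_m,

p(y, \mathbf{x}) \;=\; \sum_{m=1}^{M} p(y \mid \mathbf{x}, c_m)\, p(\mathbf{x} \mid c_m)\, p(c_m),

and the predictive uncertainty referred to above is the spread of the conditional density p(y \mid \mathbf{x}) obtained from this expansion, e.g. its conditional variance.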
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in medium-voltage power networks, together with the methods developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The quality of the voltage provided by utilities, and influenced by customers, at the various points of a network became a concern only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the quality of the delivered energy was associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context, it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation, the information registered in real time by its remote stations can also be used to improve system reliability. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages. The outcome of this study is the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the intervention of protection equipment. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows. Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation; the state of the art concerning methods to detect and locate faults in distribution networks is then presented, followed by the particular technique adopted for this purpose in the thesis and the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault-location method was applied by means of simulations, together with the results obtained case by case.
In this way, the performance of the location procedure is tested first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient-detection and fault-location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test; finally, this parameter is computed by means of a numerical procedure, in accordance with the Guide to the Expression of Uncertainty in Measurement. The last chapter describes a device, designed and realized during the PhD activity, intended to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer in use with equivalent performance and lower cost; in this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
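The abstract does not spell out the location formula or the uncertainty procedure. As a hedged illustration of how a combined uncertainty on a fault position can be evaluated with a Monte Carlo procedure in the spirit of the GUM, the following Python sketch uses a hypothetical double-ended traveling-wave location formula; the formula, parameter values, and uncertainty figures are assumptions for illustration only, not the thesis's actual method.

import numpy as np

# Hypothetical double-ended traveling-wave location: two stations at
# the ends of a line of length L time-stamp the arrival of the
# transient wavefront; with propagation speed v, the distance of the
# fault from terminal A is d = (L + v*(tA - tB)) / 2.
def fault_distance(L, v, tA, tB):
    return 0.5 * (L + v * (tA - tB))

# Monte Carlo propagation of uncertainty: draw the inputs from assumed
# distributions and examine the spread of the resulting estimates.
rng = np.random.default_rng(0)
N = 100_000
L  = rng.normal(10_000.0, 5.0, N)    # line length [m], u(L) = 5 m (assumed)
v  = rng.normal(2.9e8, 1.5e6, N)     # wave speed [m/s], u(v) assumed
tA = rng.normal(20.0e-6, 50e-9, N)   # arrival time at A [s], u = 50 ns (assumed)
tB = rng.normal(14.0e-6, 50e-9, N)   # arrival time at B [s], u = 50 ns (assumed)

d = fault_distance(L, v, tA, tB)
print(f"d = {d.mean():.1f} m, combined standard uncertainty = {d.std(ddof=1):.1f} m")
lo, hi = np.percentile(d, [2.5, 97.5])
print(f"95% coverage interval: [{lo:.1f}, {hi:.1f}] m")

The point of the numeric procedure is visible here: the non-ideal behavior of each device in the measurement chain enters as an input distribution (notably the time-stamping uncertainty), and the combined uncertainty on the fault position falls out of the simulation without any linearization.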
Abstract:
Coupled-cluster (CC) theory is one of the most successful approaches in high-accuracy quantum chemistry. The present thesis makes a number of contributions to the determination of molecular properties and excitation energies within the CC framework. The multireference CC (MRCC) method proposed by Mukherjee and coworkers (Mk-MRCC) has been benchmarked within the singles and doubles approximation (Mk-MRCCSD) for molecular equilibrium structures. It is demonstrated that Mk-MRCCSD yields reliable results for multireference cases where single-reference CC methods fail. At the same time, the present work also illustrates that Mk-MRCC still suffers from a number of theoretical problems and sometimes gives rise to results of unsatisfactory accuracy. To determine polarizability tensors and excitation spectra in the MRCC framework, the Mk-MRCC linear-response function has been derived together with the corresponding linear-response equations. Pilot applications show that Mk-MRCC linear-response theory suffers from a severe problem when applied to the calculation of dynamic properties and excitation energies: The Mk-MRCC sufficiency conditions give rise to a redundancy in the Mk-MRCC Jacobian matrix, which entails an artificial splitting of certain excited states. This finding has established a new paradigm in MRCC theory, namely that a convincing method should not only yield accurate energies, but ought to allow for the reliable calculation of dynamic properties as well. In the context of single-reference CC theory, an analytic expression for the dipole Hessian matrix, a third-order quantity relevant to infrared spectroscopy, has been derived and implemented within the CC singles and doubles approximation. The advantages of analytic derivatives over numerical differentiation schemes are demonstrated in some pilot applications.
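In commonly used notation (an assumption about conventions, not taken from the thesis itself), the dipole Hessian collects the second geometric derivatives of the dipole moment, i.e. mixed third derivatives of the energy with respect to an external electric field F and the nuclear coordinates R:

H^{\mu}_{\alpha,ij} \;=\; \frac{\partial^2 \mu_\alpha}{\partial R_i\, \partial R_j} \;=\; -\,\frac{\partial^3 E}{\partial F_\alpha\, \partial R_i\, \partial R_j},

which is why analytic, rather than numerical, third derivatives pay off in infrared spectroscopy applications.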
Abstract:
The G2, G3, CBS-QB3, and CBS-APNO model chemistry methods and the B3LYP, B3P86, mPW1PW, and PBE1PBE density functional theory (DFT) methods have been used to calculate ΔH° and ΔG° values for ionic clusters of the ammonium ion complexed with water and ammonia. Results for the clusters NH4+(NH3)n and NH4+(H2O)n, where n = 1–4, are reported in this paper and compared against experimental values. Agreement with the experimental values for ΔH° and ΔG° for formation of NH4+(NH3)n clusters is excellent. Comparison between experiment and theory for formation of the NH4+(H2O)n clusters is quite good considering the uncertainty in the experimental values. The four DFT methods yield excellent agreement with experiment and the model chemistry methods when the aug-cc-pVTZ basis set is used for energetic calculations and the 6-31G* basis set is used for geometries and frequencies. On the basis of these results, we predict that all ions in the lower troposphere will be saturated with at least one complete first hydration shell of water molecules.
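For concreteness, the ΔH° and ΔG° values discussed are those of the stepwise clustering equilibria (a standard convention; the notation here is mine):

\mathrm{NH_4^+(H_2O)}_{n-1} + \mathrm{H_2O} \;\rightleftharpoons\; \mathrm{NH_4^+(H_2O)}_{n}, \qquad \Delta G^{\circ}_{n-1,n} \;=\; -RT \ln K_{n-1,n},

with analogous expressions for the ammonia clusters.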
Abstract:
In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion (AIC) have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is no longer an asymptotically unbiased estimator of the Akaike information, and in fact favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that leads to the selection of any random effect not predicted to be exactly zero. We derive an analytic representation of a corrected version of the conditional AIC, which avoids the high computational cost and imprecision of available numerical approximations. An implementation in an R package is provided. All theoretical results are illustrated in simulation studies, and their impact in practice is investigated in an analysis of childhood malnutrition in Zambia.
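In the notation of the general AIC literature (the paper's own notation may differ), with θ the model parameters and u the random effects, the two criteria contrasted above are

\mathrm{mAIC} \;=\; -2 \log f(y \mid \hat{\theta}) + 2p, \qquad \mathrm{cAIC} \;=\; -2 \log f(y \mid \hat{u}, \hat{\theta}) + 2\rho,

where f(y \mid \theta) = \int f(y \mid u, \theta)\, f(u \mid \theta)\, \mathrm{d}u is the marginal likelihood, p counts the marginal parameters, and ρ is an effective-degrees-of-freedom penalty; the paper's point is that ρ must also account for the estimation uncertainty in the random-effects covariance matrix.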
Abstract:
One of the most influential statements in the anomie theory tradition has been Merton's argument that the volume of instrumental property crime should be higher where there is a greater imbalance between the degree of commitment to monetary success goals and the degree of commitment to legitimate means of pursuing such goals. Contemporary anomie theories stimulated by Merton's perspective, most notably Messner and Rosenfeld's institutional anomie theory (IAT), have expanded the scope conditions by emphasizing lethal criminal violence as an outcome to which anomie theory is highly relevant, and virtually all contemporary empirical studies have focused on applying the perspective to explaining spatial variation in homicide rates. In the present paper, we argue that current explications of Merton's theory and IAT have not adequately conveyed the relevance of the core features of the anomie perspective to lethal violence. We propose an expanded anomie model in which an unbalanced pecuniary value system, the core causal variable in Merton's theory and IAT, translates into higher levels of homicide primarily in indirect ways: by increasing levels of firearm prevalence, drug market activity, and property crime, and by enhancing the degree to which these factors stimulate lethal outcomes. Using aggregate-level data collected during the mid-to-late 1970s for a sample of relatively large social aggregates within the U.S., we find a significant effect on homicide rates of an interaction term reflecting high levels of commitment to monetary success goals and low levels of commitment to legitimate means. Virtually all of this effect is accounted for by the higher levels of property crime and drug market activity that occur in areas with an unbalanced pecuniary value system. Our analysis also reveals that property crime is more apt to lead to homicide under conditions of high structural disadvantage. These and other findings underscore the potential value of elaborating the anomie perspective to explicitly account for lethal violence.
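Schematically, the analysis described corresponds to a specification of roughly the following form (my sketch, not the authors' exact model):

\log(\text{homicide rate}_i) \;=\; \beta_0 + \beta_1 G_i + \beta_2 M_i + \beta_3 (G_i \times M_i) + \boldsymbol{\gamma}^{\top}\mathbf{z}_i + \varepsilon_i,

where G_i measures commitment to monetary success goals, M_i commitment to legitimate means, and \mathbf{z}_i contains the hypothesized mediators (firearm prevalence, drug market activity, property crime) and structural disadvantage; the indirect pathways correspond to the attenuation of \beta_3 once \mathbf{z}_i is included.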