917 results for Hierarchical bayesian space-time models
Abstract:
We investigate the influence of vacuum polarization of massive quantum fields on the scalar sector of quasinormal modes in spherically symmetric black holes. We consider the evolution of a massless scalar field on the space-time corresponding to a charged semiclassical black hole, consisting of the quantum-corrected geometry of a Reissner-Nordström black hole dressed by a massive quantum scalar field in the large mass limit. Using a sixth-order WKB approach we find a shift in the quasinormal mode frequencies due to vacuum polarization.
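For orientation, the lowest-order WKB condition behind such computations (the standard Schutz-Will formula, quoted here for context rather than from the abstract) relates the quasinormal frequencies to the peak of the effective potential:

```latex
\omega^{2} \simeq V_{0} - i\left(n + \tfrac{1}{2}\right)\sqrt{-2\,V_{0}''},
\qquad n = 0, 1, 2, \dots
```

where V_0 is the maximum of the scalar effective potential and the double prime denotes the second derivative with respect to the tortoise coordinate; the sixth-order scheme adds higher correction terms to this relation.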
Abstract:
The time evolution of the out-of-equilibrium Mott insulator is investigated numerically through calculations of space-time-resolved density and entropy profiles resulting from the release of a gas of ultracold fermionic atoms from an optical trap. For adiabatic, moderate and sudden switching-off of the trapping potential, the out-of-equilibrium dynamics of the Mott insulator is found to differ profoundly from that of the band insulator and the metallic phase, displaying a self-induced stability that is robust within a wide range of densities, system sizes and interaction strengths. The connection between the entanglement entropy and changes of phase, known for equilibrium situations, is found to extend to the out-of-equilibrium regime. Finally, the relation between the system's long-time behavior and the thermalization limit is analyzed.
Abstract:
This work compares the forecasting efficiency of different methodologies applied to Brazilian consumer inflation (IPCA). We compare forecasting models using disaggregated and aggregated data over horizons of up to twelve months ahead. The disaggregated models were estimated by SARIMA at different levels of disaggregation. The aggregated models were estimated by time series techniques such as SARIMA, state-space structural models and Markov-switching. The forecasting accuracy comparison is made with the model selection procedure known as the Model Confidence Set and with the Diebold-Mariano procedure. We find evidence of forecast accuracy gains in models using more disaggregated data.
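The accuracy comparison described above can be illustrated with a minimal Diebold-Mariano sketch; the squared-error loss, the horizon handling and the synthetic error series are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of the Diebold-Mariano test for equal predictive accuracy.
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """DM statistic under squared-error loss.
    e1, e2: forecast errors of the two models; h: forecast horizon."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    n = d.size
    dbar = d.mean()
    # HAC variance of dbar, autocovariances up to lag h-1 (rectangular kernel;
    # small-sample corrections are omitted in this sketch)
    var = np.mean((d - dbar) ** 2)
    for k in range(1, h):
        var += 2.0 * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    dm = dbar / np.sqrt(var / n)
    pval = 2.0 * (1.0 - stats.norm.cdf(abs(dm)))
    return dm, pval

# Hypothetical usage: errors from an aggregated and a disaggregated model
rng = np.random.default_rng(0)
e_agg, e_dis = rng.normal(0, 1.0, 48), rng.normal(0, 0.8, 48)
print(diebold_mariano(e_agg, e_dis, h=12))
```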
Abstract:
Recent astronomical observations (involving type Ia supernovae, cosmic microwave background anisotropy and galaxy cluster probes) have provided strong evidence that the observed universe is described by an accelerating, flat model whose space-time properties can be represented by the Friedmann-Robertson-Walker (FRW) metric. However, the nature of the substance or mechanism behind the current cosmic acceleration remains unknown, and its determination constitutes a challenging problem for modern cosmology. In the general relativistic description, an accelerating regime is usually obtained by assuming the existence of an exotic energy component endowed with negative pressure, called dark energy, which is usually represented by a cosmological constant Λ associated with the vacuum energy density. All observational data available so far are in good agreement with the concordance ΛCDM model. Nevertheless, such models are plagued by several problems, thereby inspiring many authors to propose alternative candidates in the relativistic context. In this thesis, a new kind of accelerating flat model with no dark energy and fully dominated by cold dark matter (CDM) is proposed. The number of CDM particles is not conserved, and the present accelerating stage is a consequence of the negative pressure describing the irreversible process of gravitational particle creation. In order to have a transition from a decelerating to an accelerating regime at low redshifts, the matter creation rate proposed here depends on two parameters (γ and β): the first identifies a constant term of the order of H0 and the second describes a time variation proportional to the Hubble parameter H(t). In this scenario, H0 does not need to be small in order to solve the age problem, and the transition happens even if there is no matter creation during the radiation era and part of the matter-dominated phase (when the β term is negligible). As in flat ΛCDM scenarios, the dimming of distant type Ia supernovae can be fitted with just one free parameter, and the coincidence problem plaguing models driven by the cosmological constant (ΛCDM) is absent. The limits associated with the existence of the quasar APM 08279+5255, located at z = 3.91 and with an estimated age between 2 and 3 Gyr, are also investigated. In the simplest case (β = 0), the model is compatible with the existence of the quasar for γ > 0.56 if the age of the quasar is 2.0 Gyr; for 3 Gyr the derived limit is γ > 0.72. New limits for the formation redshift of the quasar are also established.
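Read literally, the description above corresponds to a creation rate of the schematic form below; the overall factor of 3 is a common convention in matter-creation (CCDM) models and is an assumption here, not a quotation from the thesis:

```latex
\Gamma(t) = 3\gamma H_{0} + 3\beta H(t)
```

so that for β = 0 creation only becomes dynamically relevant when H(t) drops to values of order H0, producing the late-time transition to acceleration.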
Abstract:
In Einstein's theory of General Relativity, the field equations relate the geometry of space-time to the content of matter and energy, the sources of the gravitational field. This content is described by a second-order tensor, known as the energy-momentum tensor. On the other hand, the energy-momentum tensors that have physical meaning are not specified by the theory. In the 1970s, Hawking and Ellis set out a number of conditions, considered reasonable from a physical point of view, in order to limit the arbitrariness of these tensors. These conditions, which became known as the Hawking-Ellis energy conditions, play important roles in the gravitation scenario. They are widely used as powerful analysis tools, from the demonstration of important theorems concerning the behavior of gravitational fields and their associated geometries, and the quantum behavior of gravity, to the analysis of cosmological models. In this dissertation we present a rigorous deduction of the several energy conditions currently in vogue in the scientific literature: the Null Energy Condition (NEC), the Weak Energy Condition (WEC), the Strong Energy Condition (SEC), the Dominant Energy Condition (DEC) and the Null Dominant Energy Condition (NDEC). Bearing in mind the most common applications in Cosmology and Gravitation, the deductions were initially made for the energy-momentum tensor of a generalized perfect fluid and then extended to scalar fields with minimal and non-minimal coupling to the gravitational field. We also present a study of the possible violations of some of these energy conditions. Aiming at the study of the singular nature of some exact solutions of Einstein's General Relativity, in 1955 the Indian physicist Raychaudhuri derived an equation that is today considered fundamental to the study of the gravitational attraction of matter, which became known as the Raychaudhuri equation. This famous equation is fundamental to the understanding of gravitational attraction in Astrophysics and Cosmology and to the comprehension of singularity theorems, such as the Hawking and Penrose theorem on the singularity of gravitational collapse. In this dissertation we derive the Raychaudhuri equation, the Frobenius theorem and the Focusing theorem for time-like and null congruences of a pseudo-Riemannian manifold. We discuss the geometric and physical meaning of this equation, its connections with the energy conditions, and some of its many applications.
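For reference, the Raychaudhuri equation for a time-like geodesic congruence with expansion θ, shear σ and vorticity ω, together with two of the energy conditions deduced in the dissertation, can be written in the standard form:

```latex
\frac{d\theta}{d\tau} = -\frac{\theta^{2}}{3}
  - \sigma_{\mu\nu}\sigma^{\mu\nu}
  + \omega_{\mu\nu}\omega^{\mu\nu}
  - R_{\mu\nu}u^{\mu}u^{\nu},
\qquad
\text{NEC: } T_{\mu\nu}k^{\mu}k^{\nu} \ge 0,
\quad
\text{WEC: } T_{\mu\nu}u^{\mu}u^{\nu} \ge 0
```

for every null vector k^μ and time-like vector u^μ; via the Einstein equations, the SEC makes the R_{μν}u^μu^ν term non-negative, which is what drives focusing.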
Abstract:
In Survival Analysis, long-duration models allow for the estimation of the cure fraction, which represents the portion of the population immune to the event of interest. Here we address classical and Bayesian estimation based on mixture models and promotion time models, using different distributions (exponential, Weibull and Pareto) to model the failure time. The database used to illustrate the implementations is described in Kersey et al. (1987) and consists of a group of leukemia patients who underwent a certain type of transplant. The specific implementations used were numerical optimization by BFGS as implemented in R (base::optim), the Laplace approximation (our own implementation) and Gibbs sampling as implemented in WinBUGS. We describe the main features of the models used, the estimation methods and the computational aspects. We also discuss how different prior information can affect the Bayesian estimates.
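As an illustration of the classical route, the sketch below maximizes the likelihood of a mixture cure model with an exponential failure time by BFGS, using scipy in place of R's base::optim; the synthetic data and parametrization are assumptions, and this is not the Kersey et al. (1987) dataset.

```python
# Minimal sketch: MLE for a mixture cure model, S(t) = pi + (1 - pi) * S0(t).
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta):
    """Exponential baseline; delta = 1 for observed events, 0 for censored.
    Unconstrained parametrization: pi = expit(a), lam = exp(b)."""
    a, b = params
    pi = 1.0 / (1.0 + np.exp(-a))      # cure fraction in (0, 1)
    lam = np.exp(b)                    # exponential rate > 0
    f = (1 - pi) * lam * np.exp(-lam * t)      # density of susceptibles
    s = pi + (1 - pi) * np.exp(-lam * t)       # improper survival function
    return -np.sum(delta * np.log(f) + (1 - delta) * np.log(s))

# Synthetic data: 40% cured, exponential(rate 0.5) failures, censoring at t = 8
rng = np.random.default_rng(1)
n = 300
cured = rng.random(n) < 0.4
t_event = np.where(cured, np.inf, rng.exponential(1 / 0.5, n))
t = np.minimum(t_event, 8.0)
delta = (t_event <= 8.0).astype(float)

fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(t, delta), method="BFGS")
a_hat, b_hat = fit.x
print("cure fraction:", 1 / (1 + np.exp(-a_hat)), "rate:", np.exp(b_hat))
```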
Abstract:
Dirac-like monopoles are studied in three-dimensional Abelian Maxwell and Maxwell-Chern-Simons models. Their scalar nature is highlighted and discussed through a dimensional reduction of four-dimensional electrodynamics with electric and magnetic sources. Some general properties and similarities, whether considered in Minkowski or Euclidean space, are mentioned. However, by virtue of the structure of the space-time in which they are studied, a number of differences between them occur. Furthermore, we pay attention to some consequences of these objects when they act upon ordinary particles. Among other subjects, special attention is given to the study of a Lorentz-violating nonminimal coupling between neutral fermions and the field generated by a monopole alone. In addition, an analogue of the Aharonov-Casher effect is discussed in this framework.
Abstract:
We consider massive spin-1 fields in Riemann-Cartan space-times, described by the Duffin-Kemmer-Petiau theory. We show that this approach induces a coupling between the spin-1 field and the space-time torsion which breaks the usual equivalence with the Proca theory, but that such equivalence is preserved in the context of the Teleparallel Equivalent of General Relativity.
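For reference, the flat-space Duffin-Kemmer-Petiau equation and the algebra defining its β matrices read as follows (in Riemann-Cartan space-time the partial derivative is replaced by a covariant derivative containing the torsionful connection, which is where the torsion coupling arises):

```latex
\left(i\beta^{\mu}\partial_{\mu} - m\right)\psi = 0,
\qquad
\beta^{\mu}\beta^{\nu}\beta^{\lambda} + \beta^{\lambda}\beta^{\nu}\beta^{\mu}
  = g^{\mu\nu}\beta^{\lambda} + g^{\lambda\nu}\beta^{\mu}
```

with the spin-1 sector living in the ten-dimensional representation of this algebra.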
Abstract:
We consider a field theory with target space the two-dimensional sphere S^2, defined on the space-time S^3 × R. The Lagrangian is the square of the pull-back of the area form on S^2. It is invariant under the conformal group SO(4, 2) and under the infinite-dimensional group of area-preserving diffeomorphisms of S^2. We construct an infinite number of exact soliton solutions with non-trivial Hopf topological charges. The solutions spin with a frequency which is bounded above by a quantity proportional to the inverse of the radius of S^3. The construction of the solutions is made possible by an ansatz which exploits the conformal symmetry and a U(1) subgroup of the area-preserving diffeomorphism group.
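In the usual unit-vector parametrization n : S^3 × R → S^2, "the square of the pull-back of the area form" corresponds to the Lagrangian density below (a standard rewriting, stated here for context rather than quoted from the abstract):

```latex
\mathcal{L} = H_{\mu\nu}H^{\mu\nu},
\qquad
H_{\mu\nu} = \vec{n}\cdot\left(\partial_{\mu}\vec{n} \times \partial_{\nu}\vec{n}\right),
\qquad \vec{n}\cdot\vec{n} = 1
```

whose quartic, derivative-only structure is what allows both the conformal invariance and the invariance under area-preserving diffeomorphisms of the target S^2.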
Abstract:
A Bayesian binary regression model is developed to predict in-hospital death in patients with acute myocardial infarction. Markov chain Monte Carlo (MCMC) methods are used for inference and validation. A model-building strategy based on the Bayes factor is proposed, and validation aspects are discussed extensively in this article, including the posterior distribution of the concordance index and residual analysis. Determining risk factors based on variables available at the patient's arrival at the hospital is very important for decision-making about the course of treatment. The identified model proves highly reliable and accurate, with a correct classification rate of 88% and a concordance index of 83%.
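A minimal sketch of this kind of MCMC fit, assuming a logit link, vague normal priors and synthetic data (none of which is taken from the article), is:

```python
# Random-walk Metropolis for Bayesian binary (logistic) regression.
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-1.0, 0.8, -0.5])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

def log_post(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli-logit
    logprior = -0.5 * np.sum(beta ** 2) / 100.0        # N(0, 10^2) priors
    return loglik + logprior

beta = np.zeros(p)
lp = log_post(beta)
draws = []
for it in range(20000):
    prop = beta + rng.normal(scale=0.1, size=p)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:            # Metropolis acceptance
        beta, lp = prop, lp_prop
    if it >= 5000:                                     # burn-in discarded
        draws.append(beta.copy())
print("posterior means:", np.mean(draws, axis=0))
```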
Abstract:
North's clustering method, which is based on a much-used ecological model, the nearest-neighbor distance, was applied to the objective reconstruction of the chain of household-to-household transmission of variola minor (the mild form of smallpox). The discrete within-household outbreaks were considered as points which were ordered in a time sequence using a 10-40 day interval between introduction of the disease into a source household and a receptor household. The closer points in the plane were assumed to have a larger probability of being links in a chain of household-to-household spread of the disease. The five defining distances (Manhattan, or city-block, distance between presumptive source and receptor dwellings) were 100, 200, 300, 400 and 500 m. The subchain sets obtained with the five defining distances were compared with the subchains empirically reconstructed during the field study of the epidemic through direct investigation of personal contacts of the introductory cases with either introductory or subsequent cases from previously affected households. The criteria of fit of theoretical to empirical clusters were: (a) the number of clustered dwellings and subchains, (b) the number of dwellings in a subchain and (c) the position of dwellings in a subchain. The defining distance closest to the empirical findings was 200 m, which fully agrees with the travelling habits of the study population. Less close but acceptable approximations were obtained with 100, 300, 400 and 500 m. The latter two distances gave identical results, as if a clustering ceiling had been reached. It seems that North's clustering model may be used for an objective reconstruction of a chain of contagion whose links are discrete within-household outbreaks.
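A minimal sketch of the linking rule described above, assuming illustrative coordinates, onset days and the 200 m defining distance:

```python
# Link each receptor household to its nearest previously affected household
# (Manhattan distance) whose introduction precedes it by 10-40 days and lies
# within the defining distance.
import numpy as np

# (x, y) in metres and day of introduction for each affected household
coords = np.array([[0, 0], [150, 40], [320, 100], [180, 260], [600, 50]])
onset = np.array([0, 15, 32, 47, 55])

def reconstruct_chain(coords, onset, defining_distance=200.0):
    links = []
    for j in range(len(onset)):
        best, best_d = None, np.inf
        for i in range(len(onset)):
            lag = onset[j] - onset[i]
            if not (10 <= lag <= 40):                 # plausible serial interval
                continue
            d = np.abs(coords[j] - coords[i]).sum()   # city-block distance
            if d <= defining_distance and d < best_d:
                best, best_d = i, d
        if best is not None:
            links.append((best, j, best_d))
    return links   # (source household, receptor household, distance)

print(reconstruct_chain(coords, onset))
```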