976 results for Numerical Models
Abstract:
Context. Cluster properties can be studied more distinctly in pairs of clusters, where we expect the effects of interactions to be strong. Aims. We discuss the properties of the double cluster Abell 1758 at redshift z ≈ 0.279. These clusters show strong evidence for merging. Methods. We analyse the optical properties of the North and South clusters of Abell 1758 based on deep imaging obtained with the Canada-France-Hawaii Telescope (CFHT) archive Megaprime/Megacam camera in the g' and r' bands, covering a total region of about 1.05 × 1.16 deg², or 16.1 × 17.6 Mpc². Our X-ray analysis is based on archival XMM-Newton images. Numerical simulations were performed using an N-body algorithm to treat the dark-matter component, a semi-analytical galaxy-formation model for the evolution of the galaxies, and a grid-based hydrodynamic code with a piecewise parabolic method (PPM) scheme for the dynamics of the intra-cluster medium. We computed galaxy luminosity functions (GLFs) and 2D temperature and metallicity maps of the X-ray gas, which we then compared to the results of our numerical simulations. Results. The GLFs of Abell 1758 North are well fit by Schechter functions in the g' and r' bands, but with a small excess of bright galaxies, particularly in the r' band; their faint-end slopes are similar in both bands. In contrast, the GLFs of Abell 1758 South are not well fit by Schechter functions: excesses of bright galaxies are seen in both bands, and the faint end of the GLF is not well defined in g'. The GLFs computed from our numerical simulations assuming a halo mass-luminosity relation agree with those derived from the observations. From the X-ray analysis, the most striking features are structures in the metal distribution. We found two elongated regions of high metallicity in Abell 1758 North, with two peaks towards the centre. In contrast, Abell 1758 South shows a deficit of metals in its central regions. By comparing the observational results with those derived from numerical simulations, we could mimic the most prominent features present in the metallicity map and propose an explanation for the dynamical history of the cluster. We found in particular that in the metal-rich elongated regions of the North cluster, winds had been more efficient than ram-pressure stripping in transporting metal-enriched gas to the outskirts. Conclusions. We confirm the merging structure of the North and South clusters, both at optical and X-ray wavelengths.
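As an illustration of the Schechter-function fitting step, the minimal sketch below fits a Schechter luminosity function (expressed in absolute magnitudes) to binned galaxy counts with SciPy. It is not the authors' pipeline: the magnitude bins, counts, and starting parameters are placeholders.

```python
# Illustrative sketch (not the authors' pipeline): fitting a Schechter
# function to a binned galaxy luminosity function with SciPy.
# The magnitude bins and counts below are placeholders, not Abell 1758 data.
import numpy as np
from scipy.optimize import curve_fit

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter function expressed in absolute magnitudes."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Placeholder data: bin centres (absolute magnitudes) and synthetic counts.
M_bins = np.arange(-23.0, -16.5, 0.5)
counts = schechter_mag(M_bins, 120.0, -21.3, -1.1)
counts += np.random.default_rng(0).normal(0.0, 3.0, M_bins.size)

popt, pcov = curve_fit(schechter_mag, M_bins, counts,
                       p0=[100.0, -21.0, -1.0])
phi_star, M_star, alpha = popt
print(f"phi* = {phi_star:.1f}, M* = {M_star:.2f}, alpha = {alpha:.2f}")
```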
Abstract:
Background: Bayesian mixing models have allowed for the inclusion of uncertainty and prior information in the analysis of trophic interactions using stable isotopes. Formulating prior distributions is relatively straightforward when incorporating dietary data. However, the use of data that are related, but not directly proportional, to diet (such as prey availability data) is often problematic because such information is not necessarily predictive of diet, and the information required to build a reliable prior distribution for all prey species is often unavailable. Omitting prey availability data impacts the estimation of a predator's diet and introduces the strong assumption of consumer ultrageneralism (where all prey are consumed in equal proportions), particularly when multiple prey have similar isotope values. Methodology: We develop a procedure to incorporate prey availability data into Bayesian mixing models conditional on the similarity of isotope values between two prey. If a pair of prey have similar isotope values (resulting in highly uncertain mixing model results), our model increases the weight of availability data in estimating the contribution of prey to a predator's diet. We test the utility of this method in an intertidal community against independently measured feeding rates. Conclusions: Our results indicate that our weighting procedure increases the accuracy with which consumer diets can be inferred in situations where multiple prey have similar isotope values. This suggests that the exchange of formalism for predictive power is merited, particularly when the relationship between prey availability and a predator's diet cannot be assumed for all species in a system.
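To illustrate the general idea (not the authors' model), the sketch below builds a Dirichlet prior on diet proportions whose reliance on prey availability grows as the closest pair of prey sources becomes isotopically similar. The weighting rule, isotope values, and availabilities are invented for the example.

```python
# Illustrative sketch only: one way to let prey availability inform a
# Dirichlet prior on diet proportions, with the weight of the availability
# data growing as two prey sources become isotopically similar.
# The weighting rule and all numbers below are assumptions for illustration.
import numpy as np

def availability_weighted_prior(iso_values, availability, scale=10.0):
    """Return Dirichlet concentration parameters for diet proportions.

    iso_values   : (n_prey, n_isotopes) mean isotope values per prey
    availability : (n_prey,) relative prey availability, summing to 1
    scale        : maximum strength given to the availability information
    """
    iso = np.asarray(iso_values, float)
    # Smallest pairwise distance between prey in isotope space.
    d = np.linalg.norm(iso[:, None, :] - iso[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    d_min = d.min()
    # Weight -> scale when prey overlap (d_min -> 0), -> 1 when well separated.
    w = 1.0 + (scale - 1.0) * np.exp(-d_min)
    return 1.0 + w * np.asarray(availability, float)

alpha = availability_weighted_prior(
    iso_values=[[-18.0, 12.0], [-17.8, 12.1], [-12.0, 8.0]],
    availability=[0.5, 0.3, 0.2])
print(alpha)   # concentration parameters for a Dirichlet prior on the diet
```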
Abstract:
Umbilical cord mesenchymal stromal cells (MSCs) have been widely investigated for cell-based therapy studies as an alternative source to bone marrow transplantation. Umbilical cord tissue is a rich source of MSCs, with the potential to give rise to at least muscle, cartilage, fat, and bone cells in vitro. The possibility of replacing defective muscle cells using cell therapy is a promising approach for the treatment of progressive muscular dystrophies (PMDs), independently of the specific gene mutation. Therefore, preclinical studies in different models of muscular dystrophies are of utmost importance. The main objective of the present study is to evaluate whether umbilical cord MSCs have the potential to reach and differentiate into muscle cells in vivo in two animal models of PMDs. In order to address this question, we injected (1) human umbilical cord tissue (hUCT) MSCs into the caudal vein of SJL mice and (2) hUCT and canine umbilical cord vein (cUCV) MSCs intra-arterially in GRMD dogs. The results reported here support the safety of the procedure and indicate that the injected cells could engraft in the host muscle in both animal models but could not differentiate into muscle cells. These observations may provide important information for future therapies for muscular dystrophies.
Abstract:
We have numerically solved the Heisenberg-Langevin equations describing the propagation of quantized fields through an optically thick sample of atoms. Two orthogonal polarization components are considered for the field, and the complete Zeeman sublevel structure of the atomic transition is taken into account. Quantum fluctuations of atomic operators are included through appropriate Langevin forces. We have considered an incident field in a linearly polarized coherent state (driving field) and vacuum in the perpendicular polarization and calculated the noise spectra of the amplitude and phase quadratures of the output field for two orthogonal polarizations. We analyze different configurations depending on the total angular momentum of the ground and excited atomic states. We examine the generation of squeezing for the driving-field polarization component and vacuum squeezing of the orthogonal polarization. Entanglement of orthogonally polarized modes is predicted. Noise spectral features specific to (Zeeman) multilevel configurations are identified.
Abstract:
We consider a simple Maier-Saupe statistical model with the inclusion of disorder degrees of freedom to mimic the phase diagram of a mixture of rodlike and disklike molecules. A quenched distribution of shapes leads to a phase diagram with two uniaxial and a biaxial nematic structure. A thermalized distribution, however, which is more adequate to liquid mixtures, precludes the stability of this biaxial phase. We then use a two-temperature formalism, and assume a separation of relaxation times, to show that a partial degree of annealing is already sufficient to stabilize a biaxial nematic structure.
Abstract:
In the last decade the Sznajd model has been successfully employed in modeling some properties and scale features of both proportional and majority elections. We propose a version of the Sznajd model with a generalized bounded-confidence rule, which limits the convincing capability of agents and is essential to allow the coexistence of opinions in the stationary state. With an appropriate choice of parameters it can be reduced to previous models. We solved this model both in a mean-field approach (for an arbitrary number of opinions) and numerically on a Barabási-Albert network (for three and four opinions), studying the transient and the possible stationary states. We built the phase portrait for the special cases of three and four opinions, defining the attractors and their basins of attraction. Through this analysis, we were able to understand and explain discrepancies between mean-field and simulation results obtained in previous works for the usual Sznajd model with bounded confidence and three opinions. Both the dynamical-system approach and our generalized bounded-confidence rule are quite general, and we think they can be useful for understanding other similar models.
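A minimal agent-based sketch in this spirit is given below, assuming a Barabási-Albert graph from networkx and a simple bounded-confidence rule |o_i − o_j| ≤ eps on integer opinions; the exact update rule and parameters of the paper are not reproduced.

```python
# Minimal illustrative sketch (assumptions: networkx Barabási-Albert graph,
# integer opinions, bounded-confidence rule |o_i - o_j| <= eps).
import random
import networkx as nx

def sznajd_bc_step(G, opinion, eps=1):
    """One update: a pair of agreeing neighbours convinces their neighbours
    whose opinion lies within the confidence bound eps."""
    i = random.choice(list(G.nodes))
    neigh = list(G.neighbors(i))
    if not neigh:
        return
    j = random.choice(neigh)
    if opinion[i] != opinion[j]:
        return
    for k in set(G.neighbors(i)) | set(G.neighbors(j)):
        if k not in (i, j) and abs(opinion[k] - opinion[i]) <= eps:
            opinion[k] = opinion[i]

q = 3                                     # number of opinions
G = nx.barabasi_albert_graph(1000, m=5, seed=1)
opinion = {n: random.randrange(q) for n in G.nodes}
for _ in range(50_000):
    sznajd_bc_step(G, opinion)
counts = [sum(1 for o in opinion.values() if o == s) for s in range(q)]
print(counts)                             # final opinion distribution
```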
Abstract:
We study a stochastic lattice model describing the dynamics of coexistence of two interacting biological species. The model comprises the local processes of birth, death, and diffusion of individuals of each species and is based on interactions of the predator-prey type. The coexistence of the species can be of two types: with self-sustained, coupled oscillations of the population densities in time, or without oscillations. We perform numerical simulations of the model on a square lattice and analyze the temporal behavior of each species by computing the time correlation functions as well as the spectral densities. This analysis provides an appropriate characterization of the different types of coexistence. It is also used to examine linked population cycles observed in nature and in experiments.
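The sketch below shows a schematic predator-prey stochastic lattice simulation of this kind, with the prey-density time series and its autocorrelation function used to look for population cycles; the local rates, update rules, and lattice size are assumptions for illustration, not those of the paper.

```python
# Schematic sketch (rates, rules, and lattice size are assumptions):
# a predator-prey stochastic lattice model on a square lattice.
import numpy as np

rng = np.random.default_rng(0)
L = 32
EMPTY, PREY, PRED = 0, 1, 2
birth, predation, death = 0.5, 0.4, 0.1           # assumed local rates
lat = rng.integers(0, 3, size=(L, L))

def sweep(lat):
    """One Monte Carlo sweep: L*L random single-site updates."""
    for _ in range(L * L):
        x, y = rng.integers(0, L, size=2)
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        xn, yn = (x + dx) % L, (y + dy) % L
        s, n = lat[x, y], lat[xn, yn]
        if s == PREY and n == EMPTY and rng.random() < birth:
            lat[xn, yn] = PREY                     # prey birth on empty site
        elif s == PRED and n == PREY and rng.random() < predation:
            lat[xn, yn] = PRED                     # predation + reproduction
        elif s == PRED and rng.random() < death:
            lat[x, y] = EMPTY                      # predator death

density = []
for t in range(1500):
    sweep(lat)
    density.append(np.mean(lat == PREY))

d = np.asarray(density) - np.mean(density)
acf = np.correlate(d, d, mode="full")[d.size - 1:] / (d.var() * d.size)
print(acf[:20])   # damped oscillations in the ACF indicate population cycles
```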
Abstract:
We revisit the scaling properties of a model for nonequilibrium wetting [Phys. Rev. Lett. 79, 2710 (1997)], correcting previous estimates of the critical exponents and providing a complete scaling scheme. Moreover, we investigate a special point in the phase diagram, where the model exhibits a roughening transition related to directed percolation. We argue that in the vicinity of this point evaporation from the middle of plateaus can be interpreted as an external field in the language of directed percolation. This analogy allows us to compute the crossover exponent and to predict the form of the phase transition line close to its terminal point.
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete hidden Markov models and compare their performance with that of the existing Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performances.
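As an illustration of how such a generalization measure can be evaluated, the sketch below estimates the Kullback-Leibler divergence per symbol between a "true" discrete HMM and a perturbed "learned" one by Monte Carlo, using a scaled forward algorithm. The HMM parameters are placeholders, and the online update rules themselves are not implemented.

```python
# Hedged sketch: Monte Carlo estimate of the Kullback-Leibler divergence
# (per symbol) between a "true" discrete HMM and a "learned" one, of the
# kind used to draw learning curves.  Parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sample(pi, A, B, T):
    """Sample an observation sequence of length T from a discrete HMM."""
    obs, s = [], rng.choice(len(pi), p=pi)
    for _ in range(T):
        obs.append(rng.choice(B.shape[1], p=B[s]))
        s = rng.choice(len(pi), p=A[s])
    return obs

def loglik(obs, pi, A, B):
    """Log-likelihood of obs via the scaled forward algorithm."""
    alpha, ll = pi * B[:, obs[0]], 0.0
    for o in obs[1:] + [None]:
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c
        if o is not None:
            alpha = (alpha @ A) * B[:, o]
    return ll

# True model and a deliberately perturbed "learned" model: 2 states, 3 symbols.
pi = np.array([0.6, 0.4])
A  = np.array([[0.8, 0.2], [0.3, 0.7]])
B  = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
Bh = np.array([[0.6, 0.3, 0.1], [0.2, 0.3, 0.5]])   # perturbed emissions

T, n_seq, kl = 200, 50, 0.0
for _ in range(n_seq):
    o = sample(pi, A, B, T)
    kl += (loglik(o, pi, A, B) - loglik(o, pi, A, Bh)) / (T * n_seq)
print(f"estimated KL rate: {kl:.4f} nats/symbol")
```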
Abstract:
The lightest supersymmetric particle may decay with branching ratios that correlate with neutrino oscillation parameters. In this case the CERN Large Hadron Collider (LHC) has the potential to probe the atmospheric neutrino mixing angle with sensitivity competitive to its low-energy determination by underground experiments. Under realistic detection assumptions, we identify the necessary conditions for the experiments at CERN's LHC to probe the simplest scenario for neutrino masses induced by minimal supergravity with bilinear R parity violation.
Abstract:
We study the potential of the CERN Large Hadron Collider (LHC) to probe the spin of new massive vector-boson resonances predicted by Higgsless models. We consider their production via weak-boson fusion, which relies only on the coupling between the new resonances and the weak gauge bosons. We show that the LHC will be able to unravel the spin of the particles associated with the partial restoration of unitarity in vector-boson scattering for integrated luminosities of 150-560 fb⁻¹, depending on the mass of the new state and on the method used in the analysis.
Abstract:
We develop a combined hydro-kinetic approach that incorporates the hydrodynamical expansion of the systems formed in A+A collisions and their dynamical decoupling described by escape probabilities. The method corresponds to a generalized relaxation-time (τ_rel) approximation for the Boltzmann equation applied to inhomogeneous expanding systems; at small τ_rel it also allows one to capture viscous effects in the hadronic component (a hadron-resonance gas). We demonstrate how the approximation of sudden freeze-out can be obtained within this dynamical picture of continuous emission and find that the hypersurfaces corresponding to a sharp freeze-out limit are momentum dependent. The pion m_T spectra are computed in the developed hydro-kinetic model and compared with those obtained from ideal hydrodynamics with the Cooper-Frye isothermal prescription. Our results indicate that there is no universal freeze-out temperature for pions with different momenta and support an earlier decoupling of higher-p_T particles. By performing numerical simulations for various initial conditions and equations of state, we identify several characteristic features of the bulk QCD matter evolution preferred in view of the current analysis of heavy-ion collisions at RHIC energies.
Abstract:
The nuclear gross theory, originally formulated by Takahashi and Yamada (1969 Prog. Theor. Phys. 41 1470) for beta decay, is applied to electron-neutrino-nucleus reactions, employing a more realistic description of the energetics of the Gamow-Teller resonances. The model parameters are gauged from the most recent experimental data, both for β⁻ decay and electron capture, separately for even-even, even-odd, odd-odd and odd-even nuclei. The numerical estimates for neutrino-nucleus cross-sections agree fairly well with previous evaluations done within the framework of microscopic models. The formalism presented here can be extended to the heavy-nuclei mass region, where weak processes are quite relevant; this is of astrophysical interest because of its applications to explosive nucleosynthesis in supernovae.
Abstract:
We investigate a conjecture on the cover times of planar graphs by means of large Monte Carlo simulations. The conjecture states that the cover time τ(G_N) of a planar graph G_N of N vertices and maximal degree d is lower bounded by τ(G_N) ≥ C_d N (ln N)², with C_d = (d/4π) tan(π/d), with equality holding for some geometries. We tested this conjecture on the regular honeycomb (d = 3), regular square (d = 4), regular elongated triangular (d = 5), and regular triangular (d = 6) lattices, as well as on the nonregular Union Jack lattice (d_min = 4, d_max = 8). Indeed, the Monte Carlo data suggest that the rigorous lower bound may hold as an equality for most of these lattices, with an interesting issue in the case of the Union Jack lattice. The data for the honeycomb lattice, however, violate the bound with the conjectured constant. The empirical probability distribution function of the cover time for the square lattice is also briefly presented, since very little is known about cover time probability distribution functions in general.
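A small Monte Carlo sketch of the quantity being tested is given below: the mean cover time of a random walk on an L × L square lattice (taken with periodic boundaries for simplicity, degree d = 4), compared with the conjectured C_d N (ln N)² bound. The lattice size and number of runs are modest illustrative choices, far below what the study itself would require.

```python
# Sketch: mean cover time of a random walk on an L x L square lattice with
# periodic boundaries (d = 4), compared with C_d N (ln N)^2,
# C_d = (d / (4*pi)) * tan(pi / d).  Small illustrative parameters.
import math
import random

def cover_time_square(L, rng):
    """Steps needed for a random walk to visit every site of an L x L torus."""
    N = L * L
    x = y = 0
    visited, steps = {(0, 0)}, 0
    while len(visited) < N:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % L, (y + dy) % L
        visited.add((x, y))
        steps += 1
    return steps

rng = random.Random(1)
L, n_runs = 32, 20
N = L * L
mean_tau = sum(cover_time_square(L, rng) for _ in range(n_runs)) / n_runs
bound = (4 / (4 * math.pi)) * math.tan(math.pi / 4) * N * math.log(N) ** 2
print(f"mean cover time ≈ {mean_tau:.0f},  C_4 N (ln N)^2 ≈ {bound:.0f}")
```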
Abstract:
We show that the common singularities present in generic modified gravity models governed by actions of the type S = ∫ d⁴x √(−g) f(R, φ, X), with X = −(1/2) g^{ab} ∂_a φ ∂_b φ, are essentially the same anisotropic instabilities associated with the hypersurface F(φ) = 0 in the case of a nonminimal coupling of the type F(φ)R, thereby clarifying the physical origin of such singularities, which typically emerge from rather complex and cumbersome inhomogeneous perturbation analyses. We show, moreover, that such anisotropic instabilities typically give rise to dynamically unavoidable singularities, completely precluding the possibility of having physically viable models for which the hypersurface ∂f/∂R = 0 is attained. Some examples are explicitly discussed.
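For a concrete, purely illustrative example of the hypersurface in question, the SymPy sketch below takes an example f(R, φ, X) (chosen only for illustration) and solves ∂f/∂R = 0 for the field values at which the effective coupling to the Ricci scalar vanishes.

```python
# Illustrative sketch with SymPy: locate the hypersurface where
# \partial f / \partial R vanishes for an example Lagrangian density
# f(R, phi, X); the choice of f is an assumption made for the example.
import sympy as sp

R, phi, X = sp.symbols('R phi X', real=True)
f = (1 - phi**2) * R + X - phi**4      # example f(R, phi, X)

f_R = sp.diff(f, R)                    # effective coupling to the Ricci scalar
singular = sp.solve(sp.Eq(f_R, 0), phi)
print(f_R)         # 1 - phi**2
print(singular)    # field values on the hypersurface df/dR = 0
```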