109 results for REALISTIC MODELS
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
The negative pressure accompanying gravitationally-induced particle creation can lead to a cold dark matter (CDM) dominated, accelerating Universe (Lima et al. 1996 [1]) without requiring the presence of dark energy or a cosmological constant. In a recent study, Lima et al. 2008 [2] (LSS) demonstrated that particle creation driven cosmological models are capable of accounting for the SNIa observations [3] of the recent transition from a decelerating to an accelerating Universe, without the need for Dark Energy. Here we consider a class of such models where the particle creation rate is assumed to be of the form Γ = βH + γH₀, where H is the Hubble parameter and H₀ is its present value. The evolution of such models is tested at low redshift by the latest SNe Ia data provided by the Union compilation [4] and at high redshift using the value of z_eq, the redshift of the epoch of matter–radiation equality, inferred from the WMAP constraints on the early Integrated Sachs-Wolfe (ISW) effect [5]. Since the contributions of baryons and radiation were ignored in the work of LSS, we include them in our study of this class of models. The parameters of these more realistic models with continuous creation of CDM are constrained at widely-separated epochs (z_eq ≈ 3000 and z ≈ 0) in the evolution of the Universe. The comparison of the parameter values, {β, γ}, determined at these different epochs reveals a tension between the values favored by the high redshift CMB constraint on z_eq from the ISW and those which follow from the low redshift SNIa data, posing a potential challenge to this class of models. While for β = 0 this conflict is only at ≲ 2σ, it worsens as β increases from zero.
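To make the low-redshift dynamics concrete, here is a minimal numerical sketch (not from the paper), assuming the flat, CDM-dominated evolution equation used by LSS, dH/dt = -(3/2)H²(1 - Γ/3H), which for Γ = βH + γH₀ integrates to E(z) ≡ H/H₀ = γ/(3-β) + [1 - γ/(3-β)](1+z)^((3-β)/2); baryons and radiation are neglected, as in LSS, and the parameter values are illustrative:

```python
import numpy as np

# Hedged reconstruction: flat, CDM-dominated CCDM evolution (baryons and
# radiation neglected), dH/dt = -(3/2) H^2 (1 - Gamma/(3H)), with the
# creation rate Gamma = beta*H + gamma*H0.

def E(z, beta, gamma):
    """Dimensionless Hubble rate H(z)/H0 (analytic solution of the sketch)."""
    a = (3.0 - beta) / 2.0            # growing-mode exponent
    c = gamma / (3.0 - beta)          # constant, de Sitter-like piece
    return c + (1.0 - c) * (1.0 + z) ** a

def q(z, beta, gamma, dz=1e-5):
    """Deceleration parameter q(z) = -1 + (1+z) dlnE/dz, by central difference."""
    dlnE = (np.log(E(z + dz, beta, gamma)) - np.log(E(z - dz, beta, gamma))) / (2 * dz)
    return -1.0 + (1.0 + z) * dlnE

zs = np.linspace(0.0, 2.0, 2001)
qs = q(zs, 0.0, 1.4)                  # beta = 0, gamma = 1.4 (illustrative)
z_t = zs[np.argmin(np.abs(qs))]       # deceleration-acceleration transition
print(f"q(0) = {qs[0]:.2f}, z_t = {z_t:.2f}")
```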
Abstract:
The comprehensive characterization of the structure of complex networks is essential to understand the dynamical processes which guide their evolution. The discovery of the scale-free distribution and the small-world properties of real networks was fundamental in stimulating more realistic models and in understanding important dynamical processes related to network growth. However, the properties of the network borders (nodes with degree equal to 1), one of the most fragile parts of a network, have remained little investigated and understood. The border nodes may be involved in the evolution of structures such as geographical networks. Here we analyze the border trees of complex networks, defined as the subgraphs without cycles that are connected to the remainder of the network (which contains the cycles) and terminate in border nodes. In addition to describing an algorithm for the identification of such tree subgraphs, we also consider how their topological properties can be quantified in terms of their depth and number of leaves. We investigate the properties of border trees for several theoretical models as well as real-world networks. Among the results obtained, we found that more than half of the nodes of some real-world networks belong to border trees. A power law with cut-off was observed for the distributions of the depth and number of leaves of the border trees. An analysis of the local role of the nodes in the border trees was also performed.
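The abstract does not reproduce the identification algorithm; a common way to extract such acyclic appendages is iterative pruning of degree-1 (border) nodes, equivalent to taking the complement of the 2-core. A hypothetical sketch using networkx:

```python
import networkx as nx

def border_tree_nodes(G):
    """Nodes belonging to border trees: the acyclic subgraphs hanging off the
    cyclic core, found by iteratively pruning degree-<=1 (border) nodes.
    Hypothetical sketch -- not necessarily the paper's algorithm."""
    H = G.copy()
    pruned = set()
    while True:
        leaves = [n for n in H.nodes if H.degree(n) <= 1]
        if not leaves:
            break
        pruned.update(leaves)
        H.remove_nodes_from(leaves)
    return pruned  # equivalently, the complement of the 2-core of G

# Example: a triangle (cyclic core) with a 3-node tree attached at node 0
G = nx.Graph([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (3, 5)])
print(border_tree_nodes(G))  # {3, 4, 5}
```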
Abstract:
The behaviour of interacting ultracold Rydberg atoms in both constant electric fields and laser fields is important for designing experiments and constructing realistic models of them. In this paper, we briefly review our prior work and present new results on how electric fields affect interacting ultracold Rydberg atoms. Specifically, we address the topics of constant background electric fields on Rydberg atom pair excitation and laser-induced Stark shifts on pair excitation.
Abstract:
The lightest supersymmetric particle may decay with branching ratios that correlate with neutrino oscillation parameters. In this case the CERN Large Hadron Collider (LHC) has the potential to probe the atmospheric neutrino mixing angle with sensitivity competitive to its low-energy determination by underground experiments. Under realistic detection assumptions, we identify the necessary conditions for the experiments at CERN's LHC to probe the simplest scenario for neutrino masses induced by minimal supergravity with bilinear R parity violation.
Abstract:
We present here new results of two-dimensional hydrodynamical simulations of the eruptive events suffered by the massive star η Carinae (η Car): the great eruption of the 1840s and the minor eruption of the 1890s. The two bipolar nebulae commonly known as the Homunculus and the little Homunculus (LH) were formed from the interaction of these eruptive events with the underlying stellar wind. We assume here an interacting, non-spherical, multiple-phase wind scenario to explain the shape and the kinematics of both Homunculi, but adopt a more realistic parametrization of the phases of the wind. During the 1890s eruptive event, the outflow speed decreased for a short period of time. This fact suggests that the LH is formed when the eruption ends, from the impact of the post-outburst η Car wind (that follows the 1890s event) with the eruptive flow (rather than from the collision of the eruptive flow with the pre-outburst wind, as claimed in previous models; Gonzalez et al.). Our simulations reproduce quite well the shape and the observed expansion speed of the large Homunculus. The LH (which is embedded within the large Homunculus) becomes Rayleigh-Taylor unstable and develops filamentary structures that resemble the spatial features observed in the polar caps. In addition, we find that the interior cavity between the two Homunculi is partially filled by material expelled during the decades following the great eruption. This result may be connected with the observed double-shell structure in the polar lobes of the η Car nebula. Finally, as in previous work, we find the formation of tenuous, equatorial, high-speed features that seem to be related to the observed equatorial skirt of η Car.
Abstract:
The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union compilation set. Three simple parameterizations of the deceleration parameter (constant, linear and abrupt transition) and two models explicitly parametrized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find best-fit parameters and to compare the models among themselves and with the flat ΛCDM model. Analytical expressions and estimates are given for the present-day deceleration and cosmic jerk parameters (q₀ and j₀) and for the transition redshift (z_t) between a past phase of cosmic deceleration and the current phase of acceleration. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernovae data. For the most realistic kinematic models the 1σ confidence limits imply the following ranges of values: q₀ ∈ [−0.96, −0.46], j₀ ∈ [−3.2, −0.3] and z_t ∈ [0.36, 0.84], which are compatible with the ΛCDM predictions, q₀ = −0.57 ± 0.04, j₀ = −1 and z_t = 0.71 ± 0.08. We find that even very simple kinematic models describe the data as well as the concordance ΛCDM model, and that the current observations are not powerful enough to discriminate among all of them.
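As an illustration of the kinematic approach (a hypothetical sketch; the paper's exact parameterizations may differ), a linear form q(z) = q₀ + q₁z gives a transition redshift z_t = −q₀/q₁, and the dimensionless Hubble rate follows from the purely kinematic relation E(z) = exp[∫₀ᶻ (1+q(u))/(1+u) du]:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical sketch: linear deceleration parameterization q(z) = q0 + q1*z,
# with E(z) = exp( integral_0^z (1 + q(u)) / (1 + u) du ).

def E(z, q0, q1):
    integrand = lambda u: (1.0 + q0 + q1 * u) / (1.0 + u)
    return np.exp(quad(integrand, 0.0, z)[0])

q0, q1 = -0.6, 1.2             # illustrative values, not fit results
z_t = -q0 / q1                 # transition redshift where q(z_t) = 0
print(f"z_t = {z_t:.2f}")      # 0.50, inside the quoted range [0.36, 0.84]
print(f"E(1) = {E(1.0, q0, q1):.2f}")
```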
Abstract:
In this paper, we develop a flexible cure rate survival model by assuming that the number of competing causes of the event of interest follows a compound weighted Poisson distribution. This model is more flexible in terms of dispersion than the promotion time cure model. Moreover, it gives an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest, as it includes a destructive process acting on the initial risk factors in a competitive scenario. In other words, what is recorded is only the undamaged portion of the original number of risk factors.
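As a toy illustration of the destructive mechanism (hypothetical code, with a plain Poisson standing in for the paper's compound weighted Poisson, and exponential latent times as an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_times(n, theta=2.0, p_survive=0.6, scale=5.0):
    """Toy destructive cure-rate simulation (illustrative only):
    M ~ Poisson(theta) initial risk factors; each survives the destructive
    process with probability p_survive; undamaged factors get i.i.d.
    exponential latent times; the event time is their minimum, inf = cured."""
    M = rng.poisson(theta, size=n)                 # initial competing causes
    D = rng.binomial(M, p_survive)                 # undamaged causes remaining
    times = np.full(n, np.inf)                     # inf marks cured subjects
    has_risk = D > 0
    # the minimum of D i.i.d. Exp(scale) variables is Exp(scale / D)
    times[has_risk] = rng.exponential(scale / D[has_risk])
    return times

t = simulate_times(100_000)
print(f"cure fraction ~ {np.mean(np.isinf(t)):.3f}")  # ~ exp(-theta*p) = 0.301
```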
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works in both scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M × 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
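For reference, a generic "good" Broyden iteration for a coupling residual R(x) = 0 looks as follows (a sketch of the solver family assessed in the paper, not the authors' matrix-free implementation; the residual here is a stand-in for the 2M coupling equations):

```python
import numpy as np

def broyden_good(R, x0, tol=1e-10, max_iter=50):
    """'Good' Broyden iteration with a dense secant Jacobian approximation.
    Generic sketch of the solver family assessed in the paper; the paper's
    matrix-free variants avoid storing B explicitly."""
    x = np.asarray(x0, dtype=float)
    r = R(x)
    B = np.eye(len(x))                       # initial Jacobian guess
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(B, -r)          # quasi-Newton step
        x_new = x + dx
        r_new = R(x_new)
        # rank-1 secant update enforcing B_new @ dx = r_new - r
        B += np.outer(r_new - r - B @ dx, dx) / (dx @ dx)
        x, r = x_new, r_new
    return x

# Stand-in residual: two coupled non-linear 'interface' equations
R = lambda x: np.array([x[0] + 0.5 * np.sin(x[1]) - 1.0,
                        x[1] + 0.5 * np.cos(x[0]) - 1.0])
print(broyden_good(R, [0.0, 0.0]))
```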
Abstract:
The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements from plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models, obtained from 65 white patients of both genders with Class I or Class II Division 1 malocclusion, treated by standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, prior to and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the data relative to pre- and post-treatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way ANOVA for repeated measurements (α = 0.05). Initial/final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width); and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between the mean dental arch widths estimated by the three methods, either prior to or after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provide reliable reproductions of the dental arches for obtaining transversal intra-arch measurements.
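For readers wanting to reproduce this kind of comparison, a one-way repeated-measures ANOVA can be run with statsmodels (hypothetical data and column names; the study's actual measurements are not reproduced here):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical data: intercanine width (mm) for 10 cases, each measured on
# plaster models, photocopies and digitized images.
rng = np.random.default_rng(1)
true_width = rng.normal(34.0, 1.0, size=10)
rows = [
    {"case": i, "method": m, "width": w + rng.normal(0.0, 0.1)}
    for i, w in enumerate(true_width)
    for m in ("plaster", "photocopy", "digitized")
]
df = pd.DataFrame(rows)

# One-way repeated-measures ANOVA: the same cases measured by all three methods
res = AnovaRM(df, depvar="width", subject="case", within=["method"]).fit()
print(res)  # no method effect was built in, so a non-significant result is expected
```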
Abstract:
Dental impression is an important step in the preparation of prostheses, since it provides the reproduction of anatomic and surface details of teeth and adjacent structures. The objective of this study was to evaluate the linear dimensional alterations in gypsum dies obtained with different elastomeric materials, using a resin coping impression technique with individual shells. A master cast made of stainless steel with fixed prosthesis characteristics and two prepared abutment teeth was used to obtain the impressions. Reference points (A, B, C, D, E, F, G and H) were recorded on the occlusal and buccal surfaces of the abutments to register the distances. The impressions were obtained using the following materials: polyether, mercaptan-polysulfide, addition silicone, and condensation silicone. The transfer impressions were made with custom trays and an irreversible hydrocolloid material and were poured with type IV gypsum. The distances between the identified points on the gypsum dies were measured using an optical microscope and the results were statistically analyzed by ANOVA (p < 0.05) and Tukey's test. The mean distances were as follows: addition silicone (AB = 13.6 µm, CD = 15.0 µm, EF = 14.6 µm, GH = 15.2 µm), mercaptan-polysulfide (AB = 36.0 µm, CD = 36.0 µm, EF = 39.6 µm, GH = 40.6 µm), polyether (AB = 35.2 µm, CD = 35.6 µm, EF = 39.4 µm, GH = 41.4 µm) and condensation silicone (AB = 69.2 µm, CD = 71.0 µm, EF = 80.6 µm, GH = 81.2 µm). All of the measurements on the gypsum dies were compared to those of the master cast. The results demonstrated that addition silicone provided the best dimensional stability among the materials tested, followed by polyether, polysulfide and condensation silicone. No statistically significant difference was found between the polyether and mercaptan-polysulfide materials.
Abstract:
The purpose of this study was to develop and validate equations to estimate the aboveground phytomass of a 30-year-old plot of Atlantic Forest. In two plots of 100 m², a total of 82 trees were cut down at ground level. For each tree, height and diameter were measured. Leaves and woody material were separated in order to determine their fresh weights in field conditions. Samples of each fraction were oven-dried at 80 °C to constant weight to determine their dry weight. The tree data were divided into two random samples: one was used for the development of the regression equations, and the other for validation. The models were developed using simple linear regression analysis, where the dependent variable was the dry mass (DW) and the independent variables were height (h), diameter (d) and d²h. The validation was carried out using the Pearson correlation coefficient, the paired Student's t-test and the standard error of the estimate. The best equations to estimate aboveground phytomass were: ln DW = -3.068 + 2.522 ln d (r² = 0.91; s_y/x = 0.67) and ln DW = -3.676 + 0.951 ln(d²h) (r² = 0.94; s_y/x = 0.56).
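As a worked example (hypothetical input; units of cm for d and kg for DW are assumed here, the paper defines them in its methods), the first equation gives for a tree of d = 10 cm:

```python
import math

# Worked example with hypothetical input: dry weight for a tree of d = 10 cm
# using ln DW = -3.068 + 2.522 ln d. Units (d in cm, DW in kg) are assumed
# here; the paper defines them in its methods.
d = 10.0
ln_dw = -3.068 + 2.522 * math.log(d)
print(f"DW = {math.exp(ln_dw):.1f}")  # about 15.5
```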
Abstract:
The enzyme purine nucleoside phosphorylase from Schistosoma mansoni (SmPNP) is an attractive molecular target for the treatment of major parasitic infectious diseases, with special emphasis on its role in the discovery of new drugs against schistosomiasis, a tropical disease that affects millions of people worldwide. In the present work, we have determined the inhibitory potency and developed descriptor- and fragment-based quantitative structure-activity relationships (QSAR) for a series of 9-deazaguanine analogs as inhibitors of SmPNP. Significant statistical parameters (descriptor-based model: r² = 0.79, q² = 0.62, r²_pred = 0.52; and fragment-based model: r² = 0.95, q² = 0.81, r²_pred = 0.80) were obtained, indicating the potential of the models for untested compounds. The fragment-based model was then used to predict the inhibitory potency of a test set of compounds, and the predicted values are in good agreement with the experimental results.
Abstract:
In this work we report a comparison of some theoretical models commonly used to fit the temperature dependence of the fundamental energy gap of semiconductor materials. We used the theoretical models of Viña, Pässler-p and Pässler-ρ to fit several sets of experimental data available in the literature for the energy gap of GaAs in the temperature range from 12 to 974 K. By performing several fittings for different values of the upper limit of the analyzed temperature range (Tmax), we were able to follow in a systematic way the evolution of the fitting parameters up to the high-temperature limit, and to compare the zero-point values obtained from the different models, by extrapolating the linear dependence of the gap at high T to T = 0 K, with the value determined from the dependence of the gap on isotope mass. Using experimental data measured by absorption spectroscopy, we observed the non-linear behavior of Eg(T) of GaAs for T > ΘD.
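For reference, the Pässler-p form is Eg(T) = Eg(0) − (αΘ_p/2)[(1 + (2T/Θ_p)^p)^(1/p) − 1]; a fit can be sketched with scipy (synthetic, GaAs-like parameter values, not the paper's fit results or measured data sets):

```python
import numpy as np
from scipy.optimize import curve_fit

def passler_p(T, Eg0, alpha, theta_p, p):
    """Passler-p band-gap model:
    Eg(T) = Eg0 - (alpha*theta_p/2) * ((1 + (2T/theta_p)**p)**(1/p) - 1)."""
    return Eg0 - 0.5 * alpha * theta_p * ((1.0 + (2.0 * T / theta_p) ** p) ** (1.0 / p) - 1.0)

# Synthetic GaAs-like data over the quoted 12-974 K range (illustrative values)
T = np.linspace(12.0, 974.0, 60)
Eg = passler_p(T, 1.519, 4.8e-4, 230.0, 2.5)
Eg += np.random.default_rng(2).normal(0.0, 2e-4, T.size)

popt, _ = curve_fit(passler_p, T, Eg, p0=[1.5, 5e-4, 250.0, 2.0])
Eg0, alpha, theta_p, p = popt
print(f"Eg(0) = {Eg0:.4f} eV, alpha = {alpha:.2e} eV/K, theta_p = {theta_p:.0f} K, p = {p:.2f}")
# The high-T linear extrapolation to T = 0 overshoots Eg(0) by alpha*theta_p/2:
print(f"zero-point offset = {0.5 * alpha * theta_p * 1e3:.0f} meV")
```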
Abstract:
The aim of this study was to determine the reproducibility, reliability and validity of measurements made on digital models compared to plaster models. Fifteen pairs of plaster models were obtained from orthodontic patients with permanent dentition before treatment. These were digitized to be evaluated with the program Cécile3 v2.554.2 beta. Two examiners measured, three times each, the mesiodistal width of all the teeth present, the intercanine, interpremolar and intermolar distances, and the overjet and overbite. The plaster models were measured using a digital vernier caliper. The paired Student's t-test and the intraclass correlation coefficient (ICC) were used for statistical analysis. The ICCs of the digital models were 0.84 ± 0.15 (intra-examiner) and 0.80 ± 0.19 (inter-examiner). The mean difference for the digital models was 0.23 ± 0.14 and 0.24 ± 0.11 for each examiner, respectively. When the two types of measurements were compared, the values obtained from the digital models were lower than those obtained from the plaster models (p < 0.05), although the differences were considered clinically insignificant (differences < 0.1 mm). The Cécile digital models are a clinically acceptable alternative for use in orthodontics.
Abstract:
With the aim of comparing women's satisfaction with the childbirth experience under three models of care, a descriptive study with a quantitative approach was carried out in two public hospitals in São Paulo, one following the "Typical" model and the other housing both an in-hospital birth center ("CPNIH" model) and a peri-hospital one ("CPNPH" model). The sample consisted of 90 postpartum women, 30 from each model. Comparison of the results concerning women's satisfaction with the care provided by the health professionals, the quality of the assistance, the reasons for satisfaction and dissatisfaction, the endorsement or recommendation of the services received, the sense of safety during the process, and suggestions for improvement showed that the CPNPH model was rated best, followed by the CPNIH model and, lastly, the Typical model. It is concluded that the peri-hospital model of childbirth care should receive greater support from the SUS (Brazilian Unified Health System), as it constitutes a service with which women are satisfied with the care received.