891 results for Technicolor and Composite Models
Abstract:
The time discretization in weather and climate models introduces truncation errors that limit the accuracy of the simulations. Recent work has yielded a method for reducing the amplitude errors in leapfrog integrations from first-order to fifth-order. This improvement is achieved by replacing the Robert--Asselin filter with the RAW filter and using a linear combination of the unfiltered and filtered states to compute the tendency term. The purpose of the present paper is to apply the composite-tendency RAW-filtered leapfrog scheme to semi-implicit integrations. A theoretical analysis shows that the stability and accuracy are unaffected by the introduction of the implicitly treated mode. The scheme is tested in semi-implicit numerical integrations in both a simple nonlinear stiff system and a medium-complexity atmospheric general circulation model, and yields substantial improvements in both cases. We conclude that the composite-tendency RAW-filtered leapfrog scheme is suitable for use in semi-implicit integrations.
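For concreteness, here is a minimal Python sketch of a RAW-filtered leapfrog step for a generic ODE dx/dt = F(x). The filter parameters nu and alpha and the composite-tendency weight beta are illustrative placeholders; the abstract only states that the tendency is computed on a linear combination of the unfiltered and filtered states, so the exact weighting used in the paper may differ (beta = 0 recovers the plain RAW-filtered scheme).

```python
import numpy as np

def raw_leapfrog(F, x0, dt, nsteps, nu=0.2, alpha=0.53, beta=0.0):
    """Leapfrog integration of dx/dt = F(x) with the RAW filter.

    alpha = 1 recovers the classical Robert--Asselin filter; beta > 0
    switches on an (assumed) composite tendency, evaluating F on a
    linear combination of the filtered and unfiltered level-n states.
    """
    xm = np.asarray(x0, dtype=float)   # filtered state at level n-1
    xc = xm + dt * F(xm)               # forward-Euler start-up to level n
    xc_raw = xc.copy()                 # never-filtered copy of level n
    traj = [xm.copy(), xc.copy()]
    for _ in range(nsteps - 1):
        tend = F((1.0 - beta) * xc + beta * xc_raw)  # composite tendency
        xn = xm + 2.0 * dt * tend                    # leapfrog update
        d = 0.5 * nu * (xm - 2.0 * xc + xn)          # filter displacement
        xm = xc + alpha * d                # fully filtered level n
        xn_raw = xn.copy()                 # unfiltered copy for next step
        xn = xn + (alpha - 1.0) * d        # partially filtered level n+1
        xc, xc_raw = xn, xn_raw
        traj.append(xc.copy())
    return np.array(traj)
```

In a semi-implicit integration the fast linear term would additionally be treated implicitly; this sketch keeps everything explicit for brevity.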
Abstract:
There is ongoing work on the conceptual modelling of such business notions as Affordance and Capability. We have found that these notions are constructively defined using elements and properties of executable behaviour models. In this paper, we clarify the definitions of Affordance and Capability using Coloured Petri Nets and Protocol models. The illustrating case is the process of drug injection. We show that different behaviour modelling techniques provide different precision for the definition of Affordance and Capability, and we clarify the conceptual models of these notions. We generalise that behaviour models can be used to improve the precision of conceptualization.
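As a loose illustration of what "constructively defined using executable behaviour models" can mean, the following Python sketch encodes the drug-injection case as a plain place/transition net (the paper uses richer Coloured Petri Nets and Protocol models; the place and transition names here are hypothetical). Enabledness of the inject transition is one executable reading of an Affordance, and an actor's ability to fire it, one reading of a Capability.

```python
from collections import Counter

class PetriNet:
    def __init__(self, marking):
        self.marking = Counter(marking)   # place name -> token count
        self.transitions = {}             # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (Counter(inputs), Counter(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        self.marking -= inputs            # consume input tokens
        self.marking += outputs           # produce output tokens

# Hypothetical marking and arcs for the drug-injection case.
net = PetriNet({"nurse_available": 1, "drug_prepared": 1, "patient_ready": 1})
net.add_transition("inject",
                   inputs={"nurse_available": 1, "drug_prepared": 1,
                           "patient_ready": 1},
                   outputs={"nurse_available": 1, "patient_injected": 1})

assert net.enabled("inject")   # the affordance holds in this state
net.fire("inject")
```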
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models and the implementation of new, highly uncertain, OA sources. The modeled vertical distribution of OA concentrations varies by more than an order of magnitude across models, a spread that deserves a dedicated future study. Furthermore, although the OA/OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved by only a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile nature of SOA, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For the 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine-aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, in both strength and seasonality. The combined model–measurement analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison of all models at the surface against urban OC (OA) data, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data.
The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of a high (negative) MNB and higher correlation at urban stations, compared with the low MNB and lower correlation at remote sites, suggests that knowledge about the processes that govern aerosol processing, transport and removal, in addition to their sources, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, complexity is needed in models in order to distinguish anthropogenic from natural OA, as required for climate mitigation, and to calculate the impact of OA on climate accurately.
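For reference, a short Python sketch of the two evaluation statistics quoted above, under their standard definitions (the authors' exact averaging conventions, e.g. station-by-station versus pooled, are not spelled out in the abstract):

```python
import numpy as np

def mean_normalized_bias(model, obs):
    """MNB = mean of (model - obs) / obs over paired samples.
    Observations must be strictly positive."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.mean((model - obs) / obs))

def mean_temporal_correlation(model_by_station, obs_by_station):
    """Pearson correlation of the model and observed time series,
    computed per station and then averaged across stations."""
    rs = [np.corrcoef(m, o)[0, 1]
          for m, o in zip(model_by_station, obs_by_station)]
    return float(np.mean(rs))
```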
Abstract:
We construct and compare in this work a variety of simple models for strange stars, namely, hypothetical self-bound objects made of a cold stable version of the quark-gluon plasma. Exact, quasi-exact and numerical models are examined to find the most economical description for these objects. A simple and successful parametrization of them is given in terms of the central density, and the differences among the models are explicitly shown and discussed. In particular, we present a model starting with a Gaussian ansatz for the density profile that provides a very accurate and almost complete analytical integration of the problem, modulo a small difference for one of the metric potentials.
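As an illustration of why the Gaussian ansatz is analytically convenient, the enclosed-mass integral comes out in closed form, leaving only the pressure equation for numerical treatment; a hedged Python sketch follows (geometric units G = c = 1; the width parameter a and central density rho_c are illustrative, and the paper's parametrization in terms of the central density may differ in detail):

```python
import numpy as np
from scipy.special import erf

def enclosed_mass(r, rho_c, a):
    """Closed-form m(r) = integral of 4*pi*r'^2 * rho(r') from 0 to r
    for the Gaussian ansatz rho(r) = rho_c * exp(-(r/a)**2)."""
    return 4.0 * np.pi * rho_c * (
        np.sqrt(np.pi) * a**3 / 4.0 * erf(r / a)
        - a**2 * r / 2.0 * np.exp(-(r / a) ** 2))

def tov_dpdr(r, P, rho, m):
    """Tolman-Oppenheimer-Volkoff pressure gradient (G = c = 1)."""
    return -(rho + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
```

Given an equation of state rho(P) and the closed-form m(r), only this single ODE remains to be integrated, which is in the spirit of the "almost complete analytical integration" the abstract describes.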
Abstract:
The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union Compilation set. Three simple parameterizations of the deceleration parameter (constant, linear, and abrupt transition) and two models that are explicitly parameterized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find best-fit parameters and to compare the models among themselves and with the flat ΛCDM model. Analytical expressions and estimates are given for the present-day deceleration and cosmic jerk parameters (q_0 and j_0) and for the transition redshift (z_t) between a past phase of cosmic deceleration and the current phase of acceleration. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernovae data. For the most realistic kinematic models the 1-sigma confidence limits imply the following ranges of values: q_0 ∈ [−0.96, −0.46], j_0 ∈ [−3.2, −0.3] and z_t ∈ [0.36, 0.84], which are compatible with the ΛCDM predictions q_0 = −0.57 ± 0.04, j_0 = −1 and z_t = 0.71 ± 0.08. We find that even very simple kinematic models describe the data as well as the concordance ΛCDM model, and that the current observations are not powerful enough to discriminate among all of them.
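For the linear parameterization q(z) = q0 + q1*z, the kinematic identity dlnH/dz = (1 + q)/(1 + z) integrates in closed form, giving the expansion rate and the transition redshift directly; a short sketch (the parameter values are illustrative picks inside the quoted 1-sigma ranges, not the paper's best fits):

```python
import numpy as np

def H_of_z_linear_q(z, H0, q0, q1):
    """H(z) for the kinematic ansatz q(z) = q0 + q1*z, from
    H(z) = H0 * exp( int_0^z (1 + q(u)) / (1 + u) du ),
    which integrates to H0 * (1+z)**(1 + q0 - q1) * exp(q1*z)."""
    return H0 * (1.0 + z) ** (1.0 + q0 - q1) * np.exp(q1 * z)

def transition_redshift(q0, q1):
    """Redshift z_t where q(z_t) = 0 in the linear parameterization."""
    return -q0 / q1

print(transition_redshift(q0=-0.7, q1=1.2))   # ~0.58, inside [0.36, 0.84]
```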
Abstract:
The absorption spectrum of the acid form of pterin in water was investigated theoretically. Different procedures using continuum, discrete, and explicit solvent models were used to include the solvation effect on the absorption spectrum, which is characterized by two bands. The discrete and explicit models used Monte Carlo simulation to generate the liquid structure and time-dependent density functional theory (B3LYP/6-31+G(d)) to obtain the excitation energies. The discrete model failed to give the correct qualitative effect on the second absorption band. The continuum model, in turn, gave a correct qualitative picture and a semiquantitative description. The explicit use of 29 solvent molecules, forming a hydration shell of 6 Å, embedded in the electrostatic field of the remaining solvent molecules, gives absorption transitions at 3.67 and 4.59 eV, in excellent agreement with the S0-S1 and S0-S2 absorption bands at 3.66 and 4.59 eV, respectively, that characterize the experimental spectrum of pterin in water.
Abstract:
We present a new version (> 2.0) of the hglm package for fitting hierarchical generalized linear models (HGLMs) with spatially correlated random effects. CAR() and SAR() families for conditional and simultaneous autoregressive random effects were implemented. Eigendecomposition of the matrix describing the spatial structure (e.g., the neighborhood matrix) is used to transform the CAR/SAR random effects into independent, but heteroscedastic, Gaussian random effects. A linear predictor is fitted for the random-effect variance to estimate the parameters in the CAR and SAR models. This gives a computationally efficient algorithm for moderately sized problems.
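The linear-algebra trick behind this can be sketched in a few lines of NumPy (the neighbourhood matrix below is a hypothetical 4-region example; hglm itself is an R package, so this only illustrates the transformation, not its API):

```python
import numpy as np

# Hypothetical symmetric 4-region neighbourhood (adjacency) matrix D.
D = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

lam, V = np.linalg.eigh(D)        # D = V diag(lam) V^T, computed once

tau2, rho = 1.0, 0.2
# CAR: u ~ N(0, tau2 * (I - rho*D)^{-1}).  In the rotated basis v = V^T u
# the components are independent with heteroscedastic variances:
var_car = tau2 / (1.0 - rho * lam)
# SAR: u ~ N(0, tau2 * [(I - rho*D)(I - rho*D)^T]^{-1}) gives, for
# symmetric D, variances tau2 / (1 - rho*lam)**2 in the same basis.
var_sar = tau2 / (1.0 - rho * lam) ** 2

# Simulate a CAR random effect via the independent representation:
u = V @ (np.sqrt(var_car) * np.random.standard_normal(4))
```

Because the eigendecomposition is done once, each subsequent iteration of the fitting algorithm only rescales independent Gaussian components, which is where the computational efficiency comes from.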
Abstract:
In recent years, extreme hydrometeorological phenomena have increased in number and intensity, affecting the inhabitants of various regions. An example is the central basins of the Gulf of Mexico (CBGM), 55.2% of which have been affected by floods, especially in the state of Veracruz (1999-2013), leaving economic, social and environmental losses. Mexico currently lacks sufficient hydrological studies for measuring river volumes, so it is convenient to create a hydrological model (HM), suited to the quality and quantity of the available geographic and climatic information, that is reliable and affordable. This research therefore compares a semi-distributed hydrological model (SHM) and a global hydrological model (GHM) with respect to runoff volumes and the prediction of flood areas. Extreme hydrometeorological phenomena in the CBGM were analyzed by modeling with the Hydrologic Modeling System (HEC-HMS), an SHM, and the Modèle Hydrologique Simplifié à l'Extrême (MOHYSE), a GHM, in order to evaluate the results and determine which model is suitable for tropical conditions, with a view to proposing public policies for integrated basin management and flood prevention. The temporal and spatial framework of the analyzed basins was determined according to hurricanes and floods. The SHM and GHM were developed, calibrated and validated, and their results were compared to assess their sensitivity with respect to the observed behaviour. It was concluded that both models conform to the tropical conditions of the CBGM, with MOHYSE approximating the observations more closely. It is worth mentioning that in Mexico there is not enough information and there are no records of MOHYSE being used, so it can be a useful tool for determining runoff volumes. Finally, climate change scenarios were generated with the SHM and the GHM to develop risk studies, creating a risk map for urban planning, agro-hydrological planning and territorial organization.
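The abstract does not name the calibration/validation skill metric; a common choice for comparing simulated and observed runoff in such studies is the Nash-Sutcliffe efficiency, sketched here purely for illustration:

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit to the observed
    runoff series; 0 means no better than the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```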
Abstract:
Institutions continue to face increasing pressure from faculty, students, and other concerned constituents to divest endowment holdings from perceived social injustices. In this report, investment officers and advisory committee members offer insight into institutional practices used to respond to these concerns through the adoption of socially responsible investment policies and other socially responsible investment options. Contacts offer recommendations on balancing the administration’s fiduciary responsibility to ensure maximum endowment returns with the social concerns of institutional constituents.
Abstract:
Light dynamics is a relevant phenomenon with respect to esthetic restorations, as incorrect analysis of the optical behavior of natural dentition may lead to potential clinical failures. The nature of incident light plays a major role in determining the amount of light transmission or reflection, and how an object is perceived depends on the nature of the light source. Natural teeth demonstrate translucency, opalescence, and fluorescence, all of which must be replicated by restorative materials in order to achieve clinical success. Translucency is the intermediary between complete opacity and complete transparency, making its analysis highly subjective. In nature, the translucency of dental enamel varies from tooth to tooth, and from individual to individual. Therefore, four important factors must be considered when appraising translucency: presence or absence of color, thickness of the enamel, degree of translucency, and surface texture. State-of-the-art resin composites provide varying shades and opacities that deliver a more faithful reproduction of the chromaticity and translucency/opacity of enamel and dentin. This enables the attainment of individualized and customized composite restorations. The objective of this article is to provide a review of the phenomena of translucency and opacity in the natural dentition and composite resins, under the scope of optics, and to describe how to implement these concepts in the clinical setting. CLINICAL SIGNIFICANCE: Choosing composite resins, based on optical properties alone, in order to mimic the properties of natural tooth structures, does not necessarily provide a satisfactory esthetic outcome. In many instances, failure ensues from incorrect analysis of the optical behaviors of the natural dentition as well as the improper use of restorative materials. Therefore, it is necessary to implement a technique that enables a restorative material to be utilized to its full potential to correctly replicate the natural teeth.
Abstract:
The aim of this work was to evaluate the effect of surface treatment with Er:YAG and Nd:YAG lasers on the bond strength of resin composite to recently bleached enamel. In this study, 120 bovine incisors were distributed into two groups: group C, without bleaching treatment, and group B, bleached with 35% hydrogen peroxide. Each group was divided into three subgroups: subgroup N, without laser treatment; subgroup Nd, irradiated with the Nd:YAG laser; and subgroup Er, irradiated with the Er:YAG laser. The adhesive system (Adper Single Bond 2) was then applied and composite buildups were constructed with Filtek Supreme composite. The teeth were sectioned to obtain enamel-resin sticks (1 x 1 mm) and submitted to microtensile bond testing. The data were statistically analyzed by ANOVA and Tukey tests. The bond strength in the bleached control group (5.57 MPa) differed significantly from the groups bleached and irradiated with the Er:YAG laser (13.18 MPa) or the Nd:YAG laser (25.67 MPa). The non-bleached control group presented a mean value of 30.92 MPa, differing significantly from all the other groups. The use of the Nd:YAG and Er:YAG lasers on bleached specimens improved their bond strengths.
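For illustration, the reported analysis pipeline (one-way ANOVA followed by Tukey's test) can be reproduced in Python on simulated data centred on the reported group means; the per-specimen values below are hypothetical, since the abstract reports only the means:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical bond-strength samples (MPa) around the reported means;
# the spread and sample size per group are assumptions.
rng = np.random.default_rng(1)
means = {"B-control": 5.57, "B+Er:YAG": 13.18,
         "B+Nd:YAG": 25.67, "C-control": 30.92}
data = {g: rng.normal(mu, 3.0, size=20) for g, mu in means.items()}

print(f_oneway(*data.values()))                 # one-way ANOVA F-test
values = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), 20)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey pairwise
```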