960 results for radiological contrast
Abstract:
This research examined the conditions under which behavioral contrast would be observed in relation to ingroup and outgroup primes. The authors tested the hypothesis that differing levels of commitment to the ingroup would predict diverging behavioral responses to outgroup but not ingroup primes. Across two studies, featuring both age and gender groups, ingroup identification predicted responses to outgroup primes, with higher identifiers showing an increased tendency to contrast, that is, to behave less like the outgroup and more like the ingroup. Ingroup identification did not predict responses to ingroup primes. The implications of these findings for social comparison and social identity theories are discussed. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
A radionuclide source term model has been developed which simulates the biogeochemical evolution of the Drigg low-level waste (LLW) disposal site. The DRINK (DRIgg Near field Kinetic) model provides data regarding radionuclide concentrations in groundwater over a period of 100,000 years, which are used as input to assessment calculations for a groundwater pathway. The DRINK model also provides input to human intrusion and gaseous assessment calculations through simulation of the solid radionuclide inventory. These calculations are being used to support the Drigg post-closure safety case. The DRINK model considers the coupled interaction of the effects of fluid flow, microbiology, corrosion, chemical reaction, sorption and radioactive decay. It represents the first direct use of a mechanistic reaction-transport model in risk assessment calculations.
Abstract:
Tethered deuterated polystyrene-block-polymethyl methacrylate films have been examined by X-ray scattering both in their native state and following treatment with ruthenium tetroxide. The use of the stain, while increasing the thickness of the films, does not significantly alter the lateral structure or periodicity of the films and provides contrast between the two blocks. Both the periodicity of the films and the structure normal to the surface have been identified following staining. Experiments were also performed on films treated by a solvent exchange process, and the effects of staining on these films are discussed.
Abstract:
Moist convection is well known to be generally more intense over continental than maritime regions, with larger updraft velocities, graupel, and lightning production. This study explores the transition from maritime to continental convection by comparing the trends in Tropical Rainfall Measuring Mission (TRMM) radar and microwave (37 and 85 GHz) observations over islands of increasing size to those simulated by a cloud-resolving model. The observed storms were essentially maritime over islands of <100 km² and continental over islands >10 000 km², with a gradual transition in between. Equivalent radar and microwave quantities were simulated from cloud-resolving runs of the Weather Research and Forecasting model via offline radiation codes. The model configuration was idealized, with islands represented by regions of uniform surface heat flux without orography, using a range of initial sounding conditions without strong horizontal winds or aerosols. Simulated storm strength varied with initial sounding, as expected, but also increased sharply with island size in a manner similar to observations. Stronger simulated storms were associated with higher concentrations of large hydrometeors. Although biases varied with different ice microphysical schemes, the trend was similar for all three schemes tested and was also seen in 2D and 3D model configurations. The successful reproduction of the trend with such idealized forcing supports previous suggestions that mesoscale variation in surface heating, rather than any difference in humidity, aerosol, or other aspects of the atmospheric state, is the main reason that convection is more intense over continents and large islands than over oceans. Some dynamical storm aspects, notably the peak rainfall and minimum surface pressure low, were more sensitive to surface forcing than to the atmospheric sounding or ice scheme.
Large hydrometeor concentrations and simulated microwave and radar signatures, however, were at least as sensitive to initial humidity levels as to surface forcing and were more sensitive to the ice scheme. Issues with running the TRMM simulator on 2D simulations are discussed, but they appear to be less serious than sensitivities to model microphysics, which were similar in 2D and 3D. This supports the further use of 2D simulations to economically explore modeling uncertainties.
Abstract:
Previous work has demonstrated that observed and modeled climates show a near-time-invariant ratio of mean land to mean ocean surface temperature change under transient and equilibrium global warming. This study confirms this in a range of atmospheric models coupled to perturbed sea surface temperatures (SSTs), slab (thermodynamics only) oceans, and a fully coupled ocean. Away from equilibrium, it is found that the atmospheric processes that maintain the ratio cause a land-to-ocean heat transport anomaly that can be approximated using a two-box energy balance model. When climate is forced by increasing atmospheric CO2 concentration, the heat transport anomaly moves heat from land to ocean, constraining the land to warm in step with the ocean surface, despite the small heat capacity of the land. The heat transport anomaly is strongly related to the top-of-atmosphere radiative flux imbalance, and hence it tends to a small value as equilibrium is approached. In contrast, when climate is forced by prescribing changes in SSTs, the heat transport anomaly replaces "missing" radiative forcing over land by moving heat from ocean to land, warming the land surface. The heat transport anomaly remains substantial in steady state. These results are consistent with earlier studies that found that both land and ocean surface temperature changes may be approximated as local responses to global mean radiative forcing. The modeled heat transport anomaly has large impacts on surface heat fluxes but small impacts on precipitation, circulation, and cloud radiative forcing compared with the impacts of surface temperature change. No substantial nonlinearities are found in these atmospheric variables when the effects of forcing and surface temperature change are added.
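The two-box approximation described in this abstract can be sketched numerically. The sketch below is illustrative only: the feedback parameters, heat capacities, coupling strength, and forcing are assumed values, not ones taken from the study; the coupling term simply relaxes the land/ocean warming ratio toward a fixed value.

```python
# Minimal two-box energy balance sketch: land (L) and ocean (O) boxes under
# CO2-like forcing F, coupled by a land-to-ocean heat transport anomaly
# H = k * (T_L - phi * T_O) that pulls the land/ocean warming ratio
# toward phi. All parameter values below are illustrative assumptions.
lam_L, lam_O = 1.0, 1.5      # radiative feedback parameters, W m^-2 K^-1
C_L, C_O = 2.0, 200.0        # effective heat capacities, W yr m^-2 K^-1
k = 10.0                     # land-ocean coupling strength, W m^-2 K^-1
phi = lam_O / lam_L          # equilibrium land/ocean warming ratio (1.5)
F = 3.7                      # radiative forcing for doubled CO2, W m^-2

dt = 0.05                    # time step, years
T_L = T_O = 0.0
for _ in range(int(200 / dt)):       # integrate 200 years of transient warming
    H = k * (T_L - phi * T_O)        # transport anomaly, positive = land to ocean
    T_L += dt / C_L * (F - lam_L * T_L - H)
    T_O += dt / C_O * (F - lam_O * T_O + H)

# Despite the land's small heat capacity, the transport anomaly holds the
# land/ocean warming ratio close to phi throughout the transient.
print(round(T_L / T_O, 2))
```

With these choices the anomaly H also shrinks toward zero as the run approaches equilibrium, mirroring the abstract's description of the CO2-forced case.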
Abstract:
A favoured method of assimilating information from state-of-the-art climate models into integrated assessment models of climate impacts is to use the transient climate response (TCR) of the climate models as an input, sometimes accompanied by a pattern matching approach to provide spatial information. More recent approaches to the problem use TCR with another independent piece of climate model output: the land-sea surface warming ratio (φ). In this paper we show why the use of φ in addition to TCR has such utility. Multiple linear regressions of surface temperature change onto TCR and φ in 22 climate models from the CMIP3 multi-model database show that the inclusion of φ explains a much greater fraction of the inter-model variance than using TCR alone. The improvement is particularly pronounced in North America and Eurasia in the boreal summer season, and in the Amazon all year round. The use of φ as the second metric is beneficial for three reasons: firstly, it is uncorrelated with TCR in state-of-the-art climate models and can therefore be considered an independent metric; secondly, because of its projected time-invariance, the magnitude of φ is better constrained than TCR in the immediate future; thirdly, the use of two variables is much simpler than approaches such as pattern scaling from climate models. Finally we show how using the latest estimates of φ from climate models, with a mean value of 1.6 (as opposed to previously reported values of 1.4), can significantly increase the mean time-integrated discounted damage projections in a state-of-the-art integrated assessment model by about 15%. When compared to damages calculated without the inclusion of the land-sea warming ratio, this figure rises to 65%, equivalent to almost 200 trillion dollars over 200 years.
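The regression step described in this abstract can be sketched with synthetic data. The 22 "models" below are random stand-ins for the CMIP3 ensemble, and the coefficients are invented for illustration; the point is only that adding φ as a second, uncorrelated regressor explains more inter-model variance than TCR alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 22 climate models: local warming is assumed to
# depend linearly on TCR and the land-sea warming ratio phi. The slopes
# (0.9 and 1.2) and spreads are illustrative, not CMIP3 values.
n = 22
tcr = rng.normal(1.8, 0.4, n)          # transient climate response, K
phi = rng.normal(1.6, 0.15, n)         # land-sea warming ratio, drawn independently of TCR
noise = rng.normal(0.0, 0.1, n)
local_dT = 0.9 * tcr + 1.2 * phi + noise   # "true" local temperature change, K

# Multiple linear regression of local warming onto TCR and phi
X = np.column_stack([np.ones(n), tcr, phi])
coef, *_ = np.linalg.lstsq(X, local_dT, rcond=None)
r2_full = 1 - np.var(local_dT - X @ coef) / np.var(local_dT)

# Regression onto TCR alone, for comparison
X1 = np.column_stack([np.ones(n), tcr])
c1, *_ = np.linalg.lstsq(X1, local_dT, rcond=None)
r2_tcr = 1 - np.var(local_dT - X1 @ c1) / np.var(local_dT)

print(r2_full > r2_tcr)   # including phi explains more inter-model variance
```

Because φ is generated independently of TCR here, the second regressor adds genuinely new information, mirroring the paper's argument that the two metrics are uncorrelated across models.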
Abstract:
While changes in land precipitation during the last 50 years have been attributed in part to human influences, results vary by season, are affected by data uncertainty and do not account for changes over ocean. One of the more physically robust responses of the water cycle to warming is the expected amplification of existing patterns of precipitation minus evaporation. Here, precipitation changes in wet and dry regions are analyzed from satellite data for 1988–2010, covering land and ocean. We derive fingerprints for the expected change from climate model simulations that separately track changes in wet and dry regions. The simulations used are driven with anthropogenic and natural forcings combined, and greenhouse gas forcing or natural forcing only. Results of detection and attribution analysis show that the fingerprint of combined external forcing is detectable in observations and that this intensification of the water cycle is partly attributable to greenhouse gas forcing.
Abstract:
Climate controls fire regimes through its influence on the amount and types of fuel present and their dryness. CO2 concentration constrains primary production by limiting photosynthetic activity in plants. However, although fuel accumulation depends on biomass production, and hence on CO2 concentration, the quantitative relationship between atmospheric CO2 concentration and biomass burning is not well understood. Here a fire-enabled dynamic global vegetation model (the Land surface Processes and eXchanges model, LPX) is used to attribute glacial–interglacial changes in biomass burning to an increase in CO2, which would be expected to increase primary production and therefore fuel loads even in the absence of climate change, vs. climate change effects. Four general circulation models provided last glacial maximum (LGM) climate anomalies – that is, differences from the pre-industrial (PI) control climate – from the Palaeoclimate Modelling Intercomparison Project Phase 2, allowing the construction of four scenarios for LGM climate. Modelled carbon fluxes from biomass burning were corrected for the model's observed prediction biases in contemporary regional average values for biomes. With LGM climate and low CO2 (185 ppm) effects included, the modelled global flux at the LGM was in the range of 1.0–1.4 Pg C year⁻¹, about a third less than that modelled for PI time. LGM climate with pre-industrial CO2 (280 ppm) yielded unrealistic results, with global biomass burning fluxes similar to or even greater than in the pre-industrial climate. It is inferred that a substantial part of the increase in biomass burning after the LGM must be attributed to the effect of increasing CO2 concentration on primary production and fuel load. Today, by analogy, both rising CO2 and global warming must be considered as risk factors for increasing biomass burning. Both effects need to be included in models to project future fire risks.
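The bias-correction step mentioned in this abstract (rescaling modelled fluxes so that contemporary regional averages match observations) can be sketched as follows. The biome names and flux numbers are invented for illustration; they are not values from LPX or from the observations.

```python
# Sketch of per-biome bias correction of biomass-burning carbon fluxes.
# All flux values (Pg C / yr) and biome names are illustrative assumptions.
observed_pi = {"tropical forest": 0.8, "savanna": 1.2, "boreal": 0.3}
modelled_pi = {"tropical forest": 1.0, "savanna": 0.9, "boreal": 0.4}
modelled_lgm = {"tropical forest": 0.5, "savanna": 0.5, "boreal": 0.15}

# Correction factor per biome = observed / modelled for the PI control run
correction = {b: observed_pi[b] / modelled_pi[b] for b in observed_pi}

# Apply the same factors to the LGM run before summing to a global flux
corrected_lgm = {b: modelled_lgm[b] * correction[b] for b in modelled_lgm}
total_lgm = sum(corrected_lgm.values())
print(round(total_lgm, 3))   # → 1.179
```

The multiplicative correction assumes the model's fractional bias in each biome is the same at the LGM as in the PI control, which is the usual justification for this kind of rescaling.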
Abstract:
A one-dimensional atmospheric second-order closure model, coupled to an oceanic mixed layer model, is used to investigate the short-term variation of the atmospheric and oceanic boundary layers in the coastal upwelling area of Cabo Frio, Brazil (23°S, 42°08'W). The numerical simulations were carried out to evaluate the impact caused by the thermal contrast between atmosphere and ocean on the vertical extent and other properties of both atmospheric and oceanic boundary layers. The numerical simulations were designed taking as reference the observations carried out during the passage of a cold front that disrupted the upwelling regime in Cabo Frio in July of 1992. The simulations indicated that in 10 hours the mechanical mixing, sustained by a constant background flow of 10 m s⁻¹, deepens the atmospheric boundary layer by 214 m when the atmosphere is initially 2 K warmer than the ocean (positive thermal contrast, observed during the upwelling regime). For an atmosphere initially 2 K colder than the ocean (negative thermal contrast, observed during the passage of the cold front), the incipient thermal convection intensifies the mechanical mixing, increasing the vertical extent of the atmospheric boundary layer by 360 m. The vertical evolution of the atmospheric boundary layer is consistent with the observations carried out in Cabo Frio during upwelling conditions. When the upwelling is disrupted, the discrepancy between the simulated and observed atmospheric boundary layer heights in Cabo Frio during July of 1992 increases considerably. During the period of 10 hours, the simulated oceanic mixed layer deepens by 2 m and 5.4 m for positive and negative thermal contrasts of 2 K and -2 K, respectively. In the latter case, the larger vertical extent of the oceanic mixed layer is due to the presence of thermal convection in the atmospheric boundary layer, which in turn is associated with the absence of upwelling caused by the passage of cold fronts in Cabo Frio.
Abstract:
Negative anticipatory contrast (NAC) corresponds to the suppression in consumption of a first rewarding substance (e.g., saccharin 0.15%) when it is followed daily by a second, preferred substance (e.g., sucrose 32%). The NAC has been interpreted as resulting from anticipation of the impending preferred reward and its comparison with the currently available first reward [Flaherty, C.F., Rowan, G.A., 1985. Anticipatory contrast: within-subjects analysis. Anim. Learn. Behav. 13, 2-5]. In this context, one should expect that devaluation of the preferred substance after the establishment of the NAC would either reduce or abolish the contrast effect. However, contrary to this prediction, the results of the present study show that the NAC is insensitive to devaluation of the second, preferred, substance. This allows one to question that interpretation. The results reported in this study support the view that the NAC effect is controlled by memory of the relative value of the first solution, which is updated daily by means of both a gustatory and/or post-ingestive comparison of the first and second solutions, and memory of past pairings. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
A survey of pediatric radiological examinations was carried out in a reference pediatric hospital of the city of São Paulo, in order to investigate the doses to children undergoing conventional X-ray examinations. The results showed that the majority of pediatric patients are below 4 years of age, and that about 80% of the examinations correspond to chest projections. Doses for typical radiological examinations were measured in vivo with thermoluminescent dosimeters (LiF:Mg,Ti and LiF:Mg,Cu,P) attached to the skin of the children to determine entrance surface dose (ESD). Homogeneous phantoms were also used to obtain the ESD for younger children, because the technique uses such a low kVp that the dosimeters would produce an artifact image in the patient radiograph. Four kinds of pediatric examinations were investigated: three conventional examinations (chest, skull and abdomen) and a fluoroscopic procedure (barium swallow). Relevant information about the kVp and mAs values used in the examinations was collected, and we discuss how these parameters can affect the ESD. The ESD values measured in this work are compared to reference levels published by the European Commission for pediatric patients. The results obtained (third quartile of the ESD distribution) for chest AP examinations in three age groups were: 0.056 mGy (2-4 years old); 0.068 mGy (5-9 years old); 0.069 mGy (10-15 years old). All of them are below the European reference level (0.100 mGy). ESD values measured for the older age group in skull and abdomen AP radiographs (mean values 3.44 and 1.20 mGy, respectively) are above the European reference levels (1.5 mGy for skull and 1.0 mGy for abdomen). ESD values measured in the barium swallow examination reached 10 mGy in skin regions corresponding to the thyroid and esophagus. It was noticed during this survey that some technicians improperly use X-ray fluoroscopy in conventional examinations to help them in positioning the patient.
The results presented here are a preliminary survey of doses in pediatric radiological examinations, and they show that it is necessary to investigate the technical parameters used to perform the radiographs, to introduce practices to control pediatric patients' doses, and to improve personnel training in performing pediatric examinations. (c) 2007 Elsevier Ltd. All rights reserved.
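The survey's headline comparison (the third quartile of a measured ESD distribution against a European reference level) can be sketched as below. The dose sample is randomly generated as a stand-in for measured values; only the 0.100 mGy chest AP reference level comes from the abstract, and the lognormal shape is an assumption commonly made for dose distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for a sample of measured entrance surface doses (ESD)
# for pediatric chest AP examinations, in mGy; parameters are illustrative.
esd_chest_ap = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=40)

reference_level = 0.100                        # European reference level for chest AP, mGy
q3 = float(np.percentile(esd_chest_ap, 75))    # third quartile of the ESD sample

# The comparison used in the survey: the third quartile of the measured
# distribution should lie below the diagnostic reference level.
print(q3 < reference_level)
```

Using the third quartile rather than the mean is the standard convention for diagnostic reference levels: it flags the highest-dose quarter of practice without being dominated by outliers.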