965 results for horizontal agreements
Abstract:
The role of atmospheric general circulation model (AGCM) horizontal resolution in representing the global energy budget and hydrological cycle is assessed, with the aim of improving the understanding of model uncertainties in simulating the hydrological cycle. We use two AGCMs from the UK Met Office Hadley Centre: HadGEM1-A at resolutions ranging from 270 to 60 km, and HadGEM3-A ranging from 135 to 25 km. The models exhibit a stable hydrological cycle, although it is too intense compared with reanalyses and observations. This over-intensity is explained by excess surface shortwave radiation, a common error in general circulation models (GCMs), and is insensitive to resolution. However, as resolution increases, precipitation decreases over the ocean and increases over the land. This is associated with an increase in atmospheric moisture transport from ocean to land, which shifts the partitioning of the moisture fluxes that contribute to precipitation over land from local towards non-local moisture sources. The results begin to converge at 60-km resolution, which underlines the excessive dependence of the mean hydrological cycle on physical parametrization (local unresolved processes) relative to model dynamics (large-scale resolved processes) in the coarser HadGEM1 and HadGEM3 GCMs. This finding may hold for other GCMs, highlighting the need to analyze other families of GCMs as they become available at such a range of horizontal resolutions. Our finding supports the hypothesis that heterogeneity in model parametrization is one of the underlying causes of model disagreement in the Coupled Model Intercomparison Project (CMIP) exercises.
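The shift from local to non-local moisture sources follows from the long-term land water balance: in a multi-year mean, storage changes are negligible, so land precipitation minus land evaporation equals the net ocean-to-land moisture transport. A minimal sketch of that diagnostic, assuming gridded precipitation and evaporation fields plus an area-weight array and land mask; the bulk recycling ratio used here as the "local" fraction is an illustrative simplification, not necessarily the partitioning used in the paper:

```python
import numpy as np

def land_moisture_budget(precip, evap, land_mask, area):
    """Multi-year-mean moisture budget over land.

    With storage changes negligible in a long-term mean,
    P_land - E_land ~ net ocean-to-land moisture transport.
    The bulk recycling ratio E_land / P_land is a crude proxy for
    the 'local' moisture source feeding land precipitation.
    """
    w = area * land_mask                    # area weights, land points only
    p = np.average(precip, weights=w)       # land-mean precipitation
    e = np.average(evap, weights=w)         # land-mean evaporation
    return {"P_land": p,
            "E_land": e,
            "ocean_to_land_transport": p - e,
            "local_fraction": e / p,
            "nonlocal_fraction": 1.0 - e / p}
```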
Abstract:
Traditional resource management has had as its main objective the optimisation of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid Markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The SORMA project aims to allow resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA’s motivation is to achieve efficient resource utilisation by maximising revenue for resource providers, and minimising the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that desired Quality of Service levels meet the expectations of market participants. This paper explains the proposed use of an Economically Enhanced Resource Manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximisation across multiple Service Level Agreements.
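The abstract does not spell out the EERM's allocation policies, so the following is a hedged illustration only: one simple way to support revenue maximisation across multiple Service Level Agreements is to treat SLA admission as a 0/1 knapsack problem, accepting the subset of requests that maximises revenue without oversubscribing capacity. All names and numbers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SLARequest:
    name: str
    cpu: int        # capacity units needed to honour the SLA
    revenue: float  # payment received if the SLA is met

def select_slas(requests, capacity):
    """0/1 knapsack over SLA requests: maximise total revenue subject
    to a hard capacity limit, so no accepted SLA risks violation.
    Returns (best_revenue, names_of_accepted_requests)."""
    best = [(0.0, [])] * (capacity + 1)  # best[c] = (revenue, names) within c units
    for r in requests:
        for c in range(capacity, r.cpu - 1, -1):
            candidate = best[c - r.cpu][0] + r.revenue
            if candidate > best[c][0]:
                best[c] = (candidate, best[c - r.cpu][1] + [r.name])
    return best[capacity]

requests = [SLARequest("gold", 8, 120.0),
            SLARequest("silver", 5, 60.0),
            SLARequest("bronze", 3, 25.0)]
print(select_slas(requests, 10))   # -> (120.0, ['gold'])
```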
Abstract:
During April and May 2010 the ash cloud from the eruption of the Icelandic volcano Eyjafjallajökull caused widespread disruption to aviation over northern Europe. The location and impact of the eruption meant that a wealth of observations of the ash cloud was obtained, which can be used to assess the modelling of the long-range transport of ash in the troposphere. The UK FAAM (Facility for Airborne Atmospheric Measurements) BAe-146-301 research aircraft overflew the ash cloud on a number of days during May. The aircraft carries a downward-looking lidar which detected the ash layer through the backscatter of the laser light. In this study ash concentrations derived from the lidar are compared with simulations of the ash cloud made with NAME (Numerical Atmospheric-dispersion Modelling Environment), a general-purpose atmospheric transport and dispersion model. The simulated ash clouds are compared to the lidar data to determine how well NAME simulates the horizontal and vertical structure of the ash clouds. Comparison between the ash concentrations derived from the lidar and those from NAME is used to estimate the fraction of ash emitted in the eruption that is transported over long distances, relative to the total emission of tephra. In making these comparisons, possible position errors in the simulated ash clouds are identified and accounted for. The ash layers seen by the lidar in this study were thin, with typical depths of 550–750 m. The vertical structure of the ash cloud simulated by NAME was generally consistent with the observed ash layers, although the simulated layers identified with observed ash layers are about twice the depth of the observed ones. The structure of the simulated ash clouds was sensitive to the assumed profile of ash emissions. In terms of horizontal and vertical structure, the best results were obtained by assuming that the emission occurred at the top of the eruption plume, consistent with the observed structure of eruption plumes. However, early in the period, when the intensity of the eruption was low, assuming that the emission of ash was uniform with height gives better guidance on the horizontal and vertical structure of the ash cloud. Comparison of the lidar concentrations with those from NAME shows that 2–5% of the total mass erupted by the volcano remained in the ash cloud over the United Kingdom.
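The 2–5% figure rests on a simple rescaling argument: a dispersion run made with an assumed emission is scaled so that simulated concentrations match the lidar-derived ones, and the rescaled emission is divided by the total erupted tephra mass. A minimal sketch of that arithmetic; the function and variable names are invented for illustration, and the paper's actual matching procedure is more involved:

```python
import numpy as np

def distal_ash_fraction(lidar_conc, name_conc,
                        assumed_emission_mass, total_tephra_mass):
    """Estimate the fraction of erupted mass in the distal ash cloud.

    The NAME run assumes an emission mass; the ratio of lidar-derived
    to simulated concentrations rescales that emission, and dividing
    by the total erupted tephra mass gives the distal fine-ash
    fraction. Compared only where both datasets detect ash.
    """
    both = (lidar_conc > 0) & (name_conc > 0)
    scale = np.median(lidar_conc[both] / name_conc[both])
    return scale * assumed_emission_mass / total_tephra_mass
```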
Abstract:
Governing climate change is arguably one of the most complex problems, environmental or otherwise, that the global community has had to contend with. This chapter highlights the innovations in governance that have characterized the global climate change regime as it has sought to respond to and manage these complexities, political imperatives and competing interests. We suggest that the key contestations and innovations within climate governance can be understood in terms of four themes/questions, all of which relate to issues of justice and equity.
Abstract:
In order to calculate unbiased microphysical and radiative quantities in the presence of a cloud, it is necessary to know not only the mean water content but also the distribution of this water content. This article describes a study of the in-cloud horizontal inhomogeneity of ice water content, based on CloudSat data. In particular, by focusing on relations with variables that are already available in general circulation models (GCMs), a parametrization of inhomogeneity that is suitable for inclusion in GCM simulations is developed. Inhomogeneity is defined in terms of the fractional standard deviation (FSD), which is given by the standard deviation divided by the mean. The FSD of ice water content is found to increase with the horizontal scale over which it is calculated and also with the thickness of the layer. The connection to cloud fraction is more complicated: for small cloud fractions, FSD increases as cloud fraction increases, while FSD decreases sharply for overcast scenes. The relations to horizontal scale, layer thickness and cloud fraction are parametrized in a relatively simple equation. The performance of this parametrization is tested on an independent set of CloudSat data. The parametrization is shown to be a significant improvement on the assumption of a single-valued global FSD.
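Since FSD is simply the standard deviation of in-cloud ice water content divided by its mean, the quantity being parametrized is straightforward to compute from data. A minimal sketch; the example values are illustrative, and the paper's fitted equation itself is not reproduced here:

```python
import numpy as np

def fsd(ice_water_content):
    """Fractional standard deviation of in-cloud ice water content:
    FSD = standard deviation / mean, over cloudy samples only."""
    cloudy = ice_water_content[ice_water_content > 0]
    return np.std(cloudy) / np.mean(cloudy)

# A gridbox whose cloudy pixels vary strongly has a large FSD:
print(fsd(np.array([0.0, 0.01, 0.02, 0.30])))  # ~1.2
```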
Abstract:
Recent urban air temperature increases are attributable to climate change and to heat island effects caused by urbanization. These combined effects of urbanization and global warming can penetrate into the underground and elevate the subsurface temperature. In the present study, over 100 years of subsurface temperature measurements at a remote rural site in the UK were analysed, and a warming rate of 0.17°C per decade at a soil depth of 30 cm, attributable to climate change, was identified; subsurface warming at an urban site showed much higher rates of 0.85°C per decade at 30 cm depth and 1.18°C per decade at 100 cm. The subsurface urban heat island (SUHI) intensity obtained at the paired urban-rural stations in London showed a distinctive 'U-shape', i.e. lowest in summer and highest during winter. The maximum SUHI intensity is 3.5°C at 06:00 in December, and the minimum is 0.2°C at 18:00 in July. Finally, the effects of the SUHI on the energy efficiency of a horizontal ground source heat pump (GSHP) were determined. Provided the same heat pump is used, an installation at an urban site will maintain an overall higher coefficient of performance (COP) than one at a rural site in all seasons, with the largest COP improvement achieved in winter.
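The seasonal COP advantage of the urban site can be illustrated with a simple Carnot-style estimate: a warmer subsurface source narrows the temperature lift the heat pump must work across. A rough sketch, with the supply temperature and Carnot-efficiency fraction as illustrative assumptions rather than values from the study:

```python
def heating_cop(ground_c, supply_c=35.0, efficiency=0.45):
    """Idealised GSHP heating COP as a fraction of the Carnot limit,
    COP_carnot = T_hot / (T_hot - T_cold), temperatures in kelvin.
    supply_c (heat-distribution temperature) and efficiency (fraction
    of Carnot achieved) are illustrative assumptions."""
    t_hot = supply_c + 273.15
    t_cold = ground_c + 273.15
    return efficiency * t_hot / (t_hot - t_cold)

# A 3.5 C warmer urban subsurface in winter raises the heating COP:
print(heating_cop(8.0))    # rural ground at 8 C    -> ~5.1
print(heating_cop(11.5))   # urban ground at 11.5 C -> ~5.9
```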
Abstract:
Brazil’s recent cinematic sensation, O som ao redor/Neighboring Sounds (Kleber Mendonça Filho, 2012), displays an effective integration of form and content, exemplified by a vertical figuration that crystallizes the devastating effects of property development and global capitalism. This chapter attempts to unravel a two-way drive within this vertical motif: a movement off the ground, resulting in global cosmopolitanism; and another into the ground, in search of the social history and film history at its base. As I hope to demonstrate, despite the characters’ late postmodernist disconnect from local context and history, O som ao redor offers a perspicacious insight into regional and national history that makes an original and exciting contribution to Brazilian and world cinema.
Abstract:
Substantial low-frequency rainfall fluctuations occurred in the Sahel throughout the twentieth century, causing devastating drought. Modeling these low-frequency rainfall fluctuations has remained problematic for climate models for many years. Here we show, using a combination of state-of-the-art rainfall observations and high-resolution global climate models, that changes in organized heavy rainfall events carry most of the rainfall variability in the Sahel at multiannual to decadal time scales. The ability to produce intense, organized convection allows climate models to correctly simulate the magnitude of late-twentieth-century rainfall change, underlining the importance of model resolution. Increasing model resolution allows a better coupling between large-scale circulation changes and regional rainfall processes over the Sahel. These results provide a strong basis for developing more reliable and skilful long-term predictions of rainfall (seasons to years), which could benefit many sectors in the region by allowing early adaptation to impending extremes.
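The claim that organized heavy events "carry most of the rainfall variability" amounts to a variance decomposition: splitting each year's total into rain from heavy days and the remainder partitions the interannual variance exactly, since the two parts sum to the total. An illustrative sketch, not the paper's exact diagnostic:

```python
import numpy as np

def heavy_event_variance_share(daily_rain, year_of_day, pct=90):
    """Share of interannual rainfall variance carried by heavy events.

    Daily rainfall is split at the climatological `pct` percentile of
    wet days. Because heavy + remainder = total each year,
    cov(total, heavy) / var(total) partitions the variance exactly.
    """
    wet = daily_rain[daily_rain > 0]
    thresh = np.percentile(wet, pct)          # heavy-day threshold
    totals, heavies = [], []
    for y in np.unique(year_of_day):
        d = daily_rain[year_of_day == y]
        totals.append(d.sum())
        heavies.append(d[d >= thresh].sum())
    totals, heavies = np.array(totals), np.array(heavies)
    return np.cov(totals, heavies)[0, 1] / np.var(totals, ddof=1)
```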
Abstract:
This paper makes two contributions to the literature on international environmental agreements. First, we model environmental agreements as a generic situation, characterized as a Hawk-Dove game with multiple asymmetric equilibria. Second, the article applies the theory of non-cooperative games with confirmed proposals, based on an alternating-proposals bargaining protocol, as a way of overcoming the usual coordination and bargaining failures in environmental agreement games that are due to payoff asymmetry and equilibrium multiplicity.
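As a concrete illustration of the setting, a 2x2 asymmetric Hawk-Dove game has exactly the structure described: two asymmetric pure Nash equilibria, (Hawk, Dove) and (Dove, Hawk), so the players face a coordination problem over who concedes. A minimal sketch with illustrative payoffs, not taken from the paper:

```python
import numpy as np

# Row player's payoffs in A, column player's in B.
# Strategy 0 = Hawk, strategy 1 = Dove; payoffs are asymmetric.
A = np.array([[-2.0, 6.0],    # row plays Hawk
              [ 0.0, 3.0]])   # row plays Dove
B = np.array([[-2.0, 0.0],
              [ 4.0, 2.0]])

def pure_nash(A, B):
    """Enumerate pure-strategy Nash equilibria of a bimatrix game:
    a cell is an equilibrium if neither player gains by deviating."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

print(pure_nash(A, B))  # -> [(0, 1), (1, 0)]: (Hawk, Dove) and (Dove, Hawk)
```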
Abstract:
Juvenile angiofibroma is a benign fibroangiomatous tumor of relatively rare occurrence, developing most frequently in male adolescents. It is locally aggressive and expansile. The treatment of choice is surgical excision. In this article, the advantages and disadvantages of the surgical technique using the Le Fort I osteotomy are described, and the literature is reviewed in relation to 2 case reports.
Abstract:
Objective: To design, develop and set up a web-based system for presenting graphical visualizations of the upper limb motor performance (ULMP) of Parkinson’s disease (PD) patients to clinicians.

Background: Sixty-five patients diagnosed with advanced PD used a test battery, implemented on a touch-screen handheld computer, in their home environments over the course of a 3-year clinical study. The test items consisted of objective measures of ULMP through a set of upper limb motor tests (finger tapping and spiral drawing). For the tapping tests, patients were asked to perform alternate tapping of two buttons as fast and accurately as possible, first with the right hand and then with the left hand. The test duration was 20 seconds. For the spiral drawing test, patients traced a pre-drawn Archimedes spiral using the dominant hand; the test was repeated 3 times per test occasion. In total, the study database consisted of symptom assessments from 10079 test occasions.

Methods: Visualization of ULMP. The web-based system is used by two neurologists to assess the performance of PD patients during the motor tests collected over the course of the study. The system employs animations, scatter plots and time-series graphs to visualize the patients’ ULMP to the neurologists. Performance during spiral tests is depicted by animating the three spiral drawings, allowing the neurologists to observe in real time the accelerations, hesitations and sharp changes during the actual drawing process. Tapping performance is visualized with several types of graphs. The information presented includes the distribution of taps over the two buttons, horizontal tap distance vs. time, vertical tap distance vs. time, and tapping reaction time over the test length.

Assessments. Different scales are used by the neurologists to assess the observed impairments. For spiral drawing performance, the neurologists rated, first, the ‘impairment’ on a 0 (no impairment) to 10 (extremely severe) scale; second, three kinematic properties, ‘drawing speed’, ‘irregularity’ and ‘hesitation’, on a 0 (normal) to 4 (extremely severe) scale; and third, the probable ‘cause’ of the impairment, choosing among Tremor, Bradykinesia/Rigidity and Dyskinesia. For tapping performance, a 0 (normal) to 4 (extremely severe) scale is used to rate four tapping properties, ‘tapping speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’, and then the ‘global tapping severity’ (GTS). To achieve a common basis for assessment, one neurologist (DN) first performed preliminary ratings, browsing the database to collect and rate at least 20 samples of each GTS level and at least 33 samples of each ‘cause’ category. These preliminary ratings were then reviewed by the two neurologists (DN and PG) and used as templates for the subsequent ratings. Separately, the system randomly selected one test occasion per patient and visualized its items, that is, tapping and spiral drawings, to the two neurologists.

Statistical methods. Inter-rater agreement was assessed using the weighted Kappa coefficient. The internal consistency of the properties of the tapping and spiral drawing tests was assessed using Cronbach’s α. A one-way ANOVA followed by Tukey’s multiple comparisons test was used to test whether the mean scores of the properties of the tapping and spiral drawing tests differed among GTS and ‘cause’ categories, respectively.

Results: When rating tapping graphs, inter-rater agreements (Kappa) were as follows: GTS (0.61), ‘tapping speed’ (0.89), ‘accuracy’ (0.66), ‘fatigue’ (0.57) and ‘arrhythmia’ (0.33). The poor inter-rater agreement when assessing ‘arrhythmia’ may result from the two raters attending to different features of the graphs. When rating animated spirals, the raters agreed very well on the severity of spiral drawings, that is, ‘impairment’ (0.85) and ‘irregularity’ (0.72). However, agreement was poor when assessing ‘cause’ (0.38) and time-related properties such as ‘drawing speed’ (0.25) and ‘hesitation’ (0.21). The tapping properties, that is, ‘tapping speed’, ‘accuracy’, ‘fatigue’ and ‘arrhythmia’, had satisfactory internal consistency, with a Cronbach’s α coefficient of 0.77. In general, the mean scores of the tapping properties worsened with increasing GTS level. The mean scores of the four properties were significantly different from each other, only at different levels. In contrast to the tapping properties, the kinematic properties of spirals, that is, ‘drawing speed’, ‘irregularity’ and ‘hesitation’, had questionable internal consistency, with a coefficient of 0.66. Bradykinetic spirals were associated with more impaired speed (mean = 83.7% worse, P < 0.001) and hesitation (mean = 77.8% worse, P < 0.001) than dyskinetic spirals. The two ‘cause’ categories had similar mean scores for ‘impairment’ and ‘irregularity’.

Conclusions: In contrast to current approaches used in clinical settings for the assessment of PD symptoms, this system enables clinicians to animate, easily and realistically, the ULMP of patients who remain in their homes. Dynamic access to visualized motor tests may also be useful when observing and evaluating therapy-related complications such as under- and over-medication. In the future, we foresee utilizing these manual ratings to develop and validate computer methods that automate the assessment of the ULMP of PD patients.
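The agreement and consistency statistics reported above can be reproduced with standard tools: weighted Kappa for inter-rater agreement and Cronbach’s α for internal consistency. A minimal sketch with hypothetical ratings; the `cronbach_alpha` helper is written out because it is a short formula, `cohen_kappa_score` is scikit-learn’s implementation, and the quadratic weighting is an assumption since the abstract does not specify the weighting scheme:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical ratings from two raters on a 0-4 severity scale:
rater1 = [0, 1, 2, 2, 3, 4, 1, 0]
rater2 = [0, 1, 2, 3, 3, 4, 2, 0]
print(cohen_kappa_score(rater1, rater2, weights="quadratic"))

# Hypothetical scores on four tapping properties for five tests:
scores = np.array([[1, 1, 2, 1], [3, 2, 3, 3], [0, 1, 0, 1],
                   [4, 3, 4, 4], [2, 2, 1, 2]])
print(cronbach_alpha(scores))
```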