945 results for Runs


Abstract:

Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command utilities for data pre-processing and post-processing prior to job resubmission.

Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain.

G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans.

A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) the scientist prepares input files on his or her local machine; (2) using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource; (3) the scientist runs the relevant workflow script on his or her local machine, unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun"; (4) the G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run; (5) the scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine.

G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
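
As a concrete illustration of this workflow, the sketch below drives a hypothetical G-Rex-style REST run service from Python. The endpoint paths and JSON fields are assumptions for illustration, not the real G-Rex API; the pattern of streaming output back and deleting it from the remote system follows the description above.

```python
# Hypothetical sketch of driving a G-Rex-style REST run service.
# Endpoint paths and JSON fields are illustrative assumptions,
# not the real G-Rex API.
import time
import requests

BASE = "http://cluster.example.ac.uk/G-Rex/nemo"  # assumed service URL

def grex_run(input_files, poll_seconds=30):
    # Create a new run instance (assumed endpoint).
    run_url = requests.post(f"{BASE}/instances").json()["url"]
    # Upload input files prepared on the local machine.
    for path in input_files:
        with open(path, "rb") as f:
            requests.put(f"{run_url}/inputs/{path}", data=f)
    # Start the run.
    requests.post(f"{run_url}/control", data={"action": "start"})
    # Poll for status and drain output as it is produced, so data
    # does not accumulate on the remote system.
    while True:
        status = requests.get(f"{run_url}/status").json()
        for name in status.get("new_outputs", []):
            with requests.get(f"{run_url}/outputs/{name}", stream=True) as r, \
                 open(name, "wb") as out:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    out.write(chunk)
            requests.delete(f"{run_url}/outputs/{name}")  # free remote space
        if status["state"] in ("finished", "failed"):
            return status["state"]
        time.sleep(poll_seconds)
```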

Abstract:

A traditional method of validating the performance of a flood model when remotely sensed data of the flood extent are available is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1-in-5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may place an increased onus on the model developer to produce a valid model.
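
The two measures can be stated compactly in code. The following Python sketch (illustrative, not the study's implementation) computes an areal overlap score on wet/dry pixel maps and an r.m.s. height difference along corresponding waterline points.

```python
# Illustrative sketch contrasting the two performance measures: an areal
# overlap score on wet/dry pixels and an r.m.s. height difference along
# the waterline. Not the paper's code.
import numpy as np

def areal_measure(observed_wet, modelled_wet):
    """Overlap of observed and modelled flood extents (1 = perfect)."""
    both = np.logical_and(observed_wet, modelled_wet).sum()
    either = np.logical_or(observed_wet, modelled_wet).sum()
    return both / either

def height_measure(obs_waterline_z, model_waterline_z):
    """r.m.s. elevation difference at corresponding waterline points (0 = perfect)."""
    d = np.asarray(obs_waterline_z) - np.asarray(model_waterline_z)
    return np.sqrt(np.mean(d ** 2))

# Example with synthetic data:
obs = np.random.rand(100, 100) > 0.5
mod = obs.copy()
mod[:10] = ~mod[:10]                      # perturb the modelled extent
print(areal_measure(obs, mod))
print(height_measure([10.2, 10.4, 10.1], [10.0, 10.5, 10.3]))
```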

Abstract:

Long decorrelation timescales of the annular mode are observed in the lower stratosphere. This study uses a simple dynamical model, which has been used extensively to study stratosphere-troposphere coupling, to investigate the origin of the long dynamical timescales. Several long runs of the model are completed, with different imposed thermal damping timescales in the stratosphere. The dynamical timescales of the annular mode are found to be largely insensitive to the input thermal damping timescales, producing similar dynamical timescales in all cases below 50 hPa. This result suggests that the hypothesis that long timescales in the lower stratosphere are due to long radiative timescales in this region is false.
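
For illustration, the decorrelation timescale of an annular-mode index can be estimated as the e-folding time of its autocorrelation function. The sketch below applies this to a synthetic red-noise index; the method, not the data, matches the kind of diagnostic described above.

```python
# A minimal sketch (synthetic data) of estimating the annular-mode
# decorrelation timescale as the e-folding time of the autocorrelation
# function of a daily index.
import numpy as np

def decorrelation_timescale(index, max_lag=200):
    """Return the first lag (days) at which autocorrelation drops below 1/e."""
    x = np.asarray(index, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    for lag in range(1, max_lag):
        r = np.dot(x[:-lag], x[lag:]) / (len(x) * var)
        if r < 1.0 / np.e:
            return lag
    return max_lag

# Synthetic red-noise index with a ~30-day timescale for illustration:
rng = np.random.default_rng(0)
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = np.exp(-1 / 30) * x[t - 1] + rng.standard_normal()
print(decorrelation_timescale(x))   # close to 30
```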

Abstract:

We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single-column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can usefully be compared to deterministic ensembles describing initial-condition uncertainty, and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS (GEWEX Cloud System Study) community, simulating transitions between active and suppressed periods of tropical convection.
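
The contrast drawn above between a stochastic-physics ensemble and an initial-condition ensemble can be illustrated with a toy single-column step. Everything here (the relaxation "physics", the noise amplitudes) is an assumption for demonstration, not the GCSS case.

```python
# Toy illustration: spread of a stochastically perturbed SCM ensemble
# versus an initial-condition ensemble of the deterministic scheme.
# scm_step is a hypothetical stand-in for a real single-column model.
import numpy as np

rng = np.random.default_rng(1)

def scm_step(state, stochastic=False):
    # Toy stand-in for an SCM time step: relax the state toward 300 K.
    tend = -0.1 * (state - 300.0)
    if stochastic:
        # Additive stochastic perturbation of the parametrized tendency.
        tend += 0.3 * rng.standard_normal(state.shape)
    return state + tend

n, steps = 50, 100
stoch = np.full(n, 302.0)                     # identical initial conditions
ic = 302.0 + 0.5 * rng.standard_normal(n)     # perturbed initial conditions

for _ in range(steps):
    stoch = scm_step(stoch, stochastic=True)
    ic = scm_step(ic)

print("stochastic-physics spread:", stoch.std())
print("initial-condition spread: ", ic.std())
```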

Abstract:

The simulated annealing approach to structure solution from powder diffraction data, as implemented in the DASH program, is easily amenable to parallelization at the individual run level. Very large increases in speed of execution can therefore be achieved by distributing individual DASH runs over a network of computers. The GDASH program achieves this by packaging DASH in a form that enables it to run under the Univa UD Grid MP system, which harnesses networks of existing computing resources to perform calculations.
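
Because each simulated annealing run is independent, run-level distribution reduces to farming identical tasks out to workers. The Python sketch below shows the pattern on a single machine with a process pool; dash_single_run is a toy stand-in for a real DASH run, not DASH's actual interface.

```python
# Run-level parallelism for simulated-annealing structure solution:
# the runs share no state, so they can be distributed freely.
import random
from concurrent.futures import ProcessPoolExecutor

def dash_single_run(seed):
    """Toy stand-in: one annealing run returning (final_cost, seed)."""
    rng = random.Random(seed)
    cost, temp = rng.uniform(50, 100), 1.0
    while temp > 1e-3:
        trial = cost + rng.uniform(-1, 1) * temp * 10
        # Accept improvements always, worse moves with probability ~ temp.
        if trial < cost or rng.random() < temp:
            cost = max(trial, 0.0)
        temp *= 0.99
    return cost, seed

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:          # one worker per core
        results = list(pool.map(dash_single_run, range(32)))
    best_cost, best_seed = min(results)
    print(f"best run: seed={best_seed}, cost={best_cost:.3f}")
```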

Abstract:

The simulated annealing approach to structure solution from powder diffraction data, as implemented in the DASH program, is easily amenable to parallelization at the individual run level. Modest increases in speed of execution can therefore be achieved by executing separate DASH runs on the individual cores of multi-core CPUs.

Abstract:

Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damage and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs, to our knowledge for the first time, a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte-Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses against a historical loss database.

The climate models considered agree regarding an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark, northern Germany and into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and southern Europe. The highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered, and amount to 44% (annual expected loss), 23% (10-year loss), 50% (30-year loss), and 104% (100-year loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both the severity and the frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except Ireland (−22%) experience some loss increase.

Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
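
Steps (ii) and (iii) imply deriving annual expected losses and return-period ("layer") losses from a probabilistic event set. The sketch below shows that calculation on a synthetic event set; the distributions and numbers are illustrative assumptions, not Swiss Re's model.

```python
# A minimal sketch of annual expected loss and return-period losses
# derived from a probabilistic storm event set. All numbers synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_years = 10_000

# Toy event set: Poisson storm counts, heavy-tailed per-event losses (MEUR).
annual_losses = np.array([
    rng.pareto(2.5, rng.poisson(3)).sum() * 10 for _ in range(n_years)
])

aep = np.sort(annual_losses)[::-1]            # annual losses, largest first
print("annual expected loss:", annual_losses.mean())
for rp in (10, 30, 100):                      # return-period losses
    # The loss exceeded on average once every rp years.
    print(f"{rp}-year loss:", aep[n_years // rp])
```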

Abstract:

Estimates of soil organic carbon (SOC) stocks and changes under different land use systems can help determine vulnerability to land degradation. Such information is important for countries in arid areas with high susceptibility to desertification. SOC stocks, and predicted changes between 2000 and 2030, were determined at the national scale for Jordan using the Global Environment Facility Soil Organic Carbon (GEFSOC) Modelling System. For the purpose of this study, Jordan was divided into three natural regions (the Jordan Valley, the Uplands and the Badia) and three developmental regions (North, Middle and South). On this basis, Jordan was divided into five zones according to the dominant land use: the Jordan Valley, the North Uplands, the Middle Uplands, the South Uplands and the Badia. This information was merged using GIS, along with a map of rainfall isohyets, to produce a map with 498 polygons. Each of these was given a unique ID and a land management unit identifier, and was characterized in terms of its dominant soil type. Historical land use data, current land use and future land use change scenarios were also assembled, forming major inputs of the modelling system. The GEFSOC Modelling System was then run to produce C stocks in Jordan for the years 1990, 2000 and 2030. The results were compared with conventional methods of estimating carbon stocks, such as the mapping-based SOTER method. These comparisons showed that the model runs are acceptable, taking into consideration the limited availability of long-term experimental soil data that can be used to validate them. The main findings show that between 2000 and 2030, SOC may increase in heavily used areas under irrigation and will likely decrease in the grazed rangelands that cover most of Jordan, giving an overall decrease in total SOC over time if the land is indeed used under the estimated forms of land use.

Abstract:

The Phosphorus Indicators Tool provides a catchment-scale estimation of diffuse phosphorus (P) loss from agricultural land to surface waters using the most appropriate indicators of P loss. The Tool provides a framework that may be applied across the UK to estimate P loss, which is sensitive not only to land use and management but also to environmental factors such as climate, soil type and topography. The model complexity incorporated in the P Indicators Tool has been adapted to the level of detail in the available data and the need to reflect the impact of changes in agriculture. Currently, the Tool runs on an annual timestep and at a 1 km² grid scale. We demonstrate that the P Indicators Tool works in principle and that its modular structure provides a means of accounting for P loss from one layer to the next, and ultimately to receiving waters. Trial runs of the Tool suggest that modelled P delivery to water approximates measured water quality records. The transparency of the structure of the P Indicators Tool means that identification of poorly performing coefficients is possible, and further refinements of the Tool can be made to ensure it is better calibrated and subsequently validated against empirical data, as it becomes available.
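
The modular, layer-to-layer accounting described above can be sketched as a chain of transfer coefficients applied to the P mobilized in each 1 km² cell. The layer names and coefficient values below are illustrative assumptions, not the Tool's calibrated values.

```python
# Hedged sketch of layer-by-layer P accounting: P mobilised at the soil
# surface is passed through successive layers, each with a transfer
# coefficient, to receiving waters. Names and values are assumptions.

LAYERS = [
    ("mobilisation", 0.30),   # fraction of source P mobilised
    ("delivery",     0.40),   # fraction reaching the channel network
    ("in-stream",    0.80),   # fraction surviving in-stream retention
]

def p_loss_per_km2(source_p_kg, layers=LAYERS):
    """Annual P reaching receiving waters for one 1 km^2 grid cell (kg)."""
    p = source_p_kg
    for name, coeff in layers:
        p *= coeff
        print(f"after {name:12s}: {p:7.2f} kg")
    return p

print("to water:", p_loss_per_km2(100.0), "kg/yr")
```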

Abstract:

Decadal prediction uses climate models forced by changing greenhouse gases, as for the Intergovernmental Panel on Climate Change (IPCC), but unlike longer-range predictions it also requires initialization with observations of the current climate. In particular, the upper-ocean heat content and circulation have a critical influence. Decadal prediction is still in its infancy and there is an urgent need to understand the important processes that determine predictability on these timescales. We have taken the first Hadley Centre Decadal Prediction System (DePreSys) and implemented it on several NERC institute compute clusters in order to study a wider range of initial condition impacts on decadal forecasting, eventually including the state of the land and cryosphere. eScience methods are used to manage submission and output from the many ensemble model runs required to assess predictive skill. Early results suggest initial-condition skill may extend for several years, even over land areas, but this depends sensitively on the definition used to measure skill, and alternatives are presented. The Grid for Coupled Ensemble Prediction (GCEP) system will allow the UK academic community to contribute to international experiments being planned to explore decadal climate predictability.
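
One common way to measure the initial-condition skill mentioned above is the anomaly correlation between ensemble-mean hindcasts and observations as a function of forecast lead year. The sketch below demonstrates the calculation on synthetic data; it is not DePreSys output.

```python
# A minimal sketch (synthetic data) of one definition of decadal hindcast
# skill: anomaly correlation of ensemble means per forecast lead year.
import numpy as np

rng = np.random.default_rng(3)
n_starts, n_members, n_lead = 20, 9, 10

obs = rng.standard_normal((n_starts, n_lead))
# Toy ensemble: a signal that decays with lead time plus member noise.
decay = np.exp(-np.arange(n_lead) / 4.0)
ens = obs[:, None, :] * decay + rng.standard_normal((n_starts, n_members, n_lead))

ens_mean = ens.mean(axis=1)
for lead in range(n_lead):
    r = np.corrcoef(ens_mean[:, lead], obs[:, lead])[0, 1]
    print(f"lead year {lead + 1}: anomaly correlation = {r:.2f}")
```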

Abstract:

The question of whether and how tropical Indian Ocean dipole or zonal mode (IOZM) interannual variability is independent of El Niño-Southern Oscillation (ENSO) variability in the Pacific is addressed in a comparison of twin 200-yr runs of a coupled climate model. The first is a reference simulation, and the second has ENSO-scale variability suppressed with a constraint on the tropical Pacific wind stress. The IOZM can exist in the model without ENSO, and the composite evolution of the main anomalies in the Indian Ocean in the two simulations is virtually identical. Its growth depends on a positive feedback between anomalous equatorial easterly winds, upwelling equatorial and coastal Kelvin waves reducing the thermocline depth and sea surface temperature off the coast of Sumatra, and the atmospheric dynamical response to the subsequently reduced convection. Two IOZM triggers in the boreal spring are found. The first is an anomalous Hadley circulation over the eastern tropical Indian Ocean and Maritime Continent, with an early northward penetration of the Southern Hemisphere southeasterly trades. This situation grows out of cooler sea surface temperatures in the southeastern tropical Indian Ocean left behind by a reinforcement of the late austral summer winds. The second trigger is a consequence of a zonal shift in the center of convection associated with a developing El Niño, a Walker cell anomaly. The first trigger is the only one present in the constrained simulation, and is similar to the evolution of anomalies in 1994, when the IOZM occurred in the absence of a Pacific El Niño state. The presence of these two triggers, the first independent of ENSO and the second phase-locking the IOZM to El Niño, allows an understanding of both the existence of IOZM events when Pacific conditions are neutral and the significant correlation between the IOZM and El Niño.

Abstract:

Pollutant plumes with enhanced concentrations of trace gases and aerosols were observed over the southern coast of West Africa during August 2006 as part of the AMMA wet season field campaign. Plumes were observed both in the mid and upper troposphere. In this study we examined the origin of these pollutant plumes, and their potential to photochemically produce ozone (O3) downwind over the Atlantic Ocean. Their possible contribution to the Atlantic O3 maximum is also discussed. Runs of the BOLAM mesoscale model including biomass burning carbon monoxide (CO) tracers were used to confirm an origin from central African biomass burning fires. The plumes measured in the mid-troposphere (MT) had significantly higher pollutant concentrations over West Africa than the upper-tropospheric (UT) plume. The mesoscale model reproduces these differences and the two different pathways for the plumes at different altitudes: transport to the north-east of the fire region, moist convective uplift and transport to West Africa for the upper-tropospheric plume, versus north-west transport over the Gulf of Guinea for the mid-tropospheric plume. Lower concentrations in the upper troposphere are mainly due to enhanced mixing during upward transport. Model simulations suggest that the MT and UT plumes are 16 and 14 days old, respectively, when measured over West Africa. The ratio of tracer concentrations at 600 hPa and 250 hPa was estimated for 14–15 August in the region of the observed plumes and compares well with the same ratio derived from observed carbon dioxide (CO2) enhancements in both plumes. It is estimated that, for the period 1–15 August, the ratio of biomass burning (BB) tracer concentrations transported in the UT to those transported in the MT is 0.6 over West Africa and the equatorial South Atlantic.

Runs of a photochemical trajectory model, CiTTyCAT, initialized with the observations, were used to estimate in-situ net photochemical O3 production rates in these plumes during transport downwind of West Africa. The mid-tropospheric plume spread over altitudes between 1.5 and 6 km over the Atlantic Ocean. Even though the plume was old, it was still very photochemically active above 3 km (mean net O3 production rates over 10 days of 2.6 ppbv/day, and up to 7 ppbv/day during the first days), especially during the first few days of transport westward. It is also shown that the impact of high aerosol loads in the MT plume on photolysis rates serves to delay the peak in modelled O3 concentrations. These results suggest that a significant fraction of the enhanced O3 in the mid-troposphere over the Atlantic comes from BB sources during the summer monsoon period. According to the simulated occurrence of such transport, BB may have been the main source of O3 enhancement in the equatorial South Atlantic MT, at least in August 2006. The upper-tropospheric plume was also still photochemically active, although mean net O3 production rates were slower (1.3 ppbv/day). The results suggest that, whilst the transport of BB pollutants to the UT is variable (as shown by the mesoscale model simulations), pollution from biomass burning can make an important contribution to photochemical production of O3, alongside other important sources such as nitrogen oxides (NOx) from lightning.

Abstract:

A time series of the observed transport through an array of moorings across the Mozambique Channel is compared with that of six model runs with ocean general circulation models. In the observations, the seasonal cycle cannot be distinguished from red noise, while this cycle is dominant in the transport of the numerical models. It is found, however, that the seasonal cycles of the observations and numerical models are similar in strength and phase. These cycles have an amplitude of 5 Sv and a maximum in September, and can be explained by the yearly variation of the wind forcing. The seasonal cycle in the models is dominant because the spectral density at other frequencies is underrepresented. The main deviations from the observations are found at depths shallower than 1500 m and in the 5/y–6/y frequency range. Nevertheless, the structure of eddies in the models is close to the observed eddy structure. The discrepancy is found to be related to the formation mechanism and the formation position of the eddies. In the observations, eddies are frequently formed from an overshooting current near the mooring section, as proposed by Ridderinkhof and de Ruijter (2003) and Harlander et al. (2009). This causes an alternation of events at the mooring section, varying between a strong southward current and the formation and passing of an eddy, and results in a large variation of transport in the 5/y–6/y frequency range. In the models, the eddies are formed further north and propagate through the section. No alternation similar to that in the observations occurs, resulting in a more constant transport.
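
The red-noise comparison described above can be made concrete by fitting an AR(1) process to the monthly transport series and asking whether the observed annual-cycle amplitude exceeds what red noise alone produces. The sketch below uses synthetic data with a 5 Sv annual cycle; it illustrates the test, not the mooring analysis itself.

```python
# Hedged sketch: fit AR(1) red noise to a monthly transport series, then
# compare the annual-cycle amplitude against a Monte-Carlo red-noise null.
import numpy as np

rng = np.random.default_rng(4)

def annual_amplitude(x):
    """Amplitude (Sv) of the 12-month harmonic of a monthly series."""
    n = np.arange(len(x))
    c = np.exp(-2j * np.pi * n / 12.0)
    return 2.0 * abs(np.dot(x - x.mean(), c)) / len(x)

obs = (-17.0 + 5.0 * np.cos(2 * np.pi * np.arange(240) / 12)
       + 8.0 * rng.standard_normal(240))        # ~20 yr of monthly transport

x = obs - obs.mean()
phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])  # lag-1 autocorrelation
sigma = x.std() * np.sqrt(1 - phi ** 2)

# Monte-Carlo null distribution of the annual amplitude under red noise.
null = []
for _ in range(1000):
    r = np.zeros(len(x))
    for t in range(1, len(x)):
        r[t] = phi * r[t - 1] + sigma * rng.standard_normal()
    null.append(annual_amplitude(r))

print(f"annual amplitude {annual_amplitude(obs):.1f} Sv, "
      f"red-noise 95th pct {np.quantile(null, 0.95):.1f} Sv")
```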

Abstract:

A full assessment of para-virtualization is important, because without knowledge of the various overheads users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization, and in looking at the additional overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different systems: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all servers available at the University of Reading.

A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run. No modifications are needed in the guest OS or application: the guest is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems running on the virtual machines: these guests are aware that they are running on a virtual machine, and provide near-native performance. Both approaches can be deployed across various virtualized systems. Para-virtualization is OS-assisted virtualization, in which some modifications are made in the guest operating system to enable better performance; the guest is aware that it is running on virtualized hardware rather than on bare hardware, and its device drivers coordinate with the device drivers of the host operating system to reduce the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.

It has been shown [0] that para-virtualization does not impose significant performance overhead in high-performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
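
A minimal sketch of the overhead measurement itself: run the same benchmark on bare metal and inside a virtual machine, take the best of several timings, and report the relative slowdown. The matrix multiply here is a stand-in for a Netlib kernel, not an actual Netlib benchmark.

```python
# Illustrative overhead measurement: time the same benchmark on bare metal
# and under a hypervisor, then compute the slowdown as a percentage.
import time
import numpy as np

def run_benchmark(n=1500, repeats=5):
    """Return the best wall-clock time (s) for an n x n matrix multiply."""
    a, b = np.random.rand(n, n), np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    return best

# Run this script once on bare metal and once inside the VM, then compare:
t = run_benchmark()
print(f"best time: {t:.3f} s")
# overhead (%) = 100 * (t_virtualized - t_bare_metal) / t_bare_metal
```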

Abstract:

In Central Brazil, the long-term sustainability of beef cattle systems is under threat over vast tracts of farming areas, as more than half of the 50 million hectares of sown pastures are suffering from degradation. Overgrazing practised to maintain high stocking rates is regarded as one of the main causes. High stocking rates are deliberate and crucial decisions taken by the farmers, which appear paradoxical, even irrational, given the state of knowledge regarding the consequences of overgrazing. The phenomenon, however, appears inextricably linked with the objectives that farmers hold. In this research, those objectives were elicited first; from their ranking, two of them, 'asset value of cattle' (representing cattle ownership) and 'present value of economic returns', were chosen to develop an original bi-criteria Compromise Programming model to test various hypotheses postulated to explain the overgrazing behaviour. As part of the model, a pasture productivity index is derived to estimate the pasture recovery cost. Different scenarios based on farmers' attitudes towards overgrazing, pasture costs and capital availability were analysed. The results of the model runs show that the benefits of holding more cattle can outweigh the increased pasture recovery and maintenance costs. This result undermines the hypothesis that farmers practise overgrazing because they are unaware of, or unconcerned about, overgrazing costs. An appropriate approach to the problem of pasture degradation requires information on the economics, and its interplay with farmers' objectives, for a wide range of pasture recovery and maintenance methods. Seen within the context of farmers' objectives, some level of overgrazing appears rational. Advocacy of the simple 'no overgrazing' rule is an insufficient strategy for maintaining the long-term sustainability of beef production systems in Central Brazil.
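
The bi-criteria Compromise Programming idea can be sketched as follows: for each candidate stocking rate, compute a weighted, normalised distance to the ideal point of the two objectives and pick the minimiser. All numbers below are illustrative assumptions, not the paper's elicited data; the point is that weight on cattle assets pulls the optimum above the returns-maximising stocking rate.

```python
# Minimal Compromise Programming sketch over two objectives:
# asset value of cattle and present value of economic returns.
# All functional forms and numbers are illustrative assumptions.
import numpy as np

rates = np.linspace(0.5, 3.0, 251)              # head per hectare
asset_value = 400.0 * rates                     # grows with herd size
# Returns peak at a moderate rate, then fall as pasture recovery costs bite.
returns = 300.0 * rates - 90.0 * rates ** 2

def compromise(weights, p=2):
    ideal = np.array([asset_value.max(), returns.max()])
    worst = np.array([asset_value.min(), returns.min()])
    # Normalised weighted distance to the ideal point for each rate.
    d = (weights[0] * ((ideal[0] - asset_value) / (ideal[0] - worst[0])) ** p
         + weights[1] * ((ideal[1] - returns) / (ideal[1] - worst[1])) ** p) ** (1 / p)
    return rates[np.argmin(d)]

print("returns-only optimum:      ", rates[np.argmax(returns)])
print("compromise (equal weights):", compromise((0.5, 0.5)))
```

With equal weights the compromise solution sits above the returns-maximising rate, illustrating the abstract's conclusion that some level of overgrazing can be rational once cattle ownership enters the objective set.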