14 results for IBM
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads users cannot judge whether using virtualization is worthwhile. In this paper we assess the overheads of running various benchmarks on bare metal as well as under para-virtualization. The aim is to measure the overheads of para-virtualization, and also the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1); these virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by privileged components. Virtualization systems can host multiple guest operating systems, each of which runs in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made in the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware and not on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The “apparent” improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a “class” of application, and secondly it is necessary to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
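As a rough illustration of the overhead comparison described above, the following sketch times a dense linear solve (a stand-in for a Netlib-style kernel, not the paper's actual benchmark suite) and computes the relative slowdown between two platforms. The platform labels and the second timing are placeholder assumptions; in practice each timing would come from a separate run on bare metal and inside a guest VM.

```python
# Hypothetical sketch: estimate virtualization overhead from repeated wall-clock
# timings of the same benchmark kernel on bare metal and inside a guest VM.
# The numpy solve below stands in for a LINPACK-like benchmark; the setup is
# illustrative, not the configuration used in the paper.
import time
import numpy as np

def run_kernel(n=2000, repeats=5):
    """Time a dense linear solve, a rough stand-in for a Netlib-style benchmark."""
    rng = np.random.default_rng(42)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.linalg.solve(a, b)
        times.append(time.perf_counter() - t0)
    return min(times)  # best-of-N reduces scheduling noise

def overhead_percent(bare_metal_s, virtualized_s):
    """Relative slowdown of the virtualized run over the bare-metal run."""
    return 100.0 * (virtualized_s - bare_metal_s) / bare_metal_s

if __name__ == "__main__":
    t_here = run_kernel()
    t_other = t_here * 1.08  # placeholder for the timing collected on the other platform
    print(f"kernel time here: {t_here:.3f} s")
    print(f"estimated overhead: {overhead_percent(t_here, t_other):.1f} %")
```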
Abstract:
In this paper we describe a lightweight Web portal developed for running computational jobs on an IBM JS21 BladeCenter cluster, ThamesBlue, for inferring and analyzing evolutionary histories. We first discuss the need for leveraging HPC as an enabler for molecular phylogenetics research. We go on to describe how the portal is designed to interface with existing open-source software that is typical of an HPC resource configuration, and how by design the portal is generic enough to be portable to other similarly configured compute clusters and to other applications.
Abstract:
The Danish Eulerian Model (DEM) is a powerful air pollution model, designed to calculate the concentrations of various dangerous species over a large geographical region (e.g. Europe). It takes into account the main physical and chemical processes between these species, the actual meteorological conditions, emissions, etc. This is a huge computational task and requires significant storage and CPU time. Parallel computing is essential for the efficient practical use of the model. Some efficient parallel versions of the model have been created over the past several years. A suitable parallel version of DEM using the Message Passing Interface library (MPI) was implemented on two powerful supercomputers at EPCC, Edinburgh, available via the HPC-Europa programme for transnational access to research infrastructures in the EC: a Sun Fire E15K and an IBM HPCx cluster. Although the implementation is, in principle, the same for both supercomputers, a few modifications had to be made for successful porting of the code to the IBM HPCx cluster. Performance analysis and parallel optimization were carried out next. Results from benchmarking experiments are presented in this paper. Another set of experiments was carried out to investigate the sensitivity of the model to variation of some chemical rate constants in the chemical submodel. Certain modifications of the code were necessary for this task. The results obtained will be used for further sensitivity analysis studies using Monte Carlo simulation.
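The following is a minimal mpi4py sketch of the general MPI pattern the abstract refers to, splitting a concentration grid across ranks, advancing each subdomain, and gathering the result; it is not the DEM code, and the grid shape and "chemistry_step" kernel are invented placeholders.

```python
# Minimal mpi4py sketch of domain decomposition for a pollutant-concentration grid.
# Illustrative only; the real DEM parallelization is far more involved.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

NX = 96  # number of grid rows; placeholder value

if rank == 0:
    grid = np.zeros((NX, 64))          # concentration field, illustrative shape
    chunks = np.array_split(grid, size, axis=0)
else:
    chunks = None

local = comm.scatter(chunks, root=0)   # each rank receives a strip of the grid

def chemistry_step(c):
    """Placeholder for the chemical/transport update applied to a subdomain."""
    return c + 0.01                    # stand-in for the real kernel

local = chemistry_step(local)

result = comm.gather(local, root=0)    # reassemble the full grid on rank 0
if rank == 0:
    grid = np.vstack(result)
    print("updated grid shape:", grid.shape)
```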
Abstract:
This paper describes a prototype grid infrastructure, called the eMinerals minigrid, for molecular simulation scientists, which is based on an integration of shared compute and data resources. We describe the key components, namely the use of Condor pools; Linux/Unix clusters with PBS and IBM's LoadLeveler job-handling tools; the use of Globus for security handling; the use of Condor-G tools for wrapping Globus job-submit commands; Condor's DAGMan tool for handling workflow; the Storage Resource Broker for handling data; and the CCLRC dataportal and associated tools for both archiving data with metadata and making data available to other workers.
Abstract:
A complete treatment of the characteristic admittance-matrix representation of multilayer thin-film systems with absorbing media is presented. The algorithm from the systems analysis is implemented on an IBM microcomputer and some examples of filter design calculations are presented. Relevant source code for the IBM Advanced BASIC interpreter is also included.
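The characteristic (transfer) matrix formulation for thin-film stacks is standard optics, so a compact sketch may help illustrate the kind of calculation the abstract describes: each layer contributes a 2x2 matrix built from its complex refractive index and thickness, and the product of these matrices gives the stack's reflectance at normal incidence. The example stack and wavelength below are illustrative, not taken from the paper.

```python
# Hedged sketch of the characteristic-matrix (admittance) method for a multilayer
# thin-film stack with absorbing media, at normal incidence.
import numpy as np

def characteristic_matrix(n_complex, thickness_nm, wavelength_nm):
    """2x2 characteristic matrix of a single homogeneous layer (normal incidence)."""
    delta = 2.0 * np.pi * n_complex * thickness_nm / wavelength_nm
    eta = n_complex  # optical admittance in free-space units
    return np.array([[np.cos(delta), 1j * np.sin(delta) / eta],
                     [1j * eta * np.sin(delta), np.cos(delta)]])

def reflectance(n_incident, layers, n_substrate, wavelength_nm):
    """Reflectance of a stack; `layers` is a list of (complex index, thickness_nm) pairs."""
    m = np.eye(2, dtype=complex)
    for n_layer, d in layers:
        m = m @ characteristic_matrix(n_layer, d, wavelength_nm)
    b, c = m @ np.array([1.0, n_substrate], dtype=complex)
    r = (n_incident * b - c) / (n_incident * b + c)
    return float(abs(r) ** 2)

if __name__ == "__main__":
    # Illustrative quarter-wave pair on glass; the high-index layer is slightly absorbing.
    stack = [(2.3 - 0.01j, 550.0 / (4 * 2.3)), (1.38 + 0.0j, 550.0 / (4 * 1.38))]
    print(f"R at 550 nm: {reflectance(1.0, stack, 1.52, 550.0):.4f}")
```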
Abstract:
Programming is a skill which requires knowledge of both the basic constructs of the computer language used and techniques employing these constructs. How these are used in any given application is determined intuitively, and this intuition is based on experience of programs already written. One aim of this book is to describe the techniques and give practical examples of the techniques in action - to provide some experience. Another aim of the book is to show how a program should be developed, in particular how a relatively large program should be tackled in a structured manner. These aims are accomplished essentially by describing the writing of one large program, a diagram generator package, in which a number of useful programming techniques are employed. The book also provides a useful program, with an in-built manual describing not only what the program does, but also how it does it, with full source code listings. This means that the user can, if required, modify the package to meet particular requirements. A floppy disk is available from the publishers containing the program, including listings of the source code. All the programs are written in Modula-2, using JPI's Top Speed Modula-2 system running on IBM PCs and compatibles. This language was chosen as it is an ideal language for implementing large programs and it is the main language taught in the Cybernetics Department at the University of Reading. There are some aspects of the Top Speed implementation which are not standard, so suitable comments are given where these occur. Although implemented in Modula-2, many of the techniques described here are appropriate to other languages, such as Pascal or C. The book and programs are based on a second-year undergraduate course taught at Reading to Cybernetics students, entitled Algorithms and Data Structures. Useful techniques are described for the reader to use, and applications where they are appropriate are recommended, but detailed analyses of the techniques are not given.
Abstract:
Environmental building assessment tools have been developed to measure how well or poorly a building is performing, or is likely to perform, against a declared set of criteria or environmental considerations, in order to achieve sustainability principles. Knowledge of environmental building assessment tools is therefore important for the successful design and construction of environmentally friendly buildings. The purpose of this research is to investigate the knowledge and level of awareness of environmental building assessment tools among industry practitioners in Botswana. One hundred and seven paper-based questionnaires were delivered to industry practitioners, including architects, engineers, quantity surveyors, real estate developers and academics. Respondents were asked what they know about building assessment, whether they have used any building assessment tool in the past, and what they perceive as possible barriers to the implementation of environmental building assessment tools in Botswana. Sixty-five questionnaires were returned and analysed statistically using IBM SPSS V19 software. Almost 85 per cent of respondents indicate that they are extremely or moderately aware of environmental design. Furthermore, the results indicate that 32 per cent of respondents have undergone formal training, which suggests ‘reasonable knowledge’. This, however, does not correspond with the use of the tools on the ground, as 69 per cent of practitioners report never having used any environmental building assessment tool in any project. The study highlights the need to develop an assessment tool for Botswana to enhance knowledge and further improve the level of awareness of environmental issues relating to building design and construction.
Abstract:
Population modelling is increasingly recognised as a useful tool for pesticide risk assessment. For vertebrates that may ingest pesticides with their food, such as the woodpigeon (Columba palumbus), population models that simulate foraging behaviour explicitly can help predict both exposure and population-level impact. Optimal foraging theory is often assumed to explain the individual-level decisions driving distributions of individuals in the field, but it may not adequately predict spatial and temporal characteristics of woodpigeon foraging because of the woodpigeons’ excellent memory, ability to fly long distances, and distinctive flocking behaviour. Here we present an individual-based model (IBM) of the woodpigeon. We used the model to predict distributions of foraging woodpigeons that use one of six alternative foraging strategies: optimal foraging, memory-based foraging and random foraging, each with or without flocking mechanisms. We used pattern-oriented modelling to determine which of the foraging strategies is best able to reproduce observed data patterns. Data used for model evaluation were gathered during a long-term woodpigeon study conducted between 1961 and 2004 and a radiotracking study conducted in 2003 and 2004, both in the UK, and are summarised here as three complex patterns: the distributions of foraging birds between vegetation types during the year, the number of fields visited daily by individuals, and the proportion of fields revisited by them on subsequent days. The model with a memory-based foraging strategy and a flocking mechanism was the only one to reproduce these three data patterns, and the optimal foraging model produced poor matches to all of them. The random foraging strategy reproduced two of the three patterns but was not able to guarantee population persistence. We conclude that with the memory-based foraging strategy including a flocking mechanism our model is realistic enough to estimate the potential exposure of woodpigeons to pesticides. We discuss how exposure can be linked to our model, and how the model could be used for risk assessment of pesticides, for example predicting exposure and effects in heterogeneous landscapes planted seasonally with a variety of crops, while accounting for differences in land use between landscapes.
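To make the contrast between foraging strategies concrete, the toy sketch below lets simulated birds choose a field each day either at random or by returning to the most profitable field they remember, and then reports the proportion of revisits, one of the patterns used for evaluation above. Every name, field value and probability here is invented; this is not the published woodpigeon model.

```python
# Hypothetical sketch of comparing random versus memory-based field choice.
import random

random.seed(1)
FIELDS = {f"field_{i}": random.uniform(0.1, 1.0) for i in range(20)}  # food value per field

class Pigeon:
    def __init__(self, strategy):
        self.strategy = strategy          # "random" or "memory"
        self.memory = {}                  # field -> last observed food value
        self.visited = []

    def choose_field(self):
        if self.strategy == "memory" and self.memory:
            # Revisit the best-remembered field most days, otherwise explore.
            if random.random() < 0.8:
                return max(self.memory, key=self.memory.get)
        return random.choice(list(FIELDS))

    def forage_one_day(self):
        field = self.choose_field()
        self.memory[field] = FIELDS[field]
        self.visited.append(field)

def revisit_proportion(pigeon):
    """Share of days on which the bird returned to a previously used field."""
    revisits = sum(1 for i, f in enumerate(pigeon.visited) if f in pigeon.visited[:i])
    return revisits / max(1, len(pigeon.visited) - 1)

for strategy in ("random", "memory"):
    birds = [Pigeon(strategy) for _ in range(50)]
    for _ in range(30):                   # 30 simulated foraging days
        for bird in birds:
            bird.forage_one_day()
    mean_revisit = sum(revisit_proportion(b) for b in birds) / len(birds)
    print(f"{strategy:>6}: mean revisit proportion = {mean_revisit:.2f}")
```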
Abstract:
The potential risk of agricultural pesticides to mammals typically depends on internal concentrations within individuals, and these are determined by the amount ingested and by absorption, distribution, metabolism, and excretion (ADME). Pesticide residues ingested depend, amongst other things, on individual spatial choices, which determine how much and when feeding sites and areas of pesticide application overlap, and can be calculated using individual-based models (IBMs). Internal concentrations can be calculated using toxicokinetic (TK) models, which are quantitative representations of ADME processes. Here we provide a population model for the wood mouse (Apodemus sylvaticus) in which TK submodels were incorporated into an IBM representation of individuals making choices about where to feed. This allows us to estimate the contribution of individual spatial choice and TK processes to risk. We compared the risk predicted by four IBMs: (i) “AllExposed-NonTK”: no spatial choice, so all mice have 100% exposure, and no TK; (ii) “AllExposed-TK”: identical to (i) except that TK processes are included, where individuals vary because they have different temporal patterns of ingestion in the IBM; (iii) “Spatial-NonTK”: individual spatial choice, no TK; and (iv) “Spatial-TK”: individual spatial choice with TK. The TK parameters for the hypothetical pesticides used in this study were selected such that a conventional risk assessment would fail. Exposures were standardised using risk quotients (RQ; exposure divided by LD50 or LC50). We found that for the exposed sub-population, including either spatial choice or TK reduced the RQ by 37–85%, and for the total population the reduction was 37–94%. However, spatial choice and TK together had little further effect in reducing the RQ. The reasons for this are that when the proportion of time spent in treated crop (PT) approaches 1, TK processes dominate and spatial choice has very little effect, and conversely, if PT is small, spatial choice dominates and TK makes little contribution to exposure reduction. The latter situation means that a short time spent in the pesticide-treated field mimics exposure from a small gavage dose, whereas TK only makes a substantial difference when the dose is consumed over a longer period. We concluded that a combined TK-IBM is most likely to bring added value to the risk assessment process when the temporal pattern of feeding, the time spent in the exposed area and the TK parameters are at an intermediate level; for instance, wood mice in foliar spray scenarios spend more time in crop fields because of better plant cover.
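A one-compartment toxicokinetic update with first-order elimination, driven by a daily ingestion series, is enough to illustrate the point made above about dose timing, together with the risk-quotient calculation (RQ = exposure / LD50). All parameter values below are invented for illustration; the paper's TK submodel may be structured quite differently.

```python
# Hedged sketch: one-compartment TK with first-order elimination, plus RQ.

def internal_concentrations(daily_intake_mg_per_kg, k_elim_per_day, absorbed_fraction=1.0):
    """Body burden after each day, with first-order elimination between meals."""
    burden, trace = 0.0, []
    for intake in daily_intake_mg_per_kg:
        burden = burden * (1.0 - k_elim_per_day) + absorbed_fraction * intake
        trace.append(burden)
    return trace

def risk_quotient(peak_exposure_mg_per_kg, ld50_mg_per_kg):
    return peak_exposure_mg_per_kg / ld50_mg_per_kg

# The same total dose eaten in one day versus spread over ten days: with
# elimination acting between meals, the spread-out pattern gives a lower peak
# burden, which is the effect the abstract attributes to TK processes.
bolus = internal_concentrations([50.0] + [0.0] * 9, k_elim_per_day=0.3)
spread = internal_concentrations([5.0] * 10, k_elim_per_day=0.3)
print("peak burden, single dose :", round(max(bolus), 1), "mg/kg")
print("peak burden, spread dose :", round(max(spread), 1), "mg/kg")
print("RQ (single dose, LD50=100):", round(risk_quotient(max(bolus), 100.0), 2))
print("RQ (spread dose, LD50=100):", round(risk_quotient(max(spread), 100.0), 2))
```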
Abstract:
Earthworms are significant ecosystem engineers and are an important component of the diet of many vertebrates and invertebrates, so the ability to predict their distribution and abundance would have wide application in ecology, conservation and land management. Earthworm viability is known to be affected by the availability and quality of food resources, soil water conditions and temperature, but has not yet been modelled mechanistically to link effects on individuals to field population responses. Here we present a novel model capable of predicting the effects of land management and environmental conditions on the distribution and abundance of Aporrectodea caliginosa, the dominant earthworm species in agroecosystems. Our process-based approach uses individual-based modelling (IBM), in which each individual has its own energy budget. Individual earthworm energy budgets follow established principles of physiological ecology and are parameterised for A. caliginosa from experimental measurements under optimal conditions. Under suboptimal conditions (e.g. food limitation, low soil temperatures and water contents) reproduction is prioritised over growth. Good model agreement with independent laboratory data on individual cocoon production and growth of body mass under variable feeding and temperature conditions supports our representation of A. caliginosa physiology through energy budgets. Our mechanistic model is able to accurately predict A. caliginosa distribution and abundance in spatially heterogeneous soil profiles representative of field study conditions. Essential here is the explicit modelling of earthworm behaviour in the soil profile. Local earthworm movement responds to a trade-off between food availability and soil water conditions, and this determines the spatiotemporal distribution of the population in the soil profile. Importantly, multiple environmental variables can be manipulated simultaneously in the model to explore earthworm population exposure to, and the effects of, combinations of stressors. Potential applications include prediction of the population-level effects of pesticides and of changes in soil management (e.g. conservation tillage) and climate change.
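The sketch below gives a minimal flavour of a daily energy-budget update for one model earthworm, following the rule stated above that reproduction is prioritised over growth; all rates, conversion factors and the simple "maintenance first" structure are invented for illustration, whereas the published model parameterises its energy budget from experiments on A. caliginosa.

```python
# Hedged sketch of a daily energy-budget update; every number here is invented.

def daily_energy_budget(mass_mg, intake_J, maint_J_per_mg=0.02,
                        growth_efficiency=0.5, cocoon_cost_J=10.0, is_adult=True):
    """Return (new_mass_mg, cocoons_produced) after one simulated day."""
    maintenance = maint_J_per_mg * mass_mg
    available = intake_J - maintenance
    if available <= 0.0:
        # Starvation: the maintenance shortfall is paid from body tissue
        # (0.1 mg per J is an invented conversion factor).
        return mass_mg + available * 0.1, 0
    cocoons = 0
    if is_adult:
        # Reproduction takes priority over growth, as the abstract states for
        # suboptimal conditions; applied every day here for simplicity.
        cocoons = int(available // cocoon_cost_J)
        available -= cocoons * cocoon_cost_J
    new_mass = mass_mg + growth_efficiency * available * 0.1  # J -> mg, invented
    return new_mass, cocoons

mass, total_cocoons = 400.0, 0
for day in range(30):
    intake = 30.0 if day < 15 else 9.0   # ample food, then food limitation
    mass, cocoons = daily_energy_budget(mass, intake)
    total_cocoons += cocoons
print(f"mass after 30 days: {mass:.1f} mg, cocoons produced: {total_cocoons}")
```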
Abstract:
This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we only explore the most accessible version, rejection-ABC. Rejection-ABC involves running models a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although well-established in some fields, whether ABC will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM, for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. ABC’s accepted values produced slightly better fits than literature values did. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the seven parameters that were not narrowed, ABC revealed that three were correlated with other parameters, while the remaining four were found to be not estimable given the data available. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version which removed the earthworm’s movement and much of the energy budget. We were able to show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs. We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM’s structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
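Rejection-ABC itself is simple enough to show in a few lines: draw parameters from their priors, run the model, and keep the draws whose outputs lie closest to the observations. In the sketch below a cheap two-parameter toy simulator stands in for the 14-parameter earthworm IBM (which is far too expensive to inline), and the data are simulated; only the procedure matches the abstract.

```python
# Minimal rejection-ABC sketch with a toy simulator standing in for the IBM.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(growth_rate, food_half_sat, days=50, food=5.0):
    """Stand-in simulator: a saturating daily growth increment, loosely 'mass over time'."""
    mass, out = 1.0, np.empty(days)
    for t in range(days):
        mass += growth_rate * food / (food + food_half_sat)
        out[t] = mass
    return out

# Pretend observations: simulate with known parameters and add noise.
observed = toy_model(0.8, 2.0) + rng.normal(0.0, 0.2, 50)

def distance(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

n_draws, n_keep = 5000, 100
priors = np.column_stack([rng.uniform(0.0, 2.0, n_draws),    # growth_rate prior
                          rng.uniform(0.1, 10.0, n_draws)])  # food_half_sat prior
dists = np.array([distance(toy_model(g, h), observed) for g, h in priors])
accepted = priors[np.argsort(dists)[:n_keep]]    # keep the closest simulations

lo, hi = np.percentile(accepted, [2.5, 97.5], axis=0)
print(f"growth_rate   95% interval: [{lo[0]:.2f}, {hi[0]:.2f}]")
print(f"food_half_sat 95% interval: [{lo[1]:.2f}, {hi[1]:.2f}]")
# Only a combination of the two parameters is identified by these data, so the
# accepted values are strongly correlated, echoing the correlation finding above.
```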
Abstract:
Individual-based models (IBMs) can simulate the actions of individual animals as they interact with one another and the landscape in which they live. When used in spatially-explicit landscapes, IBMs can show how populations change over time in response to management actions. For instance, IBMs are being used to design strategies of conservation and of the exploitation of fisheries, and for assessing the effects on populations of major construction projects and of novel agricultural chemicals. In such real-world contexts, it becomes especially important to build IBMs in a principled fashion, and to approach calibration and evaluation systematically. We argue that insights from physiological and behavioural ecology offer a recipe for building realistic models, and that Approximate Bayesian Computation (ABC) is a promising technique for the calibration and evaluation of IBMs. IBMs are constructed primarily from knowledge about individuals. In ecological applications the relevant knowledge is found in physiological and behavioural ecology, and we approach these from an evolutionary perspective by taking into account how physiological and behavioural processes contribute to life histories, and how those life histories evolve. Evolutionary life history theory shows that, other things being equal, organisms should grow to sexual maturity as fast as possible, and then reproduce as fast as possible, while minimising per capita death rate. Physiological and behavioural ecology are largely built on these principles together with the laws of conservation of matter and energy. To complete construction of an IBM, information is also needed on the effects of competitors, conspecifics and food scarcity; on the maximum rates of ingestion, growth and reproduction; and on life-history parameters. Using this knowledge about physiological and behavioural processes provides a principled way to build IBMs, but model parameters vary between species and are often difficult to measure. A common solution is to manually compare model outputs with observations from real landscapes and so obtain parameters that produce acceptable fits of model to data. However, this procedure can be convoluted and lead to over-calibrated and thus inflexible models. Many formal statistical techniques are unsuitable for use with IBMs, but we argue that ABC offers a potential way forward. It can be used to calibrate and compare complex stochastic models and to assess the uncertainty in their predictions. We describe methods used to implement ABC in an accessible way and illustrate them with examples and discussion of recent studies. Although much progress has been made, theoretical issues remain, and some of these are outlined and discussed.
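Beyond parameter calibration, the abstracts above also mention using ABC to compare models. A common rejection-ABC approach is to place a prior on the model indicator, simulate from whichever model is drawn, and approximate posterior model probabilities by each model's share of the accepted draws. The two toy simulators below are illustrative stand-ins, not the models from the papers.

```python
# Hedged sketch of rejection-ABC model comparison with two toy simulators.
import numpy as np

rng = np.random.default_rng(1)

def full_model(rate, ceiling, days=40):
    """Saturating growth: the 'with energy budget' stand-in."""
    t = np.arange(days)
    return ceiling * (1.0 - np.exp(-rate * t))

def simple_model(rate, days=40):
    """Linear growth: the 'simplified' stand-in."""
    return rate * np.arange(days)

observed = full_model(0.15, 30.0) + rng.normal(0.0, 0.5, 40)

def distance(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

draws, labels, dists = 20000, [], []
for _ in range(draws):
    if rng.random() < 0.5:                       # equal prior on the two models
        sim = full_model(rng.uniform(0.01, 0.5), rng.uniform(5.0, 60.0))
        labels.append("full")
    else:
        sim = simple_model(rng.uniform(0.01, 2.0))
        labels.append("simple")
    dists.append(distance(sim, observed))

order = np.argsort(dists)[:200]                  # accept the closest 1%
accepted = np.array(labels)[order]
for name in ("full", "simple"):
    print(f"posterior probability of {name}: {np.mean(accepted == name):.2f}")
```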