10 results for Active Front End
in CentAUR: Central Archive University of Reading - UK
Abstract:
In this paper we present an architecture for network and applications management, which is based on the Active Networks paradigm and shows the advantages of network programmability. The stimulus to develop this architecture arises from an actual need to manage a cluster of active nodes, where it is often required to redeploy network assets and modify node connectivity. In our architecture, a remote front-end of the managing entity allows the operator to design new network topologies, to check the status of the nodes and to configure them. Moreover, the proposed framework allows the operator to explore an active network, to monitor the active applications, to query each node and to install programmable traps. In order to take advantage of the Active Networks technology, we introduce active SNMP-like MIBs and agents, which are dynamic and programmable. The programmable management agents make tracing distributed applications a feasible task. We propose a general framework that can inter-operate with any active execution environment. In this framework, both the manager and the monitor front-ends communicate with an active node (the Active Network Access Point) through the XML language. A gateway service performs the translation of the queries from XML to an active packet language and injects the code into the network. We demonstrate the implementation of an active network gateway for PLAN (Packet Language for Active Networks) in a testbed of forty active nodes. Finally, we discuss an application of the active management architecture to detect the causes of network failures by tracing network events in time.
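The gateway's role of translating an XML management query into an active-packet program could be sketched as follows. This is a minimal illustration, not the paper's actual schema or PLAN syntax: the element names (`query`, `var`, `node`) and the emitted expression are assumptions made for the example.

```python
import xml.etree.ElementTree as ET

def xml_query_to_plan(xml_query: str) -> str:
    """Translate a simple XML management query into a PLAN-style expression.

    Element names and the emitted PLAN-like syntax are illustrative only;
    the real gateway defines its own XML schema and code templates.
    """
    root = ET.fromstring(xml_query)
    node = root.get("node")          # target active node
    mib_var = root.findtext("var")   # SNMP-like MIB variable to read
    # The gateway would inject this source into the network as an active packet;
    # here we only build the string it would inject.
    return f'OnRemote(|getMIB|, "{mib_var}", getHostByName("{node}"))'

query = '<query node="an12"><var>ifInOctets</var></query>'
print(xml_query_to_plan(query))
```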
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command-line utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) Output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) The client component is a command-line program that can easily be incorporated into existing model workflow scripts.
G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine. This is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run.
(5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
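Because G-Rex follows a REST style, each model run can be addressed as an HTTP resource. The sketch below illustrates that idea from a client's perspective; the base URL, resource paths and parameter names are assumptions made for the example, not G-Rex's actual resource layout.

```python
from urllib.parse import urljoin, urlencode

# Hypothetical server location; a real deployment would publish its own URL.
BASE = "http://cluster.example.org/G-Rex/"

def instance_url(service: str, instance_id: int) -> str:
    """In a REST design, each model run is an addressable resource,
    so a browser or curl can inspect it directly."""
    return urljoin(BASE, f"{service}/instances/{instance_id}")

def launch_request(service: str, params: dict) -> tuple:
    """A POST to the service's instances collection would create a new run;
    this returns the (url, form body) such a client request would use."""
    return (urljoin(BASE, f"{service}/instances"), urlencode(params))

url, body = launch_request("nemo", {"years": 1, "processors": 40})
print(url)  # the same resource a Web browser or curl could address
```

The point of the sketch is the design choice the abstract highlights: because runs are plain HTTP resources, "new client interfaces in other programming languages" need little more than a URL library.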
Abstract:
The complexity of construction projects and the fragmentation of the construction industry undertaking those projects have effectively resulted in linear, uncoordinated and highly variable project processes in the UK construction sector. Research undertaken at the University of Salford resulted in the development of an improved project process, the Process Protocol, which considers the whole lifecycle of a construction project whilst integrating its participants under a common framework. The Process Protocol identifies the various phases of a construction project, with particular emphasis on what is described in the manufacturing industry as the 'fuzzy front end'. The participants in the process are described in terms of the activities that need to be undertaken in order to achieve a successful project and process execution. In addition, the decision-making mechanisms, from a client perspective, are illustrated, and the foundations for a learning organization/industry are facilitated within a consistent Process Protocol.
Abstract:
Letter identification is a critical front end of the reading process. In general, conceptualizations of the identification process have emphasized arbitrary sets of distinctive features. However, a richer view of letter processing incorporates principles from the field of type design, including an emphasis on uniformities across letters within a font. The importance of uniformities is supported by a small body of research indicating that consistency of font increases letter identification efficiency. We review design concepts and the relevant literature, with the goal of stimulating further thinking about letter processing during reading.
Abstract:
The construction industry is widely recognised as inherently subject to risk and uncertainty. This necessitates effective project risk management to achieve the project objectives of time, cost and quality. A popular tool employed in projects to aid in the management of risk is the risk register. This tool documents the project risks and is often employed by the Project Manager (PM) to manage the associated risks on a project. This research aims to ascertain how widely risk registers are used by Project Managers as part of their risk management practices. Achieving this aim entailed interviewing ten PMs to discuss their use of the risk register as a risk management tool. The results from these interviews indicated the prevalent use of this document and recognised its effectiveness in the management of project risks. The findings identified the front end and feasibility phases of a project as crucial stages for using risk registers, noting the register as a vital ingredient in the risk response planning of the decision-making process. Moreover, the composition of the risk register was examined, along with an insight into how PMs produce and develop this tool. In conclusion, this research signifies the extensive use of the risk register by PMs. A majority of PMs were of the view that risk registers constitute an essential component of their project risk management practices. This suggests a need for further research on the extent to which risk registers actually help PMs to control the risks in a construction project, particularly residual risks, and how this can be improved to minimize deviations from expected outcomes.
Abstract:
A ground-based millimetre wave radar, AVTIS (All-weather Volcano Topography Imaging Sensor), has been developed for topographic monitoring. The instrument is portable and capable of measurements over ranges up to ~7 km through cloud and at night. In April and May 2005, AVTIS was deployed at Arenal Volcano, Costa Rica, in order to determine topographic changes associated with the advance of a lava flow. This is the first reported application of mm-wave radar technology to the measurement of lava flux rates. Three topographic data sets of the flow were acquired from observation distances of ~3 km over an eight-day period, during which the flow front was detected to have advanced ~200 m. Topographic differences between the data sets indicated a flow thickness of ~10 m, and a dense rock equivalent lava flux of ~0.20 ± 0.08 m³ s⁻¹.
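The reported figures can be cross-checked with back-of-envelope arithmetic. The mean flow width is not given in the abstract, so the sketch below derives the width implied by the reported flux, ignoring the dense-rock-equivalent correction for vesicularity; it is an order-of-magnitude consistency check only, not a result from the paper.

```python
# Back-of-envelope check of the abstract's numbers (illustrative only).
advance_m = 200.0          # flow-front advance over the survey period
thickness_m = 10.0         # flow thickness from topographic differencing
period_s = 8 * 24 * 3600   # eight days between first and last surveys
flux_m3_s = 0.20           # reported dense-rock-equivalent lava flux

erupted_volume_m3 = flux_m3_s * period_s          # total volume at that flux
implied_width_m = erupted_volume_m3 / (advance_m * thickness_m)
print(round(erupted_volume_m3), round(implied_width_m, 1))
# ~1.4e5 m^3 of lava, implying a mean flow width of a few tens of metres,
# which is physically plausible for a blocky Arenal flow.
```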
Abstract:
This experiment addresses the long-term effect of active immunization of goats against a recombinant ovine inhibin alpha subunit (roINH-alpha). In late anestrus, 100 µg of roINH-alpha was administered to 40 pluriparous Boer goat does, followed, 4 weeks later, by a booster injection. Weekly blood samples were drawn to monitor the inhibin binding capacity with the aid of a radio-tracer binding assay. From the onset until 48 h after the end of each estrus, follicular development and ovulation rate were monitored at 24 h intervals by transrectal ultrasonography. Beginning in August and continuing into January, does were mated at every other estrus, and submitted to transcervical embryo collection. Seven months after the first immunization, the does were mated again and permitted to carry to term. All immunized does produced inhibin antibodies, an elevated titre being first detected 2 weeks after primary immunization. Maximum titres were reached after 6 weeks, i.e. 2 weeks after the booster injection. Thereafter, in the course of the following 32 weeks, the titre subsided gradually. The does started cycling by mid-August. At that stage the average numbers of follicles more than 4 mm in diameter, ovulations, and total embryos and ova recovered were 14.7 (± 2.3), 5.3 (± 0.7) and 4.4 (± 1.0), respectively. A steady decline followed, and in January the corresponding means were: 5.2 (± 0.6) follicles, 3.1 (± 0.6) ovulations and 1.2 (± 0.4) embryos and ova recovered. When mated toward the end of the breeding season, 85% of the does became pregnant to the first mating and 73% went to term. Healthy kids were born, the average litter size being 2.2 (± 0.1). In conclusion, immunization of goats against a recombinant inhibin alpha subunit proved to be a practicable means of producing embryos for transfer purposes.
After about half a year, when the inhibin antibody titre has subsided, it is possible to return the does to the breeding flock without risking complications with normal breeding activity.
Abstract:
The growing energy consumption in the residential sector represents about 30% of global demand. This calls for Demand Side Management solutions propelling change in the behaviour of end consumers, with the aim of reducing overall consumption as well as shifting it to periods of lower demand, when the cost of generating energy is also lower. Demand Side Management solutions require detailed knowledge about the patterns of energy consumption. The profile of electricity demand in the residential sector is highly correlated with the times of active occupancy of dwellings; therefore in this study the occupancy patterns in Spanish properties were determined using the 2009–2010 Time Use Survey (TUS), conducted by the National Statistical Institute of Spain. The survey identifies three peaks in active occupancy, which coincide with morning, noon and evening. This information has been used as input to a stochastic model which generates active occupancy profiles of dwellings, with the aim of simulating domestic electricity consumption. TUS data were also used to identify which appliance-related activities could be considered for Demand Side Management solutions during the three peaks of occupancy.
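A stochastic occupancy model of the kind described above can be sketched as a first-order Markov chain: at each time step a dwelling is either actively occupied (1) or not (0), and the transition probabilities vary with the hour of day so that activity clusters around the morning, noon and evening peaks. All probabilities below are invented for illustration, not values derived from the TUS.

```python
import random

def simulate_occupancy(steps=144, seed=42):
    """Generate one day of active-occupancy states at 10-minute resolution.

    Illustrative sketch only: the transition probabilities are made up,
    whereas the study calibrates its model from TUS data.
    """
    rng = random.Random(seed)
    p_become = [0.05] * 24   # P(inactive -> active) for each hour of day
    p_stay = [0.50] * 24     # P(active -> active) for each hour of day
    for h in (7, 8, 13, 14, 20, 21):   # crude morning, noon, evening peaks
        p_become[h], p_stay[h] = 0.6, 0.9
    state, profile = 0, []
    for step in range(steps):
        hour = (step * 10 // 60) % 24
        p = p_stay[hour] if state == 1 else p_become[hour]
        state = 1 if rng.random() < p else 0
        profile.append(state)
    return profile

profile = simulate_occupancy()
print(sum(profile), "of", len(profile), "intervals actively occupied")
```

Aggregating many such simulated profiles, each calibrated to the survey's transition statistics, is what allows the model to reproduce the three occupancy peaks and hence the shape of domestic electricity demand.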