949 results for Assembling (Electronic computers)
Abstract:
This paper addresses some controversial issues relating to two main questions. Firstly, we discuss 'man-in-the-loop' issues in SAACS. Some advocate that a human must always be in the loop so that human decisions can override autonomic components; in this case, the system has two subsystems: man and machine. Can we, however, have a fully autonomic machine, with no man in sight, even for short periods of time? What kinds of systems require a man to always be in the loop? What is the optimum balance of self-to-human control, and how do we determine it? How far can we go in describing self-behaviour? How does a SAACS handle unexpected behaviour? Secondly, what are the challenges and obstacles in testing SAACS in the context of the self/human dilemma? Are there any lessons to be learned from other programmes, e.g. Star Wars, aviation and space exploration? What role do human factors and behavioural models play in interacting with SAACS?
Abstract:
The emergent behaviour of autonomic systems, together with the scale of their deployment, impedes prediction of the full range of configuration and failure scenarios; it is therefore not possible to devise management and recovery strategies to cover all possible outcomes. One solution to this problem is to embed self-managing and self-healing abilities into such applications. Traditional design approaches favour determinism, even when it is unnecessary, and this can lead to conflicts between non-functional requirements. Natural systems such as ant colonies have evolved cooperative, finely tuned emergent behaviours which allow the colonies to function at very large scale and to be very robust, although non-deterministic. Their simple pheromone-exchange communication systems are highly efficient and a major contributor to their success. This paper proposes that we look to natural systems for inspiration when designing architectures and communication strategies, and presents an election algorithm which encapsulates non-deterministic behaviour to achieve high scalability, robustness and stability.
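To make the flavour of such an approach concrete, below is a minimal gossip-style election sketch in Python. It is not the algorithm from the paper; the node count, fan-out and random "claim strength" are all invented for illustration. Each node repeatedly exchanges the strongest claim it has heard with a few random peers, so the outcome is non-deterministic across runs yet converges to a single, stable leader.

    import random

    def gossip_election(n_nodes=50, rounds=8, fanout=3):
        # Each node draws a random claim strength: the non-deterministic element.
        strength = [random.random() for _ in range(n_nodes)]
        # The best (strength, id) pair each node has heard so far; initially itself.
        best = [(strength[i], i) for i in range(n_nodes)]
        for _ in range(rounds):
            for i in range(n_nodes):
                # Low-complexity communication: consult only a few random peers.
                for j in random.sample(range(n_nodes), fanout):
                    best[i] = max(best[i], best[j])
        # The set of leaders the nodes believe in; converges to a single id.
        return {node_id for (_, node_id) in best}

    if __name__ == "__main__":
        print(gossip_election())

Because the nodes are identical and anonymous, any of them can fail without invalidating the election, which is the robustness property the paper attributes to natural systems.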
Abstract:
An important factor for high-speed optical communication is the availability of ultrafast and low-noise photodetectors. Among the semiconductor photodetectors that are commonly used in today's long-haul and metro-area fiber-optic systems, avalanche photodiodes (APDs) are often preferred over p-i-n photodiodes due to their internal gain, which significantly improves the receiver sensitivity and alleviates the need for optical pre-amplification. Unfortunately, the carrier impact-ionization process that generates the gain is inherently random, and this results in fluctuations not only in the gain but also in the time response. We have recently developed a theory characterizing the autocorrelation function of APDs that incorporates the dead-space effect, an effect that is very significant in thin, high-performance APDs. This research extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory and is very time consuming. Here we describe our experience in parallelizing the code in MPI and OpenMP using CAPTools. Several array-partitioning schemes and scheduling policies are implemented and tested. Our results show that the code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
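As a sketch of the kind of parallel decomposition described, here is an illustrative mpi4py analogue (the actual code was parallelized with CAPTools; the exponential impulse response below is a placeholder, not the dead-space multiplication model). The autocorrelation lags are block-partitioned across ranks, one of the partitioning schemes the abstract alludes to:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Placeholder impulse response; the real code derives this from the
    # dead-space multiplication model.
    n = 4096
    h = np.exp(-np.linspace(0.0, 10.0, n))

    # Block-partition the autocorrelation lags across the ranks.
    my_lags = np.array_split(np.arange(n), size)[rank]
    local = np.array([np.dot(h[:n - k], h[k:]) for k in my_lags])

    # Gather the partial autocorrelation segments on rank 0.
    parts = comm.gather(local, root=0)
    if rank == 0:
        R = np.concatenate(parts)
        print("first lags:", R[:5])

Run with, e.g., mpiexec -n 4 python autocorr.py (the filename is assumed).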
Abstract:
Despite the apparent simplicity of the OpenMP directive-based shared-memory programming model and the sophisticated dependence analysis and code generation capabilities of the ParaWise/CAPO tools, experience shows that a level of expertise is required to produce efficient parallel code. In a real-world application, the investigation of a single loop in a generated parallel code can soon become an in-depth inspection of numerous dependences across many routines. This additional understanding of dependences is also needed to effectively interpret the information provided and to supply the required feedback. The ParaWise Expert Assistant has been developed to automate this investigation and to present questions to the user about, and in the context of, their application code. In this paper, we demonstrate that knowledge of dependence information and OpenMP is no longer essential to produce efficient parallel code with the Expert Assistant. It is hoped that this will enable a far wider audience to use the tools and, subsequently, to exploit the benefits of large parallel systems.
Abstract:
This paper presents a generic framework that can be used to describe study plans using meta-data. The context of this research and the associated technologies and standards are presented. The approach adopted here has been developed within the mENU project, which aims to provide a model for a European Networked University. The methodology for the design of the generic framework is discussed and the main design requirements are presented. The approach is based on a set of templates containing the meta-data required for the description of programs of study, consisting of generic building elements annotated appropriately. The process followed to develop the templates is presented, together with a set of evaluation criteria to test the suitability of the approach. The template structure is presented and example templates are shown. A first evaluation has shown that the proposed framework can provide a flexible and capable means for the generic description of study plans for the purposes of a networked university.
Abstract:
Systems based on web services have recently found their way into many applications, such as e-commerce, corporate integration and e-learning. Constructing new services, or introducing new functions into existing services, requires the composition of web services. Current approaches to service composition often require major programming effort; this is time consuming and demands considerable developer expertise. In this paper, we explore the real and rich scenarios found in e-learning, where education services are offered over the Internet by networked universities to potentially millions of learners worldwide. These services are derived from existing and emerging business operation processes and are commonly offered through a web interface, combined with other services such as email and FTP, to support partial or full business processes. We identify the requirements for a generic portal framework for the easy integration of the existing expertise and services of individual institutions (enterprises). We examine the existing technologies and standards, and point out the gaps to be filled in designing the architecture of the framework.
Abstract:
This presentation will discuss current developments in evacuation modelling and its role in, and application to, underground environments.
A policy-definition language and prototype implementation library for policy-based autonomic systems
Abstract:
This paper presents work towards generic policy toolkit support for autonomic computing systems in which the policies themselves can be adapted dynamically and automatically. The work is motivated by three needs: the need for longer-term policy-based adaptation, in which the policy itself is dynamically adapted to continually maintain or improve its effectiveness despite changing environmental conditions; the need to enable practitioners who are not autonomics experts to embed self-managing behaviours with low cost and risk; and the need for adaptive policy mechanisms that are easy to deploy into legacy code. A policy definition language, designed to permit powerful expression of self-managing behaviours, is presented. The language is very flexible, through the use of simple yet expressive syntax and semantics, and facilitates a very diverse policy behaviour space through both hierarchical and recursive uses of language elements. A prototype library implementation of the policy support mechanisms is described. The library reads and writes policies in well-formed XML. The implementation extends the state of the art in policy-based autonomics through innovations which include support for multiple policy versions of a given policy type, multiple configuration templates, and meta-policies to dynamically select between policy instances and templates. Most significantly, the scheme supports hot-swapping between policy instances. To illustrate the feasibility and generalised applicability of these tools, two dissimilar example deployment scenarios are examined. The first, taken from an exploratory implementation of self-managing parallel processing, demonstrates simple and efficient use of the tools. The second demonstrates more advanced functionality, in the context of an envisioned multi-policy stock trading scheme which is sensitive to environmental volatility.
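The abstract does not give the language's actual syntax, so the following Python sketch is purely illustrative: the XML element and attribute names and the registry API are invented. It shows only the shape of the described mechanics, namely multiple versions per policy type and hot-swapping the live instance without touching host code:

    import xml.etree.ElementTree as ET

    # Hypothetical policy document; the toolkit's real XML schema is not
    # shown in the abstract, so these element names are invented.
    POLICY_XML = """<policy type="trading" version="2">
      <rule when="volatility &gt; 0.3" action="hold"/>
      <rule when="volatility &lt;= 0.3" action="trade"/>
    </policy>"""

    class PolicyRegistry:
        def __init__(self):
            self.versions = {}  # (policy type, version) -> parsed policy
            self.active = {}    # policy type -> version currently in force

        def load(self, xml_text):
            root = ET.fromstring(xml_text)
            key = (root.get("type"), root.get("version"))
            self.versions[key] = root
            return key

        def hot_swap(self, ptype, version):
            # Switch the live policy instance; callers always look up the
            # active version, so no host code needs redeploying.
            if (ptype, version) not in self.versions:
                raise KeyError("unknown policy version")
            self.active[ptype] = version

    registry = PolicyRegistry()
    ptype, version = registry.load(POLICY_XML)
    registry.hot_swap(ptype, version)
    print(registry.active)

A meta-policy, in these terms, would simply be another policy whose action is a hot_swap call selecting among the registered instances.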
Abstract:
This paper presents an investigation into the dynamic self-adjustment of task deployment, and other aspects of self-management, through the embedding of multiple policies. Non-dedicated, loosely-coupled computing environments, such as clusters and grids, are increasingly popular platforms for parallel processing. These abundant systems are highly dynamic environments in which many sources of variability affect the run-time efficiency of tasks; the dynamism is exacerbated by the incorporation of mobile devices and wireless communication. This paper proposes an adaptive strategy for the flexible run-time deployment of tasks, to continuously maintain efficiency despite the environmental variability. The strategy centres on policy-based scheduling informed by contextual and environmental inputs, such as the variance in round-trip communication time between a client and its workers and the effective processing performance of each worker. A self-management framework has been implemented for evaluation purposes. The framework integrates several policy-controlled, adaptive services with the application code, enabling the run-time behaviour to be adapted to contextual and environmental conditions. Using this framework, an exemplar self-managing parallel application is implemented and used to investigate the extent of the benefits of the strategy.
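A minimal Python sketch of such a scheduling policy follows. The two per-worker inputs (recent round-trip times and a measured task rate) mirror the inputs named in the abstract, but the weighting formula itself is invented for illustration, not taken from the paper:

    import statistics

    def assign_tasks(workers, tasks):
        # workers: {name: {"rtt": [recent round-trip times, s],
        #                  "rate": measured tasks/second}}
        # Weight each worker by its effective rate, damped by RTT variability.
        weights = {}
        for name, w in workers.items():
            jitter = statistics.pvariance(w["rtt"])
            weights[name] = w["rate"] / (1.0 + jitter)
        total = sum(weights.values())
        # Proportional shares (rounded; a real scheduler would reconcile
        # the remainder).
        return {n: round(len(tasks) * wt / total) for n, wt in weights.items()}

    workers = {"a": {"rtt": [0.10, 0.12, 0.40], "rate": 20.0},
               "b": {"rtt": [0.09, 0.10, 0.11], "rate": 15.0}}
    print(assign_tasks(workers, range(100)))

Re-evaluating the weights periodically is what turns this from a static schedule into the continuous run-time adjustment the paper investigates.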
Abstract:
This paper presents a policy definition language which forms part of a generic policy toolkit for autonomic computing systems in which the policies themselves can be modified dynamically and automatically. Targeted enhancements to the current state of practice include: policy self-adaptation, where the policy itself is dynamically modified to match environmental conditions; improved support for developers who are not autonomics experts; and easy deployment of adaptive policies into legacy code. The policy definition language permits powerful expression of self-managing behaviours and facilitates a diverse policy behaviour space. Features include support for multiple versions of a given policy type, multiple configuration templates, and meta-policies to dynamically select between policy instances. An example deployment scenario illustrates advanced functionality in the context of a multi-policy stock trading system which is sensitive to environmental volatility.
Abstract:
This panel paper sets out to discuss what self-adaptation means, and to explore the extent to which current autonomic systems exhibit truly self-adaptive behaviour. Many of the currently cited examples are clearly adaptive, but debate remains as to what extent they are simply following prescribed adaptation rules within preset bounds, and to what extent they have the ability to truly learn new behaviour. Is there a standard test that can be applied to differentiate? Is adaptive behaviour sufficient anyway? Other autonomic computing issues are also discussed.
Abstract:
A two-dimensional staggered unstructured discretisation scheme for the solution of fluid flow problems has been developed. The scheme stores and solves the velocity vector resolutes normal and parallel to each cell face, while other scalar variables (pressure, temperature) are stored at cell centres. The coupled momentum, continuity and energy equations are solved using the well-known pressure-correction algorithm SIMPLE. The method is tested for accuracy and convergence behaviour against standard cell-centred solutions on a number of benchmark problems: the lid-driven cavity, natural convection in a cavity, and the melting of gallium in a rectangular domain.
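For reference, the pressure-correction step of SIMPLE can be stated in textbook form (this is the standard formulation, not equations taken from the paper). With u_f^* the face-normal velocity obtained from the momentum equations using the guessed pressure p^*, the velocity correction and the pressure-correction equation obtained from continuity are

    \[
    u_f = u_f^{*} + d_f\,(p'_P - p'_N), \qquad
    \sum_{f} \rho\, A_f\, u_f = 0
    \;\Longrightarrow\;
    \sum_{f} \rho\, A_f\, d_f\,(p'_P - p'_N) = -\sum_{f} \rho\, A_f\, u_f^{*},
    \]

where P and N are the cells adjacent to face f, A_f is the face area and d_f a coefficient from the discretised momentum equation; the pressure is then under-relaxed as p = p^* + \alpha_p p'. In the staggered unstructured scheme described here, these corrections apply directly to the stored face-normal resolutes.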
Abstract:
Distributed applications are being deployed on ever-increasing scale and with ever-increasing functionality. Due to the accompanying increase in behavioural complexity, self-management abilities, such as self-healing, have become core requirements. A key challenge is the smooth embedding of such functionality into our systems. Natural distributed systems such as ant colonies have evolved highly efficient behaviour. These emergent systems achieve high scalability through the use of low complexity communication strategies and are highly robust through large-scale replication of simple, anonymous entities. Ways to engineer this fundamentally non-deterministic behaviour for use in distributed applications are being explored. An emergent, dynamic, cluster management scheme, which forms part of a hierarchical resource management architecture, is presented. Natural biological systems, which embed self-healing behaviour at several levels, have influenced the architecture. The resulting system is a simple, lightweight and highly robust platform on which cluster-based autonomic applications can be deployed.
Proposed methodology for the use of computer simulation to enhance aircraft evacuation certification
Abstract:
In this paper a methodology for the application of computer simulation to the evacuation certification of aircraft is suggested. It involves the use of computer simulation, historic certification data, component testing, and full-scale certification trials. The methodology sets out a framework for how computer simulation should be undertaken in a certification environment, and draws on experience from both the marine and building industries. In addition, a phased introduction of computer models to certification is suggested. The first step is the use of computer simulation in conjunction with full-scale testing: the combination of a full-scale trial, computer simulation and, if necessary, component testing provides better insight into aircraft evacuation performance capabilities by generating a performance probability distribution rather than a single datum. Once further confidence in the technique is established, the requirement for the full-scale demonstration could be dropped. The second step in the adoption of computer simulation for certification involves the introduction of several scenarios based on, for example, exit availability, informed by accident analysis. The final step would be the introduction of more realistic accident scenarios; this would require the continued development of aircraft evacuation modeling technology to include additional behavioral features common in real accident scenarios.
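To illustrate what "a performance probability distribution rather than a single datum" means in practice, here is a toy Monte Carlo sketch in Python. Every number in it (exit count, exit-failure probability, flow rate, response-time distribution) is invented for illustration, and it is in no way the airEXODUS model:

    import random
    import statistics

    def evacuation_trial(passengers=150, exits=8):
        # Randomly degrade exit availability, as scenario-based certification would.
        usable = max(1, sum(random.random() > 0.25 for _ in range(exits)))
        # Assumed crowd response delay and per-exit flow rate (both invented).
        response = max(random.gauss(3.0, 1.0), 0.0)   # seconds
        flow = 1.1                                    # passengers/s per exit
        return response + passengers / (usable * flow)

    times = sorted(evacuation_trial() for _ in range(10_000))
    print("median %.1f s, 95th percentile %.1f s" % (
        statistics.median(times), times[9_499]))

Repeating the trial many times yields a distribution of total evacuation times, so a certification criterion could be stated as a percentile of that distribution rather than the outcome of a single demonstration.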
Abstract:
The Aircraft Accident Statistics and Knowledge (AASK) database is a repository of passenger accounts from survivable aviation accidents/incidents compiled from interview data collected by agencies such as the US NTSB. Its main purpose is to store observational and anecdotal data from the actual interviews of the occupants involved in aircraft accidents. The database has wide application to aviation safety analysis, being a source of factual data regarding the evacuation process. It also plays a significant role in the development of the airEXODUS aircraft evacuation model, where insight into how people actually behave during evacuation from survivable aircraft crashes is required. This paper describes the latest version of the database (Version 4.0) and includes some analysis of passenger behavior during actual accidents/incidents.