894 results for HPC parallel computer architecture queues fault tolerance programmability ADAM
Abstract:
A parallel computing environment to support the optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It involves decomposing a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinating the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models through their solution to visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified on an example of large space truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
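The abstract describes a master-worker decomposition in which subsystem optimizations run on worker nodes and the master coordinates their results. A minimal sketch of that pattern, using a Python process pool in place of PVM; the subsystem objective and the coordination step below are hypothetical placeholders, not the paper's formulation:

```python
from multiprocessing import Pool

def optimize_subsystem(subsystem):
    # Hypothetical placeholder: each worker optimizes one subsystem
    # and returns its local optimum (here, a trivial quadratic).
    sub_id, coupling_value = subsystem
    local_optimum = coupling_value ** 2  # stand-in for a real optimizer
    return sub_id, local_optimum

def coordinate(results):
    # Master-side coordination step: combine subsystem results into
    # an updated coupling value for the next iteration (placeholder).
    return sum(value for _, value in results) / len(results)

if __name__ == "__main__":
    subsystems = [(i, 0.1 * i) for i in range(6)]   # decomposed problem
    with Pool(processes=4) as pool:                  # worker nodes
        results = pool.map(optimize_subsystem, subsystems)
    print("coordinated value:", coordinate(results))
```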
Abstract:
The design of liquid-retaining structures involves many decisions to be made by the designer based on rules of thumb, heuristics, judgement, codes of practice and previous experience. Structural design problems are often ill-structured, and there is a need to develop programming environments that can incorporate engineering judgement along with algorithmic tools. Recent developments in artificial intelligence have made it possible to develop an expert system that can provide expert advice to the user in the selection of design criteria and design parameters. This paper introduces the development of an expert system for the design of liquid-retaining structures using a blackboard architecture. An expert system shell, Visual Rule Studio, is employed to facilitate the development of this prototype system. It is a coupled system combining symbolic processing with traditional numerical processing. The expert system developed is based on the British Standards Code of Practice BS8007. Explanations are provided to assist inexperienced designers and civil engineering students in learning how to design liquid-retaining structures effectively and sustainably in their design practice. The use of this expert system in disseminating heuristic knowledge and experience to practitioners and engineering students is discussed.
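The blackboard architecture mentioned here has independent knowledge sources posting partial conclusions to a shared workspace under a simple controller. A minimal sketch of that general pattern; the knowledge sources and design parameters are illustrative assumptions, not taken from the BS8007-based system:

```python
class Blackboard:
    """Shared workspace that knowledge sources read from and write to."""
    def __init__(self):
        self.data = {}

    def post(self, key, value):
        self.data[key] = value

def geometry_source(bb):
    # Hypothetical knowledge source: proposes a wall thickness
    # once the tank height is known.
    if "tank_height_m" in bb.data and "wall_thickness_m" not in bb.data:
        bb.post("wall_thickness_m", max(0.2, bb.data["tank_height_m"] / 10))

def crack_width_source(bb):
    # Hypothetical knowledge source: flags a crack-width check
    # once a wall thickness has been proposed.
    if "wall_thickness_m" in bb.data:
        bb.post("crack_width_check", "required")

if __name__ == "__main__":
    bb = Blackboard()
    bb.post("tank_height_m", 4.0)
    # The controller repeatedly lets each knowledge source contribute.
    for _ in range(2):
        for source in (geometry_source, crack_width_source):
            source(bb)
    print(bb.data)
```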
Abstract:
The Lattice Solid Model has been used successfully as a virtual laboratory to simulate the fracturing of rocks, the dynamics of faults, earthquakes and gouge processes. However, results from those simulations show that, in order to take the next step towards more realistic experiments, it will be necessary to use models containing a significantly larger number of particles than current models, and those simulations will therefore require a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e. to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
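Parallel efficiency, as quoted in the abstract, is conventionally the speedup over a single processor divided by the number of processors. A small sketch of that calculation; the timing figures are hypothetical and are not benchmark data from the paper:

```python
def parallel_efficiency(t_serial, t_parallel, num_procs):
    """Efficiency = speedup / processor count."""
    speedup = t_serial / t_parallel
    return speedup / num_procs

# Hypothetical timings for illustration only.
t1 = 3600.0    # runtime on one processor, seconds
t128 = 35.0    # runtime on 128 processors, seconds
print(f"efficiency: {parallel_efficiency(t1, t128, 128):.2f}")  # about 0.80
```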
Abstract:
Information security devices must preserve security properties even in the presence of faults. This in turn requires a rigorous evaluation of the system behaviours resulting from component failures, especially how such failures affect information flow. We introduce a compositional method of static analysis for fail-secure behaviour. Our method uses reachability matrices to identify potentially undesirable information flows based on the fault modes of the system's components.
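The reachability matrices mentioned above can be pictured as boolean adjacency matrices over device components, with potential information flows obtained by transitive closure; a fault mode then adds or removes edges. A minimal sketch under those assumptions, with a hypothetical three-component device and fault mode that are not drawn from the paper:

```python
def transitive_closure(adj):
    """Warshall's algorithm over a boolean adjacency matrix."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# Hypothetical 3-component device; entries mark where *plaintext* can flow:
# 0 = red input, 1 = encryptor, 2 = black output.
normal = [
    [False, True,  False],   # plaintext reaches the encryptor only
    [False, False, False],   # encryptor emits ciphertext, not plaintext
    [False, False, False],
]
# Hypothetical fault mode: the encryptor passes data through unencrypted.
faulted = [row[:] for row in normal]
faulted[1][2] = True

for label, adj in (("normal", normal), ("faulted", faulted)):
    reach = transitive_closure(adj)
    print(label, "- plaintext can reach the black output:", reach[0][2])
```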
Abstract:
Our extensive research has indicated that high-school teachers are reluctant to make use of existing instructional educational software (Pollard, 2005). Even software developed in a partnership between a teacher and a software engineer is unlikely to be adopted by teachers outside the partnership (Pollard, 2005). In this paper we address these issues directly by adopting a reusable architectural design for instructional educational software which allows easy customisation of the software to meet the specific needs of individual teachers, thereby helping more teachers to use instructional technology regularly in their classrooms. Our domain-specific software architecture, Interface-Activities-Model, was designed specifically to facilitate individual customisation by redefining and restructuring what constitutes an object, so that objects can be readily reused or extended as required. The key to this architecture is the way in which the software is broken into small, generic, encapsulated components with minimal domain-specific behaviour. The domain-specific behaviour is decoupled from the interface and encapsulated in objects which relate to the instructional material through tasks and activities. The domain model is also broken into two distinct models: an Application State Model and a Domain-specific Data Model. This decoupling and distribution of control gives the software designer enormous flexibility in modifying components without affecting other sections of the design. This paper sets the context of this architecture, describes it in detail, and applies it to an actual application developed to teach high-school mathematical concepts.
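The central idea is that domain-specific behaviour is decoupled from generic interface components and split into an application state model and a domain-specific data model. A minimal sketch of that separation; the class and method names are hypothetical and not taken from the Interface-Activities-Model paper:

```python
class DomainDataModel:
    """Domain-specific content, e.g. a maths exercise (hypothetical)."""
    def __init__(self):
        self.question = "Solve 2x + 3 = 7"
        self.answer = 2

class ApplicationStateModel:
    """Session state kept separate from the domain content."""
    def __init__(self):
        self.attempts = 0
        self.solved = False

class Activity:
    """Encapsulates the instructional behaviour, decoupled from the UI."""
    def __init__(self, data, state):
        self.data, self.state = data, state

    def submit(self, value):
        self.state.attempts += 1
        self.state.solved = (value == self.data.answer)
        return self.state.solved

class GenericInterface:
    """Generic, reusable component: knows nothing about the maths domain."""
    def run(self, activity, user_input):
        print("correct" if activity.submit(user_input) else "try again")

if __name__ == "__main__":
    activity = Activity(DomainDataModel(), ApplicationStateModel())
    GenericInterface().run(activity, 2)
```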
Abstract:
A new control algorithm using a parallel braking resistor (BR) and a serial fault current limiter (FCL) for power system transient stability enhancement is presented in this paper. The proposed control algorithm can prevent transient instability during the first swing by immediately removing the transient energy gained during the faulted period. It can also reduce generator oscillation time and efficiently bring the system back to the post-fault equilibrium. The algorithm relies on a new system-energy-function-based method to choose the optimal switching point: the parallel BR and serial FCL resistor are switched at the calculated optimal point to obtain the best control result. This allows optimum dissipation of the transient energy caused by the disturbance, so that the system returns to equilibrium in minimum time. Case studies are given to verify the efficiency and effectiveness of the new control algorithm.
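To make the idea of switching a braking resistor to absorb transient energy concrete, here is a very rough single-machine swing sketch. All parameters, the simplified model, and the crude switching rule are assumptions for illustration only; they are not the paper's energy-function criterion or its test system:

```python
import math

# Hypothetical single-machine, infinite-bus parameters (per unit).
M, D = 5.0, 0.05                       # inertia and damping
Pm = 1.0                               # mechanical input power
Pmax_normal, Pmax_fault = 1.8, 0.4     # pre/post-fault and during-fault transfer limits
P_br = 0.6                             # extra load when the braking resistor is in

delta, omega, dt = math.asin(Pm / Pmax_normal), 0.0, 0.001
br_on = False
for step in range(int(2.0 / dt)):
    t = step * dt
    faulted = t < 0.15                 # fault cleared at 150 ms
    Pmax = Pmax_fault if faulted else Pmax_normal
    Pe = Pmax * math.sin(delta) + (P_br if br_on else 0.0)
    omega += dt * (Pm - Pe - D * omega) / M
    delta += dt * omega
    # Crude switching rule (illustrative only): engage the BR after fault
    # clearing, once the rotor is still accelerating forward but electrical
    # power already exceeds mechanical input.
    if not faulted and not br_on and omega > 0 and (Pm - Pe) < 0:
        br_on = True
        print(f"BR switched in at t={t:.3f} s, delta={delta:.3f} rad")
print(f"final omega={omega:.4f}, delta={delta:.4f}")
```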
Abstract:
We propose an asymmetric multi-processor SoC architecture featuring a master CPU running uClinux and multiple loosely coupled slave CPUs running real-time threads assigned by the master CPU. Real-time SoC architectures often demand a compromise between a generic platform suitable for different applications and application-specific customizations needed to achieve performance requirements. Our proposed architecture offers a generic platform running a conventional embedded operating system, providing a traditional software-oriented development approach, while the multiple slave CPUs act as dedicated, independent real-time thread execution units running in parallel with the master CPU to achieve the performance requirements. In this paper, the architecture is described, including the application/threading development environment. The performance of the architecture on several standard benchmark routines is also analysed.
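The abstract describes a master CPU assigning real-time threads to loosely coupled slave CPUs. A minimal software analogue of that dispatch pattern, using Python queues and threads in place of the SoC's hardware CPUs; the task names, mailbox protocol, and round-robin assignment are assumptions for illustration:

```python
import queue
import threading

def slave_cpu(slave_id, mailbox):
    """Stands in for a slave CPU executing threads assigned by the master."""
    while True:
        task = mailbox.get()
        if task is None:               # shutdown signal from the master
            break
        name, work = task
        print(f"slave {slave_id} running real-time task '{name}'")
        work()

def master_cpu(num_slaves, tasks):
    """Stands in for the uClinux master: spawns slaves and assigns tasks."""
    mailboxes = [queue.Queue() for _ in range(num_slaves)]
    workers = [threading.Thread(target=slave_cpu, args=(i, mb))
               for i, mb in enumerate(mailboxes)]
    for w in workers:
        w.start()
    for i, task in enumerate(tasks):
        mailboxes[i % num_slaves].put(task)   # round-robin assignment
    for mb in mailboxes:
        mb.put(None)
    for w in workers:
        w.join()

if __name__ == "__main__":
    demo_tasks = [(f"task{i}", lambda: None) for i in range(4)]
    master_cpu(num_slaves=2, tasks=demo_tasks)
```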
Abstract:
The verification of information flow properties of security devices is difficult because it involves the analysis of schematic diagrams, artwork, embedded software, etc. In addition, a typical security device has many modes and partial information flows, and needs to be fault tolerant. We propose a new approach to the verification of such devices based upon checking abstract information flow properties expressed as graphs. This approach has been implemented in software and successfully used to find possible paths of information flow through security devices.
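Finding possible paths of information flow through a device graph, as described above, amounts to path enumeration. A minimal sketch using depth-first search over a hypothetical abstract flow graph; the node names are illustrative, not taken from the paper:

```python
def find_paths(graph, source, target, path=None):
    """Enumerate all simple paths from source to target (DFS)."""
    path = (path or []) + [source]
    if source == target:
        return [path]
    paths = []
    for nxt in graph.get(source, []):
        if nxt not in path:
            paths.extend(find_paths(graph, nxt, target, path))
    return paths

# Hypothetical abstract flow graph of a security device.
device = {
    "red_in":        ["encryptor", "bypass_switch"],
    "encryptor":     ["black_out"],
    "bypass_switch": ["black_out"],   # flow that exists only in bypass mode
}

for p in find_paths(device, "red_in", "black_out"):
    print(" -> ".join(p))
```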
Abstract:
A specialised reconfigurable architecture for telecommunication base-band processing is augmented with testing resources. The routing network is linked via virtual wire hardware modules to reduce the area occupied by connecting buses. The number of switches within the routing matrices is also minimised, which increases throughput without sacrificing flexibility. A testing algorithm was developed to systematically search for faults in the processing modules and in the flexible high-speed routing network within the architecture. The testing algorithm starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each of the switches. The local switch matrix is tested in the same way. Next, the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. This algorithm exhaustively scans all possible paths within the interconnection network and reports all faults. Strategies can then be inserted to bypass minor faults.
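The abstract lays out an ordered test sequence: external memory scan, master controller test, loopback tests of the route-through and local switch matrices, local memory scan, then test vectors for the processing modules. A minimal sketch of driving such a sequence and collecting reported faults; the stage functions are hypothetical stubs, not the paper's actual test vectors:

```python
def run_self_test(stages):
    """Run each test stage in order and collect reported faults."""
    faults = []
    for name, test in stages:
        ok, detail = test()
        print(f"{name}: {'pass' if ok else 'FAULT'}")
        if not ok:
            faults.append((name, detail))
    return faults

# Hypothetical stage stubs standing in for the real hardware tests.
stages = [
    ("scan external memory",                  lambda: (True, None)),
    ("test master controller",                lambda: (True, None)),
    ("loop-test route-through switch matrix", lambda: (True, None)),
    ("loop-test local switch matrix",         lambda: (False, "switch 12 stuck")),
    ("scan local memory",                     lambda: (True, None)),
    ("run module test vectors",               lambda: (True, None)),
]

if __name__ == "__main__":
    faults = run_self_test(stages)
    print("faults found:", faults)   # bypass strategies could be applied here
```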