27 results for scalable architecture
in Greenwich Academic Literature Archive - UK
Abstract:
The conception of the FUELCON architecture, a composite tool for generating and validating patterns that assign fuel assemblies to positions in the grid of a reactor core section, has evolved throughout the history of the project. Different options for the various subtasks were possible, envisioned, or actually explored or adopted. We project these successive, or even concomitant, configurations of the architecture into a meta-architecture, which, not by chance, happens to reflect basic choices made in the field over the last decade.
Abstract:
This paper introduces a few architectural concepts from FUELGEN, which, like the generator in the FUELCON expert system, generates a "cloud" of reload patterns but, unlike that generator, is based on a genetic algorithm. There are indications that FUELGEN may outperform FUELCON and other tools reported in the literature on well-researched case studies, but careful comparisons have yet to be carried out. This paper complements the information in two other recent papers on FUELGEN. Moreover, a sequel project is outlined.
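The abstract does not reproduce FUELGEN's internals, so the following is only a rough, hypothetical sketch of the genetic-algorithm idea it describes: a population of candidate reload patterns (here encoded as permutations of assembly identifiers over core positions) is evolved under a placeholder fitness function standing in for a core simulator. The encoding, operators and fitness criterion are assumptions, not the published design.

```python
# Hypothetical sketch of a genetic algorithm producing a "cloud" of candidate
# reload patterns, in the spirit of FUELGEN as described in the abstract.
# The encoding (a permutation of assembly ids over core positions) and the
# fitness function are placeholders, not the actual FUELGEN design.
import random

N_POSITIONS = 20          # core positions in the (hypothetical) section
POP_SIZE = 50
GENERATIONS = 100

def random_pattern():
    pattern = list(range(N_POSITIONS))   # one assembly id per position
    random.shuffle(pattern)
    return pattern

def fitness(pattern):
    # Placeholder objective: a real code would call a core simulator
    # (e.g. a NOXER-like tool) to score power peaking, cycle length, etc.
    return -sum(abs(a - i) for i, a in enumerate(pattern))

def crossover(p1, p2):
    # Order crossover keeps the child a valid permutation.
    cut = random.randrange(1, N_POSITIONS)
    head = p1[:cut]
    tail = [a for a in p2 if a not in head]
    return head + tail

def mutate(pattern, rate=0.1):
    if random.random() < rate:
        i, j = random.sample(range(N_POSITIONS), 2)
        pattern[i], pattern[j] = pattern[j], pattern[i]
    return pattern

population = [random_pattern() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

# The surviving population is the "cloud" of candidate patterns to be
# screened downstream (validation, simulation, expert review).
best = max(population, key=fitness)
print(best, fitness(best))
```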
Abstract:
We continue the discussion of the decision points in the FUELCON meta-architecture. Having discussed the relation of the original expert system to its sequel projects in terms of an AND/OR tree, we consider one further domain for a neural component: parameter prediction downstream of the core reload candidate pattern generator, thus a replacement for the NOXER simulator currently in use in the project.
Abstract:
Realizing scalable performance on high-performance computing systems is not straightforward for single-phenomenon codes (such as computational fluid dynamics [CFD]). The task is magnified considerably when the target software involves the interaction of a range of phenomena with distinctive solution procedures involving different discretization methods. The problems of retaining data integrity and preserving the ordering of the calculation procedures are significant. A strategy for parallelizing this multiphysics family of codes is described for software exploiting finite-volume discretization methods on unstructured meshes with iterative solution procedures. A mesh-partitioning-based SPMD approach is used. However, since different variables use distinct discretization schemes, distinct partitions are required; techniques for addressing this issue are described using the mesh-partitioning tool JOSTLE. In this contribution, the strategy is tested for a variety of test cases under a wide range of conditions (e.g., problem size, number of processors, asynchronous/synchronous communications, etc.) using a variety of strategies for mapping the mesh partition onto the processor topology.
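The abstract gives only the outline of the parallelization strategy; as a minimal single-process illustration of a mesh-partitioning SPMD sweep with halo (overlap) exchange, the sketch below partitions a 1-D mesh, exchanges boundary values between neighbouring partitions, and performs a Jacobi-style update of the owned cells. The geometry, update rule and partition sizes are assumptions; the codes the abstract describes work on unstructured meshes with MPI and JOSTLE-generated partitions.

```python
# Minimal single-process illustration of mesh-partitioning SPMD with halo
# exchange: each "processor" owns a slice of a 1-D mesh plus one halo cell on
# each side, exchanges halos, then does a Jacobi-style update of owned cells.
# Real multiphysics codes use MPI, unstructured meshes and JOSTLE partitions;
# the geometry, update rule and boundary values below are placeholders.
N_CELLS, N_PARTS, ITERS = 100, 4, 2000
LEFT_BC, RIGHT_BC = 1.0, 0.0          # fixed boundary values held in halos

size = N_CELLS // N_PARTS
sizes = [size] * (N_PARTS - 1) + [N_CELLS - size * (N_PARTS - 1)]
parts = [[0.0] * (s + 2) for s in sizes]   # owned cells plus two halo cells

def exchange_halos(parts):
    # An MPI code would send/receive these values between neighbour ranks.
    for p in range(N_PARTS):
        parts[p][0] = LEFT_BC if p == 0 else parts[p - 1][-2]
        parts[p][-1] = RIGHT_BC if p == N_PARTS - 1 else parts[p + 1][1]

for _ in range(ITERS):
    exchange_halos(parts)
    for local in parts:
        old = local[:]
        for i in range(1, len(local) - 1):
            local[i] = 0.5 * (old[i - 1] + old[i + 1])   # Jacobi update

print([round(v, 2) for v in parts[0][1:6]])   # approaches a linear profile
```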
Abstract:
Existing election algorithms suffer from limited scalability. This limit stems from their communication design, which in turn stems from their fundamentally two-state behaviour. This paper presents a new election algorithm specifically designed to be highly scalable in broadcast networks whilst allowing any processing node to become coordinator with initially equal probability. To achieve this, careful attention has been paid to the communication design, and an additional state has been introduced. The design of the tri-state election algorithm was motivated by the requirements analysis of a major research project to deliver robust, scalable distributed applications, including load sharing, in hostile computing environments in which it is common for processing nodes to be rebooted frequently without notice. The new election algorithm is based in part on a simple 'emergent' design. The science of emergence is of great relevance to developers of distributed applications because it describes how higher-level self-regulatory behaviour can arise from many participants following a small set of simple rules. The tri-state election algorithm is shown to have very low communication complexity, with the number of messages generated remaining loosely bounded regardless of scale for large systems; to be highly scalable, because nodes in the idle state do not transmit any messages; and, because of its self-organising characteristics, to be very stable.
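The published rules are not reproduced in the abstract, so the sketch below is a hypothetical, much-simplified simulation of a three-state election over an ideal broadcast medium: idle nodes stay silent, a random back-off gives every node an initially equal chance of claiming the role, and the first claim silences the rest, so the message count stays small regardless of the number of nodes. The state names, timing and back-off rules are assumptions, not the algorithm as published.

```python
# Hypothetical, simplified simulation of a tri-state election over a broadcast
# medium. State names (idle, candidate, coordinator), timing and back-off rules
# are assumptions for illustration; they are not the published algorithm.
import random

IDLE, CANDIDATE, COORDINATOR = "idle", "candidate", "coordinator"

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = IDLE
        self.backoff = random.randint(1, 20)   # equal chance to claim first

    def tick(self, network):
        if self.state == IDLE:
            # Idle nodes never transmit; they only listen.
            self.backoff -= 1
            if self.backoff <= 0:
                self.state = CANDIDATE
                network.broadcast(("claim", self.node_id))
        elif self.state == CANDIDATE:
            self.state = COORDINATOR
            network.broadcast(("coordinator", self.node_id))

    def receive(self, kind, sender_id):
        if kind == "coordinator" and sender_id != self.node_id:
            self.state = IDLE             # somebody else won; fall silent
            self.backoff = float("inf")
        elif kind == "claim" and self.state == IDLE:
            self.backoff = float("inf")   # a claim exists; stop counting down

class Network:
    def __init__(self, n_nodes):
        self.nodes = [Node(i) for i in range(n_nodes)]
        self.messages = 0

    def broadcast(self, message):
        self.messages += 1
        for node in self.nodes:
            node.receive(*message)

net = Network(1000)
for _ in range(40):                       # simulation rounds
    for node in net.nodes:
        node.tick(net)

winners = [n.node_id for n in net.nodes if n.state == COORDINATOR]
print("coordinators:", winners, "messages:", net.messages)
```

In this toy run a single coordinator emerges after only a couple of broadcasts, which is the kind of loosely bounded message behaviour the abstract attributes to the tri-state design; the real algorithm additionally has to tolerate message loss and frequent node reboots.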
Abstract:
Natural distributed systems are adaptive, scalable and fault-tolerant. Emergence science describes how higher-level self-regulatory behaviour arises in natural systems from many participants following simple rule sets. Emergence advocates simple communication models, autonomy and independence, enhancing robustness and self-stabilization. High-quality distributed applications such as autonomic systems must satisfy the appropriate non-functional requirements, which include scalability, efficiency, robustness, low latency and stability. However, the traditional design of distributed applications, especially in terms of the communication strategies employed, can introduce compromises between these characteristics. This paper discusses ways in which emergence science can be applied to distributed computing, avoiding some of the compromises associated with traditionally designed applications. To demonstrate the effectiveness of this paradigm, an emergent election algorithm is described and its performance evaluated. The design incorporates nondeterministic behaviour. The resulting algorithm has very low communication complexity and is simultaneously very stable, scalable and robust.
Abstract:
Despite the apparent simplicity of the OpenMP directive-based shared-memory programming model and the sophisticated dependence analysis and code generation capabilities of the ParaWise/CAPO tools, experience shows that a level of expertise is required to produce efficient parallel code. In a real-world application, the investigation of a single loop in a generated parallel code can soon become an in-depth inspection of numerous dependencies in many routines. Additional understanding of the dependencies is also needed to effectively interpret the information provided and supply the required feedback. The ParaWise Expert Assistant has been developed to automate this investigation and to present questions to the user about, and in the context of, their application code. In this paper, we demonstrate that knowledge of dependence information and OpenMP is no longer essential to produce efficient parallel code with the Expert Assistant. It is hoped that this will enable a far wider audience to use the tools and, subsequently, exploit the benefits of large parallel systems.
Abstract:
This paper describes the use of a blackboard architecture for building a hybrid case-based reasoning (CBR) system. The Smartfire fire field modelling package has been built using this architecture and includes a CBR component. The architecture allows qualitative spatial reasoning knowledge from domain experts to be integrated into the system. The system can be used for the automatic set-up of fire field models, enabling fire safety practitioners who are not expert in modelling techniques to use a fire modelling tool. The paper discusses the integrating powers of the architecture, which is based on a common knowledge representation comprising a metric diagram and place vocabulary, and on mechanisms for adaptation and conflict resolution built on the blackboard.
Abstract:
This paper describes ways in which emergence engineering principles can be applied to the development of distributed applications. A distributed solution to the graph-colouring problem is used as a vehicle to illustrate some novel techniques. Each node acts autonomously to colour itself based only on its local view of its neighbourhood, following a simple set of carefully tuned rules; randomness breaks symmetry and thus enhances stability. The algorithm has been developed to enable self-configuration in wireless sensor networks, and to reflect real-world configurations it operates with three-dimensional topologies (reflecting the propagation of radio waves and the placement of sensors in buildings, bridge structures, etc.). The algorithm’s performance is evaluated and results presented. It is shown to be simultaneously highly stable and scalable whilst achieving low convergence times. The use of eavesdropping gives rise to low interaction complexity and high efficiency in terms of communication overheads.
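The abstract does not give the tuned rule set, so the following is only a minimal sketch of a fully local colouring rule in the same spirit: each node looks only at the colours currently claimed by its neighbours and, with a random hold-back that breaks symmetry, adopts the smallest colour not in use nearby. The random geometric 3-D topology and the activation probability are assumptions made for illustration.

```python
# Minimal illustration of emergent, fully local graph colouring: each node sees
# only its neighbours' current colours and, with a random hold-back that breaks
# symmetry, adopts the smallest colour not used by any neighbour. The random
# geometric 3-D topology and the 0.5 activation probability are assumptions,
# not the tuned rules of the published algorithm.
import random

random.seed(1)
N_NODES, RADIUS = 60, 0.35

# Random node positions in a unit cube; edges join nodes within RADIUS
# (a crude stand-in for radio range in a 3-D sensor deployment).
pos = [(random.random(), random.random(), random.random()) for _ in range(N_NODES)]
def near(a, b):
    return sum((pa - pb) ** 2 for pa, pb in zip(a, b)) ** 0.5 <= RADIUS
neigh = [[j for j in range(N_NODES) if j != i and near(pos[i], pos[j])]
         for i in range(N_NODES)]

colour = [None] * N_NODES
for _ in range(50):                       # synchronous rounds
    for i in random.sample(range(N_NODES), N_NODES):
        if random.random() < 0.5:         # random hold-back breaks symmetry
            continue
        taken = {colour[j] for j in neigh[i] if colour[j] is not None}
        if colour[i] is None or colour[i] in taken:
            colour[i] = min(c for c in range(N_NODES) if c not in taken)

conflicts = sum(colour[i] == colour[j] for i in range(N_NODES) for j in neigh[i])
print("colours used:", len(set(colour)), "remaining conflicts:", conflicts // 2)
```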
Abstract:
This paper presents an investigation into applying case-based reasoning (CBR) to multiple heterogeneous case bases using agents. The adaptive CBR process and the architecture of the system are presented, and a case study is used to illustrate and evaluate the approach. The process of creating and maintaining the dynamic data structures is discussed. The similarity metrics employed by the system support the optimisation of the collaboration between the agents, which is based on the use of a blackboard architecture. The blackboard architecture is shown to support efficient collaboration between the agents towards an efficient overall CBR solution, while case-based reasoning methods allow the overall system to adapt and “learn” new collaborative strategies for achieving the aims of the overall CBR problem-solving process.
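The abstract does not specify the similarity metrics, so the sketch below is only a generic illustration of the kind of weighted-feature similarity often used for case retrieval, here scoring a query against several invented agent-held case bases and returning the best match. The feature names, weights and cases are fabricated for illustration and do not come from the paper.

```python
# Generic illustration of weighted-feature similarity for case retrieval; the
# features, weights and cases are invented and do not come from the paper.
def similarity(query, case, weights):
    score, total = 0.0, 0.0
    for feature, weight in weights.items():
        q, c = query.get(feature), case.get(feature)
        if q is None or c is None:
            continue                      # ignore features missing from either
        if isinstance(q, (int, float)):
            sim = 1.0 - min(abs(q - c) / (abs(q) + abs(c) + 1e-9), 1.0)
        else:
            sim = 1.0 if q == c else 0.0  # exact match for symbolic features
        score += weight * sim
        total += weight
    return score / total if total else 0.0

def retrieve(query, case_bases, weights):
    # Each agent could score its own (heterogeneous) case base locally and
    # post only its best candidate to a shared blackboard.
    candidates = [(similarity(query, case, weights), name, case)
                  for name, base in case_bases.items() for case in base]
    return max(candidates, key=lambda t: t[0])

case_bases = {
    "agent_a": [{"room_height": 2.4, "ventilation": "natural", "fuel": "wood"}],
    "agent_b": [{"room_height": 3.0, "ventilation": "forced", "fuel": "plastic"}],
}
weights = {"room_height": 0.5, "ventilation": 0.3, "fuel": 0.2}
query = {"room_height": 2.5, "ventilation": "natural", "fuel": "wood"}
print(retrieve(query, case_bases, weights))
```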
Abstract:
This paper describes a highly flexible component architecture, primarily designed for automotive control systems, that supports distributed, dynamically-configurable, context-aware behaviour. The architecture enforces a separation of design-time and run-time concerns, enabling almost all decisions concerning run-time composition and adaptation to be deferred beyond deployment. Dynamic context management contributes to flexibility. The architecture is extensible and can embed potentially many different self-management decision technologies simultaneously. The mechanism that implements the run-time configuration has been designed to be very robust, automatically and silently handling problems arising from the evaluation of self-management logic and ensuring that, in the worst case, the dynamic aspects of the system collapse down to static behaviour in totally predictable ways.
Abstract:
This paper proposes a vehicular control system architecture that supports self-configuration. The architecture is based on the dynamic mapping of processes and services to resources to meet the challenges of future demanding use scenarios, in which systems must be flexible to exhibit context-aware behaviour and to permit customization. The architecture comprises a number of low-level services that provide the required system functionalities, which include automatic discovery and incorporation of new devices, self-optimisation to make the best use of the available processing, storage and communication resources, and self-diagnostics. The benefits and challenges of dynamic configuration and the automatic inclusion of users' consumer electronic (CE) devices are briefly discussed. The dynamic configuration and control-theoretic technologies used are described in outline, and the way in which the demands of highly flexible dynamic configuration and highly robust operation are met simultaneously, without compromise, is explained. A number of generic use cases have been identified, each with several specific use-case scenarios. One generic use case is described to provide an insight into the extent of the flexible reconfiguration facilitated by the architecture.
Abstract:
This short position paper considers issues in developing a data architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and for the development of the distinction between things and computers. The paper makes a strong argument for avoiding reinvention of the wheel: approaches to distributed heterogeneous data architectures, and the lessons learned from that work, should be reused and applied to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.
Abstract:
Does art connect the individual psyche to history and culture? Psyche and the Arts challenges existing ideas about the relationship between Jung and art, and offers exciting new dimensions to key issues such as the role of the image in popular culture and the division of psyche and matter in art form. Divided into three sections - Getting into Art, Challenging the Critical Space and Interpreting Art in the World - the text shows how Jungian ideas can work with the arts to illuminate both psychological theory and aesthetic response. Psyche and the Arts offers new critical visions of literature, film, music, architecture and painting as something alive in the experience of creators and audiences, challenging previous Jungian criticism. This approach demonstrates Jung’s own belief that art is a healing response to collective cultural norms. This diverse yet focused collection from international contributors invites the reader to seek personal and cultural value in the arts, and will be essential reading for Jungian analysts, trainees and those more generally interested in the arts. [From the Publisher]