16 results for cloud-based computing

in Greenwich Academic Literature Archive - UK


Relevance:

90.00%

Abstract:

This paper describes work towards the deployment of flexible self-management into real-time embedded systems. A challenging project which focuses specifically on the development of a dynamic, adaptive automotive middleware is described, and the specific self-management requirements of this project are discussed. These requirements have been identified through the refinement of a wide-ranging set of use cases requiring context-sensitive behaviours. A sample of these use cases is presented to illustrate the extent of the demands for self-management. The strategy adopted to achieve self-management, based on the use of policies, is presented. The embedded and real-time nature of the target system imposes the constraints that dynamic adaptation capabilities must not require changes to the run-time code (except during hot update of complete binary modules), that adaptation decisions must have low latency, and, because the target platforms are resource-constrained, that the self-management mechanism must have low resource requirements (especially in terms of processing and memory). Policy-based computing is thus an ideal candidate for achieving self-management, because the policy itself is loaded at run-time and can be replaced or changed in the future in the same way that a data file is loaded. Policies represent a relatively low-complexity and low-risk means of achieving self-management, with low run-time costs. Policies can be stored internally in ROM (such as default policies) as well as externally to the system. The architecture of a designed-for-purpose, powerful yet lightweight policy library is described. A suitable evaluation platform, supporting the whole life-cycle of feasibility analysis, concept evaluation, development, rigorous testing and behavioural validation, has been devised and is described.
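
The abstract gives no implementation detail, so the following minimal Python sketch is purely illustrative of the central idea: a policy that is loaded at run-time and replaced like a data file, rather than compiled into the application. All names (Policy, evaluate, the JSON rule format) are assumptions, not the project's actual library.

```python
# Hypothetical sketch: a policy evaluated at run-time and hot-swappable
# like a data file. Names and formats are illustrative only.
import json

class Policy:
    """Holds condition -> action rules loaded from an external source."""
    def __init__(self, rules):
        self.rules = rules  # list of {"condition": {...}, "action": str}

    @classmethod
    def load(cls, text):
        # Policies are read like data files, so no recompilation is needed.
        return cls(json.loads(text))

    def evaluate(self, context):
        # Low-latency decision: first matching rule wins.
        for rule in self.rules:
            if all(context.get(k) == v for k, v in rule["condition"].items()):
                return rule["action"]
        return "default"

# A default policy could live in ROM; an update replaces it at run-time.
policy = Policy.load('[{"condition": {"load": "high"}, "action": "shed_tasks"}]')
print(policy.evaluate({"load": "high"}))   # -> shed_tasks
policy = Policy.load('[{"condition": {"load": "high"}, "action": "migrate"}]')
print(policy.evaluate({"load": "high"}))   # -> migrate (hot-swapped)
```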

Relevance:

80.00%

Abstract:

This paper describes work towards the deployment of self-managing capabilities into an advanced middleware for automotive systems. The middleware will support a range of futuristic use-cases requiring context-awareness and dynamic system configuration. Several use-cases are described and their specific context-awareness requirements identified. The discussion is accompanied by a justification for the selection of policy-based computing as the autonomics technique to drive the self-management. The specific policy technology to be deployed is described briefly, with a focus on those of its features that are of direct relevance to the middleware project. A selected use-case is explored in depth to illustrate the extent of dynamic behaviour achievable in the proposed middleware architecture, which is composed of several policy-configured services. An early demonstration application which facilitates concept evaluation is presented, and a sequence of typical device-discovery events is worked through.

Relevance:

80.00%

Abstract:

This paper describes a methodology for embedding dynamic behaviour into software components. The implications and system architecture requirements to support this adaptivity are discussed. This work is part of a European Commission funded and industry supported project to produce a reconfigurable middleware for use in automotive systems. Such systems must be trustable against illegal internal behaviour and against activity with external origins (from additional devices, for example). Policy-based computing is used here as an example of embedded logic. A key contribution of this work is the way in which the static and dynamic aspects of the system are interfaced, such that the behaviour can be changed very flexibly (even during run-time), without modification, recompilation or redeployment of the embedded application code. An implementation of these concepts is presented, focussing on achieving trust in the use of dynamic behaviour.
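
As a hypothetical illustration of the static/dynamic interfacing described (not the project's actual design), the sketch below fixes a decision point in the compiled code while allowing the behaviour behind it to be swapped at run-time:

```python
# Hypothetical sketch: the embedded application calls a fixed decision
# point, while the logic behind it is replaceable at run-time. All names
# here are illustrative, not the project's API.

class DecisionPoint:
    """Static interface compiled into the application code."""
    def __init__(self, behaviour):
        self._behaviour = behaviour          # the dynamic part

    def replace(self, behaviour):
        # Behaviour changes without recompiling or redeploying the caller.
        self._behaviour = behaviour

    def decide(self, context):
        return self._behaviour(context)

# The application only ever calls decide(); behaviour is data, not code edits.
dp = DecisionPoint(lambda ctx: "allow" if ctx["source"] == "internal" else "deny")
print(dp.decide({"source": "external"}))     # -> deny
dp.replace(lambda ctx: "deny")               # tightened policy at run-time
print(dp.decide({"source": "internal"}))     # -> deny
```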

Relevance:

30.00%

Abstract:

The most common parallelisation strategy for many Computational Mechanics (CM) applications which use structured meshes (typified by Computational Fluid Dynamics (CFD) applications) involves a 1D partition based upon slabs of cells. However, many CFD codes employ pipeline operations in their solution procedure. For parallelised versions of such codes to scale well, they must employ two (or more) dimensional partitions. This paper describes an algorithmic approach to multi-dimensional mesh partitioning in code parallelisation, its implementation in a toolkit for almost automatically transforming scalar codes to parallel form, and its testing on a range of ‘real-world’ FORTRAN codes. The concept of multi-dimensional partitioning is straightforward, but non-trivial to represent as a sufficiently generic algorithm so that it can be embedded in a code transformation tool. The results of the tests on these ‘real-world’ codes demonstrate clear improvements in parallel performance and scalability (over a 1D partition). This is matched by a huge reduction in the time required to develop the parallel versions – from weeks/months when hand coded down to hours/days with the toolkit.
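
As a rough illustration of the partitioning concept (not the toolkit's algorithm), the following Python sketch divides an NX x NY structured mesh over a 2D processor grid rather than into 1D slabs:

```python
# Illustrative sketch: splitting an NX x NY structured mesh over a
# px x py processor grid, so pipeline sweeps in either direction can
# still keep all processors busy.

def partition_2d(nx, ny, px, py):
    """Return the (i, j) cell ranges owned by each processor in the grid."""
    def split(n, p):
        # Divide n cells into p near-equal contiguous ranges.
        base, extra = divmod(n, p)
        ranges, start = [], 0
        for r in range(p):
            size = base + (1 if r < extra else 0)
            ranges.append((start, start + size))
            start += size
        return ranges
    xr, yr = split(nx, px), split(ny, py)
    return {(i, j): (xr[i], yr[j]) for i in range(px) for j in range(py)}

# A 100 x 80 mesh over a 4 x 2 processor grid:
for proc, (xs, ys) in sorted(partition_2d(100, 80, 4, 2).items()):
    print(proc, "cells i in", xs, ", j in", ys)
```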

Relevance:

30.00%

Abstract:

The manufacture of materials products involves the control of a range of interacting physical phenomena. The material to be used is synthesised and then manipulated into some component form. The structure and properties of the final component are influenced both by interactions of continuum-scale phenomena and by those at the atomistic scale. Moreover, during the processing phase there are some properties that cannot be measured (typically across the liquid-solid phase change). However, there is potential to derive, from atomistic-scale simulations, properties and other features that are of key importance at the continuum scale. Some of the issues that need to be resolved in this context focus upon computational techniques and software tools facilitating: (i) multiphysics modelling at the continuum scale; (ii) the interaction and appropriate degrees of coupling between the atomistic scale, through the microstructure, to the continuum scale; and (iii) the exploitation of high-performance parallel computing power, delivering simulation results in a practical time period. This paper discusses some of the attempts to address each of the above issues, particularly in the context of materials processing for manufacture.

Relevance:

30.00%

Abstract:

This paper describes the architecture of the case based reasoning (CBR) component of Smartfire, a fire field modelling tool for use by members of the Fire Safety Engineering community who are not expert in modelling techniques. The CBR system captures the qualitative reasoning of an experienced modeller in the assessment of room geometries so as to set up the important initial parameters of the problem. The system relies on two important reasoning principles obtained from the expert: 1) there is a natural hierarchical retrieval mechanism which may be employed; and 2) much of the reasoning on a qualitative level is linear in nature, although the computational solution of the problem is non-linear. The paper describes the qualitative representation of geometric room information on which the system is based, and the principles on which the CBR system operates.
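
A minimal sketch of hierarchical retrieval of the kind described, with invented feature names, assuming coarse filtering by geometry class before a simple linear ranking:

```python
# Hypothetical sketch of hierarchical case retrieval: filter the case base
# level by level (coarse geometry class first, then finer features) before
# ranking the survivors. Features and values are invented for illustration.

cases = [
    {"shape": "rectangular", "openings": 1, "height": 2.4, "params": "setup_A"},
    {"shape": "rectangular", "openings": 2, "height": 3.0, "params": "setup_B"},
    {"shape": "L-shaped",    "openings": 1, "height": 2.4, "params": "setup_C"},
]

def retrieve(query, case_base, hierarchy=("shape", "openings")):
    # 1) Hierarchical filtering: each level narrows the candidate set,
    #    falling back to the previous level if nothing matches.
    candidates = case_base
    for attr in hierarchy:
        matching = [c for c in candidates if c[attr] == query[attr]]
        if matching:
            candidates = matching
    # 2) Simple linear ranking on a remaining numeric feature.
    return min(candidates, key=lambda c: abs(c["height"] - query["height"]))

print(retrieve({"shape": "rectangular", "openings": 1, "height": 2.5}, cases))
```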

Relevance:

30.00%

Abstract:

This paper presents a framework for Historical Case-Based Reasoning (HCBR) which allows the expression of both relative and absolute temporal knowledge, representing case histories in the real world. The formalism is founded on a general temporal theory that accommodates both points and intervals as primitive time elements. A case history is formally defined as a collection of (time-independent) elemental cases, together with its corresponding temporal reference. Case history matching is two-fold, i.e., there are two similarity values that need to be computed: the non-temporal similarity degree and the temporal similarity degree. On the one hand, based on elemental case matching, the non-temporal similarity degree between case histories is defined by means of computing the unions and intersections of the involved elemental cases. On the other hand, by means of the graphical representation of temporal references, the temporal similarity degree in case history matching is transformed into conventional graph similarity measurement.
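
One plausible reading of the non-temporal similarity degree, computed from unions and intersections of matched elemental cases, is a Jaccard-style measure; the sketch below assumes this reading and is not the paper's exact formula:

```python
# Assumed reading of the abstract: two case histories as sets of matched
# elemental cases, compared by intersection over union.

def non_temporal_similarity(history_a, history_b):
    a, b = set(history_a), set(history_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

h1 = {"fever", "rash", "headache"}
h2 = {"fever", "headache", "nausea"}
print(non_temporal_similarity(h1, h2))   # 2 shared of 4 total -> 0.5
```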

Relevance:

30.00%

Abstract:

From model geometry creation to model analysis, the stages in between, such as mesh generation, are the most manpower-intensive phase in a mesh-based computational mechanics simulation process; the model analysis itself, on the other hand, is the most computing-intensive phase. Advanced computational hardware and software have significantly reduced the computing time, and, more importantly, the trend is downward. With the kinds of models envisaged, which are larger, more complex in geometry and modelling, and multiphysics, there is no clear trend that the manpower-intensive phase will decrease significantly in time; in the present way of operation it is more likely to increase with model complexity. In this paper we address this dilemma through collaborating components for models in electronic packaging applications.

Relevance:

30.00%

Abstract:

Parallel processing techniques have been used in the past to provide high-performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a Local Area Network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs: for example, a problem composed of ~100,000 cells would run 9.3 times faster on a network of twelve 800 MHz PCs than on a single 800 MHz PC. It was also found that a network of eight 3.2 GHz Pentium 4 PCs would run 7.04 times faster than a single 3.2 GHz Pentium computer. A dynamic load balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the mutual impact between the parallel processing task and other computer users on the network was minimized.
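
As an illustrative sketch of the load-balancing idea (not the article's scheme), work can be apportioned in proportion to each machine's speed on a heterogeneous network:

```python
# Illustrative static form of the load-balancing idea: size each PC's
# share of the cells in proportion to its measured speed, so faster
# machines in a heterogeneous network receive more work.

def balance(total_cells, speeds_mhz):
    total_speed = sum(speeds_mhz)
    shares = [round(total_cells * s / total_speed) for s in speeds_mhz]
    shares[-1] += total_cells - sum(shares)   # absorb rounding remainder
    return shares

# A heterogeneous network: two 3.2 GHz PCs and four 800 MHz PCs.
print(balance(100_000, [3200, 3200, 800, 800, 800, 800]))
```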

Relevance:

30.00%

Abstract:

This paper presents work towards generic policy toolkit support for autonomic computing systems in which the policies themselves can be adapted dynamically and automatically. The work is motivated by three needs: the need for longer-term policy-based adaptation, where the policy itself is dynamically adapted to continually maintain or improve its effectiveness despite changing environmental conditions; the need to enable practitioners who are not autonomics experts to embed self-managing behaviours with low cost and risk; and the need for adaptive policy mechanisms that are easy to deploy into legacy code. A policy definition language is presented, designed to permit powerful expression of self-managing behaviours. The language is very flexible, through the use of simple yet expressive syntax and semantics, and facilitates a very diverse policy behaviour space through both hierarchical and recursive uses of language elements. A prototype library implementation of the policy support mechanisms is described. The library reads and writes policies in well-formed XML. The implementation extends the state of the art in policy-based autonomics through innovations which include support for multiple policy versions of a given policy type, multiple configuration templates, and meta-policies to dynamically select between policy instances and templates. Most significantly, the scheme supports hot-swapping between policy instances. To illustrate the feasibility and generalised applicability of these tools, two dissimilar example deployment scenarios are examined. The first is taken from an exploratory implementation of self-managing parallel processing, and is used to demonstrate the simple and efficient use of the tools. The second example demonstrates more advanced functionality, in the context of an envisioned multi-policy stock trading scheme which is sensitive to environmental volatility.
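
A hypothetical sketch of the multiple-version and meta-policy idea follows; the XML schema and selection rule are invented for illustration and are not the library's actual format:

```python
# Hypothetical sketch: several versions of one policy type are held at
# once, and a meta-policy selects the active instance, enabling
# hot-swapping. The XML format shown is invented.
import xml.etree.ElementTree as ET

POLICIES = """
<policies type="trading">
  <policy version="calm">buy_and_hold</policy>
  <policy version="volatile">hedge</policy>
  <meta>volatile if volatility &gt; 0.3 else calm</meta>
</policies>
"""

root = ET.fromstring(POLICIES)
versions = {p.get("version"): p.text for p in root.findall("policy")}

def active_action(volatility):
    # The meta-policy rule (here applied in code) selects a policy
    # instance from the current environment measure.
    version = "volatile" if volatility > 0.3 else "calm"
    return versions[version]

print(active_action(0.1))   # -> buy_and_hold
print(active_action(0.5))   # -> hedge (hot-swapped by the meta-policy)
```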

Relevance:

30.00%

Abstract:

This paper presents a policy definition language which forms part of a generic policy toolkit for autonomic computing systems in which the policies themselves can be modified dynamically and automatically. Targeted enhancements to the current state of practice include: policy self-adaptation, where the policy itself is dynamically modified to match environmental conditions; improved support for developers who are not autonomics experts; and facilitating easy deployment of adaptive policies into legacy code. The policy definition language permits powerful expression of self-managing behaviours and facilitates a diverse policy behaviour space. Features include support for multiple versions of a given policy type, multiple configuration templates, and meta-policies to dynamically select between policy instances. An example deployment scenario illustrates advanced functionality in the context of a multi-policy stock trading system which is sensitive to environmental volatility.

Relevance:

30.00%

Abstract:

Fractal video compression is a relatively new video compression method. Its attraction is due to its high compression ratio and simple decompression algorithm, but its computational complexity is high, and as a result parallel algorithms on high-performance machines become one way out. In this study we partition the matching search, which accounts for the majority of the work in a fractal video compression process, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting technology, based on a local area network consisting of loosely coupled PCs. Experimental results show that the parallel algorithm is able to achieve a high speedup in these distributed environments.
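
The study's implementations used DCOM and .NET Remoting over a LAN of PCs; as a stand-in, the Python sketch below shows the same decomposition idea, splitting the matching search into independent tasks executed by a pool of workers:

```python
# Illustrative only: Python's multiprocessing stands in for the DCOM /
# .NET Remoting environments, showing how the matching search can be
# split into independent per-range-block tasks.
from multiprocessing import Pool

def best_match(args):
    # Toy stand-in for the fractal matching search: for one range block,
    # scan all domain blocks for the minimum squared difference.
    range_block, domain_blocks = args
    return min(range(len(domain_blocks)),
               key=lambda i: (domain_blocks[i] - range_block) ** 2)

if __name__ == "__main__":
    domains = list(range(0, 100, 5))
    ranges = [7, 42, 88, 13]
    tasks = [(r, domains) for r in ranges]
    with Pool() as pool:                      # workers ~ loosely coupled PCs
        print(pool.map(best_match, tasks))    # best domain index per range
```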

Relevance:

30.00%

Abstract:

This paper introduces a mechanism for representing and recognizing case history patterns with rich internal temporal aspects. A case history is characterized as a collection of elemental cases, as in conventional case-based reasoning systems, together with the corresponding temporal constraints, which can be relative and/or absolute. A graphical representation for case histories is proposed, as a directed, partially weighted and labeled simple graph. In terms of such a graphical representation, an eigen-decomposition graph matching algorithm is proposed for recognizing case history patterns.
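
A minimal sketch of the proposed representation, with invented labels: elemental cases as nodes, and temporal constraints as directed, optionally weighted edges:

```python
# Assumed encoding for illustration: each elemental case is a labelled
# node; directed edges carry temporal constraints, weighted when an
# absolute offset is known and unweighted when only relative.

nodes = {"e1": "admission", "e2": "fever", "e3": "treatment"}

# Directed, partially weighted, labelled edges: (from, to) -> constraint.
edges = {
    ("e1", "e2"): {"relation": "before", "weight": None},   # relative only
    ("e2", "e3"): {"relation": "before", "weight": 2.0},    # 2 days, absolute
}

for (u, v), c in edges.items():
    gap = f" by {c['weight']}" if c["weight"] is not None else ""
    print(f"{nodes[u]} {c['relation']}{gap} {nodes[v]}")
```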

Relevance:

30.00%

Abstract:

This paper presents innovative work in the development of policy-based autonomic computing. The core of the work is a powerful and flexible policy-expression language, AGILE, which facilitates run-time adaptable policy configuration of autonomic systems. AGILE also serves as an integrating platform for other self-management technologies, including signal processing, automated trend analysis and utility functions. Each of these technologies has specific advantages and applicability to different types of dynamic adaptation. The AGILE platform enables the different technologies to interoperate seamlessly, each performing various aspects of self-management within a single application. The various technologies are implemented as object components. Self-management behaviour is specified using the policy language semantics to bind the various components together as required. Since the policy semantics support run-time re-configuration, the self-management architecture is dynamically composable. Additional benefits include the standardisation of the application programmer interface, terminology and semantics; moreover, only a single point of embedding is required.

Relevance:

30.00%

Abstract:

In this paper, we critically examine a special class of graph matching algorithms that follow the approach of node-similarity measurement. A high-level algorithm framework, namely the node-similarity graph matching framework (NSGM framework), is proposed, from which many existing graph matching algorithms can be subsumed, including the eigen-decomposition method of Umeyama, the polynomial-transformation method of Almohamad, the hubs and authorities method of Kleinberg, and the Kronecker product successive projection methods of van Wyk, etc. In addition, improved algorithms can be developed from the NSGM framework with respect to the corresponding results in graph theory. As an observation, it is pointed out that, in general, any algorithm subsumed by the NSGM framework fails to work well for graphs with non-trivial automorphism structure.
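
The abstract names the eigen-decomposition method of Umeyama as one instance of the framework; the sketch below is a simplified version of that style of node-similarity matching for undirected graphs of equal size (the details are assumptions, see the original papers for the exact formulations):

```python
# Simplified Umeyama-style node-similarity matching (assumed reading).
import numpy as np
from scipy.optimize import linear_sum_assignment

def umeyama_match(A, B):
    """Return a node correspondence between adjacency matrices A and B."""
    _, Ua = np.linalg.eigh(A)            # symmetric eigen-decomposition
    _, Ub = np.linalg.eigh(B)
    S = np.abs(Ua) @ np.abs(Ub).T        # node-similarity matrix
    rows, cols = linear_sum_assignment(-S)   # maximise total similarity
    return dict(zip(rows, cols))

# A 3-node path graph and a relabelled copy of it:
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
P = np.eye(3)[[2, 0, 1]]                 # permutation of node labels
B = P @ A @ P.T
# Note: the path graph has a reflective automorphism, so two
# correspondences score equally well, illustrating the abstract's
# closing observation about non-trivial automorphism structure.
print(umeyama_match(A, B))
```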