13 results for Time-sharing computer systems

in Greenwich Academic Literature Archive - UK


Relevance:

100.00%

Publisher:

Abstract:

The demands of the process of engineering design, particularly for structural integrity, have exploited computational modelling techniques and software tools for decades. Frequently, the shape of structural components or assemblies is determined to optimise the flow distribution or heat transfer characteristics, and to ensure that the structural performance in service is adequate. From the perspective of computational modelling these activities are typically separated into:
• fluid flow and the associated heat transfer analysis (possibly with chemical reactions), based upon Computational Fluid Dynamics (CFD) technology
• structural analysis, again possibly with heat transfer, based upon finite element analysis (FEA) techniques.

Relevance:

100.00%

Publisher:

Abstract:

This work explores the impact of response time distributions on high-rise building evacuation. The analysis utilises response times extracted from printed accounts and interviews of evacuees from the WTC North Tower evacuation of 11 September 2001. Evacuation simulations produced using these “real” response time distributions are compared with simulations produced using instant and engineering response time distributions. Results suggest that while typical engineering approximations to the response time distribution may produce reasonable evacuation times for up to 90% of the building population, using this approach may underestimate total evacuation times by as much as 61%. These observations are applicable to situations involving large high-rise buildings in which travel times are generally expected to be greater than response times.
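
For illustration only, a minimal C sketch of the kind of additive timing model such comparisons rest on (total egress time per occupant = response time + travel time). The population size, travel times and response-time distributions below are invented for the sketch and are not taken from the WTC study.

```c
/* Illustrative sketch only: compares total (last-out) evacuation time
 * under different assumed response-time distributions, using the simple
 * additive model  egress_i = response_i + travel_i.  All numbers are
 * invented for illustration and are not taken from the WTC study.      */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define POPULATION 1000
#define TWO_PI 6.283185307179586

/* crude normal variate via the Box-Muller transform */
static double normal(double mean, double sd)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return mean + sd * sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);
}

/* time at which the last occupant gets out, for a given response model */
static double evacuation_time(double (*response)(void))
{
    double worst = 0.0;
    for (int i = 0; i < POPULATION; i++) {
        double t = response() + fmax(0.0, normal(900.0, 200.0)); /* travel */
        if (t > worst)
            worst = t;
    }
    return worst;
}

static double instant(void)     { return 0.0; }                             /* no delay        */
static double engineering(void) { return fmax(0.0, normal(60.0, 20.0)); }   /* short, narrow   */
static double observed(void)    { return fmax(0.0, normal(300.0, 240.0)); } /* long-tail proxy */

int main(void)
{
    srand(42);
    printf("instant response     : %.0f s\n", evacuation_time(instant));
    printf("engineering response : %.0f s\n", evacuation_time(engineering));
    printf("'real' response      : %.0f s\n", evacuation_time(observed));
    return 0;
}
```

Even in this toy model, a heavier-tailed response-time distribution pushes out the last-out evacuation time although travel time dominates for most occupants, which is the qualitative effect the abstract reports.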

Relevance:

100.00%

Publisher:

Abstract:

This paper examines the influence of exit availability on evacuation time for a narrow body aircraft under certification trial conditions using computer simulation. A narrow body aircraft which has previously passed the certification trial is used as the test configuration. While maintaining the certification requirement of 50% of the available exits, six different exit configurations are examined. These include the standard certification configuration (one exit from each exit pair) and five other exit configurations based on commonly occurring exit combinations found in accidents. These configurations are based on data derived from the AASK database and the evacuation simulations are performed using the airEXODUS evacuation simulation software. The results show that the certification practice of using half the available exits predominantly down one side of the aircraft is neither statistically relevant nor challenging. For the aircraft cabin layout examined, the exit configuration used in the certification trial produces the shortest egress times. Furthermore, three of the six exit combinations investigated result in predicted egress times in excess of 90 seconds, suggesting that the aircraft would not satisfy the certification requirement under these conditions.

Relevance:

100.00%

Publisher:

Abstract:

This paper examines the influence of exit availability on evacuation time for narrow body aircraft under certification trial conditions using computer simulation. A narrow body aircraft which has previously passed the certification trial is used as the test configuration. While maintaining the certification requirement of 50% of the available exits, six different configurations are examined. These include the standard certification configuration and five other exit configurations based on commonly occurring exit combinations found in accidents. These configurations are based on data derived from the AASK database and the evacuation simulations are performed using the airEXODUS evacuation software. The results show that the certification practice of using half of the available exits predominantly down one side of the aircraft is neither statistically relevant nor challenging. For the aircraft cabin layout examined, the exit configuration used in the certification trial produces the shortest egress times. Furthermore, three of the six exit combinations investigated result in predicted egress times in excess of 90 seconds, suggesting that the aircraft would not satisfy the certification requirement under these conditions.

Relevance:

100.00%

Publisher:

Abstract:

Generally speaking, the term temporal logic refers to any system of rules and symbolism for representing and reasoning about propositions qualified in terms of time. In computer science, particularly in the domain of Artificial Intelligence, there are mainly two known approaches to the representation of temporal information: modal logic approaches, including tense logic and hybrid temporal logic, and predicate logic approaches, including the temporal argument method and reified temporal logic. On one hand, while tense logic, hybrid temporal logic and the temporal argument method enjoy formal theoretical foundations, their expressiveness has been criticised as not powerful enough for representing general temporal knowledge; on the other hand, although reified temporal logic provides greater expressive power, most of the current systems following the temporal reification approach lack complete and sound axiomatic theories. With these observations in mind, a new reified temporal logic with clear syntax and semantics in terms of a sound and complete axiomatic formalism is introduced in this paper, which retains all the expressive power of temporal reification.
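
As a purely illustrative sketch (names, facts and intervals are invented, and this is not the axiomatic system the paper develops), the contrast between the temporal-argument style and a reified style can be shown in a few lines of C, where a proposition becomes a term that a meta-level HOLDS relation pairs with an interval:

```c
/* Illustrative sketch only: "temporal argument" style versus a reified
 * style, in which the proposition itself is a term related to an
 * interval by a meta-level predicate such as HOLDS.  All names invented. */
#include <stdio.h>
#include <string.h>

typedef struct { int begin, end; } Interval;

/* temporal-argument style: time appears as an ordinary extra argument */
static int on_during(const char *x, const char *y, Interval t)
{
    /* toy fact base: block A is on block B from time 3 to time 8 */
    return strcmp(x, "A") == 0 && strcmp(y, "B") == 0
        && t.begin >= 3 && t.end <= 8;
}

/* reified style: the proposition is data, and HOLDS is a relation
 * between that propositional term and an interval                   */
typedef struct { const char *relation, *arg1, *arg2; } Prop;

static int holds(Prop p, Interval t)
{
    if (strcmp(p.relation, "on") == 0)
        return on_during(p.arg1, p.arg2, t);
    return 0;
}

/* because propositions are now terms, temporal relations over the
 * intervals they hold on (e.g. an Allen-style "before") can be stated */
static int before(Interval i, Interval j) { return i.end < j.begin; }

int main(void)
{
    Prop on_ab = { "on", "A", "B" };
    Interval i = { 4, 6 }, j = { 9, 12 };
    printf("HOLDS(on(A,B), [4,6]) = %d\n", holds(on_ab, i));
    printf("before([4,6], [9,12]) = %d\n", before(i, j));
    return 0;
}
```

The extra expressive power comes from the reified step: once assertions are terms, relations between them (ordering, quantification over propositions, and so on) can be stated directly, which is what the paper's axiomatisation is intended to make sound and complete.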

Relevance:

100.00%

Publisher:

Abstract:

Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem for distributing unstructured meshes onto parallel computers. They usually combine a graph contraction algorithm together with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut edge weight in the graph with the aim of minimising the parallel communication overhead, but recently there has been a perceived need to take into account the communications network of the parallel machine. For example, the increasing use of SMP clusters (systems of multiprocessor compute nodes with very fast intra-node communications but relatively slow inter-node networks) suggests the use of hierarchical network models. Indeed, this requirement is exacerbated by early experiments with meta-computers (multiple supercomputers combined together, in extreme cases over inter-continental networks). In this paper, therefore, we modify a multilevel algorithm in order to minimise a cost function based on a model of the communications network. Several network models and variants of the algorithm are tested and we establish that it is possible to successfully guide the optimisation to reflect the chosen architecture.
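
For illustration, a small C sketch of the kind of cost function such a network-aware refinement step might minimise, with an invented graph and an invented two-level (SMP-style) network model; this is not the paper's implementation:

```c
/* Illustrative sketch only (not the paper's implementation): a cost
 * function a network-aware refinement step might minimise.  Each cut
 * edge is charged its weight times the "distance" between the two
 * processors that own its end points, taken from a simple hierarchical
 * model of an SMP cluster.  Graph, weights and model are all invented. */
#include <stdio.h>

#define NV 8   /* vertices */
#define NE 8   /* edges    */

static const int edge[NE][3] = {           /* {u, v, weight} */
    {0,1,5},{2,3,5},{4,5,5},{6,7,5},       /* heavy edges    */
    {1,2,1},{3,4,1},{5,6,1},{7,0,1}        /* light edges    */
};

/* hypothetical network model: processors {0,1} share one SMP node and
 * {2,3} share another; intra-node links are cheap, inter-node ones not */
static int net_dist(int p, int q)
{
    if (p == q)         return 0;
    if (p / 2 == q / 2) return 1;
    return 10;
}

/* partition cost = sum over cut edges of weight * network distance */
static int cost(const int part[NV])
{
    int total = 0;
    for (int e = 0; e < NE; e++)
        total += edge[e][2] * net_dist(part[edge[e][0]], part[edge[e][1]]);
    return total;
}

int main(void)
{
    /* two balanced 4-way partitions with identical cut edge weight */
    int oblivious[NV] = {0,0,2,2,1,1,3,3};  /* ignores the network        */
    int aware[NV]     = {0,0,1,1,2,2,3,3};  /* keeps light cuts in a node */
    printf("network-oblivious cost: %d\n", cost(oblivious));
    printf("network-aware cost:     %d\n", cost(aware));
    return 0;
}
```

Both layouts cut exactly the same edge weight, so a purely cut-based refinement step cannot distinguish them; weighting each cut edge by the network distance is what lets the optimisation prefer the mapping suited to the chosen architecture.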

Relevance:

100.00%

Publisher:

Abstract:

This paper describes an interactive parallelisation toolkit that can be used to generate parallel code suitable for either a distributed memory system (using message passing) or a shared memory system (using OpenMP). This study focuses on how the toolkit is used to parallelise a complex heterogeneous ocean modelling code within a few hours for use on a shared memory parallel system. The generated parallel code is essentially the serial code with OpenMP directives added to express the parallelism. The results show that substantial gains in performance can be achieved over the single thread version with very little effort.
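
As a generic illustration (not the ocean modelling code and not the toolkit's actual output), the essential pattern is that the serial loop nest is left intact and the parallelism is expressed through an added OpenMP directive:

```c
/* Generic illustration only: a serial loop nest left unchanged apart
 * from an OpenMP directive added to express the parallelism.          */
#include <stdio.h>

#define NX 512
#define NY 512

static double field[NX][NY], smoothed[NX][NY];

int main(void)
{
    for (int i = 0; i < NX; i++)            /* some serial set-up */
        for (int j = 0; j < NY; j++)
            field[i][j] = i + 0.001 * j;

    /* the only change to the original serial code is the directive;
     * built without OpenMP support the pragma is ignored and the loop
     * simply runs serially                                            */
    #pragma omp parallel for collapse(2)
    for (int i = 1; i < NX - 1; i++)
        for (int j = 1; j < NY - 1; j++)
            smoothed[i][j] = 0.25 * (field[i-1][j] + field[i+1][j] +
                                     field[i][j-1] + field[i][j+1]);

    printf("sample value: %g\n", smoothed[NX/2][NY/2]);
    return 0;
}
```

Because the generated parallel code is essentially the serial code plus directives, the serial structure stays readable and maintainable, which is consistent with the small effort and worthwhile speed-ups the study reports; a distributed memory target would instead require the toolkit to insert message passing.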

Relevance:

100.00%

Publisher:

Abstract:

Computational analysis software is now widely accepted as a key industrial tool for plant design and process analysis. This is due in part to increased accuracy in the models, larger and faster computer systems and better graphical interfaces that allow easy use of the technology by engineers. The use of computational modelling to test new ideas and analyse current processes helps to take the guesswork out of industrial process design and offers attractive cost savings. An overview of computer-based modelling techniques as applied to the materials processing industry is presented and examples of their application are provided in the contexts of the mixing and refining of lead bullion and the manufacture of lead ingots.

Relevance:

100.00%

Publisher:

Abstract:

Abstract not available

Relevance:

50.00%

Publisher:

Abstract:

The performance of loadsharing algorithms for heterogeneous distributed systems is investigated by simulation. The systems considered are networks of workstations (nodes) which differ in processing power. Two parameters are proposed for characterising system heterogeneity, namely the variance and skew of the distribution of processing power among the network nodes. A variety of networks are investigated, with the same number of nodes and total processing power, but with the processing power distributed differently among the nodes. Two loadsharing algorithms are evaluated, at overall system loadings of 50% and 90%, using job response time as the performance metric. Comparison is made with the ideal situation of ‘perfect sharing’, where it is assumed that the communication delays are zero and that complete knowledge is available about job lengths and the loading at the different nodes, so that an arriving job can be sent to the node where it will be completed in the shortest time. The algorithms studied are based on those already in use for homogeneous networks, but were adapted to take account of system heterogeneity. Both algorithms take into account the differences in the processing powers of the nodes in their location policies, but differ in the extent to which they ‘discriminate’ against the slower nodes. It is seen that the relative performance of the two is strongly influenced by the system utilisation and the distribution of processing power among the nodes.
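
For illustration, a minimal C sketch of the 'perfect sharing' baseline described above, under its stated idealisations (zero communication delay, exact knowledge of job lengths and node loads). A further simplification in the sketch is that queued work never drains, and the node speeds and job lengths are invented:

```c
/* Illustrative sketch only: an arriving job is sent to the node where
 * it would complete soonest, given each node's speed and queued work.  */
#include <stdio.h>

#define NODES 4

static const double speed[NODES] = { 4.0, 2.0, 1.0, 1.0 };  /* heterogeneous powers */
static double backlog[NODES]     = { 0.0, 0.0, 0.0, 0.0 };  /* queued work per node */

/* pick the node on which this job would finish earliest */
static int best_node(double job_len)
{
    int best = 0;
    double best_t = (backlog[0] + job_len) / speed[0];
    for (int n = 1; n < NODES; n++) {
        double t = (backlog[n] + job_len) / speed[n];
        if (t < best_t) { best_t = t; best = n; }
    }
    return best;
}

int main(void)
{
    const double jobs[] = { 3.0, 1.0, 6.0, 2.0, 2.0, 4.0 };
    for (int j = 0; j < 6; j++) {
        int n = best_node(jobs[j]);
        double response = (backlog[n] + jobs[j]) / speed[n];
        backlog[n] += jobs[j];
        printf("job %d (length %.0f) -> node %d, response time %.2f\n",
               j, jobs[j], n, response);
    }
    return 0;
}
```

Even in this toy run the fastest node absorbs a disproportionate share of the work; spreading the same total processing power differently over the speed[] array changes that allocation, which is the heterogeneity effect the study's variance and skew parameters are intended to capture.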

Relevance:

50.00%

Publisher:

Abstract:

This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as a platform for development. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response-time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.

Relevance:

50.00%

Publisher:

Abstract:

This paper describes a methodology for deploying flexible dynamic configuration into embedded systems whilst preserving the reliability advantages of static systems. The methodology is based on the concept of decision points (DP) which are strategically placed to achieve fine-grained distribution of self-management logic to meet application-specific requirements. DP logic can be changed easily, and independently of the host component, enabling self-management behavior to be deferred beyond the point of system deployment. A transparent Dynamic Wrapper mechanism (DW) automatically detects and handles problems arising from the evaluation of self-management logic within each DP and ensures that the dynamic aspects of the system collapse down to statically defined default behavior to ensure safety and correctness despite failures. Dynamic context management contributes to flexibility, and removes the need for design-time binding of context providers and consumers, thus facilitating run-time composition and incremental component upgrade.
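
A minimal C sketch of the decision-point idea (names, context fields and logic are invented, not taken from the paper): replaceable self-management logic is evaluated behind a wrapper that collapses to a statically defined default whenever that logic fails.

```c
/* Illustrative sketch only: a decision point whose self-management
 * logic can be replaced at run time, wrapped so that any failure of
 * the dynamic logic falls back to a statically defined default.      */
#include <stdio.h>

typedef struct { double cpu_load; int battery_pct; } Context;

/* dynamic DP logic: returns a chosen configuration id, or -1 on failure */
typedef int (*dp_logic_fn)(const Context *ctx);

static int static_default(const Context *ctx)
{
    (void)ctx;
    return 0;                      /* safe, statically validated choice */
}

static int adaptive_logic(const Context *ctx)
{
    if (ctx->battery_pct < 0)      /* e.g. a context provider is missing */
        return -1;
    return ctx->cpu_load > 0.8 ? 2 : 1;
}

/* the "dynamic wrapper": evaluate the installed logic and fall back to
 * the static default if it fails, so behaviour stays predictable        */
static int decision_point(dp_logic_fn logic, const Context *ctx)
{
    int choice = logic ? logic(ctx) : -1;
    return choice >= 0 ? choice : static_default(ctx);
}

int main(void)
{
    Context ok  = { 0.9, 80 };
    Context bad = { 0.9, -1 };     /* context provider unavailable */
    printf("dynamic logic, good context: config %d\n",
           decision_point(adaptive_logic, &ok));
    printf("dynamic logic, bad context : config %d\n",
           decision_point(adaptive_logic, &bad));
    return 0;
}
```

Because the fallback path is ordinary static code, the component keeps the predictability of a static system even when the dynamic logic, or one of its context providers, misbehaves, which is the safety property the methodology claims.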

Relevance:

50.00%

Publisher:

Abstract:

This paper describes a highly flexible component architecture, primarily designed for automotive control systems, that supports distributed dynamically-configurable context-aware behaviour. The architecture enforces a separation of design-time and run-time concerns, enabling almost all decisions concerning run-time composition and adaptation to be deferred beyond deployment. Dynamic context management contributes to flexibility. The architecture is extensible, and can embed potentially many different self-management decision technologies simultaneously. The mechanism that implements the run-time configuration has been designed to be very robust, automatically and silently handling problems arising from the evaluation of self-management logic and ensuring that in the worst case the dynamic aspects of the system collapse down to static behavior in totally predictable ways.