Abstract:
Virtual manufacturing and design assessment increasingly involve the simulation of interacting phenomena, i.e. multi-physics, an activity which is computationally very intensive. This chapter describes an attempt to address the parallel issues associated with a multi-physics simulation approach based upon a range of compatible procedures operating on one mesh using a single database; the distinct physics solvers can operate separately or coupled on sub-domains of the whole geometric space. Moreover, the finite volume unstructured mesh solvers use different discretization schemes (and, in particular, different 'nodal' locations and control volumes). A two-level approach to the parallelization of this simulation software is described: the code is restructured into parallel form on the basis of the mesh partitioning alone, that is, without regard to the physics. At run time, however, the mesh is partitioned to achieve a load balance by considering the load per node/element across the whole domain; the latter is, of course, determined by the problem-specific physics at each location.
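The physics-weighted load balancing described above can be sketched as follows. This is a minimal greedy illustration under assumed data structures, not the chapter's actual partitioning scheme:

```python
def balance(weights, nparts):
    """Greedy sketch: assign each mesh element to the currently lightest
    partition, weighting elements by their physics-dependent load.
    (Illustrative only; real partitioners also minimise cut edges.)"""
    loads = [0.0] * nparts
    part = {}
    for elem, w in sorted(weights.items(), key=lambda x: -x[1]):
        p = loads.index(min(loads))   # lightest partition so far
        part[elem] = p
        loads[p] += w
    return part, loads

# elements solving heavier (e.g. coupled) physics carry larger weights
part, loads = balance({"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}, 2)
```

With these weights the two partitions end up with equal total load, even though they hold different numbers of elements.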
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem for distributing unstructured meshes onto parallel computers. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. To date these algorithms have been used almost exclusively to minimise the cut edge weight in the graph, with the aim of minimising the parallel communication overhead, but recently there has been a perceived need to take into account the communications network of the parallel machine. For example, the increasing use of SMP clusters (systems of multiprocessor compute nodes with very fast intra-node communications but relatively slow inter-node networks) suggests the use of hierarchical network models. Indeed, this requirement is exacerbated by early experiments with meta-computers (multiple supercomputers combined together, in extreme cases over inter-continental networks). In this paper, therefore, we modify a multilevel algorithm in order to minimise a cost function based on a model of the communications network. Several network models and variants of the algorithm are tested, and we establish that it is possible to successfully guide the optimisation to reflect the chosen architecture.
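The network-aware cost function can be sketched as follows: instead of counting cut edge weight uniformly, each cut edge is charged the modelled cost of the link between the processors hosting its endpoints. The function and cost matrix below are illustrative assumptions, not the paper's formulation:

```python
def partition_cost(edges, part, net_cost):
    """Sum the communication cost of all cut edges.

    edges    -- list of (u, v, weight) graph edges
    part     -- dict mapping vertex -> processor
    net_cost -- net_cost[p][q]: modelled cost of one unit of traffic
                between processors p and q (cheap intra-node, expensive
                inter-node on an SMP cluster)
    """
    total = 0
    for u, v, w in edges:
        p, q = part[u], part[v]
        if p != q:                      # only cut edges communicate
            total += w * net_cost[p][q]
    return total

# processors 0 and 1 share an SMP node (cost 1); processor 2 is remote (cost 10)
net = {0: {0: 0, 1: 1, 2: 10},
       1: {0: 1, 1: 0, 2: 10},
       2: {0: 10, 1: 10, 2: 0}}
edges = [("a", "b", 2), ("b", "c", 3), ("c", "d", 1)]
local  = partition_cost(edges, {"a": 0, "b": 0, "c": 1, "d": 1}, net)  # cut stays intra-node
remote = partition_cost(edges, {"a": 0, "b": 0, "c": 2, "d": 2}, net)  # cut crosses nodes
```

The same cut edge is ten times more expensive when it crosses the slow inter-node link, which is exactly what guides the optimisation towards the architecture.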
Abstract:
The factors that are driving the development and use of grids and grid computing, such as size, dynamic features, distribution and heterogeneity, are also pushing service quality issues to the forefront. These include performance, reliability and security. Although grid middleware can address some of these issues on a wider scale, it has also become imperative to ensure adequate service provision at the local level. Load sharing in clusters can contribute to the provision of a high quality service by exploiting both static and dynamic information. This paper presents a load sharing scheme that can satisfy grid computing requirements. It follows a proactive, non-preemptive and distributed approach. Load information is gathered continuously, before it is needed, and a task is allocated to the most appropriate node for execution. Performance and reliability are enhanced by the decentralised nature of the scheme and the symmetric roles of the nodes. In addition, the scheme exhibits transparency characteristics that facilitate integration with the grid.
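The allocation step, combining static and dynamic information, might look like the following sketch (the data layout and selection rule are assumptions for illustration, not the paper's scheme):

```python
def select_node(nodes):
    """Pick the execution node for a task from proactively cached load
    information: lowest dynamic load per unit of static capacity wins.

    nodes -- dict: name -> (static_capacity, dynamic_load)
    """
    return min(nodes, key=lambda n: nodes[n][1] / nodes[n][0])

# capacity is static (hardware), load is the continuously gathered dynamic part
cluster = {"a": (1.0, 0.9), "b": (2.0, 1.0), "c": (1.0, 0.6)}
best = select_node(cluster)
```

Node "b" wins here despite carrying the highest absolute load, because its larger capacity leaves it relatively least loaded.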
Abstract:
Existing election algorithms suffer from limited scalability. This limit stems from their communication design, which in turn stems from their fundamentally two-state behaviour. This paper presents a new election algorithm specifically designed to be highly scalable in broadcast networks whilst allowing any processing node to become coordinator with initially equal probability. To achieve this, careful attention has been paid to the communication design, and an additional state has been introduced. The design of the tri-state election algorithm has been motivated by the requirements analysis of a major research project to deliver robust, scalable distributed applications, including load sharing, in hostile computing environments in which it is common for processing nodes to be rebooted frequently without notice. The new election algorithm is based in part on a simple 'emergent' design. The science of emergence is of great relevance to developers of distributed applications because it describes how higher-level self-regulatory behaviour can arise from many participants following a small set of simple rules. The tri-state election algorithm is shown to have very low communication complexity, in which the number of messages generated remains loosely bounded regardless of scale for large systems; to be highly scalable, because nodes in the idle state do not transmit any messages; and, because of its self-organising characteristics, to be very stable.
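A toy simulation can convey the flavour of a three-state election. All names and mechanics below (random back-off slots, a single winning broadcast) are assumptions for illustration; this is not the published protocol:

```python
import random

IDLE, CANDIDATE, COORDINATOR = "idle", "candidate", "coordinator"

class Node:
    """Hypothetical tri-state node: idle nodes transmit nothing."""
    def __init__(self, name):
        self.name, self.state, self.slot = name, IDLE, None

    def coordinator_lost(self):
        # every node becomes a candidate with a random back-off slot,
        # giving each an initially equal chance of winning
        self.state = CANDIDATE
        self.slot = random.random()

    def hear_claim(self):
        # a broadcast claim silences all other candidates: they return
        # to idle, where they send no messages at all
        self.state = IDLE

def elect(nodes):
    for n in nodes:
        n.coordinator_lost()
    winner = min(nodes, key=lambda n: n.slot)   # earliest slot broadcasts first
    winner.state = COORDINATOR
    for n in nodes:
        if n is not winner:
            n.hear_claim()
    return winner

nodes = [Node(i) for i in range(20)]
winner = elect(nodes)
```

However many nodes participate, only one claim message "wins" and everyone else falls silent, which is the intuition behind the loosely bounded message count.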
Abstract:
This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered when a processing node is either overloaded or underloaded. The main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as its development platform. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
Abstract:
Optimisation in wireless sensor networks is necessary due to the resource constraints of individual devices, bandwidth limits of the communication channel, the relatively high probability of sensor failure, and the requirement constraints of the deployed applications in potentially highly volatile environments. This paper presents BioANS, a protocol designed to optimise a wireless sensor network for resource efficiency as well as to meet a requirement common to a whole class of WSN applications - namely that the sensor nodes are dynamically selected on some qualitative basis, for example the quality with which they can provide the required context information. The design of BioANS has been inspired by the communication mechanisms that have evolved in natural systems. The protocol tolerates randomness in its environment, including random message loss, and incorporates a non-deterministic 'delayed-bids' mechanism. A simulation model is used to explore the protocol's performance in a wide range of WSN configurations. Characteristics evaluated include tolerance to sensor node density and message loss, communication efficiency, and negotiation latency.
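A delayed-bids mechanism of this general shape can be sketched as follows: higher-quality nodes reply sooner, so the first bid to arrive tends to come from a good candidate, and jitter keeps the outcome non-deterministic among near-equals. The delay formula and names are assumptions for illustration, not BioANS itself:

```python
import random

def bid_delay(quality, max_delay=1.0, jitter=0.05):
    """Delay a node's bid inversely to its advertised quality (0..1);
    a little randomness makes the selection non-deterministic."""
    return max_delay * (1.0 - quality) + random.uniform(0, jitter)

def select_sensor(qualities):
    """The requester simply takes the first bid to arrive."""
    delays = {n: bid_delay(q) for n, q in qualities.items()}
    return min(delays, key=delays.get)

chosen = select_sensor({"s1": 0.9, "s2": 0.3, "s3": 0.1})
```

Because only the earliest bidder needs to be heard, later (lower-quality) bids can be suppressed or ignored, saving communication in dense networks.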
Abstract:
This paper describes a protocol for dynamically configuring wireless sensor nodes into logical clusters. The concept is to be able to inject an overlay configuration into an ad-hoc network of sensor nodes or similar devices, and have the network configure itself organically. The devices are arbitrarily deployed and initially have no information whatsoever concerning physical location, topology, density or neighbourhood. The Emergent Cluster Overlay (ECO) protocol is totally self-configuring and has several novel features, including nodes self-determining their mobility based on patterns of neighbour discovery, and a target cluster size that is specified externally (by the sensor network application) rather than directly coupled to radio communication range or node packing density. Cluster head nodes are automatically assigned as part of the cluster configuration process, at no additional cost. ECO is ideally suited to applications of wireless sensor networks in which localized groups of sensors act cooperatively to provide a service. This includes situations where service dilution is used (dynamically identifying redundant nodes to conserve their resources).
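The externally specified target cluster size can be illustrated with a deliberately simplified sketch (this is an ECO-flavoured toy, not the protocol: real nodes act on local radio discovery, not a global list):

```python
def form_clusters(discoveries, target_size):
    """Each newly discovered node joins the first advertised cluster
    with room; a node hearing no advert with room becomes a cluster
    head itself (head assignment falls out of formation for free)."""
    clusters = []                         # each cluster: list of member ids
    for node in discoveries:
        for c in clusters:
            if len(c) < target_size:
                c.append(node)
                break
        else:
            clusters.append([node])       # no room anywhere: new head
    return clusters

clusters = form_clusters(list(range(7)), target_size=3)
```

Note that the cluster size is purely an application parameter here; nothing in the rule refers to radio range or density, which mirrors the decoupling the abstract describes.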
Abstract:
We discuss the application of the multilevel (ML) refinement technique to the Vehicle Routing Problem (VRP) and compare it to its single-level (SL) counterpart. Multilevel refinement recursively coarsens the problem to create a hierarchy of approximations, then refines at each level. A SL heuristic, termed the combined node-exchange composite heuristic (CNCH), is developed first to solve instances of the VRP. A ML version (the ML-CNCH) is then created, using the construction and improvement heuristics of the CNCH at each level. Experimentation is used to find a suitable combination, which extends the global view of these heuristics. Results comparing both SL and ML approaches are presented.
Abstract:
This paper describes ways in which emergence engineering principles can be applied to the development of distributed applications. A distributed solution to the graph-colouring problem is used as a vehicle to illustrate some novel techniques. Each node acts autonomously to colour itself based only on its local view of its neighbourhood, following a simple set of carefully tuned rules. Randomness breaks symmetry and thus enhances stability. The algorithm has been developed to enable self-configuration in wireless sensor networks and, to reflect real-world configurations, operates with three-dimensional topologies (reflecting the propagation of radio waves and the placement of sensors in buildings, bridge structures, etc.). The algorithm's performance is evaluated and the results presented. It is shown to be simultaneously highly stable and scalable whilst achieving low convergence times. The use of eavesdropping gives rise to low interaction complexity and high efficiency in terms of communication overheads.
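The local rule "recolour yourself only when you conflict, choosing randomly among colours your neighbours are not using" can be simulated sequentially as a sketch. This is an assumption-laden toy, not the paper's tuned algorithm (which runs asynchronously over radio):

```python
import random

def colour_graph(adj, k, rounds=50):
    """Each node sees only its neighbours' colours and recolours itself
    when it conflicts; random order and random choice break symmetry."""
    colours = {n: None for n in adj}
    for _ in range(rounds):
        for n in random.sample(list(adj), len(adj)):     # random visiting order
            taken = {colours[m] for m in adj[n]}
            if colours[n] is None or colours[n] in taken:
                free = [c for c in range(k) if c not in taken]
                colours[n] = random.choice(free) if free else random.randrange(k)
    return colours

# a small mesh fragment: a triangle plus a pendant node
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
colours = colour_graph(adj, k=4)
```

With k at least one more than the maximum degree, a free colour always exists, so the sweep settles into a proper colouring; the randomness is what prevents identical neighbours from livelocking on the same choice.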
Abstract:
Zaha Hadid's Kartal Pendik Masterplan (2006) for a new city centre on the east bank of Istanbul proposes the redevelopment of an abandoned industrial site located in a crucial infrastructural node between Europe and Asia as a connecting system between the neighbouring areas of Kartal in the west and Pendik in the east. The project is organised on what its architects call a soft grid, a flexible and adaptable grid that allows it to articulate connections and differences of form, density and use within the same spatial structure [1]. Its final overall design constitutes only one of the many possible configurations that the project may take in response to the demands of the different areas included in the masterplan, and is produced from a script that is able to generate both built volumes and open spaces, skyscrapers as well as parks. The soft grid in fact produces a ‘becoming’ rather than a finite and definitive form: its surface space does not look like a grid, but is derived from a grid operation which is best explained by the project presentation in video animation. The grid here is a process of ‘gridding’, enacted according to ancient choreographed linear movements of measuring, defining, adjusting, reconnecting spaces through an articulated surface rather than superimposed on an ignored given like an indifferent colonising carpet.
Abstract:
Previous studies have revealed considerable interobserver and intraobserver variation in the histological classification of preinvasive cervical squamous lesions. The aim of the present study was to develop a decision support system (DSS) for the histological interpretation of these lesions. Knowledge and uncertainty were represented in the form of a Bayesian belief network that permitted the storage of diagnostic knowledge and, for a given case, the collection of evidence in a cumulative manner that provided a final probability for the possible diagnostic outcomes. The network comprised 8 diagnostic histological features (evidence nodes) that were each independently linked to the diagnosis (decision node) by a conditional probability matrix. Diagnostic outcomes comprised normal; koilocytosis; and cervical intraepithelial neoplasia (CIN) I, CIN II, and CIN III. For each evidence feature, a set of images was recorded that represented the full spectrum of change for that feature. The system was designed to be interactive in that the histopathologist was prompted to enter evidence into the network via a specifically designed graphical user interface (i-Path Diagnostics, Belfast, Northern Ireland). Membership functions were used to derive the relative likelihoods for the alternative feature outcomes, the likelihood vector was entered into the network, and the updated diagnostic belief was computed for the diagnostic outcomes and displayed. A cumulative probability graph was generated throughout the diagnostic process and presented on screen. The network was tested on 50 cervical colposcopic biopsy specimens, comprising 10 cases each of normal, koilocytosis, CIN I, CIN II, and CIN III. These had been preselected by a consultant gynecological pathologist. Using conventional morphological assessment, the cases were classified on 2 separate occasions by 2 consultant and 2 junior pathologists. The cases were then also classified using the DSS on 2 occasions by the 4 pathologists and by 2 medical students with no experience in cervical histology. Interobserver and intraobserver agreement using morphology and using the DSS was calculated with kappa statistics. Intraobserver reproducibility using conventional unaided diagnosis was reasonably good (kappa range, 0.688 to 0.861), but interobserver agreement was poor (kappa range, 0.347 to 0.747). Using the DSS improved overall reproducibility between individuals. Using the DSS did not, however, enhance the diagnostic performance of junior pathologists when their DSS-based diagnoses were compared against those of an experienced consultant. The generation of a cumulative probability graph did, however, allow a comparison of individual performance, of how individual features were assessed in the same case, and of how this contributed to diagnostic disagreement between individuals. Diagnostic features such as nuclear pleomorphism were shown to be particularly problematic and poorly reproducible. DSSs such as this therefore have a role to play not only in enhancing decision making but also in the study of diagnostic protocol, education, self-assessment, and quality control. (C) 2003 Elsevier Inc. All rights reserved.
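The cumulative belief update behind a network of this shape (independent evidence nodes linked to one decision node) reduces, per feature, to multiplying the current belief by that feature's likelihood vector and renormalising. The likelihood numbers below are invented for illustration, not values from the study:

```python
def update_belief(prior, likelihood):
    """One evidence node: multiply the prior belief over diagnoses by
    the feature's likelihood vector and renormalise."""
    posterior = {d: prior[d] * likelihood[d] for d in prior}
    z = sum(posterior.values())
    return {d: p / z for d, p in posterior.items()}

diagnoses = ["normal", "koilocytosis", "CIN I", "CIN II", "CIN III"]
belief = {d: 1 / len(diagnoses) for d in diagnoses}       # flat prior
# hypothetical likelihoods for one observed feature (e.g. marked pleomorphism)
belief = update_belief(belief, {"normal": 0.05, "koilocytosis": 0.1,
                                "CIN I": 0.2, "CIN II": 0.3, "CIN III": 0.35})
```

Repeating the call once per feature yields exactly the cumulative probability trace the abstract describes: after each item of evidence the belief over the five outcomes can be plotted.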
Abstract:
This paper presents the results of a feasibility study of a novel concept: power system on-line collaborative voltage stability control. The purpose of the proposed on-line collaboration between power system controllers is to enhance their overall performance and efficiency in coping with the increasing operational uncertainty of modern power systems. The framework of the proposed on-line collaborative voltage stability control is first presented; it is based on the deployment of multi-agent systems and real-time communication for on-line collaborative control. Two of the most important issues in implementing the proposed control are then addressed: (1) an error-tolerant communication protocol for fast information exchange among multiple intelligent agents; (2) deployment of multi-agent systems using graph theory to implement power system post-emergency control. The proposed on-line collaborative voltage stability control is tested on the example 10-machine, 39-node New England power system. Simulation results of the feasibility study are given, considering low-probability power system cascading faults.
Abstract:
The advances taking place in academia with the emergence of Web 2.0 tools, together with students' massive use of social networks to communicate with one another, are confronting the educational landscape with challenges it must answer. The main objective of the research presented here was to analyse the state of social network use by university students, as well as possible bad habits and problematic uses. An "ad hoc" questionnaire with a total of 23 items was used as the data collection instrument. It is concluded that students in general do not have bad habits in their use of social networks; the results also show that their use is not fully integrated into university institutions of higher education, and that students do not employ them as a fundamental tool for resolving academic questions.
Abstract:
This paper presents a new method for calculating individual generators' shares in line flows, line losses and loads. The method is described and illustrated on active power flows, but it can be applied in the same way to reactive power flows. Starting from a power flow solution, the line flow matrix is formed. This matrix is used to identify node types, to trace the power flow from generators downstream to loads, and to determine generators' participation factors in lines and loads. Neither exhaustive search nor matrix inversion is required; hence, the method is claimed to be the least computationally demanding amongst similar methods.
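The downstream tracing step can be sketched with the proportional-sharing idea commonly used in flow tracing: at each node, outgoing flows carry the traced inflows pro rata. The function, node numbering and figures below are illustrative assumptions, not the paper's formulation:

```python
def generator_shares(flows, gen, order):
    """Trace generation downstream through a solved, lossless network.

    flows[i][j] -- active power flowing from node i to node j
    gen[i]      -- generation injected at node i
    order       -- nodes listed in downstream (source-to-sink) order
    Returns share[n][g]: power arriving at node n attributable to
    generator g, by proportional sharing at every node.
    """
    share = {n: {} for n in order}
    for n in order:
        inflow = dict(share[n])                    # traced power arriving at n
        if gen.get(n, 0):
            inflow[n] = inflow.get(n, 0) + gen[n]  # own injection, labelled by n
        total = sum(inflow.values())
        if total == 0:
            continue
        for m, f in flows.get(n, {}).items():      # outflows carry shares pro rata
            for src, p in inflow.items():
                share[m][src] = share[m].get(src, 0.0) + f * p / total
    return share

# generators at nodes 1 and 2 feed a load at node 4 through node 3
flows = {1: {3: 100.0}, 2: {3: 50.0}, 3: {4: 150.0}}
shares = generator_shares(flows, {1: 100.0, 2: 50.0}, order=[1, 2, 3, 4])
```

A single downstream pass suffices: no search over paths and no matrix inversion, in the spirit of the claim above.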
Abstract:
The future convergence of voice, video and data applications on the Internet requires that next generation technology provide bandwidth and delay guarantees. Current technology trends are moving towards scalable aggregate-based systems where applications are grouped together and guarantees are provided at the aggregate level only. This solution alone is not enough for interactive video applications with sub-second delay bounds. This paper introduces a novel packet marking scheme that controls the end-to-end delay of an individual flow as it traverses a network enabled to supply aggregate-granularity Quality of Service (QoS). IPv6 Hop-by-Hop extension header fields are used to track the packet delay encountered at each network node, and autonomous decisions are made on the best queuing strategy to employ. The results of network simulations are presented, and it is shown that when the proposed mechanism is employed the requested delay bound is met with a 20% reduction in resource reservation and no packet loss in the network.
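The per-node decision can be sketched as follows: each hop adds its measured delay to the accumulated value carried in the header, then compares the remaining per-hop slack against the flow's nominal per-hop budget. The field names and threshold rule are assumptions for illustration, not the paper's mechanism:

```python
def classify(packet, measured_delay_ms):
    """One hop of the marking sketch: accumulate the measured queuing
    delay in a Hop-by-Hop style header field, then choose a queue from
    the remaining per-hop slack."""
    packet["accum_ms"] += measured_delay_ms
    packet["hops_left"] -= 1
    if packet["hops_left"] <= 0:
        return "normal"                    # last hop: nothing left to recover
    slack = (packet["budget_ms"] - packet["accum_ms"]) / packet["hops_left"]
    # behind schedule: promote the packet to the expedited queue here
    return "expedited" if slack < packet["per_hop_ms"] else "normal"

pkt = {"accum_ms": 0.0, "hops_left": 3, "budget_ms": 30.0, "per_hop_ms": 10.0}
q1 = classify(pkt, 18.0)   # slow first hop eats into the budget
q2 = classify(pkt, 2.0)    # fast second hop restores the slack
```

Only packets that have actually fallen behind claim expedited treatment, which is how such a scheme can meet the delay bound while reserving fewer resources overall.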