884 results for Distributed network protocol


Relevance:

30.00%

Publisher:

Abstract:

Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped onto the software processes that control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of the communications necessary for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
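
A minimal sketch of the partitioned estimation idea, assuming a hypothetical linear discrete-time system split into two subsystems: each local observer runs on its own measurements and receives only its neighbour's state estimate for the coupling term, which is exactly the inter-process communication the partitioning makes necessary. The matrices and gains below are illustrative, not taken from the thesis.

```python
import numpy as np

# Hypothetical 4-state system x[k+1] = A x[k], partitioned into two 2-state
# subsystems; each subsystem i measures y_i = C_i x_i locally.
A = np.array([[0.9,  0.1,  0.05, 0.0 ],
              [0.0,  0.8,  0.0,  0.05],
              [0.05, 0.0,  0.85, 0.1 ],
              [0.0,  0.05, 0.0,  0.9 ]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
C1 = np.array([[1.0, 0.0]])        # subsystem 1 observes its first state
C2 = np.array([[1.0, 0.0]])        # subsystem 2 likewise
L1 = np.array([0.5, 0.2])          # hand-picked stabilising observer gains
L2 = np.array([0.5, 0.2])

x = np.array([1.0, -1.0, 0.5, 0.0])   # true state
xh1, xh2 = np.zeros(2), np.zeros(2)   # local estimates

for k in range(50):
    y1, y2 = C1 @ x[:2], C2 @ x[2:]
    # Predict with the local block plus the most recently *communicated*
    # neighbour estimate for the coupling term, then correct with the
    # local measurement innovation.
    xh1 = A11 @ xh1 + A12 @ xh2 + L1 * (y1 - C1 @ xh1)
    xh2 = A22 @ xh2 + A21 @ xh1 + L2 * (y2 - C2 @ xh2)
    x = A @ x

print("estimation error:", np.linalg.norm(x - np.concatenate([xh1, xh2])))
```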

Relevance:

30.00%

Publisher:

Abstract:

A local area network that can support both voice and data packets offers economic advantages due to the use of a single network for both types of traffic, greater flexibility in meeting changing user demands, and more efficient use of the transmission capacity. The latter aspect is very important in local broadcast networks, where capacity is a scarce resource, for example in mobile radio. This research has examined two types of local broadcast network: the Ethernet-type bus local area network and a mobile radio network with a central base station. With such contention networks, medium access control (MAC) protocols are required to gain access to the channel. MAC protocols must provide efficient scheduling of the channel among the distributed population of stations that want to transmit. No access scheme can exceed the performance of a single-server queue, due to the spatial distribution of the stations: stations cannot in general form a queue without using part of the channel capacity to exchange protocol information. In this research, several medium access protocols have been examined and developed in order to increase the channel throughput compared to existing protocols. However, the established performance measures of average packet time delay and throughput cannot adequately characterise protocol performance for packet voice. Rather, the percentage of bits delivered within a given time bound becomes the relevant performance measure. Performance evaluation of the protocols has been carried out using discrete event simulation and, in some cases, also by mathematical modelling. All the protocols use either implicit or explicit reservation schemes, with their efficiency dependent on the fact that many voice packets are generated periodically within a talkspurt. Two of the protocols are based on the existing 'Reservation Virtual Time CSMA/CD' protocol, which forms a distributed queue through implicit reservations. This protocol has been improved firstly by utilising two channels, a packet transmission channel and a packet contention channel; packet contention is then performed in parallel with packet transmission to increase throughput. The second protocol uses variable-length packets to reduce the contention time between transmissions on a single channel. A third protocol developed is based on contention for explicit reservations. Once a station has achieved a reservation, it maintains this effective queue position for the remainder of the talkspurt and transmits after it has sensed the transmission from the preceding station within the queue. In the mobile radio environment, adaptations to the protocols were necessary so that their operation was robust to signal fading. This was achieved through centralised control at a base station, unlike the local area network versions, where control was distributed among the stations. The results show an improvement in throughput compared to some previous protocols. Further work includes subjective testing to validate the protocols' effectiveness.
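
Because the percentage of bits delivered within a time bound is the measure that matters for packet voice, even a toy simulation has to record per-packet delay against a deadline rather than just the mean. The sketch below is a heavily simplified slotted model of an implicit-reservation voice MAC, not any of the thesis protocols: a station that wins a contention slot keeps that slot offset reserved in every subsequent frame, mimicking the periodic packets of a talkspurt. All parameters are illustrative.

```python
import random

N, PERIOD, SLOTS, P_TX, DEADLINE = 10, 20, 20000, 0.1, 10
random.seed(1)

reserved = {}                        # slot offset -> station holding it
queue = {i: [] for i in range(N)}    # arrival slots of waiting packets
delays = []

for t in range(SLOTS):
    for i in range(N):
        if t % PERIOD == i:          # staggered periodic voice arrivals
            queue[i].append(t)
    owner = reserved.get(t % PERIOD)
    if owner is not None and queue[owner]:
        delays.append(t - queue[owner].pop(0))    # reserved transmission
        continue
    # Contention slot: unreserved backlogged stations attempt with prob P_TX;
    # success only if exactly one attempts (simple collision model).
    attempts = [i for i in range(N) if queue[i]
                and i not in reserved.values() and random.random() < P_TX]
    if len(attempts) == 1:
        i = attempts[0]
        delays.append(t - queue[i].pop(0))
        reserved[t % PERIOD] = i     # implicit reservation of this offset

on_time = sum(d <= DEADLINE for d in delays) / len(delays)
print(f"{100 * on_time:.1f}% of voice packets delivered within {DEADLINE} slots")
```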

Relevance:

30.00%

Publisher:

Abstract:

The Fibre Distributed Data Interface (FDDI) represents the new generation of local area networks (LANs). These high-speed LANs are capable of supporting up to 500 users over a 100 km distance. User traffic is expected to be as diverse as file transfers, packet voice and video. As the proliferation of FDDI LANs continues, the need to interconnect them arises. FDDI LAN interconnection can be achieved in a variety of different ways; some of the most commonly used today are public data networks, dial-up lines and private circuits. For applications that can potentially generate large quantities of traffic, such as an FDDI LAN, it is cost-effective to use a private circuit leased from the public carrier. In order to send traffic from one LAN to another across the leased line, a routing algorithm is required. Much research has been done on the Bellman-Ford algorithm and many implementations of it exist in computer networks. However, due to its instability and problems with routing table loops, it is an unsatisfactory algorithm for interconnected FDDI LANs. A new algorithm, termed ISIS, which is being standardized by the ISO, provides a far better solution. ISIS will be implemented in many manufacturers' routing devices. In order to make the work as practical as possible, this algorithm is used as the basis for all the new algorithms presented. The ISIS algorithm can be improved by exploiting information that it drops during the calculation process. A new algorithm, called Down Stream Path Splits (DSPS), uses this information and requires only minor modification to some of the ISIS routing procedures. DSPS provides higher network performance, with very little additional processing and storage requirements. A second algorithm, also based on the ISIS algorithm, generates a massive increase in network performance. This is achieved by selecting alternative paths through the network in times of heavy congestion. This algorithm may select the alternative path at either the originating node or any node along the path. It requires more processing and memory storage than DSPS, but generates a higher network power. The final algorithm combines the DSPS algorithm with the alternative path algorithm. This is the most flexible and powerful of the algorithms developed; however, it is somewhat complex and requires a fairly large storage area at each node. The performance of the new routing algorithms is tested in a comprehensive model of interconnected LANs. This model incorporates the protocol layers from transport down to physical and generates random topologies for routing algorithm performance comparisons. Using this model it is possible to determine which algorithm provides the best performance without introducing significant complexity and storage requirements.
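
The DSPS idea, as described, rests on information an SPF computation normally throws away. A link-state node running Dijkstra keeps one best next hop per destination, yet any neighbour strictly closer to the destination is a loop-free "downstream" alternative. The sketch below illustrates that downstream criterion on a made-up topology; it is not a reconstruction of the thesis algorithms.

```python
import heapq

def dijkstra(adj, src):
    """Standard link-state SPF: shortest distance from src to every node."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def downstream_next_hops(adj, node, dest):
    """Every neighbour strictly closer to dest than we are is a loop-free
    alternative next hop; a plain SPF keeps only one of them."""
    dist = dijkstra(adj, dest)            # distances to dest (undirected graph)
    return [n for n in adj[node] if dist[n] < dist[node]]

# Hypothetical symmetric topology with link costs.
adj = {"A": {"B": 1, "C": 2},
       "B": {"A": 1, "C": 1, "D": 3},
       "C": {"A": 2, "B": 1, "D": 1},
       "D": {"B": 3, "C": 1}}
print(downstream_next_hops(adj, "A", "D"))   # ['B', 'C']: both are downstream
```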

Relevance:

30.00%

Publisher:

Abstract:

The subject of this thesis is the n-tuple network (RAMnet). The major advantage of RAMnets is their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, the method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to better understand the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as Associative Memories, Kanerva's Sparse Distributed Memory, Radial Basis Functions, General Regression Networks and Bayesian Classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The Frequency Weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in Frequency Weighted RAMnets is adequate probability estimation, and propose Good-Turing estimates in place of the more commonly used Maximum Likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument based on the notion of tuple distance, which gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one-dimensional noisy interpolation problems. We conclude that the n-tuple method of classification based on memorisation of random features can be a powerful alternative to slower cost-driven models. The speed of the method comes at the expense of its optimality: RAMnets will fail for certain datasets, but the cases where they do so are relatively easy to determine with the analytical tools we provide.
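
The binary RAMnet itself is compact enough to state in a few lines: fixed random n-tuples of input bit positions address one-bit RAM cells per class, training memorises the addressed cells, and classification counts how many RAMs respond. The sketch below is a minimal WISARD-style illustration on made-up bit patterns, with Python sets standing in for RAM contents.

```python
import random

class RAMnet:
    """Minimal binary n-tuple classifier (WISARD-style sketch)."""
    def __init__(self, n_bits, tuple_size, n_tuples, classes, seed=0):
        rng = random.Random(seed)
        # Each tuple is a fixed random selection of input bit positions.
        self.tuples = [rng.sample(range(n_bits), tuple_size)
                       for _ in range(n_tuples)]
        # One 1-bit RAM per tuple per class; here, a set of seen addresses.
        self.rams = {c: [set() for _ in range(n_tuples)] for c in classes}

    def _addresses(self, x):
        return [tuple(x[i] for i in t) for t in self.tuples]

    def train(self, x, label):          # memorisation, no cost function
        for ram, addr in zip(self.rams[label], self._addresses(x)):
            ram.add(addr)

    def classify(self, x):              # class with the most responding RAMs
        addrs = self._addresses(x)
        return max(self.rams, key=lambda c: sum(
            a in r for r, a in zip(self.rams[c], addrs)))

net = RAMnet(n_bits=16, tuple_size=4, n_tuples=8, classes=["lo", "hi"])
random.seed(1)
for _ in range(50):
    net.train([0] * 8 + [random.randint(0, 1) for _ in range(8)], "lo")
    net.train([1] * 8 + [random.randint(0, 1) for _ in range(8)], "hi")
print(net.classify([1] * 8 + [0] * 8))  # typically "hi"
```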

Relevance:

30.00%

Publisher:

Abstract:

The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms is developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results suggest that the performance of simple dynamic policies is scalable, but lacks the load stability of more complex global average algorithms.
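
As a toy illustration of the trade-off described above, the sketch below contrasts a simple dynamic threshold policy (probe one random peer when the local queue grows) with a global-average policy that needs system-wide state, the kind whose cost grows with scale. Arrival and service probabilities and the imbalance metric are all invented for the example, not taken from the thesis simulator.

```python
import random

random.seed(0)
N, STEPS, T = 16, 4000, 3

def simulate(policy):
    q = [0] * N
    imbalance = 0
    for step in range(STEPS):
        for i in range(N):
            if random.random() < 0.45:          # job arrival
                q[i] += 1
            if policy == "threshold" and q[i] > T:
                j = random.randrange(N)          # cheap: probe one peer
                if q[j] < q[i] - 1:
                    q[i] -= 1; q[j] += 1
            elif policy == "average" and q[i] > sum(q) / N + 1:
                j = min(range(N), key=q.__getitem__)   # costly: global view
                q[i] -= 1; q[j] += 1
            if q[i] and random.random() < 0.5:  # service completion
                q[i] -= 1
        imbalance += max(q) - min(q)
    return imbalance / STEPS

for p in ("threshold", "average"):
    print(p, "mean queue imbalance: %.2f" % simulate(p))
```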

Relevance:

30.00%

Publisher:

Abstract:

An investigation is carried out into the design of a small local computer network for eventual implementation on the University of Aston campus. Microprocessors are investigated as a possible choice for use as node controllers, for reasons of cost and reliability. Since the network will be local, high-speed lines of megabit order are proposed. After an introduction to several well-known networks, various aspects of networks are discussed, including packet switching, the functions of a node, and host-node protocol. Chapter three develops the network philosophy with an introduction to microprocessors. Various organisations of microprocessors into multicomputer and multiprocessor systems are discussed, together with methods of achieving reliable computing. Chapter four presents the simulation model and its implementation as a computer program. The major modelling effort is to study the behaviour of messages queueing for access to the network and the message delay experienced on the network. Use is made of spectral analysis to determine the sampling frequency, while Exponentially Weighted Moving Averages are used for data smoothing.
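
The data-smoothing step mentioned at the end is straightforward to make concrete. A minimal sketch of an Exponentially Weighted Moving Average, applied to, say, simulated message-delay samples:

```python
def ewma(samples, alpha=0.2):
    """Exponentially Weighted Moving Average: blend each new sample with the
    running estimate so that older observations decay geometrically."""
    smoothed, s = [], None
    for x in samples:
        s = x if s is None else alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

# e.g. smoothing noisy queueing-delay measurements from the simulation
print(ewma([5, 7, 30, 6, 5, 4, 29, 5]))
```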

Relevance:

30.00%

Publisher:

Abstract:

Erbium-doped fibre amplifiers (EDFAs) are a key technology for the design of all-optical communication systems and networks. The superiority of EDFAs lies in their negligible intermodulation distortion across high-speed multichannel signals, low intrinsic losses, slow gain dynamics, and gain over a wide range of optical wavelengths. Because of the long lifetime of their excited states, EDFAs are not immune to the effect of cross-gain saturation; the time characteristics of the gain saturation and recovery effects lie between a few hundred microseconds and 10 milliseconds. However, in wavelength division multiplexed (WDM) optical networks with EDFAs, the number of channels traversing an EDFA can change because of a faulty link or system reconfiguration. It has been found that, owing to the variation in channel number along the EDFA chain, the output powers of the surviving channels can change in a very short time. The power transient is thus one of the problems that deteriorate system performance. In this thesis, the transient phenomenon in wavelength-routed WDM optical networks with EDFA chains was investigated. The task was performed using different input signal powers for circuit-switched networks. A simulator for the EDFA gain dynamic model was developed to compute the magnitude and speed of the power transients in non-self-saturated EDFAs, both single and chained. The dynamic model of the self-saturated EDFA chain and its simulator were also developed to compute the magnitude and speed of the power transients and the optical signal-to-noise ratio (OSNR). We found that the OSNR transient magnitude and speed are a function of both the output power transient and the number of EDFAs in the chain. The OSNR value predicts the level of the quality of service in the network concerned. It was found that the power transients for self-saturated and non-self-saturated EDFAs are close in magnitude in the case of gain-saturated EDFA networks. Moreover, cross-gain saturation also degrades the performance of packet-switching networks because of their varying traffic characteristics. The magnitude and the speed of the output power transients increase along the EDFA chain. An investigation was carried out on asynchronous transfer mode (ATM) and WDM Internet protocol (WDM-IP) traffic networks using different traffic patterns based on the Pareto and Poisson distributions. The simulator was used to examine the size and speed of the power transients in Pareto- and Poisson-distributed traffic at different bit rates, with specific focus on 2.5 Gb/s. It was found from numerical and statistical analysis that the power swing increases if the burst-ON/burst-OFF interval in the packet bursts is long. This is because the gain dynamics are fast during a strong or long-duration signal pulse, owing to the stimulated-emission avalanche depletion of the excited ions. An increase in output power level could therefore lead to error bursts, which affect system performance.
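
The power-transient mechanism can be illustrated with a generic saturable-amplifier gain-dynamics ODE (a lumped single-section model, not the thesis's simulator), integrating dh/dt = (h0 - h)/tau - (P_in/E_sat)(e^h - 1) with h the log gain. All parameter values are illustrative. Dropping half of the input channels lets the gain recover toward its unsaturated value over milliseconds, so the surviving channels' output power surges:

```python
import math

tau = 10e-3          # upper-state lifetime, s (illustrative)
E_sat = 10e-6        # saturation energy, J (illustrative)
h0 = math.log(100)   # unsaturated (small-signal) gain of 20 dB
P_ch = 100e-6        # per-channel input power, W
dt = 1e-6            # Euler integration step, s

h, n_ch = math.log(8.0), 8          # start near a saturated operating point
for step in range(int(50e-3 / dt)): # simulate 50 ms
    t = step * dt
    if t >= 10e-3:
        n_ch = 4                    # 4 of 8 channels dropped at t = 10 ms
    P_in = n_ch * P_ch
    h += dt * ((h0 - h) / tau - (P_in / E_sat) * (math.exp(h) - 1))
    if step % int(10e-3 / dt) == 0:
        p_out_dbm = 10 * math.log10(P_ch * math.exp(h) / 1e-3)
        print(f"t={t * 1e3:4.0f} ms  surviving-channel output {p_out_dbm:5.2f} dBm")
```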

Relevance:

30.00%

Publisher:

Abstract:

Objective: To investigate the dynamics of communication within the primary somatosensory neuronal network. Methods: Multichannel EEG responses evoked by median nerve stimulation were recorded from six healthy participants. We investigated the directional connectivity of the evoked responses by assessing the Partial Directed Coherence (PDC) among five neuronal nodes (brainstem, thalamus and three in the primary sensorimotor cortex), which had been identified by using the Functional Source Separation (FSS) algorithm. We analyzed directional connectivity separately in the low (1-200 Hz, LF) and high (450-750 Hz, HF) frequency ranges. Results: LF forward connectivity showed peaks at 16, 20, 30 and 50 ms post-stimulus. An estimate of the strength of connectivity was modulated by feedback involving cortical and subcortical nodes. In HF, forward connectivity showed peaks at 20, 30 and 50 ms, with no apparent feedback-related strength changes. Conclusions: In this first non-invasive study in humans, we documented directional connectivity across the subcortical and cortical somatosensory pathway, discriminating transmission properties within the LF and HF ranges. Significance: The combined use of FSS and PDC in a simple protocol such as median nerve stimulation sheds light on how high and low frequency components of the somatosensory evoked response are functionally interrelated in sustaining somatosensory perception in healthy individuals. Thus, these components may potentially be explored as biomarkers of pathological conditions. © 2012 International Federation of Clinical Neurophysiology.
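
Partial Directed Coherence is computed from a fitted multivariate autoregressive (MVAR) model: with Abar(f) = I - sum_r A_r e^(-i 2 pi f r / fs), the influence of node j on node i is |Abar_ij(f)| normalised by the energy of column j. A minimal sketch, assuming the lag matrices have already been estimated (here a made-up two-node MVAR(1) in which node 0 drives node 1):

```python
import numpy as np

def pdc(ar_coeffs, f, fs):
    """Partial Directed Coherence of a fitted MVAR model.
    ar_coeffs: lag matrices A_1..A_p of x[t] = sum_r A_r x[t-r] + noise.
    Returns pi[i, j], the influence of node j on node i at frequency f."""
    n = ar_coeffs[0].shape[0]
    Abar = np.eye(n, dtype=complex)
    for r, A in enumerate(ar_coeffs, start=1):
        Abar -= A * np.exp(-2j * np.pi * f * r / fs)
    # Normalise each column by its total outflow energy.
    return np.abs(Abar) / np.sqrt((np.abs(Abar) ** 2).sum(axis=0))

# Made-up two-node MVAR(1): node 0 drives node 1, not vice versa.
A1 = np.array([[0.5, 0.0],
               [0.4, 0.5]])
print(np.round(pdc([A1], f=10, fs=1000), 3))  # entry [1, 0] > 0, [0, 1] = 0
```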

Relevance:

30.00%

Publisher:

Abstract:

Neuroimaging studies have consistently shown that working memory (WM) tasks engage a distributed neural network that primarily includes the dorsolateral prefrontal cortex, the parietal cortex, and the anterior cingulate cortex. The current challenge is to provide a mechanistic account of the changes observed in regional activity. To achieve this, we characterized neuroplastic responses in effective connectivity between these regions at increasing WM loads using dynamic causal modeling of functional magnetic resonance imaging data obtained from healthy individuals during a verbal n-back task. Our data demonstrate that increasing memory load was associated with (a) right-hemisphere dominance, (b) increasing forward (i.e., posterior to anterior) effective connectivity within the WM network, and (c) reduction in individual variability in WM network architecture resulting in the right-hemisphere forward model reaching an exceedance probability of 99% in the most demanding condition. Our results provide direct empirical support that task difficulty, in our case WM load, is a significant moderator of short-term plasticity, complementing existing theories of task-related reduction in variability in neural networks. Hum Brain Mapp, 2013. © 2013 Wiley Periodicals, Inc.
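
The 99% exceedance probability reported for the forward model is the standard quantity from random-effects Bayesian model selection (Stephan et al., 2009): the posterior probability that one model is more frequent in the population than any other. It is easy to estimate by sampling the Dirichlet posterior over model frequencies; the posterior counts below are hypothetical, not the paper's data.

```python
import numpy as np

def exceedance_probabilities(alpha, n_samples=100_000, seed=0):
    """Monte-Carlo exceedance probabilities for a Dirichlet posterior over
    model frequencies: for each model, the probability that its frequency
    exceeds that of every other model."""
    rng = np.random.default_rng(seed)
    samples = rng.dirichlet(alpha, size=n_samples)
    winners = samples.argmax(axis=1)        # winning model in each draw
    return np.bincount(winners, minlength=len(alpha)) / n_samples

# Hypothetical posterior counts for forward vs backward WM-network models.
print(exceedance_probabilities([12.0, 2.0]))   # e.g. roughly [0.99, 0.01]
```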

Relevance:

30.00%

Publisher:

Abstract:

Link adaptation (LA) plays an important role in adapting an IEEE 802.11 network to wireless link conditions and maximizing its capacity. However, there is a lack of theoretic analysis of IEEE 802.11 LA algorithms. In this article, we propose a Markov chain model for an 802.11 LA algorithm (the ONOE algorithm), aiming to identify its problems and the scope for improving LA algorithms. We systematically model the impacts of frame corruption and collision on IEEE 802.11 network performance. The proposed analytic model was verified by computer simulations. With the analytic model, it can be observed that the ONOE algorithm's performance is highly dependent on the initial bit rate and parameter configurations. The algorithm may perform badly even under light channel congestion; thus, ONOE algorithm parameters should be configured carefully to ensure satisfactory system performance. Copyright © 2011 John Wiley & Sons, Ltd.
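
To make the flavour of such an analysis concrete, the sketch below builds a deliberately simplified Markov chain over bit-rate states: ONOE's credit counters are collapsed into single up/down transition probabilities (a run of clean frames steps the rate up, a loss event steps it down), with invented per-rate success probabilities. The stationary distribution then gives the long-run rate mix and expected goodput. This illustrates the modelling style only, not the paper's model.

```python
import numpy as np

rates = np.array([6.0, 12.0, 24.0, 54.0])      # Mb/s
p_ok = np.array([0.99, 0.95, 0.80, 0.30])      # frame success prob. per rate

n = len(rates)
P = np.zeros((n, n))
for i in range(n):
    up = p_ok[i] ** 10 if i < n - 1 else 0.0   # 10 clean frames -> step up
    down = (1 - p_ok[i]) if i > 0 else 0.0     # a loss event -> step down
    if i < n - 1:
        P[i, i + 1] = up
    if i > 0:
        P[i, i - 1] = down
    P[i, i] = 1.0 - up - down                  # rows sum to 1

# Stationary distribution: eigenvector of P^T for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print("stationary rate mix:", np.round(pi, 3))
print("expected goodput: %.1f Mb/s" % (pi * rates * p_ok).sum())
```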

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose an approach based on self-interested autonomous cameras, which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to grow the vision graph during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, that the vision graph is generated online, permitting cameras to be added to and removed from the network at runtime, and, on the other hand, that only local information is relied upon, increasing the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. © 2011 IEEE.
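
A compact sketch of the two mechanisms in combination, with entirely hypothetical utility and pheromone parameters: the current owner auctions the object, cameras bid a visibility-based utility weighted by learned pheromone, and each completed trade reinforces the owner-to-winner edge of the vision graph while all edges slowly evaporate.

```python
import random

random.seed(3)
CAMERAS = ["c0", "c1", "c2", "c3"]
pheromone = {(a, b): 1.0 for a in CAMERAS for b in CAMERAS if a != b}

def visibility(cam, obj_pos):
    """Stand-in for a real visibility model: closeness along a camera line."""
    return max(0.0, 1.0 - abs(CAMERAS.index(cam) - obj_pos) / 3.0)

owner, obj_pos = "c0", 0.0
for step in range(12):
    obj_pos = min(3.0, obj_pos + 0.3)        # the object drifts across the line
    # Invite bids, weighted by pheromone so communication favours known links.
    bids = {c: visibility(c, obj_pos) * min(1.0, pheromone[(owner, c)])
            for c in CAMERAS if c != owner}
    winner, best = max(bids.items(), key=lambda kv: kv[1])
    if best > visibility(owner, obj_pos):    # trade only if utility improves
        pheromone[(owner, winner)] += 1.0    # ant-style reinforcement
        owner = winner
    for e in pheromone:                      # evaporation
        pheromone[e] *= 0.95

print("final owner:", owner)
print("strongest edges:", sorted(pheromone, key=pheromone.get, reverse=True)[:2])
```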

Relevance:

30.00%

Publisher:

Abstract:

Background—The molecular mechanisms underlying similarities and differences between physiological and pathological left ventricular hypertrophy (LVH) are of intense interest. Most previous work involved targeted analysis of individual signaling pathways or screening of transcriptomic profiles. We developed a network biology approach using genomic and proteomic data to study the molecular patterns that distinguish pathological and physiological LVH. Methods and Results—A network-based analysis using graph theory methods was undertaken on 127 genome-wide expression arrays of in vivo murine LVH. This revealed phenotype-specific pathological and physiological gene coexpression networks. Despite >1650 common genes in the 2 networks, network structure is significantly different. This is largely because of rewiring of genes that are differentially coexpressed in the 2 networks; this novel concept of differential wiring was further validated experimentally. Functional analysis of the rewired network revealed several distinct cellular pathways and gene sets. Deeper exploration was undertaken by targeted proteomic analysis of mitochondrial, myofilament, and extracellular subproteomes in pathological LVH. A notable finding was that mRNA–protein correlation was greater at the cellular pathway level than for individual loci. Conclusions—This first combined gene network and proteomic analysis of LVH reveals novel insights into the integrated pathomechanisms that distinguish pathological versus physiological phenotypes. In particular, we identify differential gene wiring as a major distinguishing feature of these phenotypes. This approach provides a platform for the investigation of potentially novel pathways in LVH and offers a freely accessible protocol (http://sites.google.com/site/cardionetworks) for similar analyses in other cardiovascular diseases.
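
"Differential wiring" can be made concrete with a small coexpression computation: build a correlation-threshold network for each phenotype and score each gene by how many partners it gains or loses between the two. The sketch below does this on synthetic expression matrices in which three genes are deliberately rewired; the data, threshold and sizes are all invented for illustration.

```python
import numpy as np

def coexpression_edges(expr, threshold=0.8):
    """Edge set of a coexpression network: gene pairs whose expression
    profiles correlate above the threshold. expr is genes x samples."""
    r = np.corrcoef(expr)
    g = expr.shape[0]
    return {(i, j) for i in range(g) for j in range(i + 1, g)
            if abs(r[i, j]) > threshold}

def rewiring_scores(edges_a, edges_b, n_genes):
    """Differential wiring: per-gene count of partners gained or lost
    between the two phenotype networks (symmetric difference of edges)."""
    scores = [0] * n_genes
    for i, j in edges_a ^ edges_b:
        scores[i] += 1
        scores[j] += 1
    return scores

rng = np.random.default_rng(0)
shared = rng.normal(size=30)                           # common regulatory signal
path = rng.normal(scale=0.3, size=(10, 30)) + shared   # tight 10-gene module
phys = path.copy()
phys[:3] = rng.normal(size=(3, 30))                    # genes 0-2 rewired away
print(rewiring_scores(coexpression_edges(path),
                      coexpression_edges(phys), 10))   # high scores for 0-2
```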

Relevance:

30.00%

Publisher:

Abstract:

The ALBA 2002 Call for Papers asks the question ‘How do organizational learning and knowledge management contribute to organizational innovation and change?’. Intuitively, we would argue, the answer should be relatively straightforward as links between learning and change, and knowledge management and innovation, have long been commonly assumed to exist. On the basis of this assumption, theories of learning tend to focus ‘within organizations’, and assume a transfer of learning from individual to organization which in turn leads to change. However, empirically, we find these links are more difficult to articulate. Organizations exist in complex embedded economic, political, social and institutional systems, hence organizational change (or innovation) may be influenced by learning in this wider context. Based on our research in this wider interorganizational setting, we first make the case for the notion of network learning that we then explore to develop our appreciation of change in interorganizational networks, and how it may be facilitated. The paper begins with a brief review of literature on learning in the organizational and interorganizational context which locates our stance on organizational learning versus the learning organization, and social, distributed versus technical, centred views of organizational learning and knowledge. Developing from the view that organizational learning is “a normal, if problematic, process in every organization” (Easterby-Smith, 1997: 1109), we introduce the notion of network learning: learning by a group of organizations as a group. We argue this is also a normal, if problematic, process in organizational relationships (as distinct from interorganizational learning), which has particular implications for network change. Part two of the paper develops our analysis, drawing on empirical data from two studies of learning. The first study addresses the issue of learning to collaborate between industrial customers and suppliers, leading to the case for network learning. The second, larger scale study goes on to develop this theme, examining learning around several major change issues in a healthcare service provider network. The learning processes and outcomes around the introduction of a particularly controversial and expensive technology are described, providing a rich and contrasting case with the first study. In part three, we then discuss the implications of this work for change, and for facilitating change. Conclusions from the first study identify potential interventions designed to facilitate individual and organizational learning within the customer organization to develop individual and organizational ‘capacity to collaborate’. Translated to the network example, we observe that network change entails learning at all levels – network, organization, group and individual. However, presenting findings in terms of interventions is less meaningful in an interorganizational network setting given: the differences in authority structures; the less formalised nature of the network setting; and the importance of evaluating performance at the network rather than organizational level. Academics challenge both the idea of managing change and of managing networks. Nevertheless practitioners are faced with the issue of understanding and influencing change in the network setting.
Thus we conclude that a network learning perspective is an important development in our understanding of organizational learning, capability and change, locating this in the wider context in which organizations are embedded. This in turn helps to develop our appreciation of facilitating change in interorganizational networks, both in terms of change issues (such as introducing a new technology), and change orientation and capability.

Relevance:

30.00%

Publisher:

Abstract:

Dedicated short-range communications (DSRC) are a promising vehicle communication technique for collaborative road safety applications (CSA). However, road safety applications require highly reliable and timely wireless communications, which presents big challenges to DSRC-based vehicle networks in providing effective and robust quality of service (QoS), owing to the random channel access method applied in the DSRC technique. In this paper we examine the QoS control problem for CSA in DSRC-based vehicle networks and present an overview of the research work towards it. After an analysis of the system application requirements and the features of DSRC vehicle networks, we propose a framework for cooperative and adaptive QoS control, which we believe is key to the success of DSRC in supporting effective collaborative road safety applications. A core design in the proposed QoS control framework is that network feedback and cross-layer design are employed to collaboratively achieve the targeted QoS. A design example of a cooperative and adaptive rate control scheme is implemented and evaluated, with the objective of illustrating the key ideas in the framework. Simulation results demonstrate the effectiveness of the proposed rate control scheme in providing a highly available and reliable channel for emergency safety messages. © 2013 Wenyang Guan et al.
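
A sketch of the kind of feedback-driven control the framework argues for, using a LIMERIC-style linear controller (not the paper's scheme): every vehicle receives the measured channel busy ratio as network feedback and nudges its message rate toward a target load that leaves headroom for emergency messages. All constants are illustrative.

```python
TARGET_CBR = 0.6          # desired channel load
ALPHA, BETA = 0.1, 20.0   # decay and feedback gains
CAPACITY = 2500.0         # messages/s the channel carries at CBR = 1

def step(rates):
    cbr = min(1.0, sum(rates) / CAPACITY)      # broadcast network feedback
    # Linear increase/decrease toward the target load, floored at 1 msg/s.
    return [max(1.0, (1 - ALPHA) * r + BETA * (TARGET_CBR - cbr)) for r in rates]

rates = [50.0] * 40       # 40 vehicles, all starting too fast (2000 msg/s)
for _ in range(100):
    rates = step(rates)
print("per-vehicle rate: %.1f msg/s, channel load: %.2f"
      % (rates[0], sum(rates) / CAPACITY))
```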

Relevance:

30.00%

Publisher:

Abstract:

Medium access control (MAC) protocols have a large impact on the achievable system performance of wireless ad hoc networks. Because of the limitations of existing analytical models for ad hoc networks, many researchers have opted to study the impact of MAC protocols via discrete-event simulations. However, as the network scenarios, traffic patterns and physical layer techniques may change significantly, simulation alone is not an efficient way to gain insight into the impacts of MAC protocols on system performance. In this paper, we analyze the performance of the IEEE 802.11 distributed coordination function (DCF) in a multihop network scenario. We are particularly interested in understanding how physical layer techniques may affect MAC protocol performance. For this purpose, the features of the interference range are studied and taken into account in the analytical model. Simulations with OPNET show the effectiveness of the proposed analytical approach. Copyright 2005 ACM.
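
For the single-collision-domain baseline that multihop analyses build on, Bianchi's fixed point couples the per-slot transmit probability tau with the conditional collision probability p; solving it shows how collisions grow with the number of contending stations. A minimal sketch with standard backoff parameters (W = 32, m = 5); the interference-range and multihop effects studied in the paper are beyond this sketch.

```python
def dcf_fixed_point(n, W=32, m=5, iters=2000):
    """Solve Bianchi's two coupled DCF equations (saturated stations,
    single collision domain) by damped fixed-point iteration."""
    p = 0.1
    for _ in range(iters):
        tau = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1)
                                 + p * W * (1 - (2 * p) ** m))
        # A transmission collides if any of the other n-1 stations transmits.
        p = 0.5 * p + 0.5 * (1 - (1 - tau) ** (n - 1))
    return tau, p

for n in (5, 10, 50):
    tau, p = dcf_fixed_point(n)
    print(f"n={n:2d}: tau={tau:.4f}, collision prob={p:.3f}")
```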