39 results for wired best-effort networks

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance:

100.00%

Publisher:

Abstract:

Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services to different users based on their quality requirements is becoming increasingly important. To support this, routers need the capability to distinguish and isolate traffic belonging to different flows; the ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification because of their non-deterministic performance. Although content addressable memories (CAMs) are favoured by technology vendors for their deterministic, high lookup rates, they suffer from high power dissipation and high silicon cost. This paper provides a new algorithmic-architectural solution for packet classification that combines CAMs with algorithms based on multi-level cutting of the classification space into smaller sub-spaces. The solution exploits the geometrical distribution of rules in the classification space, and offers the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
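
The multi-level cutting idea can be illustrated with a toy decision tree over a two-field rule space. This is a hypothetical sketch in the spirit of cutting-based classifiers, not the paper's actual architecture: each leaf holds the few rules overlapping its sub-space and plays the role of a small CAM block that hardware would search in parallel.

```python
# Toy two-field packet classifier: recursively cut the (src, dst) space
# until each leaf holds at most `leaf_size` rules; a leaf's linear scan
# stands in for a small CAM lookup. All names here are illustrative.

class Rule:
    def __init__(self, rid, src, dst):
        self.rid, self.src, self.dst = rid, src, dst   # ranges (lo, hi)

    def matches(self, pkt):
        return (self.src[0] <= pkt[0] <= self.src[1]
                and self.dst[0] <= pkt[1] <= self.dst[1])

def build(rules, space, leaf_size=2, cuts=2, depth=0):
    """Recursively cut `space` ((xlo, xhi), (ylo, yhi)) into sub-spaces."""
    if len(rules) <= leaf_size or depth > 8:
        return ("leaf", rules)
    dim = depth % 2                       # alternate the cut dimension
    lo, hi = space[dim]
    step = (hi - lo) / cuts
    children = []
    for i in range(cuts):
        sub = list(space)
        sub[dim] = (lo + i * step, lo + (i + 1) * step)
        inside = [r for r in rules
                  if (r.src, r.dst)[dim][0] <= sub[dim][1]
                  and (r.src, r.dst)[dim][1] >= sub[dim][0]]
        children.append(build(inside, tuple(sub), leaf_size, cuts, depth + 1))
    return ("node", dim, lo, step, children)

def classify(node, pkt):
    while node[0] == "node":
        _, dim, lo, step, children = node
        idx = min(int((pkt[dim] - lo) / step), len(children) - 1)
        node = children[idx]
    for r in node[1]:                     # small linear scan ~ CAM lookup
        if r.matches(pkt):
            return r.rid
    return None
```

Because rules that span a cut are replicated into every child they overlap, a packet only ever consults one small leaf, which is what makes the lookup latency deterministic.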

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we introduce a statistical data-correction framework that aims at improving DSP system performance in the presence of unreliable memories. The proposed signal-processing framework implements best-effort error mitigation for signals corrupted by defects in unreliable storage arrays, using a statistical correction function extracted from the signal statistics, a data-corruption model, and an application-specific cost function. An application example from communication systems demonstrates the efficacy of the proposed approach.
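
As a toy illustration of such a correction function (my own construction, not the paper's framework): assume values with a uniform prior are stored in an unreliable 3-bit word whose bits flip independently with probability p. The table below maps each read word to the conditional mean E[x | y], which is the optimal correction under a squared-error cost.

```python
# Illustrative sketch: a statistical correction function for a 3-bit
# memory word whose bits flip i.i.d. with probability P_FLIP. The prior
# is assumed uniform; a real signal prior would pull estimates harder.

BITS, P_FLIP = 3, 0.1
prior = {x: 1 / 2 ** BITS for x in range(2 ** BITS)}   # assumed uniform

def p_read(y, x):
    """Probability of reading word y when x was stored."""
    d = bin(x ^ y).count("1")                          # Hamming distance
    return P_FLIP ** d * (1 - P_FLIP) ** (BITS - d)

def correction_table():
    """Map each possible read word y to the MMSE estimate E[x | y]."""
    table = {}
    for y in range(2 ** BITS):
        post = {x: p_read(y, x) * px for x, px in prior.items()}
        z = sum(post.values())
        table[y] = sum(x * p for x, p in post.items()) / z
    return table
```

For example, reading 0b000 yields the estimate 0.7 rather than 0, because each bit still has a 10% chance of having been stored as 1; with a non-uniform signal prior the correction would deviate further from the raw read value.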

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new packet scheduling scheme called agent-based WFQ to control and maintain QoS parameters in virtual private networks (VPNs) within the confines of adaptive networks. Future networks are expected to be open, heterogeneous environments consisting of more than one network operator. In this adaptive environment, agents act on behalf of users or third-party operators to obtain the best service for their clients and to maintain those services by modifying the scheduling scheme in the routers and switches spanning the VPN. In agent-based WFQ, an agent on each router monitors the accumulated queuing delay for each service. To keep the end-to-end delay within the specified bounds, the weights for services are adjusted dynamically by the agents on the routers spanning the VPN: if the queuing delay of a service increases or decreases, an agent on a downstream router informs the upstream routers to adjust the weights of their queues. This keeps the end-to-end delay of services within the specified bounds and offers better QoS than VPNs using static WFQ. The paper also describes the algorithm for agent-based WFQ and presents simulation results. (C) 2003 Elsevier Science Ltd. All rights reserved.
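
A minimal sketch of the dynamic weight adjustment idea follows; the update rule, gain, and parameter names are assumptions for illustration, not the paper's actual algorithm. An agent compares each service's measured queuing delay against its per-hop budget and shifts WFQ weight toward services that are over budget, renormalising so the weights still sum to one.

```python
# Hypothetical per-router agent step: services over their delay budget
# gain weight, services under budget give it up, and the result is
# renormalised so it remains a valid WFQ weight vector.

def adjust_weights(weights, delays, budgets, gain=0.1):
    """weights, delays, budgets: dicts keyed by service class."""
    adjusted = dict(weights)
    for svc, d in delays.items():
        error = (d - budgets[svc]) / budgets[svc]      # relative overshoot
        adjusted[svc] = max(weights[svc] * (1 + gain * error), 1e-6)
    total = sum(adjusted.values())
    return {svc: w / total for svc, w in adjusted.items()}
```

In the scheme described above, the same correction would also be propagated upstream, so routers earlier on the path compensate before queues at the bottleneck grow further.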

Relevance:

30.00%

Publisher:

Abstract:

The divide-and-conquer approach of local model (LM) networks is a common engineering approach to the identification of complex nonlinear dynamical systems. The global representation is obtained from the weighted sum of locally valid, simpler sub-models defined over small regions of the operating space. Constructing such networks requires determining an appropriate partitioning and the parameters of the LMs. This paper focuses on the structural aspect of LM networks. It compares the computational requirements and performance of the Johansen and Foss (J&F) and LOLIMOT tree-construction algorithms, and proposes several useful and important modifications to each algorithm. The modelling performance is evaluated using real data from a pilot plant of a pH neutralization process. Results show that while J&F achieves a more accurate nonlinear representation of the pH process, LOLIMOT requires significantly less computational effort.
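
LOLIMOT's incremental tree construction can be sketched in a heavily simplified form: one input dimension, crisp interval partitions instead of the usual Gaussian validity functions, and a split of the worst-fitting region in half at each step. All names and settings below are illustrative, not the algorithms compared in the paper.

```python
import numpy as np

# Simplified LOLIMOT-style construction: repeatedly split the interval
# with the largest local squared error and refit an affine model per
# interval. Real LOLIMOT blends local models with Gaussian weights.

def fit_local(x, y):
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef                                       # slope, intercept

def lolimot(x, y, n_models=4):
    intervals = [(x.min(), x.max() + 1e-9)]
    while len(intervals) < n_models:
        def sse(iv):                                  # local squared error
            m = (x >= iv[0]) & (x < iv[1])
            if m.sum() < 2:
                return -1.0
            a, b = fit_local(x[m], y[m])
            return float(((a * x[m] + b - y[m]) ** 2).sum())
        worst = max(intervals, key=sse)
        lo, hi = worst
        intervals.remove(worst)
        intervals += [(lo, 0.5 * (lo + hi)), (0.5 * (lo + hi), hi)]
    return [(iv, fit_local(x[(x >= iv[0]) & (x < iv[1])],
                           y[(x >= iv[0]) & (x < iv[1])]))
            for iv in intervals]

def predict(models, xq):
    for (lo, hi), (a, b) in models:
        if lo <= xq < hi:
            return a * xq + b
    (_, _), (a, b) = models[-1]                       # fallback: last model
    return a * xq + b
```

On a V-shaped target such as y = |x - 0.5|, a single split at the error peak recovers the two linear branches exactly, which is the greedy behaviour that makes LOLIMOT cheap compared with exhaustive partitioning.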

Relevance:

30.00%

Publisher:

Abstract:

A novel methodology is proposed for the development of neural network models for complex engineering systems exhibiting nonlinearity. This method performs neural network modeling by first establishing some fundamental nonlinear functions from a priori engineering knowledge, which are then constructed and coded into appropriate chromosome representations. Given a suitable fitness function, and using evolutionary approaches such as genetic algorithms, a population of chromosomes evolves over a number of generations to finally produce a neural network model that best fits the system data. The objective is to improve the transparency of the neural networks, i.e. to produce physically meaningful models.
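
A toy version of the described scheme, with all settings assumed for illustration: candidate nonlinear basis functions drawn from prior knowledge are encoded as a bitmask chromosome, and a simple genetic algorithm evolves the mask with a fitness that rewards low least-squares error and penalises model size, favouring transparent models.

```python
import random
import numpy as np

# Hypothetical sketch: chromosomes are bitmasks over candidate basis
# functions; fitness = -(MSE of least-squares fit + parsimony penalty).
# The basis set, GA settings, and penalty are my own assumptions.

BASES = [lambda x: x, lambda x: x ** 2, np.sin, np.cos]

def fitness(mask, x, y):
    cols = [f(x) for f, bit in zip(BASES, mask) if bit]
    if not cols:
        return -float(np.mean(y ** 2))            # empty model: very poor
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    mse = float(np.mean((A @ coef - y) ** 2))
    return -(mse + 1e-3 * sum(mask))              # parsimony pressure

def evolve(x, y, pop_size=20, gens=30, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in BASES) for _ in range(pop_size - 1)]
    pop.append((1,) * len(BASES))                 # full model as a baseline
    best = max(pop, key=lambda m: fitness(m, x, y))
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)             # binary tournament
            parent = a if fitness(a, x, y) >= fitness(b, x, y) else b
            nxt.append(tuple(bit ^ (rng.random() < 0.1) for bit in parent))
        pop = nxt
        gen_best = max(pop, key=lambda m: fitness(m, x, y))
        if fitness(gen_best, x, y) > fitness(best, x, y):
            best = gen_best
    return best
```

Because the fitted coefficients attach to named basis functions rather than hidden-layer weights, the evolved model can be read off directly, which is the transparency argument made above.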

Relevance:

30.00%

Publisher:

Abstract:

Using seven strategically placed, time-synchronized body-worn receivers covering the head, upper front and back torso, and the limbs, we have investigated the effect of user state (stationary or mobile) and local environment (anechoic chamber, open office area, and hallway) upon the first- and second-order statistics of on-body fading channels. Three candidate models were considered: Nakagami, Rice, and lognormal. Using maximum likelihood estimation and the Akaike information criterion, it was established that the Nakagami-m distribution best described small-scale fading for the majority of on-body channels across all measurement scenarios. When the user was stationary, Nakagami-m parameters were found to be much greater than 1, irrespective of the local surroundings. For mobile channels, Nakagami-m parameters decreased significantly, with channels in the open office area and hallway experiencing the worst fading conditions.
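
The model-selection step can be sketched as follows. This is a back-of-the-envelope version, not the measurement campaign's actual pipeline: the Nakagami m parameter is estimated with the standard inverse normalised variance (moment) estimator rather than full maximum likelihood, and two candidates are ranked by AIC = 2k - 2 ln L.

```python
import numpy as np
from math import lgamma

# Illustrative model comparison for fading envelopes r > 0:
# Nakagami-m (2 parameters) versus Rayleigh (1 parameter), ranked by AIC.

def nakagami_m(r):
    """Moment-based estimate: m = (E[r^2])^2 / Var(r^2)."""
    p = r ** 2
    return float(np.mean(p) ** 2 / np.var(p))

def nakagami_loglik(r, m, omega):
    return float(np.sum(np.log(2.0) + m * np.log(m / omega) - lgamma(m)
                        + (2 * m - 1) * np.log(r) - m * r ** 2 / omega))

def rayleigh_loglik(r, sigma2):
    return float(np.sum(np.log(r / sigma2) - r ** 2 / (2 * sigma2)))

def compare_aic(r):
    m, omega = nakagami_m(r), float(np.mean(r ** 2))
    return {"nakagami": 2 * 2 - 2 * nakagami_loglik(r, m, omega),
            "rayleigh": 2 * 1 - 2 * rayleigh_loglik(r, omega / 2)}
```

For an envelope with a strong line-of-sight component, the estimated m comes out well above 1 and the Nakagami AIC is far lower than the Rayleigh one, mirroring the stationary-user finding reported above.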

Relevance:

30.00%

Publisher:

Abstract:

In this letter, we investigate the distribution of the phase component of the complex received signal observed in practical body area network experiments. Two phase distributions, the recently proposed kappa-mu and eta-mu probability densities, which together encompass the most widely used fading models, namely Semi-Gaussian, Rayleigh, Hoyt, Rice, and Nakagami-m, were compared with measurement data. The kappa-mu distribution was found to provide the best fit over a range of on-body links while the user was mobile. The experiments were carried out in two dissimilar indoor environments at opposite ends of the multipath spectrum. It was also found that the uniform phase distribution did not arise in any of the experiments.

Relevance:

30.00%

Publisher:

Abstract:

Background
Inferring gene regulatory networks from large-scale expression data is an important problem that has received much attention in recent years. Such networks have the potential to provide insights into the causal molecular interactions underlying biological processes. Hence, from a methodological point of view, reliable estimation methods based on observational data are needed to approach this problem practically.

Results
In this paper, we introduce a novel gene regulatory network inference (GRNI) algorithm, called C3NET. We compare C3NET with four well-known methods, ARACNE, CLR, MRNET and RN, conducting in-depth numerical ensemble simulations, and demonstrate, also for biological expression data from E. coli, that C3NET performs consistently better than the best known GRNI methods in the literature. In addition, it has low computational complexity. Since C3NET is based on estimates of mutual information values in conjunction with a maximization step, our numerical investigations demonstrate that our inference algorithm exploits causal structural information in the data efficiently.

Conclusions
For systems biology to succeed in the long run, it is of crucial importance to establish methods that extract large-scale gene networks from high-throughput data and that reflect the underlying causal interactions among genes or gene products. Our method can contribute to this endeavor by demonstrating that an inference algorithm with a neat design not only permits a more intuitive and possibly biological interpretation of its working mechanism but can also deliver superior results.
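
The core of C3NET as described above, each gene contributing at most its single maximum mutual information edge, can be sketched compactly. For illustration, MI is computed here under a Gaussian assumption from the Pearson correlation, MI = -0.5 ln(1 - rho^2); the actual method uses nonparametric MI estimates together with a significance test.

```python
import numpy as np

# Minimal C3NET-style inference: compute a pairwise MI matrix, then let
# each gene keep only the edge with its maximal MI. The Gaussian MI
# shortcut from correlations is an assumption made for this sketch.

def c3net(expr):
    """expr: (genes x samples) array. Returns a set of undirected edges."""
    rho = np.corrcoef(expr)
    np.fill_diagonal(rho, 0.0)                 # exclude self-edges
    mi = -0.5 * np.log(np.clip(1 - rho ** 2, 1e-12, None))
    edges = set()
    for g in range(mi.shape[0]):
        partner = int(np.argmax(mi[g]))        # strongest neighbour of g
        if mi[g, partner] > 0:
            edges.add(frozenset((g, partner)))
    return edges
```

Keeping one edge per gene is what gives the method its low computational complexity: after the MI matrix, inference is a single row-wise maximization.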

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present an approach to quantum cloning with unmodulated spin networks. The cloner is realized by a proper design of the network and a choice of the coupling between the qubits. We show that in the case of the phase-covariant cloner the XY coupling gives the best results. In 1 -> 2 cloning we find that the fidelity of the optimal cloner is achieved, and values comparable to the optimal ones can be attained in the general N -> M case. If a suitable set of network symmetries is satisfied, the output fidelity of the clones does not depend on the specific choice of the graph. We show that spin-network cloning is robust against the presence of static imperfections; moreover, in the presence of noise, it outperforms the conventional approach. In this case the fidelity exceeds the corresponding value obtained by quantum gates even for a very small amount of noise. Furthermore, we show how to use this method to clone qutrits and qudits. By means of the Heisenberg coupling it is also possible to implement the universal cloner, although in this case the fidelity is 10% below that of the optimal cloner.

Relevance:

30.00%

Publisher:

Abstract:

Usage of anticoagulant rodenticides (ARs) is an integral component of modern agriculture and is essential for the control of commensal rodent populations. However, the extensive deployment of ARs has led to widespread exposure of a range of non-target predatory birds and mammals to some compounds, in particular the second-generation anticoagulant rodenticides (SGARs). As a result, considerable effort has been put into devising voluntary best-practice guidelines that increase the efficacy of rodent control and reduce the risk of non-target exposure. Currently, there is limited published information on actual practice amongst users or on the implementation of best practice. We assessed the behaviour of a typical group of users using an on-farm questionnaire survey. Most respondents baited for rodents every year using SGARs. Most were apparently aware of the risks of non-target exposure and adhered to some of the best-practice recommendations, but total compliance was rare. Our questionnaire revealed that users of first-generation anticoagulant rodenticides rarely protected or checked bait stations, and so took little effort to prevent primary exposure of non-target species. Users almost never searched for and removed poisoned carcasses, and many baited for prolonged periods or permanently. These factors are all likely to increase the likelihood of primary and secondary exposure of non-target species. (C) 2010 Published by Elsevier Ltd.