58 results for network performance
Abstract:
This paper compares two methods for predicting inflation rates in Europe. One uses a standard back-propagation neural network; the other uses an evolutionary approach, in which both the network weights and the network architecture are evolved. Results indicate that back-propagation produces superior results. However, the evolved network still produces reasonable results, with the advantage that the experimental set-up is minimal. Also of interest is the finding that the Divisia measure of money is a superior predictive tool compared with the simple-sum measure.
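As a rough illustration of the back-propagation baseline described above, the sketch below trains a one-hidden-layer network by plain back-propagation on synthetic data. The data, architecture, and learning rate are illustrative assumptions, not the paper's set-up.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Synthetic (input, target) pairs standing in for a monetary indicator and
# next-period inflation -- purely illustrative, not the paper's data.
data = [([m / 12.0], 0.5 + 0.3 * math.sin(m / 4.0)) for m in range(24)]

H = 3                                       # hidden units (assumed)
w_in = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b_h = [0.0] * H
w_out = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
b_o = 0.0
lr = 0.3

def forward(x):
    h = [sigmoid(w_in[j] * x[0] + b_h[j]) for j in range(H)]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(H)) + b_o)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
for _ in range(500):                        # plain stochastic back-propagation
    for x, t in data:
        h, y = forward(x)
        d_o = (y - t) * y * (1 - y)         # output-layer delta
        for j in range(H):
            d_h = d_o * w_out[j] * h[j] * (1 - h[j])  # hidden-layer delta
            w_out[j] -= lr * d_o * h[j]
            w_in[j] -= lr * d_h * x[0]
            b_h[j] -= lr * d_h
        b_o -= lr * d_o
loss_after = mse()
print(loss_before, loss_after)
```

Training drives the mean squared error down from its random-initialization value; the evolutionary alternative in the paper would instead search over both the weights and the architecture.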
Abstract:
Medium access control (MAC) protocols have a large impact on the achievable system performance of wireless ad hoc networks. Because of the limitations of existing analytical models for ad hoc networks, many researchers have opted to study the impact of MAC protocols via discrete-event simulations. However, as network scenarios, traffic patterns and physical-layer techniques may change significantly, simulation alone is not an efficient way to gain insight into the impact of MAC protocols on system performance. In this paper, we analyze the performance of the IEEE 802.11 distributed coordination function (DCF) in a multihop network scenario. We are particularly interested in understanding how physical-layer techniques may affect MAC protocol performance. For this purpose, the features of the interference range are studied and taken into account in the analytical model. Simulations with OPNET show the effectiveness of the proposed analytical approach. Copyright 2005 ACM.
Abstract:
Smart cameras allow video data to be pre-processed on the camera instead of being sent to a remote server for further analysis. A network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application introduces several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as when cameras in the network try to track objects while also trying to keep communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behavior of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and hence is highly portable between different operating systems. Relaxing various problems of computer vision and network communication enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for cameras to use.
Abstract:
This article proposes a Bayesian neural network approach to determine the risk of re-intervention after endovascular aortic aneurysm repair surgery. The aim of the proposed technique is to determine which patients have a high chance of re-intervention (high-risk patients) and which do not (low-risk patients) five years after surgery. Two censored datasets relating to the clinical conditions of aortic aneurysms were collected from two different vascular centers in the United Kingdom. A Bayesian network was first employed to resolve the censoring issue in the datasets. Then, a back-propagation neural network model was built using the uncensored data of the first center to predict re-intervention at the second center and classify the patients into high-risk and low-risk groups. Kaplan-Meier curves were plotted for each group of patients separately to show whether there is a significant difference between the two risk groups. Finally, the logrank test was applied to determine whether the neural network model was capable of predicting and distinguishing between the two risk groups. The results show that the Bayesian network used for uncensoring the data improved the performance of the neural networks that were built for the two centers separately. More importantly, the neural network trained with uncensored data of the first center was able to predict and discriminate between groups at low risk and high risk of re-intervention five years after endovascular aortic aneurysm surgery at center 2 (p = 0.0037 in the logrank test).
Abstract:
This paper investigates a neural network-based probabilistic decision support system to assess drivers' knowledge, with the objective of developing a renewal policy for driving licences. The probabilistic model correlates drivers' demographic data with their results in a simulated written driving exam (SWDE). The probabilistic decision support system classifies drivers into two groups: those passing and those failing a SWDE. Assessing drivers' knowledge within a probabilistic framework allows uncertainty information to be quantified and incorporated into the decision-making system. The results obtained in a Jordanian case study indicate that the performance of the probabilistic decision support system is more reliable than that of conventional deterministic decision support systems. Implications of the proposed probabilistic decision support system for the decision to renew driving licences, and the possibility of including extra assessment methods, are discussed.
Abstract:
We propose an artificial neural network (ANN) equalizer for transmission-performance enhancement of coherent optical OFDM (C-OOFDM) signals. The ANN equalizer was more effective at combating both chromatic dispersion (CD) and single-mode fibre (SMF)-induced non-linearities than the least mean square (LMS) algorithm. The equalizer can offer a 1.5 dB improvement in optical signal-to-noise ratio (OSNR) over the LMS algorithm for 40 Gbit/s C-OOFDM signals when considering only CD. It is also revealed that the ANN can double the transmission distance, up to 320 km of SMF, compared with the LMS case, providing a non-linearity tolerance improvement of ∼0.7 dB OSNR.
Abstract:
This paper examines the extent to which both network structure and spatial factors impact on the organizational performance of universities as measured by the generation of industrial research income. Drawing on data concerning the interactions of universities in the UK with large research and development (R&D)-intensive firms, the paper employs both social network analysis and regression analysis. It is found that the structural position of a university within networks with large R&D-intensive firms is significantly associated with the level of research income gained from industry. Spatial factors, on the other hand, are not found to be clearly associated with performance, suggesting that universities operate on a level playing field across regional environments once other factors are controlled for.
Abstract:
Background: Lifelong surveillance after endovascular repair (EVAR) of abdominal aortic aneurysms (AAA) is considered mandatory to detect potentially life-threatening endograft complications. A minority of patients require reintervention but cannot be predictively identified by existing methods. This study aimed to improve the prediction of endograft complications and mortality through the application of machine-learning techniques. Methods: Patients undergoing EVAR at 2 centres were studied from 2004 to 2010. Pre-operative aneurysm morphology was quantified and endograft complications were recorded up to 5 years following surgery. An artificial neural network (ANN) approach was used to predict whether patients would be at low or high risk of endograft complications (aortic/limb) or mortality. Centre 1 data were used for training and centre 2 data for validation. ANN performance was assessed by Kaplan-Meier analysis to compare the incidence of aortic complications, limb complications, and mortality in patients predicted to be low-risk versus those predicted to be high-risk. Results: 761 patients aged 75 ± 7 years underwent EVAR. Mean follow-up was 36 ± 20 months. An ANN was created from morphological features including angulation, length, areas, diameters, volume, and tortuosity of the aneurysm neck, sac, and iliac segments. ANN models predicted endograft complications and mortality with excellent discrimination between a low-risk and a high-risk group. In external validation, the 5-year rates of freedom from aortic complications, limb complications and mortality were 95.9% vs 67.9%, 99.3% vs 92.0%, and 87.9% vs 79.3% respectively (p < 0.001). Conclusion: This study presents ANN models that stratify the 5-year risk of endograft complications or mortality using routinely available pre-operative data.
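The Kaplan-Meier comparison used for validation can be sketched with a minimal product-limit estimator. The follow-up times and event flags below are hypothetical, not the study's data.

```python
# Minimal Kaplan-Meier (product-limit) estimator, stdlib only.
def kaplan_meier(times, events):
    """Return [(t, S(t))] at event times; events[i] = 1 for an event, 0 if censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i]:
            s *= (at_risk - 1) / at_risk   # survival drops only at events
            curve.append((times[i], s))
        at_risk -= 1                       # censored subjects leave the risk set
    return curve

# Hypothetical follow-up (months) for two predicted risk groups.
low_risk  = kaplan_meier([12, 24, 36, 48, 60], [0, 0, 1, 0, 0])
high_risk = kaplan_meier([6, 10, 18, 30, 44],  [1, 1, 1, 0, 1])
print(low_risk)
print(high_risk)
```

A clear separation of the two curves, confirmed by a logrank test, is what would indicate that the risk stratification discriminates well.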
Abstract:
In wireless sensor networks where nodes are powered by batteries, it is critical to prolong the network lifetime by minimizing the energy consumption of each node. In this paper, cooperative multiple-input-multiple-output (MIMO) and data-aggregation techniques are jointly adopted to reduce the energy consumption per bit in wireless sensor networks, by reducing the amount of data for transmission and making better use of network resources through cooperative communication. For this purpose, we derive a new energy model for a cluster-based sensor network employing the combined techniques, which considers both the correlation between data generated by nodes and the distance between them. Using this model, the effect of cluster size on the average energy consumption per node can be analyzed. It is shown that the energy efficiency of the network can be significantly enhanced in cooperative MIMO systems with data aggregation, compared with either cooperative MIMO systems without data aggregation or data-aggregation systems without cooperative MIMO, provided the sensor nodes are properly clustered. Both centralized and distributed data-aggregation schemes for the cooperating nodes to exchange and compress their data are also proposed and appraised, which lead to diverse impacts of data correlation on the energy performance of the integrated cooperative MIMO and data-aggregation systems.
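A toy version of the cluster-size analysis can be sketched as follows. The cost constants, correlation factor, and compression rule are illustrative assumptions, not the paper's derived energy model; the point is only that an intermediate cluster size minimizes the per-node energy when intra-cluster exchange, aggregation gain, and shared long-haul cost trade off.

```python
# Toy per-node energy model (illustrative constants, not the paper's):
# intra-cluster exchange grows with cluster size n, the long-haul
# cooperative-MIMO cost is shared across the n cooperating nodes, and
# aggregation compresses correlated readings before transmission.
def energy_per_node(n, rho=0.8, e_local=1.0, e_longhaul=50.0):
    compressed = 1 + (n - 1) * (1 - rho)   # bits left after aggregating n correlated readings
    intra = e_local * (n - 1)              # cost of exchanging data within the cluster
    longhaul = e_longhaul * compressed / n # long-haul cost shared by n cooperating nodes
    return intra + longhaul

best = min(range(1, 21), key=energy_per_node)
print(best, energy_per_node(best))
```

With these assumed constants the optimum is an interior cluster size: too small a cluster forfeits aggregation and cost sharing, too large a cluster pays too much for intra-cluster exchange.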
Abstract:
The multiple-input multiple-output (MIMO) technique can be used to improve the performance of ad hoc networks. Various medium access control (MAC) protocols with multiple contention slots have been proposed to exploit spatial multiplexing for increasing the transport throughput of MIMO ad hoc networks. However, the existence of multiple request-to-send/clear-to-send (RTS/CTS) contention slots represents a severe overhead that limits the improvement in transport throughput achieved by spatial multiplexing. In addition, when the number of contention slots is fixed, the efficiency of RTS/CTS contention is affected by the transmitting power of network nodes. In this study, a joint optimisation scheme over both the transmitting power and the number of contention slots for maximising the transport throughput is presented. This includes the establishment of an analytical model of a simplified MAC protocol with multiple contention slots, the derivation of transport throughput as a function of both the transmitting power and the number of contention slots, and an optimisation process based on the transport throughput formula derived. The analytical results obtained, verified by simulation, show that much higher transport throughput can be achieved using the proposed joint optimisation scheme, compared with the non-optimised cases and the results previously reported.
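The joint optimisation can be illustrated with a grid search over a toy throughput surface. The objective below is an invented stand-in for the paper's derived transport-throughput formula, and every constant is an assumption; it only captures the qualitative trade-offs (more slots resolve more contentions but add overhead, higher power raises link quality but hurts spatial reuse).

```python
# Toy transport-throughput surface (illustrative, not the paper's model).
def transport_throughput(power, slots, slot_overhead=0.05):
    success = 1 - (1 - min(power / 10.0, 1.0)) ** slots  # contention resolved in some slot
    reuse = 1.0 / (1.0 + 0.3 * power)                    # spatial-reuse penalty of high power
    payload = max(0.0, 1.0 - slot_overhead * slots)      # RTS/CTS slot overhead
    return success * reuse * payload

# Joint optimisation by exhaustive search over the (power, slots) grid.
best = max(((p, m) for p in range(1, 11) for m in range(1, 16)),
           key=lambda pm: transport_throughput(*pm))
print(best, round(transport_throughput(*best), 3))
```

The jointly optimised pair beats any single-variable extreme, which is the qualitative message of the abstract's optimisation scheme.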
Abstract:
Purpose: This paper aims to explore the role of internal and external knowledge-based linkages across the supply chain in achieving better operational performance. It investigates how knowledge is accumulated, shared, and applied to create organization-specific knowledge resources that increase and sustain the organization's competitive advantage. Design/methodology/approach: This paper uses a single case study with multiple, embedded units of analysis, and social network analysis (SNA), to demonstrate the impact of internal and external knowledge-based linkages across multiple tiers in the supply chain on organizational operational performance. The focal company of the case study is an Italian manufacturer supplying rubber components to European automotive enterprises. Findings: With the aid of SNA, the internal knowledge-based linkages can be mapped and visualized. We found that the most central nodes, having the most connections with other nodes in the linkages, are the most crucial members in terms of knowledge exploration and exploitation within the organization. We also revealed that the effective management of external knowledge-based linkages, such as those with the buyer company, competitors, universities, suppliers, and subcontractors, can help improve operational performance. Research limitations/implications: First, our hypothesis was tested on a single case. The analysis of multiple case studies using SNA would provide a deeper understanding of the relationship between knowledge-based linkages at all levels of the supply chain and the integration of knowledge. Second, only the static nature of knowledge flows was studied in this research. Future research could also consider ongoing monitoring of dynamic linkages and the dynamic characteristics of knowledge flows.
Originality/value: To the best of our knowledge, the phrase 'knowledge-based linkages' has not been used in the literature, and there is a lack of investigation into the relationship between the management of internal and external knowledge-based linkages and operational performance. To bridge this knowledge gap, this paper shows the importance of understanding the composition and characteristics of knowledge-based linkages and their knowledge nodes. In addition, it shows that effective management of knowledge-based linkages leads to the creation of new knowledge and improves organizations' operational performance.
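The centrality reading of the findings can be illustrated with a minimal degree-centrality computation, the most basic SNA measure of how connected a knowledge node is. The actors and ties below are hypothetical, not the case study's actual network.

```python
from collections import defaultdict

# Hypothetical knowledge-exchange ties between internal functions and
# external partners (not the case study's actors).
edges = [("R&D", "Quality"), ("R&D", "Production"), ("R&D", "Buyer"),
         ("Quality", "Production"), ("Buyer", "Supplier"),
         ("R&D", "University")]

# Degree centrality: count of ties incident to each node.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

most_central = max(degree, key=degree.get)
print(most_central, degree[most_central])
```

The node with the highest degree corresponds to the "most central node" the findings identify as crucial for knowledge exploration and exploitation.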
Abstract:
Erasure control coding has been exploited in communication networks with the aim of improving the end-to-end performance of data delivery across the network. To address concerns over the strengths and constraints of erasure coding schemes in this application, we examine the performance limits of two erasure control coding strategies: forward erasure recovery and adaptive erasure recovery. Our investigation shows that the throughput of a network using an (n, k) forward erasure control code is capped by r = k/n when the packet loss rate p ≤ t_e/n, and by k(1 − p)/(n − t_e) when p > t_e/n, where t_e is the erasure control capability of the code. It also shows that the lower bound of the residual loss rate of such a network is (np − t_e)/(n − t_e) for t_e/n < p ≤ 1. In particular, if the code used is maximum distance separable, the Shannon capacity of the erasure channel, i.e. 1 − p, can be achieved, and the residual loss rate is lower bounded by (p + r − 1)/r for 1 − r < p ≤ 1. To address the requirements of real-time applications, we also investigate the service completion time of the different schemes. It is revealed that the latency of the forward erasure recovery scheme is fractionally higher than that of a scheme without erasure control coding or retransmission mechanisms (using UDP), but much lower than that of the adaptive erasure scheme when the packet loss rate is high. Results from comparisons between the two erasure control schemes exhibit their advantages as well as disadvantages in the role of delivering end-to-end services. To show the impact of the derived bounds on the end-to-end performance of a TCP/IP network, a case study is provided to demonstrate how erasure control coding could be used to maximize the performance of practical systems. © 2010 IEEE.
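The throughput and residual-loss bounds quoted above translate directly into code. The example parameters (an RS(255, 223)-like code with an assumed erasure-control capability t_e = 32) are illustrative choices, not values from the paper.

```python
# Bounds for an (n, k) forward erasure-control code with erasure-control
# capability t_e, as stated in the abstract.
def throughput_cap(n, k, t_e, p):
    r = k / n
    # capped at the code rate r for low loss, degraded above t_e/n
    return r if p <= t_e / n else k * (1 - p) / (n - t_e)

def residual_loss_lower_bound(n, t_e, p):
    # residual loss appears only once the loss rate exceeds t_e/n
    return 0.0 if p <= t_e / n else (n * p - t_e) / (n - t_e)

n, k, t_e = 255, 223, 32                     # assumed example code
print(throughput_cap(n, k, t_e, 0.05))       # p below t_e/n: capped at k/n
print(throughput_cap(n, k, t_e, 0.25))       # p above t_e/n: reduced cap
print(residual_loss_lower_bound(n, t_e, 0.25))
```

For a maximum-distance-separable code the first cap coincides with the Shannon limit 1 − p whenever p ≤ 1 − r, matching the special case noted in the abstract.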
Abstract:
In this paper, the problem of semantic place categorization in mobile robotics is addressed by considering a time-based probabilistic approach called the dynamic Bayesian mixture model (DBMM), which is an improved variation of the dynamic Bayesian network. More specifically, multi-class semantic classification is performed by a DBMM composed of a mixture of heterogeneous base classifiers, using geometrical features computed from 2D laser-scanner data, where the sensor is mounted on board a moving robot operating indoors. Besides its capability to combine different probabilistic classifiers, the DBMM approach also incorporates time-based (dynamic) inferences in the form of previous class-conditional probabilities and priors. Extensive experiments were carried out on publicly available benchmark datasets, highlighting the influence of the number of time-slices and the effect of additive smoothing on the classification performance of the proposed approach. Reported results, under different scenarios and conditions, show the effectiveness and competitive performance of the DBMM.
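A single time-slice of the mixture-plus-dynamic-prior idea can be sketched as follows. The mixture weights, class likelihoods, and smoothing constant are illustrative assumptions, not the paper's learned values: base-classifier outputs are mixed, reweighted by the previous time-slice's beliefs, smoothed additively, and renormalized.

```python
# One update step of a dynamic Bayesian mixture over place classes.
def dbmm_step(base_outputs, mix_weights, prev_belief, alpha=0.01):
    classes = range(len(prev_belief))
    # weighted mixture of the heterogeneous base classifiers
    mixture = [sum(w * out[c] for w, out in zip(mix_weights, base_outputs))
               for c in classes]
    # additive smoothing, then reweighting by the previous time-slice belief
    post = [(mixture[c] + alpha) * prev_belief[c] for c in classes]
    z = sum(post)
    return [p / z for p in post]            # renormalize to a distribution

# Two assumed base classifiers over 3 place classes (corridor, office, doorway).
clf1 = [0.6, 0.3, 0.1]
clf2 = [0.5, 0.4, 0.1]
belief = [1 / 3] * 3                        # uniform belief at t = 0
belief = dbmm_step([clf1, clf2], [0.7, 0.3], belief)
print([round(b, 3) for b in belief])
```

Iterating this step over consecutive scans is what lets the temporal (dynamic) component stabilize the per-scan classifications.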