26 results for scale-free networks
in Aston University Research Archive
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analysing network failures caused by hardware faults or overload. There, the network reaction was modelled as rerouting of traffic away from failed or congested elements. Here we model the network reaction to congestion on much shorter time scales, when the input traffic rate through congested routes is reduced. As an example we consider the Internet, where local mismatch between demand and capacity results in traffic losses. We describe the onset of congestion as a phase transition characterised by strong, albeit relatively short-lived, fluctuations of losses caused by noise in input traffic and exacerbated by the heterogeneous nature of the network, manifested in a power-law load distribution. The fluctuations may result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © 2013 IEEE.
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analyzing network failures caused by hardware faults or overload, where the network reaction was modeled as rerouting of traffic away from failed or congested elements. Here we model another type of network reaction to congestion - a sharp reduction of the input traffic rate through congested routes, which occurs on much shorter time scales. We consider the onset of congestion in the Internet, where local mismatch between demand and capacity results in traffic losses, and show that it can be described as a phase transition characterized by strong non-Gaussian loss fluctuations at a mesoscopic time scale. The fluctuations, caused by noise in input traffic, are exacerbated by the heterogeneous nature of the network, manifested in a scale-free load distribution. They result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © Copyright EPLA, 2012.
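A minimal sketch of the qualitative picture described above, under assumptions that are not taken from the papers (a Barabási-Albert topology, edge betweenness as a proxy for mean link load, Gaussian noise on demand, and a fixed capacity margin): losses appear only where noisy demand exceeds capacity, and they grow as the noise level pushes heavily loaded links past their limits.

```python
# Illustrative toy only (not the papers' model): loss fluctuations near the
# onset of congestion on a scale-free network. Topology, load proxy, noise
# model and capacity margin are all assumptions made for this sketch.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(200, 2, seed=0)          # scale-free topology
load = nx.edge_betweenness_centrality(G)              # proxy for mean link load
margin = 1.05                                         # assumed capacity margin over mean load
capacity = {e: margin * l for e, l in load.items()}

def mean_loss(noise, trials=200):
    """Average traffic lost when noisy demand exceeds link capacity."""
    lost = 0.0
    for _ in range(trials):
        for e, mean in load.items():
            demand = mean * (1.0 + random.gauss(0.0, noise))
            lost += max(0.0, demand - capacity[e])
    return lost / trials

for noise in (0.02, 0.05, 0.10, 0.20):
    print(f"noise={noise:.2f}  mean loss={mean_loss(noise):.5f}")
```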
Abstract:
Dedicated short range communications (DSRC) has been regarded as one of the most promising technologies for providing robust communications in large-scale vehicle networks. It is designed to support both road safety and commercial applications. Road safety applications require reliable and timely wireless communications, but as the medium access control (MAC) layer of DSRC is based on the IEEE 802.11 distributed coordination function (DCF), it is well known that random-channel-access-based MAC cannot provide guaranteed quality of service (QoS). It is therefore important to understand the quantitative performance of DSRC in order to make better decisions on its adoption, control, adaptation and improvement. In this paper, we propose an analytical model to evaluate DSRC-based inter-vehicle communication. We investigate the impact of the channel access parameters associated with the different services, including the arbitration inter-frame space (AIFS) and the contention window (CW). Based on the proposed model, we analyze the successful message delivery ratio and the channel service delay for broadcast messages. The proposed analytical model provides a convenient tool for evaluating inter-vehicle safety applications and analyzing the suitability of DSRC for road safety applications.
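As a rough illustration of how the contention window enters such an analysis, the sketch below uses a much-simplified slotted model that is not the paper's analytic model: each of n vehicles transmits in a slot with probability roughly 2/(CW+1), and a broadcast succeeds only if no other vehicle transmits in the same slot.

```python
# Much-simplified slotted sketch (not the paper's analytic model): per-slot
# transmit probability tau is approximated as 2/(CW + 1), ignoring backoff
# freezing, and a broadcast succeeds when none of the other n-1 vehicles
# transmits in the same slot.
def broadcast_success_prob(n_vehicles: int, cw: int) -> float:
    tau = 2.0 / (cw + 1)                       # crude per-slot transmit probability
    return (1.0 - tau) ** (n_vehicles - 1)

for cw in (15, 31, 63):                        # CW values per access category are assumptions
    for n in (10, 50, 100):
        print(f"CW={cw:3d}  n={n:3d}  P(success)={broadcast_success_prob(n, cw):.3f}")
```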
Abstract:
In this paper, we study the localization problem in large-scale Underwater Wireless Sensor Networks (UWSNs). Unlike in terrestrial positioning, the global positioning system (GPS) cannot work efficiently underwater. The limited bandwidth, the severely impaired channel and the cost of underwater equipment all make the localization problem very challenging, and most current localization schemes are not well suited to deep underwater environments. We propose a hierarchical localization scheme to address these challenges. The scheme mainly consists of four types of nodes: surface buoys, Detachable Elevator Transceivers (DETs), anchor nodes and ordinary nodes. Surface buoys are assumed to be equipped with GPS on the water surface. A DET is attached to a surface buoy and can rise and descend to broadcast its position. Anchor nodes can compute their positions from the position information broadcast by the DETs and the measured distances to the DETs. The hierarchical localization scheme is scalable and can be used to balance cost against localization accuracy. Initial simulation results show the advantages of the proposed scheme. © 2009 IEEE.
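As an illustration of the anchor-node step, the sketch below uses generic linearised least-squares multilateration; the DET positions, noise level and solver choice are assumptions for the example, not the paper's exact method.

```python
# Generic least-squares multilateration sketch (assumed, not the paper's exact
# method): an anchor estimates its position from DET broadcast positions and
# measured ranges by linearising the range equations against the first DET.
import numpy as np

def multilaterate(det_positions, distances):
    """det_positions: (k, 3) known DET coordinates; distances: (k,) measured ranges."""
    p0, d0 = det_positions[0], distances[0]
    A = 2.0 * (det_positions[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(det_positions[1:]**2, axis=1) - np.sum(p0**2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

np.random.seed(1)
dets = np.array([[0.0, 0.0, -10.0], [100.0, 0.0, -10.0],
                 [0.0, 100.0, -10.0], [100.0, 100.0, -50.0]])
true_pos = np.array([40.0, 30.0, -80.0])
ranges = np.linalg.norm(dets - true_pos, axis=1) + np.random.normal(0.0, 0.5, 4)
print("estimated anchor position:", multilaterate(dets, ranges))
```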
Abstract:
In this paper, we study an area localization problem in large-scale Underwater Wireless Sensor Networks (UWSNs). The limited bandwidth, the severely impaired channel and the cost of underwater equipment all make the underwater localization problem very challenging, and exact localization is very difficult for UWSNs in deep underwater environments. We propose a mobile-DET-based efficient 3D multi-power Area Localization Scheme (3D-MALS) to address this challenging problem. The proposed scheme combines the ideas of the 2D multi-power Area Localization Scheme (2D-ALS) [6] and of utilizing a Detachable Elevator Transceiver (DET) to achieve simplicity, location accuracy, scalability and low cost. The DET can rise and descend to broadcast its position, and it is assumed that all underwater nodes have pressure sensors and know their z coordinates. The simulation results show that the proposed scheme is very efficient. © 2009 IEEE.
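The sketch below illustrates only the general multi-power area idea, with ranges, DET stop positions and grid all invented for the example rather than taken from 3D-MALS: a node that hears a given power level but not the next lower one lies in the corresponding annulus, and intersecting annuli from several DET stops bounds its horizontal area, while its depth is known from the pressure sensor.

```python
# Hedged illustration of the multi-power area idea only (ranges, DET stops and
# grid are invented for this sketch, not taken from 3D-MALS).
import numpy as np

ranges = [20.0, 40.0, 60.0, 80.0]                 # assumed nominal range per power level
det_stops = [(0.0, 0.0), (70.0, 0.0), (0.0, 70.0)]
node_xy = np.array([35.0, 25.0])                  # ground truth; z known from pressure sensor

def lowest_level_heard(det_xy):
    d = np.linalg.norm(node_xy - np.array(det_xy))
    for k, r in enumerate(ranges):
        if d <= r:
            return k                              # lowest power level the node can hear
    return None

# Keep grid points consistent with every annulus constraint.
xs, ys = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
feasible = np.ones_like(xs, dtype=bool)
for det_xy in det_stops:
    k = lowest_level_heard(det_xy)
    if k is None:
        continue                                  # node out of range of this DET stop
    d = np.hypot(xs - det_xy[0], ys - det_xy[1])
    inner = ranges[k - 1] if k > 0 else 0.0
    feasible &= (d <= ranges[k]) & (d > inner)
print("area centroid estimate:", xs[feasible].mean(), ys[feasible].mean(), "true:", node_xy)
```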
Abstract:
In recent years, we have witnessed the mushrooming of pro-democracy and protest movements not only in the Arab world, but also within Europe and the Americas. Such movements have ranged from popular upheavals, like in Tunisia and Egypt, to the organization of large-scale demonstrations against unpopular policies, as in Spain, Greece and Poland. What connects these different events are not only their democratic aspirations, but also their innovative forms of communication and organization through online means, which are sometimes considered to be outside of the State’s control. At the same time, however, it has become more and more apparent that countries are attempting to increase their understanding of, and control over, their citizens’ actions in the digital sphere. This involves striving to develop surveillance instruments, control mechanisms and processes engineered to dominate the digital public sphere, which necessitates the assistance and support of private actors such as Internet intermediaries. Examples include the growing use of Internet surveillance technology with which online data traffic is analysed, and the extensive monitoring of social networks. Despite increased media attention, academic debate on the ambivalence of these technologies, mechanisms and techniques remains relatively limited, as is discussion of the involvement of corporate actors. The purpose of this edited volume is to reflect on how Internet-related technologies, mechanisms and techniques may be used as a means to enable expression, but also to restrict speech, manipulate public debate and govern global populaces.
Abstract:
This paper presents results from the first use of neural networks for the real-time feedback control of high temperature plasmas in a Tokamak fusion experiment. The Tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the Tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most Tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multi-layer perceptron, using a hybrid of digital and analogue technology, has been developed.
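A minimal numpy sketch of the network form only (the layer sizes and weights below are invented for illustration and are not the trained controller): a small multi-layer perceptron maps a vector of magnetic diagnostic signals to a few boundary-shape parameters in one forward pass, which is the computation the hybrid digital/analogue hardware carries out at high speed.

```python
# Network form only: random weights and assumed layer sizes, not the trained controller.
import numpy as np

rng = np.random.default_rng(0)
n_diag, n_hidden, n_shape = 16, 8, 4               # assumed sizes for the sketch
W1, b1 = rng.normal(size=(n_hidden, n_diag)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_shape, n_hidden)), np.zeros(n_shape)

def mlp(signals):
    hidden = np.tanh(W1 @ signals + b1)            # nonlinear hidden layer
    return W2 @ hidden + b2                        # linear output: boundary parameters

print(mlp(rng.normal(size=n_diag)))
```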
Abstract:
Interpenetrating polymer networks (IPNs) have been defined as a combination of two polymers, each in network form, at least one of which has been synthesised and/or crosslinked in the presence of the other. A semi-IPN is formed when only one of the polymers in the system is crosslinked, the other being linear. IPNs have potential advantages over the homogeneous materials presently used in biomedical applications, in that their composite nature gives them a useful combination of properties. Such materials have potential uses in the biomedical field, specifically in hard tissue replacements, rigid gas permeable contact lenses and dental materials. Work on simple two- or three-component systems, in both low-water-content IPNs and hydrogels (water-swollen hydrophilic polymers), can provide information useful for the future development of more complex systems. A range of copolymers has been synthesised using a variety of methacrylates and acrylates. Hydrogels were obtained by the addition of N-vinyl pyrrolidone to these copolymers. A selection of interpenetrants were incorporated into the samples and their effect on the copolymer properties was investigated. By studying glass transition temperatures, mechanical, surface, water binding and oxygen permeability properties, samples were assessed for their suitability for use as biomaterials. In addition, copolymers containing tris(trimethylsiloxy)-γ-methacryloxypropyl silane, commonly abbreviated to 'TRIS', have been investigated. This material has been shown to enhance oxygen permeability, a desirable property when considering the design of contact lenses. However, 'TRIS' has a low polar component of surface free energy and hence low wettability. Copolymerisation with a range of methacrylates has shown that significant increases in surface wettability can be obtained without a detrimental effect on oxygen permeability. To further enhance the surface wettability, 4-methacryloxyethyl trimellitic anhydride was incorporated into a range of promising samples. This study has shown that by careful choice of monomers it is possible to synthesise polymers that possess a range of properties desirable in biomedical applications.
Abstract:
A number of researchers have investigated the application of neural networks to visual recognition, with much of the emphasis placed on exploiting the network's ability to generalise. However, despite the benefits of such an approach, it is not at all obvious how networks can be developed which are capable of recognising objects subject to changes in rotation, translation and viewpoint. In this study, we suggest that a possible solution to this problem can be found by studying aspects of visual psychology and, in particular, perceptual organisation. For example, it appears that grouping together lines based upon perceptually significant features can facilitate viewpoint-independent recognition. The work presented here identifies simple grouping measures based on parallelism and connectivity and shows how it is possible to train multi-layer perceptrons (MLPs) to detect and determine the perceptual significance of any group presented. In this way, it is shown how MLPs which are trained via backpropagation to perform individual grouping tasks can be brought together into a novel, large-scale network capable of determining the perceptual significance of the whole input pattern. Finally, the applicability of such significance values for recognition is investigated, and results indicate that both the MLP and the Kohonen Feature Map can be trained to recognise simple shapes described in terms of perceptual significances. This study has also provided an opportunity to investigate aspects of the backpropagation algorithm, particularly its ability to generalise, and we report the results of various generalisation tests. In applying the backpropagation algorithm to certain problems, we found a deficiency in performance with the standard learning algorithm. An improvement in performance could, however, be obtained when suitable modifications were made to the algorithm. The modifications and consequent results are reported here.
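The grouping measures themselves are not defined in the abstract; the sketch below shows one plausible, invented pair of scores in the same spirit: parallelism from the orientation difference of two line segments, and connectivity from their closest endpoint gap.

```python
# Invented example scores in the spirit of the grouping measures (the thesis's
# exact definitions are not given in the abstract).
import math

def orientation(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi      # undirected segment orientation

def parallelism(a, b):
    diff = abs(orientation(a) - orientation(b))
    diff = min(diff, math.pi - diff)                    # fold into [0, pi/2]
    return 1.0 - diff / (math.pi / 2)                   # 1 = parallel, 0 = perpendicular

def connectivity(a, b, scale=10.0):
    gap = min(math.dist(pa, pb) for pa in a for pb in b)
    return math.exp(-gap / scale)                       # 1 = touching, decays with distance

seg_a = ((0.0, 0.0), (10.0, 0.0))
seg_b = ((12.0, 1.0), (22.0, 1.0))
print(parallelism(seg_a, seg_b), connectivity(seg_a, seg_b))
```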
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system to estimate machine system parameters using the motion current signature. Recently, motion current signature analysis has been addressed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon the analysis of the motion current signature avoids the need for the implementation and maintenance of expensive motion sensing technology. By developing nonlinear dynamical analysis for the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters and that this information can be extracted using nonlinear mapping techniques, such as neural networks. Towards this end, a proof-of-concept procedure is performed which substantiates this concept. A simulation model, TuneLearn, is developed to simulate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to ascertain confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure due to a lack of knowledge regarding higher-order and nonlinear factors, such as backlash and compliance. Failure of the simulation model to determine the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature. This motivated us to perform surrogate data testing for nonlinearity in the motion current signature. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. Outcomes of the experiments show that nonlinear noise reduction combined with the linear reverse algorithm offers precise machine system parameter estimation using the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
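The surrogate-data test mentioned above can be sketched as follows; the statistic, the toy signal and all settings are illustrative choices rather than those used in the thesis. Phase-randomised surrogates share the signal's power spectrum but destroy any nonlinear structure, so a nonlinear statistic that falls outside the surrogate distribution points to nonlinearity.

```python
# Sketch of phase-randomised surrogate testing for nonlinearity (statistic,
# toy signal and settings are illustrative, not those used in the thesis).
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    """Same power spectrum, randomised Fourier phases (DC/Nyquist handling simplified)."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def time_reversal_asymmetry(x, lag=1):
    """Simple nonlinearity statistic; close to zero for linear Gaussian processes."""
    return np.mean((x[lag:] - x[:-lag]) ** 3)

# Toy stand-in for a motion current signature: an AR(1)-like process with a
# bounded asymmetric nonlinearity in the feedback.
x = np.zeros(2048)
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + 0.5 * np.tanh(x[t - 1]) ** 2 + rng.normal()

stat = time_reversal_asymmetry(x)
null = [time_reversal_asymmetry(surrogate(x)) for _ in range(99)]
print(f"signal statistic = {stat:.3f}, surrogate range = ({min(null):.3f}, {max(null):.3f})")
```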
Abstract:
Diagnosing faults in wastewater treatment, like diagnosis of most problems, requires bi-directional plausible reasoning. This means that both predictive (from causes to symptoms) and diagnostic (from symptoms to causes) inferences have to be made, depending on the evidence available, in reasoning towards the final diagnosis. The use of computer technology for diagnosing faults in the wastewater process has been explored, and a rule-based expert system was initiated. It was found that such an approach has serious limitations in its ability to reason bi-directionally, which makes it unsuitable for diagnostic tasks under conditions of uncertainty. The probabilistic approach known as Bayesian Belief Networks (BBNs) was then critically reviewed and found to be well suited to diagnosis under uncertainty. The theory and application of BBNs are outlined. A full-scale BBN for the diagnosis of faults in a wastewater treatment plant based on the activated sludge system has been developed in this research. Results from the BBN show good agreement with the predictions of wastewater experts. It can be concluded that BBNs are far superior to rule-based systems based on certainty factors in their ability to diagnose faults and make predictions in complex operating systems having inherently uncertain behaviour.
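The bi-directional reasoning at the core of the approach reduces, in the simplest two-node case, to Bayes' rule; the probabilities below are invented for illustration and are not taken from the wastewater BBN.

```python
# Two-node illustration only (probabilities invented, not from the thesis's BBN):
# the predictive direction P(symptom | fault) and a prior on the fault yield the
# diagnostic direction P(fault | symptom) via Bayes' rule.
p_fault = 0.10                          # prior probability of the fault
p_symptom_given_fault = 0.85            # predictive inference: cause -> symptom
p_symptom_given_no_fault = 0.05         # false-alarm rate

p_symptom = (p_symptom_given_fault * p_fault
             + p_symptom_given_no_fault * (1.0 - p_fault))
p_fault_given_symptom = p_symptom_given_fault * p_fault / p_symptom
print(f"P(fault | symptom observed) = {p_fault_given_symptom:.3f}")   # about 0.654
```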
Abstract:
In order to study the structure and function of a protein, it is generally required that the protein in question is purified away from all others. For soluble proteins, this process is greatly aided by the lack of any restriction on the free and independent diffusion of individual protein particles in three dimensions. This is not the case for membrane proteins, as the membrane itself forms a continuum that joins the proteins within the membrane with one another. It is therefore essential that the membrane is disrupted in order to allow separation and hence purification of membrane proteins. In the present review, we examine recent advances in the methods employed to separate membrane proteins before purification. These approaches move away from solubilization methods based on the use of small surfactants, which have been shown to suffer from significant practical problems. Instead, the present review focuses on methods that stem from the field of nanotechnology and use a range of reagents that fragment the membrane into nanometre-scale particles containing the protein complete with the local membrane environment. In particular, we examine a method employing the amphipathic polymer poly(styrene-co-maleic acid), which is able to reversibly encapsulate the membrane protein in a 10 nm disc-like structure ideally suited to purification and further biochemical study.
Abstract:
Epitopes mediated by T cells lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are mediated by major histocompatibility complex (MHC) molecules, which present them to T cell receptors. Calculating the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time-consuming, thus various computational methods have been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations upon them. A system has been produced which allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD. The system has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
Abstract:
This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing the network lifetime (NL) of a multiple-source and single-sink (MSSS) WSN with energy constraints. The optimization problem for the MSSS WSN can be formulated as a mixed-integer convex optimization problem with the adoption of time division multiple access (TDMA) at the medium access control (MAC) layer, and it becomes a convex problem by relaxing the integer constraint on time slots. The impacts of data rate, link access and routing are jointly taken into account in the problem formulation. Both linear and planar network topologies are considered for NL maximization (NLM). For linear MSSS and planar single-source and single-sink (SSSS) topologies, we use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions for the optimal NL when all nodes are exhausted simultaneously. The problem for the planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression for the suboptimal NL is derived for a small-scale planar network, and an iterative algorithm is proposed within the D&C approach to deal with larger-scale planar networks. Numerical results show that the upper bounds on the network lifetime obtained by the proposed optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
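For intuition on how lifetime is limited in the linear case, the toy calculation below (a fixed chain routing with a first-order radio energy model; all constants are assumptions, and no optimisation is performed) shows the network lifetime being set by the first node to exhaust its energy.

```python
# Toy linear-chain lifetime calculation (fixed routing, first-order radio model,
# assumed constants; an illustration only, not the paper's convex optimisation).
import numpy as np

n = 5
source_rate = np.full(n, 1.0)                    # bit/s generated at each node
hop_dist = np.full(n, 10.0)                      # metres to the next hop
energy = np.full(n, 100.0)                       # initial energy in joules
e_elec, e_amp = 50e-9, 100e-12                   # assumed J/bit and J/bit/m^2

# Node 0 is farthest from the sink; node i forwards everything from nodes 0..i.
relayed = np.cumsum(source_rate)
tx_power = relayed * (e_elec + e_amp * hop_dist ** 2)
rx_power = np.concatenate(([0.0], relayed[:-1])) * e_elec
lifetime_per_node = energy / (tx_power + rx_power)
print(f"network lifetime ~ {lifetime_per_node.min():.3e} s, "
      f"limited by node {lifetime_per_node.argmin()} (nearest the sink)")
```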
Abstract:
Existing wireless systems are normally regulated by a fixed spectrum assignment strategy. This policy leads to an undesirable situation in which some systems use their allocated spectrum only to a limited extent while others face serious spectrum insufficiency. Dynamic Spectrum Access (DSA) is emerging as a promising technology to address this issue, allowing unused licensed spectrum to be accessed opportunistically by unlicensed users. To enable DSA, an unlicensed user must be able to detect unoccupied spectrum, control its spectrum access in an adaptive manner, and coexist with other unlicensed users automatically. In this article, we propose a radio system Transmission-Opportunity-based spectrum access control protocol with the aim of improving spectrum access fairness and ensuring safe coexistence of multiple heterogeneous unlicensed radio systems. In the scheme, multiple radio systems coexist and dynamically use the available free spectrum without interfering with licensed users. Simulations are carried out to evaluate the performance of the proposed scheme with respect to spectrum utilisation, fairness and scalability. Compared with existing studies, our strategy achieves higher scalability and controllability without degrading spectrum utilisation or fairness.
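One standard way to quantify the fairness dimension of such an evaluation is Jain's index, sketched below; it is used here only as an illustration of the metric, and the article's exact evaluation setup is not reproduced.

```python
# Jain's fairness index over the spectrum shares obtained by coexisting systems
# (illustration of the metric only; not the article's evaluation setup).
def jain_fairness(shares):
    n = len(shares)
    return sum(shares) ** 2 / (n * sum(s ** 2 for s in shares))

print(jain_fairness([0.25, 0.25, 0.25, 0.25]))   # 1.0: perfectly even sharing
print(jain_fairness([0.70, 0.10, 0.10, 0.10]))   # ~0.48: one system dominates
```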