19 results for Non-autonomous Schrödinger-Poisson systems
in Aston University Research Archive
Abstract:
The main theme of this project is the study of neural networks for the control of uncertain and non-linear systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints while ensuring good performance. A great part of the project is devoted to opening frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. to design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. to design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integrating neural network mapping with generalised minimum variance self-tuning control strategies); 3. to develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models of many non-linear systems for use in adaptive control. Properties of static neural networks that enabled the successful design of stable adaptive control in the state-feedback case are also identified. A survey of existing results is presented which places them in a systematic framework, showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO systems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
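To make the self-tuning idea concrete, here is a minimal Python sketch of the classical scheme this work builds on: recursive least squares (RLS) identifying a first-order ARX plant online, with a minimum variance control law computed from the current estimates. The plant parameters, noise level and setpoint below are all assumed for illustration; this is not the thesis code.

```python
import numpy as np

# Illustrative sketch (assumed values throughout): classical self-tuning
# minimum-variance control of a first-order ARX plant
#   y(t+1) = a*y(t) + b*u(t) + e(t),
# where RLS estimates (a, b) online and the control law
#   u(t) = (r - a_hat*y(t)) / b_hat
# drives the predicted next output toward the setpoint r.

rng = np.random.default_rng(0)
a_true, b_true, r = 0.8, 0.5, 1.0      # unknown plant and setpoint (assumed)
theta = np.zeros(2)                    # parameter estimates [a_hat, b_hat]
P = np.eye(2) * 100.0                  # RLS covariance matrix
y, u = 0.0, 0.0

for t in range(200):
    phi = np.array([y, u])             # regressor from last output and input
    y_next = a_true * y + b_true * u + 0.05 * rng.standard_normal()
    # RLS update of the parameter estimates
    k = P @ phi / (1.0 + phi @ P @ phi)
    theta += k * (y_next - phi @ theta)
    P -= np.outer(k, phi @ P)
    y = y_next
    # Minimum-variance control with the current estimates (guarded divide)
    b_hat = theta[1] if abs(theta[1]) > 1e-3 else 1e-3
    u = (r - theta[0] * y) / b_hat

print("rough estimates of (a, b):", theta)   # should approach (0.8, 0.5)
```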
Bit-error rate performance of 20 Gbit/s WDM RZ-DPSK non-slope matched submarine transmission systems
Abstract:
Applying direct error counting, we assess the performance of 20 Gbit/s wavelength-division multiplexed return-to-zero differential phase-shift keying (RZ-DPSK) transmission at 0.4 bit/(s·Hz) spectral efficiency for application on installed transoceanic submarine systems based on non-zero dispersion-shifted fibre. The impact of the pulse duty cycle on system performance is investigated, and the reliability of existing theoretical approaches to BER estimation for the RZ-DPSK format is discussed.
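For orientation, the quoted spectral efficiency pins down the WDM channel spacing, assuming the standard definition of spectral efficiency as per-channel bit rate over channel spacing:

```latex
% Channel spacing implied by the quoted figures:
\Delta f \;=\; \frac{R_b}{\mathrm{SE}}
        \;=\; \frac{20\ \text{Gbit/s}}{0.4\ \text{bit/(s\cdot Hz)}}
        \;=\; 50\ \text{GHz}.
```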
Abstract:
Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronising a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols had adequate state coverability but omitted timing properties; they also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the Timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of application are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
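As a minimal sketch of the core idea, assuming nothing about the thesis's actual protocol beyond what the abstract states, the following Python fragment shows a two-phase-commit coordinator that enforces a deadline: any vote that fails to arrive in time forces an ABORT decision, so the outcome is always decided within a bounded interval.

```python
import queue, threading, time

# Minimal sketch (not the thesis protocol): a two-phase commit coordinator
# with a hard deadline. A missing or negative vote forces ABORT, so the
# decision is always made in bounded time.

def participant(inbox, outbox, vote_delay, vote):
    inbox.get()                            # wait for VOTE-REQUEST
    time.sleep(vote_delay)                 # processing / communication delay
    outbox.put(vote)                       # reply YES or NO

def coordinator(participants, deadline):
    outbox, inboxes = queue.Queue(), []
    for delay, vote in participants:
        inbox = queue.Queue()
        inboxes.append(inbox)
        threading.Thread(target=participant,
                         args=(inbox, outbox, delay, vote), daemon=True).start()
    start = time.monotonic()
    for inbox in inboxes:                  # phase 1: solicit votes
        inbox.put("VOTE-REQUEST")
    decision = "COMMIT"
    for _ in participants:                 # collect votes within the deadline
        remaining = deadline - (time.monotonic() - start)
        try:
            if remaining <= 0 or outbox.get(timeout=remaining) != "YES":
                decision = "ABORT"         # timeout or a NO vote forces abort
        except queue.Empty:
            decision = "ABORT"
    return decision                        # phase 2 would broadcast this

# All vote YES quickly -> COMMIT; a slow participant -> ABORT by the deadline.
print(coordinator([(0.01, "YES"), (0.01, "YES")], deadline=0.5))
print(coordinator([(0.01, "YES"), (1.00, "YES")], deadline=0.5))
```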
Abstract:
We present a novel market-based method, inspired by retail markets, for resource allocation in fully decentralised systems where agents are self-interested. Our market mechanism requires no coordinating node or complex negotiation. The stability of outcome allocations, those at equilibrium, is analysed and compared for three buyer behaviour models. In order to capture the interaction between self-interested agents, we propose the use of competitive coevolution. Our approach is both highly scalable and may be tuned to achieve specified outcome resource allocations. We demonstrate the behaviour of our approach in simulation, where evolutionary market agents act on behalf of service providing nodes to adaptively price their resources over time, in response to market conditions. We show that this leads the system to the predicted outcome resource allocation. Furthermore, the system remains stable in the presence of small changes in price, when buyers' decision functions degrade gracefully. © 2009 The Author(s).
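The following Python sketch gives a feel for the flavour of mechanism described, under assumed dynamics (the sellers' update rule, capacities and buyer counts are all invented for illustration; the paper's evolutionary agents are more sophisticated): each seller adapts its price from its own sales feedback alone, with no coordinating node.

```python
import random

# Illustrative sketch (assumed dynamics, not the paper's mechanism): selling
# agents reprice from local feedback only -- raising the price after selling
# out and cutting it when demand falls short.

random.seed(1)
prices = [random.uniform(0.5, 2.0) for _ in range(5)]   # one price per seller
capacity = 10                                           # units each seller offers
buyers = 40                                             # buyers per round

for round_ in range(200):
    sold, demand = [0] * len(prices), buyers
    # Buyers fill up sellers in ascending price order (cheapest-first).
    for i in sorted(range(len(prices)), key=lambda i: prices[i]):
        take = min(capacity, demand)
        sold[i], demand = take, demand - take
    # Each seller adapts its own price from its own sales alone.
    for i in range(len(prices)):
        prices[i] *= 1.05 if sold[i] == capacity else 0.95

print([round(p, 2) for p in prices])   # prices settle near a common level
```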
Abstract:
Recently, there has been considerable research activity in extending topographic maps of vectorial data to more general data structures, such as sequences or trees. However, the representational capabilities and internal representations of the models are not well understood. We rigorously analyse a generalisation of the Self-Organizing Map (SOM) for processing sequential data, the Recursive SOM (RecSOM [1]), as a non-autonomous dynamical system consisting of a set of fixed input maps. We show that contractive fixed input maps are likely to produce Markovian organisations of receptive fields on the RecSOM map. We derive bounds on the parameter $\beta$ (weighting the importance of past information when processing sequences) under which contractiveness of the fixed input maps is guaranteed.
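For reference, the RecSOM fixed-input dynamics under analysis are usually written as follows, where $\alpha$ weights the current input, $\beta$ the context, and $w_i$ and $c_i$ are the input and context weight vectors of unit $i$:

```latex
% RecSOM activation of unit i, given input s(t) and previous activations y(t-1):
d_i(t) = \alpha \,\lVert s(t) - w_i \rVert^2 \;+\; \beta \,\lVert y(t-1) - c_i \rVert^2,
\qquad
y_i(t) = e^{-d_i(t)}.
```

For each fixed input $s$ this defines a map $F_s : y(t-1) \mapsto y(t)$; the Markovian-organisation result applies when every $F_s$ is a contraction, i.e. $\lVert F_s(y) - F_s(y') \rVert \le \rho \lVert y - y' \rVert$ with $\rho < 1$, and the bounds on $\beta$ are derived to guarantee exactly this.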
Abstract:
We develop a multi-agent-based model to simulate a population comprising two ethnic groups and a peacekeeping force. We investigate the effects of different civilian movement strategies on the resulting violence in this bi-communal population. Specifically, we compare and contrast random and race-based migration strategies. Race-based migration leads to the formation of clusters. Previous work in this area has shown that same-race clustering instigates violent behaviour in otherwise passive segments of the population. Our findings confirm this. Furthermore, we show that in settings where only one of the two races adopts race-based migration it is a winning strategy, especially in violently predisposed populations. On the other hand, in relatively peaceful settings clustering is a restricting factor which causes the race that adopts it to drift into annihilation. Finally, we show that when race-based migration is adopted as a strategy by both ethnic groups it results in peaceful co-existence even in the most violently predisposed populations.
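As an illustration of what race-based migration means operationally, here is a hedged Python sketch (the paper's exact update rule may differ): an agent relocates to a free neighbouring cell only if that cell has more same-race neighbours than its current one, which is the mechanism that produces clusters.

```python
import random

# Illustrative sketch (assumed rule, not the paper's full model): race-based
# migration on a toroidal grid. 0 and 1 are the two races; None is a free cell.

random.seed(0)
N = 20
grid = [[random.choice([0, 1, None]) for _ in range(N)] for _ in range(N)]

def neighbours(x, y):
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                yield (x + dx) % N, (y + dy) % N   # torus wrap-around

def same_race_count(x, y, race):
    return sum(1 for nx, ny in neighbours(x, y) if grid[nx][ny] == race)

def migrate(x, y):
    race = grid[x][y]
    if race is None:
        return
    free = [(nx, ny) for nx, ny in neighbours(x, y) if grid[nx][ny] is None]
    if not free:
        return
    best = max(free, key=lambda c: same_race_count(*c, race))
    if same_race_count(*best, race) > same_race_count(x, y, race):
        grid[best[0]][best[1]], grid[x][y] = race, None   # move to better cell

for _ in range(5000):          # repeated sweeps produce visible same-race clusters
    migrate(random.randrange(N), random.randrange(N))
```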
Abstract:
The ERS-1 Satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscatter microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snap-shots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish if the wind is blowing toward or away from the sensor device. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by the local inversion of a forward model, mapping scatterometer observations to wind vectors, and minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method to model multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, that incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes the maximum a posteriori probability wind field retrieved from the posterior distribution as the prediction. For the third technique, Markov Chain Monte Carlo (MCMC) techniques were employed to estimate the mass associated with significant modes of the posterior distribution, and make predictions based on the mode with the greatest mass associated with it. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem. It was shown that the general methods were unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73 wind fields.
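The mixture density network at the heart of this approach, in Bishop's standard formulation, makes a neural network output the parameters of a mixture model, so that the conditional density of the target $t$ given input $x$ is:

```latex
% Conditional density output by a mixture density network: the mixing
% coefficients \pi_k, means \mu_k and variances \sigma_k^2 are all functions
% of the input x computed by the network.
p(t \mid x) \;=\; \sum_{k=1}^{K} \pi_k(x)\,
\mathcal{N}\!\bigl(t \mid \mu_k(x), \sigma_k^2(x)\bigr),
\qquad \sum_{k=1}^{K} \pi_k(x) = 1,\; \pi_k(x) \ge 0.
```

Capturing the predominantly bi-modal conditional density of the wind-direction ambiguity requires $K \ge 2$ kernels, which is what the hybrid network described above builds in.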
Abstract:
This paper concerns the problem of agent trust in an electronic market place. We maintain that agent trust involves making decisions under uncertainty and that the phenomenon should therefore be modelled probabilistically. We propose a probabilistic framework that models agent interactions as a Hidden Markov Model (HMM). The observations of the HMM are the interaction outcomes and the hidden state is the underlying probability of a good outcome. The task of deciding whether to interact with another agent reduces to probabilistic inference of the current state of that agent given all previous interaction outcomes. The model is extended to include a probabilistic reputation system in which agents gather opinions about other agents and fuse them with their own beliefs. Our system is fully probabilistic and hence delivers the following improvements over previous work: (a) the model assumptions are faithfully translated into algorithms, so our system is optimal under those assumptions; (b) it can account for agents whose behaviour is not static in time; and (c) it can estimate the rate at which an agent's behaviour changes. The system is shown to significantly outperform previous state-of-the-art methods in several numerical experiments. Copyright © 2010, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
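A minimal sketch of the filtering step, assuming a two-state hidden behaviour ("trustworthy" vs "untrustworthy") and binary outcomes (the paper's model may be parameterised differently): the decision of whether to interact reduces to reading off the filtered state probability.

```python
import numpy as np

# Minimal sketch (assumed two-state model): HMM forward filtering of a partner
# agent's hidden state from binary interaction outcomes.

A = np.array([[0.95, 0.05],       # state transitions: rows = from-state;
              [0.10, 0.90]])      # off-diagonals let behaviour change in time
B = np.array([[0.9, 0.1],         # P(outcome | state): row 0 = trustworthy,
              [0.3, 0.7]])        # columns: 0 = good outcome, 1 = bad outcome
belief = np.array([0.5, 0.5])     # uniform prior over the hidden state

outcomes = [0, 0, 0, 1, 1, 1, 1]  # observed: good, good, good, bad, bad, ...
for o in outcomes:
    belief = A.T @ belief         # predict: propagate through the dynamics
    belief = belief * B[:, o]     # update: weight by outcome likelihood
    belief /= belief.sum()        # normalise
    print(f"P(trustworthy) = {belief[0]:.3f}")
# The belief tracks the run of bad outcomes, i.e. a non-static partner.
```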
Abstract:
Existing wireless systems are normally regulated by a fixed spectrum assignment policy. This policy leads to an undesirable situation in which some systems use their allocated spectrum only to a limited extent while others suffer from serious spectrum insufficiency. Dynamic Spectrum Access (DSA) is emerging as a promising technology to address this issue, allowing unused licensed spectrum to be accessed opportunistically by unlicensed users. To enable DSA, an unlicensed user must be capable of detecting unoccupied spectrum, controlling its spectrum access in an adaptive manner, and coexisting with other unlicensed users automatically. In this article, we propose a Transmission Opportunity-based spectrum access control protocol that aims to improve spectrum access fairness and ensure the safe coexistence of multiple heterogeneous unlicensed radio systems. In the scheme, multiple radio systems coexist and dynamically use available free spectrum without interfering with licensed users. Simulations are carried out to evaluate the performance of the proposed scheme with respect to spectrum utilisation, fairness and scalability. Compared with existing studies, our strategy achieves higher scalability and controllability without degrading spectrum utilisation or fairness.
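A heavily simplified Python sketch of the coexistence idea follows, with invented behaviour that should not be read as the proposed protocol: unlicensed systems avoid channels sensed busy with licensed traffic and rotate the remaining transmission opportunities so that grants stay fair.

```python
import random

# Heavily simplified sketch (assumed behaviour, not the proposed protocol):
# unlicensed systems share whatever channels the licensed users leave free,
# least-served system first -- a crude stand-in for TXOP-based fairness.

random.seed(2)
channels = 8
unlicensed = ["sysA", "sysB", "sysC"]
grants = {s: 0 for s in unlicensed}        # transmission opportunities granted

for slot in range(1000):
    licensed_busy = {c for c in range(channels) if random.random() < 0.3}
    free = [c for c in range(channels) if c not in licensed_busy]
    # Grant free channels to the systems with the fewest opportunities so far.
    queue = sorted(unlicensed, key=grants.get)
    for c, s in zip(free, queue):
        grants[s] += 1                     # s transmits on channel c this slot

print(grants)                              # counts come out nearly equal (fair)
```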
Abstract:
Lock-in is observed in real-world markets of experience goods; experience goods are goods whose characteristics are difficult to determine in advance but are ascertained upon consumption. We create an agent-based simulation of consumers choosing between two experience goods available in a virtual market, modelling the consumers on a grid that represents their spatial network. Using simple assumptions, including identical distributions of product experience and consumers with a degree of follower tendency, we explore the dynamics of the model through simulation. We first create a lock-in and then test several hypotheses on how to break it, including the effect of advertising and free give-aways. Our experiments show that the key to successfully breaking a lock-in is the creation of regions in the consumer population. Regions arise from the degree of local conformity between agents within them, and spread throughout the population when a mildly superior competitor is available. These regions may be likened to a market niche that gains in popularity and transitions into the mainstream.
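To give a concrete feel for the ingredients named here, a hedged sketch with assumed dynamics (the ring topology, parameter values and update rule are all invented, not the paper's model): each consumer blends a follower tendency toward neighbours with noisy private experience of product quality.

```python
import random

# Illustrative sketch (assumed dynamics): consumers on a ring choose between
# products 0 and 1, mixing conformity to neighbours ("follower tendency") with
# noisy private experience -- the ingredients behind local regions and lock-in.

random.seed(3)
N, follower = 100, 0.7                     # population size, follower tendency
choice = [random.choice([0, 1]) for _ in range(N)]
quality = {0: 0.50, 1: 0.55}               # product 1 is mildly superior

for step in range(20000):
    i = random.randrange(N)
    left, right = choice[(i - 1) % N], choice[(i + 1) % N]
    if random.random() < follower:
        # Conform to the local majority among the two neighbours.
        choice[i] = left if left == right else random.choice([left, right])
    else:
        # Otherwise pick by (noisy) experienced quality.
        choice[i] = max((0, 1), key=lambda p: quality[p] + random.gauss(0, 0.1))

share = sum(choice) / N
print(f"share of product 1: {share:.2f}")  # regions let the better product spread
```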
Abstract:
Link quality-based rate adaptation has been widely used in IEEE 802.11 networks. However, network performance is affected by both link quality and random channel access. Selecting transmit modes for optimal link throughput can cause medium access control (MAC) throughput loss. In this paper, we investigate this issue and propose a generalised cross-layer rate adaptation algorithm that jointly considers link quality and channel access to optimise network throughput; the objective is to examine the potential benefits of cross-layer design. An efficient analytic model is proposed to evaluate rate adaptation algorithms under dynamic channel and multi-user access environments. The proposed algorithm is compared to a link throughput optimisation-based algorithm. It is found that rate adaptation optimising link-layer throughput alone can result in a large performance loss which cannot be compensated for by optimising the MAC access mechanism alone. Results show that the cross-layer design achieves consistent and considerable performance gains of up to 20%, and so deserves to be exploited in practical designs for IEEE 802.11 networks.
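The core point, that the link-optimal and MAC-optimal transmit modes need not coincide, can be seen with toy numbers (all values below are assumed for illustration; the paper's analytic model is far more detailed):

```python
# Toy numbers (assumed, for illustration only): per-frame MAC overhead
# (contention, headers, ACK) is the same for every mode, so a faster mode with
# a higher error rate can win at the link layer yet lose once overhead counts.

modes = [                       # (PHY rate in Mbit/s, frame error rate) -- assumed
    (12.0, 0.02),
    (24.0, 0.10),
    (48.0, 0.45),
]
L = 12000                       # payload bits per frame
overhead = 300e-6               # fixed per-frame MAC overhead in seconds (assumed)

def link_throughput(rate, fer):
    return rate * (1 - fer)     # ignores per-frame overhead entirely

def mac_throughput(rate, fer):
    t_frame = overhead + L / (rate * 1e6)         # airtime incl. fixed overhead
    return (1 - fer) * L / t_frame / 1e6          # goodput in Mbit/s

for rate, fer in modes:
    print(f"{rate:5.1f} Mbit/s: link = {link_throughput(rate, fer):5.2f}, "
          f"MAC = {mac_throughput(rate, fer):5.2f} Mbit/s")
# With these numbers 48 Mbit/s maximises link throughput but 24 Mbit/s
# maximises MAC goodput: the two optima do not coincide.
```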
Abstract:
Typical Double Auction (DA) models assume that trading agents are one-way traders. With this limitation, they cannot directly reflect the fact that individual traders in financial markets (the most popular application of the double auction) choose their trading directions dynamically. To address this issue, we introduce the Bi-directional Double Auction (BDA) market, which is populated by two-way traders. Based on experiments under both static and dynamic settings, we find that the allocative efficiency of a static continuous BDA market comes from the rational selection of trading directions and is negatively related to the intelligence of the trading strategies. Moreover, we introduce the Kernel trading strategy, designed for the general DA market on the basis of probability density estimation. Our experiments show that it outperforms some intelligent DA market trading strategies. Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
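One plausible reading of the Kernel strategy (the paper's exact construction may differ) is sketched below: estimate the density of recently observed transaction prices with a Gaussian kernel and quote at the densest point, where a match is most likely.

```python
import numpy as np

# Minimal sketch (assumed interpretation of the strategy): kernel density
# estimation over recent transaction prices; quote at the estimated mode.

past_prices = np.array([99.0, 100.5, 101.0, 101.2, 101.4, 103.0, 98.5])
bandwidth = 0.5                             # kernel bandwidth (assumed)

def kde(x, data, h):
    # Gaussian kernel density estimate evaluated at the points in x.
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

grid = np.linspace(past_prices.min(), past_prices.max(), 500)
density = kde(grid, past_prices, bandwidth)
quote = grid[np.argmax(density)]            # trade at the most probable price
print(f"quote near {quote:.2f}")
```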
Abstract:
The design and synthesis of safe, efficient non-viral vectors for gene delivery have attracted significant attention in recent years, due primarily to the severe side-effect profile reported for their viral counterparts. Previous experiments have revealed that a strong interaction between the carrier and the nucleic acid may hinder release of the gene from the complex in the cytosol, adversely affecting transfection efficiency. Incorporating reducible disulfide bonds within the delivery systems themselves, which are then cleaved in the glutathione-rich intracellular environment, may help to solve this puzzle. This review focuses on the recent development of these reducible carriers. The biological rationale and approaches to the synthesis of reducible vectors are discussed in detail. In vitro and in vivo evaluations of reducible carriers are also summarised, and it is evident that they offer a promising approach to the design of non-viral gene delivery systems.