957 results for Non ideal dynamic system
Abstract:
We present a model for detection of the states of a pair of coupled quantum dots (a qubit) by a quantum point contact. Most proposals for measuring the states of quantum systems are idealized; in a real laboratory, however, measurements cannot be perfect, owing to the limitations of practical devices and circuits. Models that assume ideal devices are therefore insufficient to describe the information obtained when detecting the states of quantum systems. Our model accordingly extends to the non-ideal measurement-device case using an equivalent circuit. We derive a quantum trajectory that describes the stochastic evolution of the state of the combined system of qubit and measuring device. We calculate the noise power spectrum of tunnelling events for an ideal and a non-ideal quantum point contact measurement, respectively. We find that, in the strong-coupling case, it is difficult to obtain information about the quantum processes in the qubit from measurements made with a non-ideal quantum point contact. The noise spectra can also be used to estimate the limits of applicability of the ideal model.
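The spectral quantity involved can be illustrated with a toy computation. The sketch below is a minimal stand-in, not the authors' quantum-trajectory model: it estimates the noise power spectrum of a simulated random telegraph signal, such as a tunnelling current switching between two levels. The sampling rate and switching rates (`fs`, `gamma_01`, `gamma_10`) are invented for illustration.

```python
# Toy power spectrum of a random telegraph signal, a generic stand-in for
# QPC tunnelling-current noise. All parameters are assumed, not from the paper.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 1e6                        # sampling rate in Hz (assumed)
n = 2**18                       # number of samples
gamma_01 = gamma_10 = 1e4       # switching rates in Hz (assumed)

state = 0
x = np.empty(n)
for i in range(n):
    rate = gamma_01 if state == 0 else gamma_10
    if rng.random() < rate / fs:    # chance of a tunnelling switch this sample
        state ^= 1
    x[i] = state

f, psd = welch(x, fs=fs, nperseg=4096)
# For a symmetric telegraph process the spectrum is a Lorentzian with corner
# frequency (gamma_01 + gamma_10) / (2 * pi).
```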
Abstract:
Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments such as phase noise remain key bottlenecks in next-generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches rely on carefully designed sum-product algorithms, which iteratively pass probabilistic messages between the channel statistical information and the decoding information. Channel statistical information generally entails high computational complexity, because its probabilistic model may involve continuous random variables. The detailed knowledge of channel statistics that these algorithms require makes them an inadequate choice for real-world applications with power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers that perform A Posteriori Probability (APP)-based synchronization and decoding separately. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the modulation/coding scheme considered, as it exploits only the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of a slight performance degradation. To improve the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the received information from the channel by exchanging information in an iterative fashion to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, based on the use of reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next-generation wireless communication systems. A novel soft decision-directed iterative receiver, for separate APP-based synchronization and decoding, is proposed. This algorithm relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI) followed by phase estimation on the polarization of interest. This iterative receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where M-PE corresponds to the polarization of interest. The operational principle of an M/S-PE block is to improve the phase-tracking performance of both polarization branches: more precisely, the M-PE block tracks the co-polar phase and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
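As a drastically simplified sketch of decision-directed phase tracking, the snippet below runs a first-order loop over QPSK with hard symbol decisions standing in for the APPs; it is not the thesis's MAP/APP estimator, and the loop gain `mu`, noise levels and phase-noise variance are invented values.

```python
# Hypothetical first-order decision-directed phase tracker for QPSK.
# A toy stand-in for the APP-based estimators described in the abstract.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
const = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # QPSK points
tx = const[rng.integers(0, 4, n)]

# Wiener phase noise (random walk in phase) plus additive Gaussian noise.
phase = np.cumsum(rng.normal(0, 0.01, n))
rx = tx * np.exp(1j * phase) + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

mu = 0.1            # loop gain (assumed)
theta = 0.0
est = np.empty(n)
for k in range(n):
    z = rx[k] * np.exp(-1j * theta)            # de-rotate with current estimate
    dec = const[np.argmin(np.abs(z - const))]  # hard decision (stands in for APPs)
    err = np.angle(z * np.conj(dec))           # phase error against the decision
    theta += mu * err                          # first-order loop update
    est[k] = theta
```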
Abstract:
In the analysis and prediction of many real-world time series, the assumption of stationarity is not valid. A special form of non-stationarity, where the underlying generator switches between (approximately) stationary regimes, seems particularly appropriate for financial markets. We introduce a new model which combines dynamic switching (controlled by a hidden Markov model) with a non-linear dynamical system. We show how to train this hybrid model in a maximum likelihood framework and evaluate its performance on both synthetic and financial data.
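A minimal sketch of the generative idea follows, with linear AR(1) regimes standing in for the non-linear dynamics; the transition matrix, coefficients and noise levels are invented for illustration, not values from the paper.

```python
# Hidden Markov chain switching between two (here linear AR(1)) regimes.
import numpy as np

rng = np.random.default_rng(2)
T = np.array([[0.99, 0.01],     # regime transition probabilities (assumed)
              [0.02, 0.98]])
a = [0.9, -0.5]                 # per-regime AR(1) coefficients (assumed)
sigma = [0.1, 0.3]              # per-regime noise levels (assumed)

n, s, x = 1000, 0, 0.0
states, series = [], []
for _ in range(n):
    s = rng.choice(2, p=T[s])               # hidden regime switch
    x = a[s] * x + sigma[s] * rng.normal()  # regime-conditional dynamics
    states.append(s)
    series.append(x)
# Maximum likelihood training would fit T, a and sigma with EM
# (forward-backward over the hidden regimes).
```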
Abstract:
The use of digital communication systems is increasing very rapidly. This is due to their lower implementation cost compared with analogue transmission and, at the same time, the ease with which several types of data source (data, digitised speech, video, etc.) can be mixed. The emergence of packet broadcast techniques as an efficient form of multiplexing, especially with the use of contention random multiple-access protocols, has led to widespread application of these distributed access protocols in local area networks (LANs) and to their further extension to radio and mobile radio communication applications. In this research, a modified version of the distributed-access contention protocol using the packet broadcast switching technique is proposed. Carrier sense multiple access with collision avoidance (CSMA/CA) is found to be the most appropriate protocol, with the ability to satisfy equally the operational requirements of local area networks and of radio and mobile radio applications. The suggested version of the protocol is designed so that all the desirable features of its predecessors are maintained, while the shortcomings are eliminated and additional features are added to strengthen its ability to work over radio and mobile radio channels. The operational performance of the protocol has been evaluated, for the non-persistent and slotted non-persistent variants, through mathematical and simulation modelling. The results obtained from the two modelling procedures validate the accuracy of both methods, and the protocol compares favourably with its predecessor, CSMA/CD (carrier sense multiple access with collision detection). A further extension of the protocol to multichannel operation has been suggested, and two multichannel systems based on the CSMA/CA protocol for medium access are proposed. These are: the dynamic multichannel system, which is based on two types of channel selection, random choice (RC) and idle choice (IC); and the sequential multichannel system. The latter has been proposed in order to suppress the hidden-terminal effect, which is a major problem in the use of contention random multiple-access protocols over radio and mobile radio channels. Their performance has been evaluated using mathematical modelling for the dynamic system and simulation modelling for the sequential system. Both systems are found to improve system operation and fault tolerance when compared with single-channel operation.
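For context, the classical Kleinrock-Tobagi throughput expressions for unslotted and slotted non-persistent CSMA, the baseline family this protocol extends, can be evaluated as below. Note these describe plain CSMA, not the modified CSMA/CA protocol of the thesis; `a` is the normalised propagation delay and `G` the offered load.

```python
# Classical Kleinrock-Tobagi throughput of non-persistent CSMA, as a baseline.
import numpy as np

def s_nonpersistent(G, a):
    """Throughput of unslotted non-persistent CSMA."""
    return G * np.exp(-a * G) / (G * (1 + 2 * a) + np.exp(-a * G))

def s_slotted_nonpersistent(G, a):
    """Throughput of slotted non-persistent CSMA."""
    return a * G * np.exp(-a * G) / (1 - np.exp(-a * G) + a)

G = np.logspace(-2, 2, 9)       # offered load sweep
print(s_nonpersistent(G, a=0.01))
print(s_slotted_nonpersistent(G, a=0.01))
```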
Abstract:
We have optically simulated the performance of various apertures used in Coded Aperture Imaging. Coded pictures of extended and continuous-tone planar objects formed with the Annulus, the Twin Annulus, the Fresnel Zone Plate and the Uniformly Redundant Array have been decoded using a noncoherent correlation process. We have compared the tomographic capabilities of the Twin Annulus with those of Uniformly Redundant Arrays based on quadratic residues and m-sequences. We discuss ways of reducing the 'd.c.' background of the various apertures used. The non-ideal system point-spread function inherent in a noncoherent optical correlation process produces artifacts in the reconstruction. Artifacts are also introduced as a result of unwanted cross-correlation terms from out-of-focus planes. We find that the URA based on m-sequences exhibits good spatial resolution and out-of-focus behaviour when imaging extended objects.
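The decoding step can be sketched numerically: the coded picture is the object convolved with the aperture, and reconstruction correlates it with a decoding mask. The random binary aperture below is a toy stand-in for the annulus and URA apertures actually studied, and the balanced mask merely illustrates one way a flat 'd.c.' background can be suppressed.

```python
# Toy noncoherent correlation decoding for coded aperture imaging.
import numpy as np

rng = np.random.default_rng(3)
N = 64
aperture = (rng.random((N, N)) < 0.5).astype(float)  # toy binary aperture
obj = np.zeros((N, N)); obj[20:28, 30:44] = 1.0      # extended planar object

# Coded picture = cyclic convolution of object with aperture (via FFT).
coded = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(aperture)))

# Correlation decoding with a balanced mask (2A - 1) suppresses the flat
# background that a plain autocorrelation would leave behind.
decode = 2 * aperture - 1
recon = np.real(np.fft.ifft2(np.fft.fft2(coded) * np.conj(np.fft.fft2(decode))))
```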
Abstract:
The Alborz Mountain range separates the northern part of Iran from the southern part. It also isolates a narrow coastal strip to the south of the Caspian Sea from the Central Iran plateau. Until the 1950s, communication between south and north was via two roads and one rail link. In 1963, work was completed on a major access road through the Haraz Valley (the most physically hostile area in the region). From the beginning the road was plagued by accidents resulting from unstable slopes on either side of the valley. Heavy casualties persuaded the government to undertake major engineering works to eliminate "black spots" and make the road safe. However, despite substantial and prolonged expenditure, the problems were not solved, and casualties increased steadily with the growth of traffic using the road. Another road was built to bypass the Haraz road and was opened to traffic in 1983. Closure of the Haraz road was nevertheless impossible because of the growth of settlements along the route and the need for access to other installations such as the Lar Dam. The aim of this research was to explore the possibility of applying Landsat MSS imagery to locating black spots along the road and to the associated instability problems. Landsat data had not previously been applied to highway engineering problems in the study area. Aerial photographs are in general better than satellite images for detailed mapping, but Landsat images are superior for reconnaissance and adequate for mapping at the 1:250,000 scale. The broad overview and lack of distortion in Landsat imagery make the images ideal for structural interpretation. The results of Landsat digital image analysis showed that certain rock types and structural features can be delineated and mapped. The most unstable areas, comprising steep slopes free of vegetation cover, can be identified using image-processing techniques. Structural lineaments revealed by the image analysis led to improved results (delineation of unstable features). Damavand Quaternary volcanics were found to be the dominant rock type along a 40 km stretch of the road. These rocks are inherently unstable and partly responsible for the difficulties along the road. For more detailed geological and morphological interpretation, a sample of small subscenes was selected and analysed. A specially developed image analysis package was designed at Aston for use on a non-specialised computing system. Using this package, a new method for image classification was developed, allowing accurate delineation of the critical features of the study area.
Abstract:
Horizontal Subsurface Flow Treatment Wetlands (HSSF TWs) are used by Severn Trent Water as a low-cost tertiary wastewater treatment for rural locations. Experience has shown that clogging is a major operational problem that reduces HSSF TW lifetime. Clogging is caused by an accumulation of secondary wastewater solids from upstream processes and decomposing leaf litter, and occurs as a sludge layer where wastewater is loaded onto the surface of the bed at the inlet. Severn Trent systems receive relatively high hydraulic loading rates, which cause overland flow and reduce the ability to mineralise surface sludge accumulations. A novel apparatus and method, the Aston Permeameter, was created to measure hydraulic conductivity in situ. Its accuracy is ±30%, which was considered adequate given that conductivity in clogged systems varies by several orders of magnitude. The Aston Permeameter was used to perform 20 separate tests on 13 different HSSF TWs in the UK and the US. The minimum conductivity measured was 0.03 m/d at Fenny Compton (compared with a clean conductivity of 5,000 m/d), caused by an accumulation of construction fines in one part of the bed. Most systems displayed a two-to-three order of magnitude variation in conductivity in each dimension. Statistically significant transverse variations in conductivity were found in 70% of the systems. Clogging at the inlet and outlet was generally highest where flow enters the influent distribution system and exits the effluent collection system, respectively. Surface conductivity was lower in systems with dense vegetation, because plant canopies reduce surface evapotranspiration and decelerate sludge mineralisation. An equation was derived to describe how the water table profile is influenced by overland flow, spatial variations in conductivity and clogging. The equation is calibrated using a single parameter, the Clog Factor (CF), which represents the equivalent loss of porosity that would reproduce the measured conductivity according to the Kozeny-Carman Equation. The CF varies from 0 for ideal conditions to 1 for completely clogged conditions. The minimum CF found was 0.54, for a system that had recently been refurbished; this represents the deviation from ideal conditions due to characteristics of non-ideal media such as particle size distribution and morphology. The maximum CF was 0.90, for a 15-year-old system that exhibited sludge accumulation and overland flow across the majority of the bed. A Finite Element Model of a 15 m long HSSF TW was used to indicate how hydraulics and hydrodynamics vary as the CF increases. It was found that as the CF increases from 0.55 to 0.65, the subsurface wetted area increases, which causes the mean hydraulic residence time to increase from 0.16 days to 0.18 days. As the CF increases from 0.65 to 0.90, the extent of overland flow increases from 1.8 m to 13.1 m, which reduces hydraulic efficiency from 37% to 12% and reduces the mean residence time to 0.08 days.
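A small sketch of the Clog Factor idea, as I read it from the definition above: find the equivalent porosity loss that reproduces a measured conductivity through the Kozeny-Carman scaling K ∝ φ³/(1−φ)². The clean porosity and the conductivities are example values only, and the exact calibration in the thesis may differ.

```python
# Solve for the equivalent porosity loss (Clog Factor) that reproduces a
# measured conductivity under Kozeny-Carman scaling. Illustrative only.
from scipy.optimize import brentq

def kc(phi):
    """Kozeny-Carman porosity term, proportional to conductivity."""
    return phi**3 / (1 - phi)**2

def clog_factor(K_meas, K_clean, phi0):
    # Effective porosity phi0 * (1 - CF); solve kc ratio = conductivity ratio.
    f = lambda cf: kc(phi0 * (1 - cf)) / kc(phi0) - K_meas / K_clean
    return brentq(f, 0.0, 0.999999)

# Example using the Fenny Compton figures quoted above (0.03 m/d measured
# vs 5,000 m/d clean), with an assumed clean porosity of 0.4.
print(clog_factor(0.03, 5000.0, 0.4))
```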
Abstract:
A multistage distillation column in which mass transfer and a reversible chemical reaction occur simultaneously has been investigated, in order to formulate a technique by which this process can be analysed or predicted. A transesterification reaction between ethyl alcohol and butyl acetate, catalysed by concentrated sulphuric acid, was selected for the investigation, and all the components were analysed on a gas-liquid chromatograph. The transesterification reaction kinetics were studied in a batch reactor for catalyst concentrations of 0.1-1.0 weight percent and temperatures between 21.4 and 85.0 °C. The reaction was found to be second order and, at a given temperature, dependent on the catalyst concentration. Vapour-liquid equilibrium data for six binary, four ternary and one quaternary system were measured at atmospheric pressure using a modified Cathala dynamic equilibrium still. The systems, with the exception of ethyl alcohol - butyl alcohol mixtures, were found to be non-ideal. Multicomponent vapour-liquid equilibrium compositions were predicted by a computer programme which utilised the Van Laar constants obtained from the binary data sets. Good agreement was obtained between the predicted and experimental quaternary equilibrium vapour compositions. Continuous transesterification experiments were carried out in a six-stage sieve-plate distillation column. The column was 3" in internal diameter and of unit construction in glass; the plates were 8" apart and had a free area of 7.7%. Both the liquid and vapour streams were analysed. The component conversion was dependent on the boil-up rate and the reflux ratio. Because of the presence of the reaction, the concentration of one of the lighter components increased below the feed plate; in the same region a highly developed foam formed, due to the presence of the catalyst. The experimental results were analysed by solving a series of simultaneous enthalpy and mass-balance equations, and good agreement was obtained between the experimental and calculated results.
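The prediction step can be illustrated for a binary pair: the standard Van Laar equations give activity coefficients from fitted constants, and modified Raoult's law then yields the equilibrium vapour composition. The constants A12 and A21 and the vapour pressures below are placeholders, not the thesis's fitted values.

```python
# Van Laar activity coefficients for a binary mixture, plus modified
# Raoult's law for the vapour composition. Placeholder parameter values.
import numpy as np

def van_laar(x1, A12, A21):
    x2 = 1.0 - x1
    ln_g1 = A12 * (A21 * x2 / (A12 * x1 + A21 * x2))**2
    ln_g2 = A21 * (A12 * x1 / (A12 * x1 + A21 * x2))**2
    return np.exp(ln_g1), np.exp(ln_g2)

x1, P = 0.3, 760.0               # liquid mole fraction, total pressure (mmHg)
P1_sat, P2_sat = 400.0, 900.0    # assumed pure-component vapour pressures
g1, g2 = van_laar(x1, A12=1.2, A21=0.8)
y1 = g1 * x1 * P1_sat / P        # predicted vapour mole fraction of component 1
print(y1)
```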
Abstract:
An antagonistic differential game of hyperbolic type with a separable linear vector pay-off function is considered. The main result is a description, for each ε ∈ R^N_{>0}, of all ε-Slater saddle points consisting of program strategies, and of the program ε-Slater maximins and minimaxes for this game. To this end, the differential game is reduced to finding the optimal program strategies of two multicriterial problems of hyperbolic type. An approximation allows these problems to be related to a problem of optimal program control, described by a system of ordinary differential equations, with a scalar pay-off function. It is found that the solution of this problem is unchanged whether the players use positional or program strategies. An interesting feature of the considered differential game is that the ε-Slater saddle points are not equivalent: there exist two ε-Slater saddle points such that the values of all components of the vector pay-off function at one of them are greater than the respective components at the other.
Abstract:
The use of modern object-oriented methods for designing information systems (IS), both for describing the interrelations within an IS and for describing the enterprise business processes automated with its help, leads to the necessity of constructing a unified, complete IS from a set of local models of the system. This approach produces contradictions, caused by the inconsistency of the actions of individual IS developers with one another and, much more importantly, by the inconsistency of the points of view of individual IS users. Similar contradictions also arise while an IS is in service at an enterprise, because of the constant change of individual business processes. It should also be noted that the overwhelming majority of IS are now developed and maintained as sets of separate functional modules, each of which can function as an independent IS. However, integrating separate functional modules into a unified system can lead to many problems: for example, the presence in the modules of functions that the enterprise does not use for their intended purpose, and the complexity of the informational and programmatic integration of modules from various manufacturers. In most cases these contradictions, and the reasons causing them, are a consequence of initially representing the IS as an equilibrium, steady system. In [1], a representation of the IS as a dynamic multistable system capable of carrying out the following actions was considered:
Abstract:
Limited literature on parameter estimation of dynamic systems has been identified as the principal reason why parametric bounds have not been established for chaotic time series. The literature does show that a chaotic system displays a sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. A parameter estimation technique could therefore make it possible to establish parametric bounds on the nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series. This study describes work done to establish bounds on a set of unknown parameters. Our results reveal that by establishing parametric bounds it is possible to improve the predictability of a time series even when the dynamics, or the mathematical model, of that series is not known a priori. In our attempt to improve the predictability of various time series, we established bounds for a set of unknown parameters: (i) the embedding dimension used to unfold a set of observations in phase space, (ii) the time delay to use for a series, (iii) the number of neighborhood points to use to avoid detecting false neighbors, and (iv) the local polynomial used to build numerical interpolation functions from one region to another. Using these bounds, we obtain better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation establish a theoretical framework for investigating predictability in time series from the system-dynamics point of view. In closing, our procedure significantly reduces computer resource usage, as the search method is refined and efficient. Finally, the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a non-linear time series by observing its values.
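The first two parameters can be illustrated with a standard time-delay embedding sketch. The logistic map stands in for an observed chaotic series, and the delay `tau` and dimension `m` below are illustrative choices, not the estimated bounds of the dissertation.

```python
# Time-delay embedding of a scalar series into phase space.
import numpy as np

def delay_embed(x, m, tau):
    """Map x(t) to vectors (x(t), x(t + tau), ..., x(t + (m-1)*tau))."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Example series: the logistic map, a standard chaotic test case.
x = np.empty(5000); x[0] = 0.2
for t in range(4999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

Y = delay_embed(x, m=3, tau=1)   # phase-space reconstruction
```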
Abstract:
A functional nervous system requires the precise arrangement of all nerve cells and their neurites. To achieve this correct assembly, a myriad of molecular guidance cues work together to direct the outgrowth of neurites to their correct positions. The small nematode C. elegans provides an ideal model system for studying the complex mechanisms of neurite guidance, owing to its relatively simple nervous system of 302 neurons. I used two mechanosensory neurons, the posterior lateral microtubule (PLM) neurons, to investigate the role of the ephrin and Eph receptor protein family in neurite termination in C. elegans. Activation of the C. elegans Eph receptor VAB-1 on the PLM growth cone is sufficient to cause PLM termination, but the identity and location of the activating ligand have not been established. In my thesis I investigated the ability of the ephrin ligand EFN-1 to activate VAB-1 and cause PLM termination when expressed on the same cell as the receptor (in cis) and on opposing cells (in trans). I showed that EFN-1 is able to activate VAB-1 both in cis and in trans to cause PLM termination. I also assessed the hypodermal seam cells as the source of the ephrin stop cue, using fluorescently labelled and seam-cell-mutant transgenic worms. I found that although the PLM terminates consistently on the seam cell V2 in wild-type worms, independent of PLM length, this process is not significantly disrupted in seam cell mutants. From this I have formed a new hypothesis, that the PLM neurite may itself provide a positional cue for the developing seam cells, and I have created a new transgenic strain which can be used to assess the impact of PLM and ALM cell ablation on seam cell position. My research is the first to demonstrate the ability of an ephrin ligand to activate its Eph receptor in cis, and further research can investigate whether this finding has in vivo applications.
Abstract:
Scottish sandstone buildings are now suffering the long-term effects of salt-crystallisation damage, owing in part to the repeated deposition of de-icing salts during winter months. The use of de-icing salts is necessary in order to maintain safe road and pavement conditions during cold weather, but it comes at a price. Sodium chloride (NaCl), used as the primary de-icing salt throughout the country, is known to be damaging to sandstone masonry. There is, however, a range of alternative, commercially available de-icing salts, and it is unknown what effect these salts have on porous building materials such as sandstone. In order to protect our built heritage against salt-induced decay, it is vital to understand the effects of these different salts on the range of sandstone types found within the historic buildings of Scotland. Eleven common types of sandstone were characterised using a suite of methods in order to understand their mineralogy, pore structure and response to moisture movement, the vital properties that govern a stone's response to weathering and decay. The sandstones were then subjected to a range of durability tests designed to measure their resistance to various weathering processes. Three salt crystallisation tests were undertaken on the sandstones, over a range of 16 to 50 cycles, testing their durability to NaCl, CaCl2, MgCl2 and a chloride blend salt. Samples were analysed primarily by measuring their dry weight loss after each cycle, by visual inspection after each cycle, and by other complementary methods, in order to understand their changing response to moisture uptake after salt treatment. Salt crystallisation was identified as the primary mechanism of decay for each salt, with the extent of damage in each sandstone influenced by the environmental conditions and the pore-grain properties of the stone. Damage recorded in the salt crystallisation tests was ultimately caused by the generation of high crystallisation pressures within the confined pore networks of each stone. Stone- and test-specific parameters controlled the location and magnitude of damage; the amount of micro-pores, their spatial distribution, the water absorption coefficient and the drying efficiency of each stone were identified as the most important stone-specific properties influencing salt-induced decay. Strong correlations were found between the dry weight loss of NaCl-treated samples and the proportion of pores <1 µm in diameter. Crystallisation pressures are known to scale inversely with pore size, while the spatial distribution of these micro-pores is thought to influence the rate, overall extent and type of decay within the stone by concentrating crystallisation pressures in specific regions. The water absorption determines the total amount of moisture entering the stone, which represents the total void space available for salt crystallisation. The drying parameters, on the other hand, ultimately control the distribution of salt crystallisation. Stones characterised by a combination of a high proportion of micro-pores, high water absorption values and slow drying kinetics were shown to be the most vulnerable to NaCl-induced decay. CaCl2 and MgCl2 are shown to have similar crystallisation behaviour, forming thin crystalline sheets under low relative humidity and/or high-temperature conditions, although distinct differences in their behaviour, influenced by test-specific criteria, were identified.
The location of MgCl2 crystallisation close to the stone surface, influenced by prolonged drying under moderate-temperature conditions, was identified as the main factor causing substantial dry weight loss in specific stone types. CaCl2 solutions remained unaffected under these conditions and crystallised only at high temperatures. Homogeneous crystallisation of CaCl2 throughout the stone produced greater internal change, with little dry weight loss recorded. NaCl formed distinctive isometric hopper crystals that caused damage through the non-equilibrium growth of salts in trapped regions of the stone; damage appeared as granular decay and contour scaling across most stone types. The pore network and hydric properties of the stones continually evolve in response to salt crystallisation, creating a dynamic system in which the initial, known properties of clean quarried stone neither continue to govern the processes of salt crystallisation nor reliably predict the stone's response to salt-induced decay.
Abstract:
Embedded software systems in vehicles are of rapidly increasing commercial importance for the automotive industry. Current systems employ a static run-time environment, owing to the difficulty and cost involved in developing dynamic systems in a high-integrity embedded control context. A dynamic system (dynamic, that is, in its configuration) would greatly increase the flexibility of the offered functionality and enable customised software configuration for individual vehicles, adding customer value through plug-and-play capability and increased quality through its inherent ability to adjust to changes in hardware and software. We envisage an automotive system containing a variety of components, from a multitude of organizations, not necessarily known at development time, that dynamically adapts its configuration to suit the run-time system constraints. This paper presents our vision for future automotive control systems, to be investigated in the EU research project DySCAS (Dynamically Self-Configuring Automotive Systems). We propose a self-configuring vehicular control system architecture with capabilities that include automatic discovery and inclusion of new devices, self-optimisation to make the best use of the available processing, storage and communication resources, self-diagnostics and, ultimately, self-healing. Such an architecture has benefits extending to reduced development and maintenance costs, improved passenger safety and comfort, and flexible owner customisation. Specifically, this paper addresses the following issues: the state of the art of embedded software systems in vehicles, emphasising the current limitations arising from fixed run-time configurations; and the benefits and challenges of dynamic configuration, giving rise to opportunities for self-healing, self-optimisation, and the automatic inclusion of users' Consumer Electronic (CE) devices. Our proposal for a dynamically reconfigurable automotive software system platform is outlined, and a typical use case is presented to illustrate the benefits of the envisioned dynamic capabilities.
Abstract:
Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold Pa exists for any quantum gate if such a computation is to continue for an unlimited number of steps: specifically, the error probability Pe for such a gate must fall below the accuracy threshold, Pe < Pa. Estimates of Pa vary widely, though Pa ∼ 10−4 has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared with the original (unimproved) gates, both for ideal and non-ideal controls. Under suitable conditions, all gate error probabilities fall 1 to 4 orders of magnitude below the target threshold of 10−4. After applying neighboring optimal control theory to improve the performance of the quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, which I illustrate by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows. Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical two-qubit state which is a high-fidelity approximation to the Bell state |β01⟩ = 1/√2(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ε ∼ 10−6 (10−5) using one-shot local operations. Step 2 then takes a block of p pairs of physical qubits, each prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
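As a toy numerical check of the target state, the error probability 1 − F of an imperfect |β01⟩ preparation can be computed directly. The simple coherent rotation error below is assumed purely for illustration and is unrelated to the non-adiabatic rapid passage controls used in the thesis.

```python
# Build |beta01> = (|01> + |10>)/sqrt(2) and compute the infidelity of a
# preparation with a small coherent rotation toward the orthogonal state.
# The error model and the value of eps are illustrative assumptions.
import numpy as np

ket01 = np.array([0, 1, 0, 0], dtype=complex)
ket10 = np.array([0, 0, 1, 0], dtype=complex)
bell = (ket01 + ket10) / np.sqrt(2)
orth = (ket01 - ket10) / np.sqrt(2)

eps = 1e-3                                   # small rotation-angle error
prepared = np.cos(eps) * bell + np.sin(eps) * orth
error_prob = 1 - abs(np.vdot(bell, prepared))**2
print(error_prob)                            # ~ eps**2, i.e. ~1e-6
```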