855 results for SYSTEMATIC-ERROR CORRECTION
Abstract:
We show that the classification of bipartite pure entangled states when local quantum operations are restricted yields a structure that is analogous in many respects to that of mixed-state entanglement. Specifically, we develop this analogy by restricting operations through local superselection rules, and show that such exotic phenomena as bound entanglement and activation arise using pure states in this setting. This analogy aids in resolving several conceptual puzzles in the study of entanglement under restricted operations. In particular, we demonstrate that several types of quantum optical states that possess confusing entanglement properties are analogous to bound entangled states. Also, the classification of pure-state entanglement under restricted operations can be much simpler than for mixed-state entanglement. For instance, in the case of local Abelian superselection rules all questions concerning distillability can be resolved.
Abstract:
The problem of distributed compression for correlated quantum sources is considered. The classical version of this problem was solved by Slepian and Wolf, who showed that distributed compression could take full advantage of redundancy in the local sources created by the presence of correlations. Here it is shown that, in general, this is not the case for quantum sources, by proving a lower bound on the rate sum for irreducible sources of product states which is stronger than the one given by a naive application of Slepian-Wolf. Nonetheless, strategies taking advantage of correlation do exist for some special classes of quantum sources. For example, Devetak and Winter demonstrated the existence of such a strategy when one of the sources is classical. Optimal nontrivial strategies for a different extreme, sources of Bell states, are presented here. In addition, it is explained how distributed compression is connected to other problems in quantum information theory, including information-disturbance questions, entanglement distillation and quantum error correction.
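For orientation (a standard result, added here for context, not restated in the abstract): the classical Slepian-Wolf theorem says that two separate encoders can compress correlated sources X and Y losslessly at any rate pair satisfying

```latex
R_X \ge H(X \mid Y), \qquad
R_Y \ge H(Y \mid X), \qquad
R_X + R_Y \ge H(X, Y).
```

A naive quantum analogue would replace the Shannon entropies with von Neumann entropies, suggesting an achievable rate sum of S(AB); the lower bound proved here shows that irreducible sources of product states cannot, in general, reach that rate sum.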
Abstract:
This paper reinvestigates the energy consumption-GDP growth nexus in a panel error correction model using data on 20 net energy importers and exporters from 1971 to 2002. Among the energy exporters, there was bidirectional causality between economic growth and energy consumption in the developed countries in both the short and long run, while in the developing countries energy consumption stimulates growth only in the short run. The former result is also found for energy importers, and the latter result exists only for the developed countries within this category. In addition, compared to the developing countries, the developed countries' growth response to an increase in energy consumption is more elastic, although their income elasticity is lower and less than unity. Lastly, the implications for energy policy, calling for a more holistic approach, are discussed.
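As an illustrative sketch (notation assumed, not taken from the paper), a panel error-correction specification for this nexus typically takes the form

```latex
\Delta \ln Y_{it} = \alpha_i
  + \sum_{k=1}^{p} \beta_k \, \Delta \ln Y_{i,t-k}
  + \sum_{k=0}^{q} \gamma_k \, \Delta \ln E_{i,t-k}
  + \lambda \, \hat{u}_{i,t-1} + \varepsilon_{it},
```

where Y_{it} is GDP, E_{it} is energy consumption, and \hat{u}_{i,t-1} is the lagged residual of the cointegrating (long-run) relation. Short-run causality from energy to growth is read from the joint significance of the \gamma_k, and long-run causality from the significance of the adjustment coefficient \lambda.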
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques that add redundancy. Low-density parity-check codes work along the same principles as the Hamming code, but the parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
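The isomorphism mentioned above can be made explicit: mapping each bit b_i in {0,1} to a spin s_i turns modulo-2 addition into multiplication,

```latex
s_i = (-1)^{b_i}, \qquad
(-1)^{b_1 \oplus b_2} = (-1)^{b_1} (-1)^{b_2},
```

so a parity check \bigoplus_i b_i = 0 becomes the product constraint \prod_i s_i = 1, i.e. a multi-spin interaction of the kind studied in statistical physics.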
Abstract:
This study focuses on: (i) the responsiveness of the U.S. financial sector stock indices to foreign exchange (FX) and interest rate changes; and (ii) the extent to which good model specification can enhance the forecasts from the associated models. Three models are considered. Only the error-correction model (ECM) generated efficient and consistent coefficient estimates. Furthermore, a simple zero-lag model in differences, which is clearly mis-specified, generated forecasts that are better than those of the ECM, even though the ECM depicts relationships that are more consistent with economic theory. In brief, FX and interest rate changes do not impact the return-generating process of the stock indices in any substantial way. Most of the variation in the sector stock indices is associated with past variation in the indices themselves and variation in the market-wide stock index. These results have important implications for financial and economic policies.
Abstract:
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. The imposition of Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak.
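For concreteness (standard notation, added for context), the error-correction form of a VAR is

```latex
\Delta x_t = \Pi x_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \, \Delta x_{t-i} + c + \varepsilon_t,
```

where x_t is the vector of FX spot rates and \Pi = \alpha \beta' has reduced rank when the rates are cointegrated; the non-stationary and stationary specifications are, in this literature, typically the VAR in levels and the VAR in first differences (\Pi = 0), respectively.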
Abstract:
An astigmatic scheme of a laser wavelength meter based on a single air-gap Fizeau interferometer is described. For a multimode laser, the accuracy in determining the center of gravity of a spectrum is within 1 GHz. Two complementary testing techniques are proposed for the instrument. Using them, it was shown for the first time that, for this type of meter, a systematic error arises and increases as the radiation-spectrum width decreases. The effect is periodic in the lasing frequency and results from a weak third beam produced by reflection from the front surface of the interferometer. Moreover, in the previously designed optical schemes, this effect is so strong that unambiguous determination of the wavelength of single-frequency radiation is impossible. The use of an astigmatic scheme further attenuates the influence of this third beam, eliminating the ambiguity in the results and reducing the absolute error to ±1.5 GHz.
Abstract:
This thesis addresses the viability of automatic speech recognition (ASR) for control room systems; with careful system design, ASR devices can be a useful means of human-computer interaction for specific types of task. These tasks can be defined as complex verbal activities, such as command and control, and can be paired with spatial tasks, such as monitoring, without detriment. It is suggested that ASR use be confined to routine plant operation, as opposed to critical incidents, due to possible problems of stress on the operators' speech. It is proposed that using ASR will require operators to adapt a commonly used skill to cater for a novel use of speech. Before using the ASR device, new operators will require some form of training. It is shown that a demonstration by an experienced user of the device can lead to better performance than written instructions. Thus, a relatively cheap and very efficient form of operator training can be supplied by demonstration from experienced ASR operators. From a series of studies into speech-based interaction with computers, it is concluded that the interaction should be designed to capitalise upon the tendency of operators to use short, succinct, task-specific styles of speech. From studies comparing different types of feedback, it is concluded that operators should be given screen-based feedback, rather than auditory feedback, for control room operation. Feedback will take two forms: the use of the ASR device will require recognition feedback, which is best supplied using text, while the performance of a process control task will require task feedback integrated into the mimic display. This latter feedback can be either textual or symbolic, but it is suggested that symbolic feedback will be more beneficial. Related to both interaction style and feedback is the issue of handling recognition errors. These should be corrected by simple command repetition practices, rather than by error-handling dialogues. This method of error correction is held to be non-intrusive to primary command and control operations. The thesis also addresses some of the problems of user error in ASR use, and provides a number of recommendations for their reduction.
Abstract:
This thesis investigates the pricing-to-market (PTM) behaviour of the UK export sector. Unlike previous studies, this study econometrically tests for seasonal unit roots in the export prices prior to estimating PTM behaviour; prior studies have seasonally adjusted the data automatically. The results show little evidence of seasonal unit roots in monthly export prices, implying that information about the data-generating process of the series is lost when PTM is estimated using seasonally-adjusted data. Prior studies have also ignored the econometric properties of the data despite the existence of ARCH effects in such data; the standard approach has been to estimate PTM models using Ordinary Least Squares (OLS). For this reason, both EGARCH and GJR-EGARCH (hereafter GJR) estimation methods are used to estimate both a standard and an error-correction model (ECM) of PTM. The results indicate that PTM behaviour varies across UK sectors. The variables used in the PTM models are cointegrated, and an ECM is a valid representation of pricing behaviour. The study also finds that price adjustment is slower when the analysis is performed on real prices, i.e., data adjusted for inflation. There is strong evidence of autoregressive conditional heteroscedasticity (ARCH) effects, meaning that the PTM parameter estimates of prior studies have been inefficiently estimated. Surprisingly, there is very little evidence of asymmetry, suggesting that exporters appear to PTM at a relatively constant rate. This finding might also explain the failure of prior studies to find evidence of asymmetric exposure to foreign exchange (FX) rates. The study also provides a cross-sectional analysis of the implications for PTM of producers' marginal cost, market share and product differentiation. The cross-sectional regressions are estimated using OLS, Generalised Method of Moments (GMM) and Logit estimations. Overall, the results suggest that market share affects PTM positively. Exporters with smaller market share are more likely to operate PTM. By contrast, product differentiation is negatively associated with PTM, so industries with highly differentiated products are less likely to adjust their prices. Marginal costs, however, seem not to be significantly associated with PTM. Exporters perform PTM to limit the pass-through of FX rate effects to their foreign customers, but they also avoid exploiting PTM to the full, since doing so can substantially reduce their profits.
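A minimal sketch of a pricing-to-market regression in error-correction form (notation assumed for illustration, not the thesis's exact specification):

```latex
\Delta \ln p_t = \beta_0 + \beta_1 \, \Delta \ln e_t + \beta_2 \, \Delta \ln c_t
  + \lambda \, (\ln p - \theta_1 \ln e - \theta_2 \ln c)_{t-1} + \varepsilon_t,
```

where p_t is the sector export price, e_t the exchange rate and c_t marginal cost. Here \beta_1 is the short-run PTM coefficient (the share of an exchange-rate movement absorbed into export prices) and \lambda the speed of adjustment to the long-run relation; the ARCH effects reported above are what motivate modelling \varepsilon_t with EGARCH or GJR rather than assuming constant variance under OLS.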
Abstract:
The contributions of this research fall into three distinct but related areas. The focus of the work is on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are not desirable, so a balance must be struck between additional bandwidth and the delays caused by retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit and guarantee delivery of the most important packets. The delay associated with the proposal is measured, and its potential as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated while buffer behaviour is monitored to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric and show that the objective and subjective scores are closely correlated.
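The buffer-based measurement described above can be illustrated with a toy simulation. Everything below (function names, the pre-buffering threshold, the example trace) is an assumed sketch, not the thesis's implementation; it produces the pause counts and durations from which a pause-intensity score would be computed.

```python
# Toy playback-buffer simulation for a TCP-delivered video stream.
from typing import List, Tuple

def simulate_playback(arrivals: List[float], frame_interval: float,
                      startup_frames: int = 30) -> Tuple[int, float]:
    """Replay frame-arrival timestamps (seconds) and return
    (number_of_pauses, total_pause_time).

    Playback starts once `startup_frames` frames are buffered, then
    consumes one frame every `frame_interval` seconds; a pause occurs
    whenever the next frame has not arrived by its playout deadline.
    """
    pauses, pause_time = 0, 0.0
    clock = arrivals[startup_frames - 1]      # start after pre-buffering
    for arrival in arrivals[startup_frames:]:
        deadline = clock + frame_interval     # when this frame is due
        if arrival > deadline:                # buffer underrun -> stall
            pauses += 1
            pause_time += arrival - deadline
            clock = arrival                   # resume when the frame arrives
        else:
            clock = deadline
    return pauses, pause_time

# Example: a 25 fps stream whose delivery rate halves mid-session.
arrivals = [i * 0.04 for i in range(200)] + [8.0 + i * 0.08 for i in range(100)]
n, t = simulate_playback(arrivals, frame_interval=0.04)
print(f"{n} pauses, {t:.2f} s paused")        # inputs to a pause-intensity score
```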
Abstract:
Motivated by the increasing demand for, and the challenges of, video streaming, this thesis investigates methods by which the quality of the video can be improved. We utilise overlay networks, created by implementing relay nodes, to produce path diversity, and show through analytical and simulation models in which environments path diversity can improve the packet loss probability. We take the simulation and analytical models further by implementing a real overlay network on top of PlanetLab, and show that when the network conditions remain constant the video quality received by the client can be improved. In addition, we show that in the environments where path diversity improves the video quality, forward error correction can be used to further enhance the quality. We then investigate the effect of the IEEE 802.11e Wireless LAN standard with quality of service enabled on the video quality received by a wireless client. We find that assigning all the video to a single class outperforms a cross-class assignment scheme proposed by other researchers. The issue of virtual contention at the access point is also examined. We then increase the intelligence of our relay nodes and enable them to cache video. To maximise the usefulness of these caches, we introduce a measure called the PSNR profit and present an optimal caching method for achieving the maximum PSNR profit at the relay nodes, where partitioned video contents are stored to provide enhanced quality for the client. We also show that, with the optimised cache, the degradation in the video quality received by the client is more graceful than in the non-optimised system when the network experiences packet loss or is congested.
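The basic appeal of path diversity can be seen in a simple independence calculation (illustrative only; the analytical models in the thesis also cover the correlated-loss environments where this argument breaks down). If a packet is duplicated over two paths with independent loss probabilities p_1 and p_2, it is lost only when both copies are lost:

```latex
P_{\text{loss}} = p_1 \, p_2,
```

so two paths each losing 5% of packets yield a combined loss of 0.25%. When the paths share a congested bottleneck the losses become correlated and the improvement largely disappears, which is why identifying the right environments matters.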
Abstract:
This thesis presents an experimental investigation of different effects and techniques that can be used to upgrade legacy WDM communication systems. The main constraint in upgrading legacy systems is that the fundamental setup, including component settings such as EDFA gains, cannot be altered; the improvement must therefore be carried out at the network terminal. A general introduction to optical fibre communications is given at the beginning, including optical communication components and system impairments. Experimental techniques for performing laboratory optical transmission experiments are presented before the experimental work of this thesis. These techniques include optical transmitter and receiver designs as well as the design and operation of the recirculating loop. The main experimental work comprises three studies. The first involves the development of line monitoring equipment that can be reliably used to monitor the performance of optically amplified long-haul undersea systems. This equipment can instantly locate faults along the legacy communication link, which in turn enables rapid repair and hence an upgrade of the legacy system. The second study investigates the effect of changing the number of transmitted 1s and 0s on the performance of a WDM system. This effect can, in practice, be seen in some coding systems, e.g. the forward-error correction (FEC) technique, where the proportion of 1s and 0s is changed at the transmitter by adding extra bits to the original bit sequence. The final study presents transmission results after all-optical format conversion from NRZ to CSRZ and from RZ to CSRZ using a semiconductor optical amplifier in a nonlinear optical loop mirror (SOA-NOLM). This study is mainly motivated by the fact that all-optical processing, including format conversion, has become attractive for future data networks that are proposed to be all-optical. The feasibility of the SOA-NOLM device for converting single and WDM signals is described. The optical conversion bandwidth and its limitations for WDM conversion are also investigated. All studies in this thesis employ 10 Gbit/s single or WDM signals transmitted over a dispersion-managed fibre span in the recirculating loop. The fibre span is composed of single-mode fibres (SMF) whose losses and dispersion are compensated using erbium-doped fibre amplifiers (EDFAs) and dispersion-compensating fibres (DCFs), respectively. Different configurations of the fibre span are presented in different parts.
Abstract:
Improving bit error rates in optical communication systems is a difficult and important problem. The error correction must take place at high speed and be extremely accurate. We show the feasibility of using hardware-implementable machine learning techniques. This may enable some error correction at the speed required.
Abstract:
Optical data communication systems are prone to a variety of processes that modify the transmitted signal and introduce errors in the discrimination of 1s from 0s. This is a difficult, and commercially important, problem to solve. Errors must be detected and corrected at high speed, and the classifier must be very accurate; ideally it should also be tunable to the characteristics of individual communication links. We show that simple single-layer neural networks may be used to address these problems, and examine how different input representations affect the accuracy of bit error correction. Our results lead us to conclude that a system based on these principles can perform at least as well as an existing non-trainable error correction system, whilst being tunable to suit the individual characteristics of different communication links.
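A minimal sketch of the kind of single-layer classifier involved (the synthetic channel model, input representation and all parameter values below are assumptions for illustration, not the paper's setup):

```python
# Single-layer (logistic) classifier deciding 1s from 0s in a noisy,
# ISI-distorted signal; trained by full-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def make_samples(n: int, taps: int = 5, noise: float = 0.4):
    """Synthetic received waveform: NRZ bits distorted by inter-symbol
    interference from neighbouring bits plus Gaussian noise. Each input
    to the classifier is a window of `taps` consecutive samples."""
    bits = rng.integers(0, 2, n + taps)
    levels = 2.0 * bits - 1.0                         # NRZ levels +/-1
    isi = levels + 0.3 * np.roll(levels, 1) + 0.3 * np.roll(levels, -1)
    rx = isi + noise * rng.standard_normal(levels.shape)
    X = np.stack([rx[i:i + taps] for i in range(n)])  # sample windows
    y = bits[taps // 2 : n + taps // 2]               # bit at window centre
    return X, y

X, y = make_samples(20000)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(20):                                   # training epochs
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))            # sigmoid output
    grad = p - y                                      # cross-entropy gradient
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

Xt, yt = make_samples(5000)                           # held-out test set
ber = ((Xt @ w + b > 0).astype(int) != yt).mean()
print(f"bit error rate after training: {ber:.4f}")
```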
Abstract:
The impact of hybrid erbium-doped fiber amplifier (EDFA)/Raman amplification on a spectrally efficient coherent-wavelength-division-multiplexed (CoWDM) optical communication system is experimentally studied and modeled. Simulations suggested that 23-dB Raman gain over an unrepeatered span of 124 km single-mode fiber would allow a decrease of the mean input power of ~6 dB for a fixed bit-error rate (BER). Experimentally we demonstrated a 1.2-dB Q-factor improvement for a 2-Tb/s seven-band CoWDM with backward Raman amplification. The system delivered an optical signal-to-noise ratio of 35 dB at the output of the receiver preamplifier, providing a worst-case BER of 2 × 10⁻⁶ over 49 subcarriers at 42.8 Gbaud and leaving a system margin (in terms of Q-factor) of ~4 dB from the forward-error-correction threshold.
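The BER and Q-factor figures quoted here are related by the standard Gaussian-noise formula BER = ½ erfc(Q/√2). A short sketch of the conversion (the FEC threshold below is an assumed typical value, not taken from the paper):

```python
# BER <-> Q-factor conversion under the Gaussian-noise approximation.
from math import log10, sqrt
from scipy.special import erfc, erfcinv

def q_from_ber(ber: float) -> float:
    """Linear Q such that BER = 0.5 * erfc(Q / sqrt(2))."""
    return sqrt(2) * erfcinv(2 * ber)

def q_db(q_linear: float) -> float:
    return 20 * log10(q_linear)

q = q_from_ber(2e-6)                       # the worst-case BER quoted above
assert abs(0.5 * erfc(q / sqrt(2)) - 2e-6) < 1e-9   # round-trip check
print(f"Q = {q:.2f} ({q_db(q):.1f} dB)")   # ~4.6 linear, ~13.3 dB
# A hard-decision FEC threshold near BER ~ 1e-3 (assumed, typical of
# first-generation FEC) corresponds to roughly 9.8 dB, i.e. a margin of
# a few dB below the measured Q, consistent with the figure above.
```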