76 results for Classification error rate
Abstract:
We present experimental results for wavelength-division multiplexed (WDM) transmission performance using unbalanced proportions of 1s and 0s in pseudo-random bit sequence (PRBS) data. This investigation simulates the effect of local (in time) data unbalancing, which occurs in some coding systems, such as forward error correction, when extra bits are added to the WDM data stream. We show that such local unbalancing, which would in practice give a time-dependent error rate, can be employed to improve the performance of legacy long-haul WDM systems if the system is allowed to operate in the nonlinear power region. We use a recirculating loop to simulate a long-haul fibre system.
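As an illustration of the unbalanced bit streams studied here, a minimal Python sketch (all parameters hypothetical, not those of the experiment) that generates a stream with a skewed proportion of 1s and inspects its local, in-time mark ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

def unbalanced_bits(n, p_one):
    """Generate a random bit stream with mark ratio p_one (fraction of 1s)."""
    return (rng.random(n) < p_one).astype(int)

def local_mark_ratio(bits, window):
    """Sliding-window fraction of 1s, exposing local (in time) unbalancing."""
    kernel = np.ones(window) / window
    return np.convolve(bits, kernel, mode="valid")

stream = unbalanced_bits(10_000, p_one=0.6)   # illustrative 60/40 unbalance
ratio = local_mark_ratio(stream, window=128)
print(f"global mark ratio: {stream.mean():.3f}, "
      f"local range: {ratio.min():.3f}..{ratio.max():.3f}")
```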
Abstract:
In this work we introduce the periodic nonlinear Fourier transform (PNFT) and propose a proof-of-concept communication system based on it, using a simple waveform with a known nonlinear spectrum (NS). We study the performance of the PNFT-based transmission system, addressing the bit-error rate (BER) as a function of propagation distance, and show the benefits of this approach. By analysing our simulation results for a system with lumped amplification, we demonstrate the decent potential of the new processing method.
Abstract:
The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3× the target and achieve the target with around 0.2 dB of extra signal-to-noise ratio.
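A block interleaver of the kind proposed above can be sketched in a few lines; this is a generic row-write/column-read permutation, assuming nothing about the paper's optimized dimensions, and the BCH encoding/decoding itself is omitted:

```python
def block_interleave(bits, rows, cols):
    """Write bits row-wise into a rows x cols block, read them out column-wise.

    Spreads a burst of consecutive channel errors (e.g. from a phase slip)
    across many code words, so each BCH word sees only a few errors.
    """
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse permutation: restore the original row-wise order."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))                      # stand-in for 12 coded bits
tx = block_interleave(data, rows=3, cols=4)
assert block_deinterleave(tx, rows=3, cols=4) == data
```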
Abstract:
In this paper we propose the design of communication systems based on the periodic nonlinear Fourier transform (PNFT), following the introduction of the method in Part I. We show that the famous "eigenvalue communication" idea [A. Hasegawa and T. Nyu, J. Lightwave Technol. 11, 395 (1993)] can also be generalized to the PNFT: in this case, the main spectrum attributed to the PNFT signal decomposition remains constant during propagation down the optical fiber link. Therefore, the main PNFT spectrum can be encoded with data in the same way as soliton eigenvalues in the original proposal. The results are presented in terms of bit-error rate (BER) values for different modulation techniques and different constellation sizes vs. the propagation distance, showing the good potential of the technique.
Abstract:
We quantify the error statistics and patterning effects in a 5×40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even for large BER (BER > 10⁻¹). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the real channel BER is outside the FEC workspace.
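The patterning statistics at issue can be illustrated with a toy Python sketch (stream lengths and probabilities hypothetical): counting the error-prone 101 and 010 triplets and observing how a skewed ones-density, as exploited by constrained pre-coding, lowers their frequency:

```python
import random

def triplet_fraction(bits, patterns=("101", "010")):
    """Fraction of overlapping 3-bit windows matching the error-prone triplets."""
    s = "".join(map(str, bits))
    hits = sum(s[i:i + 3] in patterns for i in range(len(s) - 2))
    return hits / (len(s) - 2)

random.seed(1)
for p_one in (0.5, 0.3):                     # balanced vs. skewed stream
    bits = [int(random.random() < p_one) for _ in range(100_000)]
    print(f"p(1)={p_one}: triplet fraction = {triplet_fraction(bits):.3f}")
```

With independent bits, the expected fraction drops from 0.25 at p(1)=0.5 to about 0.21 at p(1)=0.3, which is the effect a skewed code exploits.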
Abstract:
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K→∞ when the code rate K/C is finite. We then examine the finite temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
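A schematic Python sketch of the Sourlas construction described above (spins in place of bits, each code symbol the product of K randomly chosen message spins, giving rate K/C); decoding is omitted, since it requires belief propagation or simulated annealing:

```python
import numpy as np

rng = np.random.default_rng(0)

def sourlas_encode(spins, K, C):
    """Each code symbol is the product of K message spins; with N*C/K checks
    in total there are C checks per message bit on average, so rate = K/C."""
    N = len(spins)
    n_checks = N * C // K
    idx = rng.integers(0, N, size=(n_checks, K))   # random connectivity tensor
    return np.prod(spins[idx], axis=1), idx

def bsc(symbols, p):
    """Binary symmetric channel on +/-1 symbols: flip each with probability p."""
    flips = rng.random(len(symbols)) < p
    return np.where(flips, -symbols, symbols)

message = rng.choice([-1, 1], size=1000)           # spin representation of bits
codeword, idx = sourlas_encode(message, K=3, C=6)  # rate K/C = 1/2
received = bsc(codeword, p=0.1)
print(f"{len(codeword)} checks, channel flip rate "
      f"{np.mean(received != codeword):.3f}")
```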
Abstract:
We derive a mean field algorithm for binary classification with Gaussian processes which is based on the TAP approach originally proposed in the statistical physics of disordered systems. The theory also yields an approximate leave-one-out estimator for the generalization error, which is computed with no extra computational cost. We show that from the TAP approach it is possible to derive both a simpler 'naive' mean field theory and support vector machines (SVM) as limiting cases. For both mean field algorithms and support vector machines, simulation results for three small benchmark data sets are presented. They show (1) that one may get state-of-the-art performance by using the leave-one-out estimator for model selection, and (2) that the built-in leave-one-out estimators are extremely precise when compared to the exact leave-one-out estimate. The latter result is taken as strong support for the internal consistency of the mean field approach.
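To make the model-selection use concrete: a hedged sketch using scikit-learn's exact, refit-based leave-one-out with an SVM stand-in (hypothetical data); the paper's point is that the TAP built-in estimator approximates this error without the N refits:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=80, n_features=5, random_state=0)

# Exact leave-one-out error for each hyperparameter: N refits per candidate.
# A built-in estimator of the TAP kind would approximate these numbers
# directly from a single training run.
for C in (0.1, 1.0, 10.0):
    scores = cross_val_score(SVC(C=C), X, y, cv=LeaveOneOut())
    print(f"C={C}: leave-one-out error = {1 - scores.mean():.3f}")
```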
Abstract:
This article investigates the behaviour of exchange rates across different regimes for a post-Bretton Woods period. The exchange rate regime classification is based on that of Frankel et al. (2004), who condensed the 10 categories of exchange rate regimes reported by the International Monetary Fund (IMF) into three categories. Panel unit-root tests and panel cointegration are used to examine the Purchasing Power Parity (PPP) hypothesis. The latter test is used to check for both the weak and strong forms of PPP. The panel unit-root tests show no evidence of PPP and suggest there is no difference in the behaviour of exchange rates across different regimes. However, failure to detect PPP across any of the regimes could be due to structural breaks. This assumption is reinforced by the results of the cointegration tests, which suggest that there exists at least a weak form of PPP for the different regimes. The evidence for strong PPP decreases as the exchange rate regime moves away from a flexible exchange rate regime.
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework.
Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting (1) performance trends over time, and (2) the consistency of performance in different tasks.
A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to the sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
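The sensitivity/criterion distinction underpinning this analysis is conventionally computed from hit and false-alarm rates under the equal-variance signal detection model; a minimal Python sketch with illustrative numbers (not data from the thesis):

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal detection theory:
    d' = z(H) - z(F) (sensitivity), c = -(z(H) + z(F)) / 2 (criterion)."""
    zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zh - zf, -(zh + zf) / 2

# A monitor whose hit rate and false-alarm rate both fall over the work period:
early = sdt_measures(hit_rate=0.90, fa_rate=0.20)
late = sdt_measures(hit_rate=0.75, fa_rate=0.08)
print(f"early: d'={early[0]:.2f}, c={early[1]:.2f}")
print(f"late:  d'={late[0]:.2f}, c={late[1]:.2f}")  # d' steady, c stricter
```

Here sensitivity d' stays roughly constant while the criterion c becomes stricter, which is exactly the pattern the thesis reports for most monitoring situations.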
Abstract:
This thesis presents an investigation into the application of methods of uncertain reasoning to the biological classification of river water quality. Existing biological methods for reporting river water quality are critically evaluated, and the adoption of a discrete biological classification scheme advocated. Reasoning methods for managing uncertainty are explained, in which the Bayesian and Dempster-Shafer calculi are cited as primary numerical schemes. Elicitation of qualitative knowledge on benthic invertebrates is described. The specificity of benthic response to changes in water quality leads to the adoption of a sensor model of data interpretation, in which a reference set of taxa provides probabilistic support for the biological classes. The significance of sensor states, including that of absence, is shown. Novel techniques of directly eliciting the required uncertainty measures are presented. Bayesian and Dempster-Shafer calculi were used to combine the evidence provided by the sensors. The performance of these automatic classifiers was compared with the expert's own discrete classification of sampled sites. Variations of sensor data weighting, combination order and belief representation were examined for their effect on classification performance. The behaviour of the calculi under evidential conflict and alternative combination rules was investigated. Small variations in evidential weight and the inclusion of evidence from sensors absent from a sample improved classification performance of Bayesian belief and support for singleton hypotheses. For simple support, inclusion of absent evidence decreased classification rate. The performance of Dempster-Shafer classification using consonant belief functions was comparable to Bayesian and singleton belief. Recommendations are made for further work in biological classification using uncertain reasoning methods, including the combination of multiple-expert opinion, the use of Bayesian networks, and the integration of classification software within a decision support system for water quality assessment.
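Dempster's rule of combination, the evidence-pooling step referred to above, fits in a few lines of Python; the water-quality classes and mass values below are hypothetical, not the elicited ones:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements.
    Mass assigned to conflicting (empty) intersections is discarded
    and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q
    return {s: v / (1 - conflict) for s, v in combined.items()}

# Hypothetical evidence from two invertebrate 'sensors' over classes A, B, C:
m1 = {frozenset("AB"): 0.7, frozenset("ABC"): 0.3}
m2 = {frozenset("B"): 0.6, frozenset("ABC"): 0.4}
print(dempster_combine(m1, m2))   # mass concentrates on class B
```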
Abstract:
The present thesis investigates pattern glare susceptibility following stroke and the immediate and prolonged impact of prescribing optimal spectral filters on reading speed, accuracy and visual search performance. Principal observations: A case report has shown that visual stress can occur following stroke. The use of spectral filters and precision tinted lenses proved to be a successful intervention in this case, although the parameters required modification following a further stroke episode. Stroke subjects demonstrate elevated levels of pattern glare compared to normative data values and a control group. Initial use of an optimal spectral filter in a stroke cohort increased reading speed by ~6% and almost halved error scores, findings not replicated in a control group. With migraine subjects removed, reading speed increased by ~8% with an optimal filter and error scores almost halved. Prolonged use of an optimal spectral filter by stroke subjects increased reading speed by >9%, and error scores more than halved. When the same subjects switched to prolonged use of a grey filter, reading speed reduced by ~4% and error scores increased marginally. When a second group of stroke subjects used a grey filter first, reading speed decreased by ~3% but increased by ~3% with prolonged use of an optimal filter, with error scores almost halving; these findings persisted with migraine subjects excluded. Initial use of an optimal spectral filter improved visual search response time but not error scores in a stroke cohort with migraine subjects excluded. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores in a stroke group; these findings persisted with the exclusion of migraine subjects.
Abstract:
Through extensive direct modelling we quantify the error statistics and patterning effects in a WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification at a 40 Gbit/s channel rate. We examine the BER improvement achieved through skewed channel pre-coding, which reduces the frequency of appearance of the triplets 101 and 010 in a long data stream.
Abstract:
In this paper an important new example of a system with strong and nontrivial patterning effects is presented. There has been much interest lately in the implementation of the differential phase-shift keying (PSK) modulation format for long-haul and ultra-long-haul fibre communications and, in particular, the differential binary PSK (DBPSK) modulation format, where data is encoded in the optical phase. The results of a direct computation of the error statistics for an SMF/DCF 5-channel WDM RZ-DBPSK link with hybrid Raman/EDFA amplification at 40 Gbit/s per channel, with a channel separation of 100 GHz, are presented. The statistics of bit triplets are obtained, quantifying strong pattern-dependent intersymbol interference (ISI).
Abstract:
In this paper, the authors use an exponential generalized autoregressive conditional heteroscedastic (EGARCH) error-correction model (ECM), that is, EGARCH-ECM, to estimate the pass-through effects of foreign exchange (FX) rates and producers' prices for 20 U.K. export sectors. The long-run adjustment of export prices to FX rates and producers' prices is within the range of -1.02% (for the Textiles sector) to -17.22% (for the Meat sector). The contemporaneous pricing-to-market (PTM) coefficient is within the range of -72.84% (for the Fuels sector) to -8.05% (for the Textiles sector). Short-run FX rate pass-through is not complete even after several months. Rolling EGARCH-ECMs show that the short- and long-run effects of FX rates and producers' prices fluctuate substantially before equilibrium is achieved, as do the asymmetry and volatility estimates.
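As a hedged illustration of the volatility half of such a model: the Python arch package fits an EGARCH specification with an asymmetry (leverage) term directly; the error-correction mean equation and the pass-through regressors of the paper are omitted, and the data below are simulated stand-ins, not the sectoral U.K. series:

```python
import numpy as np
from arch import arch_model

# Simulated stand-in for an export-price returns series (purely illustrative).
rng = np.random.default_rng(0)
returns = rng.standard_normal(500)

# EGARCH(1,1) with an asymmetry term (o=1), as in the EGARCH-ECM family.
model = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1)
result = model.fit(disp="off")
print(result.summary())
```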