835 results for Error correction codes


Relevance:

80.00%

Publisher:

Abstract:

This thesis addresses the viability of automatic speech recognition for control room systems; with careful system design, automatic speech recognition (ASR) devices can be a useful means of human-computer interaction in specific types of task. These tasks can be defined as complex verbal activities, such as command and control, and can be paired with spatial tasks, such as monitoring, without detriment. It is suggested that ASR use be confined to routine plant operation, as opposed to critical incidents, due to the possible effects of stress on the operators' speech. It is proposed that using ASR will require operators to adapt a commonly used skill to cater for a novel use of speech. Before using the ASR device, new operators will require some form of training. It is shown that a demonstration by an experienced user of the device can lead to better performance than instructions. Thus, a relatively cheap and very efficient form of operator training can be supplied by demonstrations from experienced ASR operators. From a series of studies into speech-based interaction with computers, it is concluded that the interaction should be designed to capitalise upon the tendency of operators to use short, succinct, task-specific styles of speech. From studies comparing different types of feedback, it is concluded that operators should be given screen-based feedback, rather than auditory feedback, for control room operation. Feedback will take two forms: use of the ASR device will require recognition feedback, which is best supplied using text, while performance of a process control task will require task feedback integrated into the mimic display. This latter feedback can be either textual or symbolic, but it is suggested that symbolic feedback will be more beneficial. Related to both interaction style and feedback is the issue of handling recognition errors. These should be corrected by simple command repetition, rather than by error-handling dialogues. This method of error correction is held to be non-intrusive to primary command and control operations. This thesis also addresses some of the problems of user error in ASR use and provides a number of recommendations for its reduction.

Relevance:

80.00%

Publisher:

Abstract:

This thesis investigates the pricing-to-market (PTM) behaviour of the UK export sector. Unlike previous studies, this study econometrically tests for seasonal unit roots in export prices prior to estimating PTM behaviour; prior studies have seasonally adjusted the data automatically. The results show that monthly export prices contain very few seasonal unit roots, implying that information about the data-generating process of the series is lost when PTM is estimated using seasonally adjusted data. Prior studies have also ignored the econometric properties of the data despite the existence of ARCH effects, the standard approach being to estimate PTM models using Ordinary Least Squares (OLS). For this reason, both EGARCH and GJR-EGARCH (hereafter GJR) estimation methods are used to estimate both a standard and an Error Correction Model (ECM) of PTM. The results indicate that PTM behaviour varies across UK sectors. The variables used in the PTM models are cointegrated, and an ECM is a valid representation of pricing behaviour. The study also finds that price adjustment is slower when the analysis is performed on real prices, i.e., data adjusted for inflation. There is strong evidence of autoregressive conditional heteroscedasticity (ARCH) effects, meaning that the PTM parameter estimates of prior studies have been inefficiently estimated. Surprisingly, there is very little evidence of asymmetry, which suggests that exporters price to market at a relatively constant rate. This finding might also explain the failure of prior studies to find evidence of asymmetric exposure to foreign exchange (FX) rates. The study also provides a cross-sectional analysis to examine how producers' marginal cost, market share and product differentiation explain the observed PTM. The cross-sectional regressions are estimated using OLS, Generalised Method of Moments (GMM) and Logit estimations. Overall, the results suggest that market share affects PTM positively. Exporters with smaller market shares are more likely to operate PTM. Conversely, product differentiation is negatively associated with PTM, so industries with highly differentiated products are less likely to adjust their prices. Marginal costs, however, do not seem to be significantly associated with PTM. Exporters price to market to limit the pass-through of FX rate effects to their foreign customers, but they also avoid exploiting PTM to the full, since doing so can substantially reduce their profits.
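The error-correction modelling described above can be illustrated with a minimal Engle-Granger two-step sketch. The series names, the synthetic data and the plain-OLS workflow below are illustrative assumptions only; the thesis estimates its ECMs with EGARCH/GJR errors rather than OLS.

```python
# Minimal Engle-Granger two-step error correction model (ECM) sketch.
# Illustrative only: the series and data are hypothetical, and the thesis
# uses EGARCH/GJR estimation rather than the plain OLS shown here.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 240                                              # e.g. 20 years of monthly data
fx = np.cumsum(rng.normal(size=n))                   # log exchange rate (random walk)
price = 0.6 * fx + rng.normal(scale=0.5, size=n)     # log export price, cointegrated with fx

# Step 1: cointegration test and long-run regression; residuals form the
# error-correction term (ECT).
t_stat, p_value, _ = coint(price, fx)
long_run = sm.OLS(price, sm.add_constant(fx)).fit()
ect = long_run.resid

# Step 2: short-run dynamics in differences, with the lagged ECT.
d_price, d_fx = np.diff(price), np.diff(fx)
X = sm.add_constant(np.column_stack([d_fx, ect[:-1]]))
ecm = sm.OLS(d_price, X).fit()

print(f"cointegration p-value: {p_value:.3f}")
print(f"adjustment speed (ECT coefficient): {ecm.params[2]:.3f}")
```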

Relevance:

80.00%

Publisher:

Abstract:

The contributions of this research are split into three distinct but related areas. The focus of the work is on improving the efficiency of video content distribution in networks that are liable to packet loss, such as the Internet. Initially, the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP) are presented. Since added FEC can be used to reduce the number of retransmissions, the requirement for TCP to deal with any losses is greatly reduced. For real-time applications, delay must be kept to a minimum and retransmissions are undesirable, so a balance must be struck between additional bandwidth and delays due to retransmissions. This is followed by the proposal of a hybrid transport, specifically for H.264 encoded video, as a compromise between the delay-prone TCP and the loss-prone UDP. It is argued that the playback quality at the receiver often need not be 100% perfect, provided a certain level is assured. Reliable TCP is used to transmit, and guarantee delivery of, the most important packets. The delay associated with the proposal is measured, and its potential as an alternative to the conventional methods of transporting video by either TCP or UDP alone is demonstrated. Finally, a new objective measurement is investigated for assessing the playback quality of video transported using TCP. A new metric is defined to characterise the quality of playback in terms of its continuity. Using packet traces generated from real TCP connections in a lossy environment, the playback of a video can be simulated whilst monitoring buffer behaviour to calculate pause intensity values. Subjective tests are conducted to verify the effectiveness of the metric and show that the objective and subjective scores are closely correlated.
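As a toy illustration of how added forward error correction can avoid retransmissions, the sketch below recovers a single lost packet from one XOR parity packet. This is a generic single-parity FEC example under idealised assumptions, not the specific coding scheme used in the thesis.

```python
# Toy single-parity FEC: one XOR parity packet protects a block of k packets,
# so any single lost packet can be rebuilt without a TCP retransmission.
# Generic illustration only, not the thesis's actual FEC scheme.
from functools import reduce

def make_parity(packets: list[bytes]) -> bytes:
    """XOR all packets together (packets assumed to be of equal length)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    """Rebuild at most one missing packet from the k-packet block."""
    missing = [i for i in range(k) if i not in received]
    if len(missing) == 1:
        received[missing[0]] = make_parity(list(received.values()) + [parity])
    return received

block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(block)
got = {0: block[0], 1: block[1], 3: block[3]}   # packet 2 lost in transit
print(recover(got, parity, k=4)[2])              # b'pkt2' rebuilt locally
```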

Relevance:

80.00%

Publisher:

Abstract:

Motivated by the increasing demand for, and the challenges of, video streaming, in this thesis we investigate methods by which the quality of the received video can be improved. We utilise overlay networks, created by deploying relay nodes, to produce path diversity, and show through analytical and simulation models in which environments path diversity can improve the packet loss probability. We take the simulation and analytical models further by implementing a real overlay network on top of PlanetLab, and show that when the network conditions remain constant the video quality received by the client can be improved. In addition, we show that in environments where path diversity improves the video quality, forward error correction can be used to enhance the quality further. We then investigate the effect of the IEEE 802.11e wireless LAN standard, with quality of service enabled, on the video quality received by a wireless client, and find that assigning all the video to a single class outperforms a cross-class assignment scheme proposed by other researchers. The issue of virtual contention at the access point is also examined. We then increase the intelligence of our relay nodes and enable them to cache video. In order to maximise the usefulness of these caches, we introduce a measure called the PSNR profit and present an optimal caching method for achieving the maximum PSNR profit at the relay nodes, where partitioned video contents are stored and provide enhanced quality for the client. We also show that, with the optimised cache, the degradation in the video quality received by the client is more graceful than with the non-optimised system when the network experiences packet loss or congestion.
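A simplified view of why path diversity helps: if copies of a packet travel over independently lossy paths, the packet is lost only when every copy is lost. The independence assumption and the loss figures below are illustrative, not the thesis's analytical model.

```python
# Simplified path-diversity model: a packet duplicated over n independent
# paths is lost only if every copy is lost. Independent losses are assumed;
# the thesis's analytical model is more detailed.
def diversity_loss_probability(path_loss_probs: list[float]) -> float:
    p = 1.0
    for loss in path_loss_probs:
        p *= loss
    return p

single_path = 0.05                                    # 5% loss on one path
two_paths = diversity_loss_probability([0.05, 0.08])  # duplicate over two paths
print(f"single path: {single_path:.4f}, two paths: {two_paths:.4f}")  # 0.0500 vs 0.0040
```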

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents an experimental investigation of different effects and techniques that can be used to upgrade legacy WDM communication systems. The main constraint in upgrading legacy systems is that the fundamental setup, including component settings such as EDFA gains, must not be altered; the improvement must therefore be carried out at the network terminals. A general introduction to optical fibre communications is given at the beginning, covering optical communication components and system impairments. Experimental techniques for performing laboratory optical transmission experiments are presented before the experimental work of the thesis. These techniques include optical transmitter and receiver designs as well as the design and operation of the recirculating loop. The main experimental work comprises three studies. The first involves the development of line monitoring equipment that can be reliably used to monitor the performance of optically amplified long-haul undersea systems. This equipment can instantly locate faults along the legacy communication link, which in turn enables rapid repairs and hence upgrades the legacy system. The second study investigates the effect of changing the number of transmitted 1s and 0s on the performance of a WDM system. This effect can, in practice, be seen in some coding systems, e.g. the forward error correction (FEC) technique, where the proportions of 1s and 0s are changed at the transmitter by adding extra bits to the original bit sequence. The final study presents transmission results after all-optical format conversion from NRZ to CSRZ and from RZ to CSRZ using a semiconductor optical amplifier in a nonlinear optical loop mirror (SOA-NOLM). This study is motivated by the fact that all-optical processing, including format conversion, has become attractive for future data networks, which are proposed to be all-optical. The feasibility of the SOA-NOLM device for converting single and WDM signals is described, and the optical conversion bandwidth and its limitations for WDM conversion are also investigated. All studies in this thesis employ 10 Gbit/s single-channel or WDM signals transmitted over a dispersion-managed fibre span in the recirculating loop. The fibre span is composed of single-mode fibres (SMF) whose losses and dispersion are compensated using erbium-doped fibre amplifiers (EDFAs) and dispersion-compensating fibres (DCFs), respectively. Different configurations of the fibre span are presented in different parts of the thesis.

Relevance:

80.00%

Publisher:

Abstract:

Improving bit error rates in optical communication systems is a difficult and important problem. The error correction must take place at high speed and be extremely accurate. We show the feasibility of using hardware-implementable machine learning techniques, which may enable some error correction at the speed required.

Relevance:

80.00%

Publisher:

Abstract:

Optical data communication systems are prone to a variety of processes that modify the transmitted signal and introduce errors into the determination of 1s from 0s. This is a difficult, and commercially important, problem to solve. Errors must be detected and corrected at high speed, and the classifier must be very accurate; ideally it should also be tunable to the characteristics of individual communication links. We show that simple single-layer neural networks may be used to address these problems, and examine how different input representations affect the accuracy of bit error correction. Our results lead us to conclude that a system based on these principles can perform at least as well as an existing non-trainable error correction system, whilst being tunable to suit the individual characteristics of different communication links.
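A minimal sketch of the kind of single-layer classifier the abstract describes, assuming the input representation is a small window of sampled signal amplitudes around each bit. The synthetic channel, window size and training details are illustrative assumptions, not the authors' experimental setup.

```python
# Minimal single-layer (logistic) classifier deciding 1s vs 0s from a window
# of received signal samples. Purely illustrative: the noisy channel model
# and training setup are assumptions, not the authors' actual system.
import numpy as np

rng = np.random.default_rng(1)
n_bits, window = 5000, 5

bits = rng.integers(0, 2, n_bits)
signal = np.repeat(bits.astype(float), window)        # ideal sampled waveform
signal += 0.4 * rng.normal(size=signal.size)          # additive noise
signal = np.convolve(signal, np.ones(3) / 3, "same")  # mild inter-symbol smearing

X = signal.reshape(n_bits, window)                    # one sample window per bit
y = bits

w, b, lr = np.zeros(window), 0.0, 0.1
for _ in range(200):                                  # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))            # sigmoid output
    grad = p - y
    w -= lr * X.T @ grad / n_bits
    b -= lr * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training bit error rate:", np.mean(pred != y))
```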

Relevance:

80.00%

Publisher:

Abstract:

The impact of hybrid erbium-doped fiber amplifier (EDFA)/Raman amplification on a spectrally efficient coherent-wavelength-division-multiplexed (CoWDM) optical communication system is experimentally studied and modeled. Simulations suggested that 23-dB Raman gain over an unrepeatered span of 124 km of single-mode fiber would allow a decrease in the mean input power of ~6 dB for a fixed bit-error rate (BER). Experimentally, we demonstrated a 1.2-dB Q-factor improvement for a 2-Tb/s seven-band CoWDM with backward Raman amplification. The system delivered an optical signal-to-noise ratio of 35 dB at the output of the receiver preamplifier, providing a worst-case BER of 2 × 10⁻⁶ over 49 subcarriers at 42.8 Gbaud and leaving a system margin (in terms of Q-factor) of ~4 dB from the forward error correction threshold.
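For context on the quoted Q-factor margin, the standard Gaussian-noise relation between Q and BER is BER = ½·erfc(Q/√2); the snippet below converts the reported worst-case BER into an equivalent Q in dB using the 20·log10 amplitude convention. This is a generic textbook conversion, not a calculation taken from the paper.

```python
# Generic Q-factor / BER relation for Gaussian noise: BER = 0.5 * erfc(Q / sqrt(2)).
# Textbook conversion only, not a result from the paper.
import math
from scipy.special import erfc, erfcinv

def ber_from_q(q_linear: float) -> float:
    return 0.5 * erfc(q_linear / math.sqrt(2))

def q_db_from_ber(ber: float) -> float:
    q_linear = math.sqrt(2) * erfcinv(2 * ber)
    return 20 * math.log10(q_linear)              # 20*log10 amplitude convention

print(f"BER at Q = 4.6 (linear): {ber_from_q(4.6):.1e}")   # ~2e-6
print(f"Q for BER 2e-6: {q_db_from_ber(2e-6):.1f} dB")     # roughly 13 dB
```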

Relevance:

80.00%

Publisher:

Abstract:

This study examines the relationship between executive directors' remuneration and the financial performance and corporate governance arrangements of UK and Spanish listed firms. These countries' corporate governance frameworks have been shaped by differences in legal origin, culture and background: the UK's legal arrangements are rooted in common law, whereas Spanish firms operate under civil law. We estimate both static and dynamic regression models to test our hypotheses, using Ordinary Least Squares (OLS) and the Generalised Method of Moments (GMM). Estimated results for both countries show that directors' remuneration levels are positively related to measures of firm value and financial performance, meaning that remuneration levels do not reach the point at which firm value is reduced by excessive remuneration. These results hold for our long-run estimates, that is, estimates based on panel cointegration and panel error correction models. Measures of corporate governance also impact the level of executive pay. Our results have important implications for existing corporate governance arrangements and for how the interests of stakeholders are protected. For example, the long-run results suggest that directors' remuneration adjusts in a way that captures variation in financial performance.

Relevance:

80.00%

Publisher:

Abstract:

This investigation aimed to pinpoint the elements of motor timing control that are responsible for the increased variability commonly found in children with developmental dyslexia on paced or unpaced motor timing tasks (Chapter 3). Such temporal processing abilities are thought to be important for developing the appropriate phonological representations required for the development of literacy skills. Similar temporal processing difficulties arise in other developmental disorders such as Attention Deficit Hyperactivity Disorder (ADHD). Motor timing behaviour in developmental populations was examined in the context of models of typical human timing behaviour, in particular the Wing-Kristofferson model, allowing estimation of the contributions of different timing control systems, namely the timekeeper and implementation systems (Chapter 2 and Methods Chapters 4 and 5). Research examining timing in populations with dyslexia and ADHD has been inconsistent in the application of stimulus parameters, so the first investigation compared motor timing behaviour across different stimulus conditions (Chapter 6). The results question the suitability of visual timing tasks, which produced greater performance variability than auditory or bimodal tasks. Following an examination of the validity of the Wing-Kristofferson model (Chapter 7), the model was applied to time series data from an auditory timing task completed by children with reading difficulties and matched control groups (Chapter 8). Expected group differences in timing performance were not found; however, associations between performance and measures of literacy and attention were present. Results also indicated that measures of attention and literacy dissociated in their relationships with components of timing, with literacy ability being correlated with timekeeper variance and attentional control with implementation variance. It is proposed that the timing deficits associated with reading difficulties are attributable to central timekeeping processes, and so the contribution of error correction to timing performance was also investigated (Chapter 9). Children with lower scores on measures of literacy and attention were found to have a slower, or failed, correction response to phase errors in timing behaviour. The results of this series of studies suggest that the motor timing difficulty in poor-reading children may stem from failures in the judgement of synchrony due to greater tolerance of uncertainty in the temporal processing system.
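For reference, the Wing-Kristofferson decomposition mentioned above separates the variability of produced intervals into timekeeper and motor-implementation components. The standard textbook form of the model (not a result of the thesis) is:

```latex
% Wing-Kristofferson (1973) two-level timing model (textbook form).
% I_n: n-th inter-response interval, C_n: central timekeeper interval,
% M_n: motor implementation delay of the n-th response.
\begin{aligned}
  I_n &= C_n + M_{n+1} - M_n, \\
  \operatorname{Var}(I) &= \sigma_C^2 + 2\sigma_M^2, \\
  \operatorname{Cov}(I_n, I_{n+1}) &= -\sigma_M^2 .
\end{aligned}
```

Timekeeper variance and implementation variance can therefore be estimated from the variance and the lag-one autocovariance of the produced intervals.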

Relevance:

80.00%

Publisher:

Abstract:

Optical data communication systems are prone to a variety of processes that modify the transmitted signal and introduce errors into the determination of 1s from 0s. This is a difficult, and commercially important, problem to solve. Errors must be detected and corrected at high speed, and the classifier must be very accurate; ideally it should also be tunable to the characteristics of individual communication links. We show that simple single-layer neural networks may be used to address these problems, and examine how different input representations affect the accuracy of bit error correction. Our results lead us to conclude that a system based on these principles can perform at least as well as an existing non-trainable error correction system, whilst being tunable to suit the individual characteristics of different communication links.

Relevance:

80.00%

Publisher:

Abstract:

Improving bit error rates in optical communication systems is a difficult and important problem. The error correction must take place at high speed and be extremely accurate. We show the feasibility of using hardware-implementable machine learning techniques, which may enable some error correction at the speed required.

Relevance:

80.00%

Publisher:

Abstract:

This letter experimentally demonstrates, for the first time, a visible light communication system using a 350-kHz polymer light-emitting diode operating at a total bit rate of 19 Mb/s with a bit error rate (BER) of 10⁻⁶, and at 20 Mb/s at the forward error correction limit. This represents a remarkable net data rate gain of ~55 times. The modulation format adopted is on-off keying in conjunction with an artificial neural network classifier implemented as an equalizer. The number of neurons used in the experiment is varied over the set N = {5, 10, 20, 30, 40}, with 40 neurons offering the best performance at 19 Mb/s and a BER of 10⁻⁶.

Relevance:

80.00%

Publisher:

Abstract:

We describe a free-space quantum cryptography system which is designed to allow continuous unattended key exchange for periods of several days and over ranges of a few kilometres. The system uses a four-laser faint-pulse transmission system running at a pulse rate of 10 MHz to generate the four required alternative polarization states. The receiver module likewise automatically selects a measurement basis and performs polarization measurements with four avalanche photodiodes. The controlling software can implement the full key exchange, including the sifting, error correction, and privacy amplification required to generate a secure key.
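A minimal sketch of the sifting step mentioned above, assuming a BB84-style protocol in which sender and receiver each choose random bases and keep only the positions where the bases match. This is a generic illustration, not the described system's software.

```python
# Minimal BB84-style sifting sketch: keep only the bit positions where the
# sender's preparation basis matches the receiver's measurement basis.
# Generic illustration of the sifting step, not the system's actual code.
import secrets

n = 32
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# Idealised channel: when bases match Bob reads Alice's bit, otherwise his
# result is random (and that position is discarded during sifting anyway).
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

sifted_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
print(f"kept {len(sifted_key)} of {n} raw bits after sifting")
```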

Relevance:

80.00%

Publisher:

Abstract:

We determine the critical noise level for decoding low-density parity-check error-correcting codes based on the magnetization enumerator (M), rather than on the weight enumerator (W) employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes, such as typical-pairs decoding, MAP, and finite-temperature (MPM) decoding, becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived using the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
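For comparison with the statistical-physics threshold discussed above, the information-theoretic bound for a rate-R code on a binary symmetric channel is R ≤ 1 − H₂(p); the snippet below solves for the corresponding critical flip probability. This is the standard channel-coding bound, not the paper's magnetization-enumerator calculation.

```python
# Information-theoretic critical noise for a rate-R code on a binary
# symmetric channel: the largest flip probability p with R <= 1 - H2(p).
# Standard channel-coding bound, shown only as a point of comparison.
import math

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def critical_flip_probability(rate: float, tol: float = 1e-9) -> float:
    lo, hi = 0.0, 0.5                     # H2 is increasing on [0, 0.5]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 1 - binary_entropy(mid) > rate:
            lo = mid                      # channel still supports this rate
        else:
            hi = mid
    return lo

print(f"rate 1/2 -> critical flip probability ≈ {critical_flip_probability(0.5):.4f}")  # ≈ 0.1100
```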