937 results for Error correction model


Relevance:

90.00%

Publisher:

Abstract:

The accuracy of altimetrically derived oceanographic and geophysical information is limited by the precision of the radial component of the satellite ephemeris. A non-dynamic technique is proposed as a method of reducing the global radial orbit error of altimetric satellites. This involves the recovery of each coefficient of an analytically derived radial error correction through a refinement of crossover difference residuals. The crossover data are supplemented by absolute height measurements to permit the retrieval of otherwise unobservable geographically correlated and linearly combined parameters. The feasibility of the radial reduction procedure is established by application to the three-day repeat orbit of SEASAT. The concept of arc aggregates is devised as a means of extending the method to longer durations, such as the 35-day repeat period of ERS-1. A continuous orbit is effectively created by including the radial misclosure between consecutive long arcs as an infallible observation. The arc aggregate procedure is validated using a combination of three successive SEASAT ephemerides. A complete simulation of the 501-revolution, 35-day repeat orbit of ERS-1 is derived, and the recovery of the global radial orbit error over the full repeat period is successfully accomplished. The radial reduction depends upon the geographical locations of the supplementary direct height data, so the respective influences of various sites proposed for tracking ERS-1 with ground-based transponders are investigated. The potential improvement in radial orbital accuracy from locating future tracking sites at high latitudes is demonstrated.
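
The abstract does not give the analytical form of the radial error correction; a common low-order model in the altimetry literature is a once-per-revolution Fourier series. The sketch below, a simplified illustration and not the thesis's actual procedure, recovers such coefficients from simulated crossover difference residuals by linear least squares; all names and values are invented for the example.

```python
import numpy as np

# Simplified radial orbit error model: e(u) = a0 + a1*cos(u) + b1*sin(u),
# where u is the along-track angle in radians. (Assumed once-per-rev form;
# the thesis's analytical model is richer.)
def design_row(u):
    return np.array([1.0, np.cos(u), np.sin(u)])

rng = np.random.default_rng(0)
true_coeffs = np.array([0.12, -0.35, 0.21])   # metres (made-up values)

# Simulate crossover differences: at a crossover the ascending and descending
# passes see the error at different along-track angles, so the observed
# residual is e(u_asc) - e(u_desc) plus measurement noise.
n_xovers = 500
u_asc = rng.uniform(0.0, 2.0 * np.pi, n_xovers)
u_desc = rng.uniform(0.0, 2.0 * np.pi, n_xovers)
A = np.array([design_row(ua) - design_row(ud) for ua, ud in zip(u_asc, u_desc)])
d = A @ true_coeffs + rng.normal(0.0, 0.02, n_xovers)  # 2 cm noise

# The constant term a0 cancels in every crossover difference, so it is
# unobservable from crossovers alone, mirroring the abstract's point that
# absolute height data are needed for some parameters.
coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
print("recovered from crossovers only (a0 unobservable):", coeffs.round(3))

# Supplement with "absolute height" observations e(u) at known u to pin a0.
u_abs = rng.uniform(0.0, 2.0 * np.pi, 20)
A_abs = np.array([design_row(u) for u in u_abs])
h_abs = A_abs @ true_coeffs + rng.normal(0.0, 0.02, 20)
coeffs_full, *_ = np.linalg.lstsq(np.vstack([A, A_abs]),
                                  np.concatenate([d, h_abs]), rcond=None)
print("recovered with absolute heights:", coeffs_full.round(3))
```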

Relevance:

90.00%

Publisher:

Abstract:

This investigation aimed to pinpoint the elements of motor timing control that are responsible for the increased variability commonly found in children with developmental dyslexia on paced or unpaced motor timing tasks (Chapter 3). Such temporal processing abilities are thought to be important for developing the phonological representations required for literacy development. Similar temporal processing difficulties arise in other developmental disorders, such as Attention Deficit Hyperactivity Disorder (ADHD). Motor timing behaviour in developmental populations was examined in the context of models of typical human timing behaviour, in particular the Wing-Kristofferson model, allowing estimation of the contributions of different timing control systems, namely the timekeeper and implementation systems (Chapter 2 and Methods Chapters 4 and 5). Research examining timing in populations with dyslexia and ADHD has been inconsistent in its application of stimulus parameters, so the first investigation compared motor timing behaviour across different stimulus conditions (Chapter 6). The results question the suitability of visual timing tasks, which produced greater performance variability than auditory or bimodal tasks. Following an examination of the validity of the Wing-Kristofferson model (Chapter 7), the model was applied to time series data from an auditory timing task completed by children with reading difficulties and matched control groups (Chapter 8). Expected group differences in timing performance were not found; however, associations between performance and measures of literacy and attention were present. Results also indicated that measures of attention and literacy dissociated in their relationships with components of timing: literacy ability correlated with timekeeper variance, and attentional control with implementation variance. It is proposed that these timing deficits associated with reading difficulties are attributable to central timekeeping processes, and so the contribution of error correction to timing performance was also investigated (Chapter 9). Children with lower scores on measures of literacy and attention were found to have a slower or failed correction response to phase errors in timing behaviour. Results from the series of studies suggest that the motor timing difficulty of poor readers may stem from failures in the judgement of synchrony due to greater tolerance of uncertainty in the temporal processing system.
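
The Wing-Kristofferson decomposition itself is easy to state: with inter-response intervals I_n = T_n + M_{n+1} - M_n (independent timekeeper intervals T and motor delays M), the lag-1 autocovariance of I equals -var(M), so the motor variance is -gamma(1) and the timekeeper variance is gamma(0) - 2*var(M). The sketch below estimates the two components from a series of inter-tap intervals; the simulated data and parameter values are illustrative only.

```python
import numpy as np

def wing_kristofferson(intervals):
    """Estimate timekeeper and motor (implementation) variance from
    inter-response intervals via the Wing-Kristofferson model:
    I_n = T_n + M_{n+1} - M_n, hence acov(I, lag 1) = -var(M)."""
    x = np.asarray(intervals, dtype=float) - np.mean(intervals)
    gamma0 = np.mean(x * x)            # variance of intervals
    gamma1 = np.mean(x[1:] * x[:-1])   # lag-1 autocovariance
    motor_var = max(-gamma1, 0.0)      # clip, as the estimate can go negative
    timekeeper_var = max(gamma0 - 2.0 * motor_var, 0.0)
    return timekeeper_var, motor_var

# Simulate a tapper: 500 ms timekeeper with 20 ms SD, 10 ms motor SD.
rng = np.random.default_rng(1)
n = 200
T = rng.normal(500.0, 20.0, n)
M = rng.normal(0.0, 10.0, n + 1)
I = T + M[1:] - M[:-1]

tk_var, mot_var = wing_kristofferson(I)
print(f"timekeeper SD ~ {tk_var ** 0.5:.1f} ms, motor SD ~ {mot_var ** 0.5:.1f} ms")
```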

Relevance:

90.00%

Publisher:

Abstract:

In this work, we determine the coset weight spectra of all binary cyclic codes of lengths up to 33, of ternary cyclic and negacyclic codes of lengths up to 20, and of some distance-optimal binary linear codes of lengths up to 33, using algebraic properties of the codes together with a computer-assisted search. Given these weight spectra, the monotonicity of the undetected-error probability after t-error correction, P_ue^(t)(C, p), can be checked to any precision in linear time. We used a program written in Maple to check the monotonicity of P_ue^(t)(C, p) for the investigated codes over a finite set of points p ∈ [0, (q−1)/q], and thereby to determine which of them are not proper.
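
For the basic t = 0 case, the undetected-error probability of a q-ary code with weight distribution A_1..A_n on a q-ary symmetric channel is P_ue(C, p) = sum_i A_i (p/(q−1))^i (1−p)^(n−i), and a code is proper when this is monotone on [0, (q−1)/q]. The sketch below applies the same finite-grid monotonicity check described in the abstract to this simple case, using the binary [7,4] Hamming code, whose weight distribution is standard; it is an illustration, not the authors' Maple program.

```python
import numpy as np

def p_ue(weight_dist, n, q, p):
    """Undetected-error probability of a q-ary linear code with weight
    distribution weight_dist = {i: A_i} on a q-ary symmetric channel."""
    return sum(A * (p / (q - 1)) ** i * (1 - p) ** (n - i)
               for i, A in weight_dist.items())

def is_proper_on_grid(weight_dist, n, q, num_points=10_000):
    """Check monotonicity of P_ue on a finite grid over [0, (q-1)/q],
    mirroring the finite-set check described in the abstract."""
    ps = np.linspace(0.0, (q - 1) / q, num_points)
    vals = np.array([p_ue(weight_dist, n, q, p) for p in ps])
    return bool(np.all(np.diff(vals) >= -1e-15))  # non-decreasing up to rounding

# Binary [7,4] Hamming code: A_3 = 7, A_4 = 7, A_7 = 1 (A_0 excluded).
hamming_7_4 = {3: 7, 4: 7, 7: 1}
print("Hamming [7,4] proper on grid:", is_proper_on_grid(hamming_7_4, n=7, q=2))
```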

Relevance:

90.00%

Publisher:

Abstract:

The emergence of digital imaging and digital networks has made duplication of original artwork easier. Watermarking techniques, also referred to as digital signatures, sign images by introducing changes that are imperceptible to the human eye but easily recoverable by a computer program. Error-correcting codes are a natural choice for correcting the errors that may arise when the signature is extracted. In this paper, we present an error-correction scheme based on the concatenation of Reed-Solomon codes with an optimal linear inner code. We investigate the noise strength that this scheme withstands for a fixed image capacity and various signature lengths. Finally, we compare our results with other error-correcting techniques used in watermarking. We have also created a computer program for image watermarking that uses the newly presented scheme for error correction.
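
The abstract names Reed-Solomon outer codes paired with an optimal linear inner code but does not specify the inner code. The sketch below illustrates the concatenation idea only, with a 3-fold repetition inner code standing in for it and the third-party reedsolo package (pip install reedsolo) providing the outer RS layer; parameters and the noise model are illustrative, not the paper's scheme.

```python
import numpy as np
from reedsolo import RSCodec   # pip install reedsolo

rsc = RSCodec(10)  # RS outer code: corrects up to 5 byte errors per block

def encode_signature(sig: bytes) -> np.ndarray:
    """Outer RS encode, then inner 3-fold repetition of each bit."""
    coded = bytes(rsc.encode(sig))
    bits = np.unpackbits(np.frombuffer(coded, dtype=np.uint8))
    return np.repeat(bits, 3)

def decode_signature(chan_bits: np.ndarray) -> bytes:
    """Inner majority vote per bit, then outer RS decode."""
    votes = chan_bits.reshape(-1, 3).sum(axis=1)   # 3 copies of each bit
    bits = (votes >= 2).astype(np.uint8)           # majority decision
    coded = np.packbits(bits).tobytes()
    # reedsolo >= 1.0 returns (message, message+ecc, errata positions)
    return bytes(rsc.decode(coded)[0])

# Flip 4% of embedded bits, a crude stand-in for watermark-channel noise.
rng = np.random.default_rng(2)
tx = encode_signature(b"owner:alice/2024")
rx = tx ^ (rng.random(tx.size) < 0.04).astype(np.uint8)
print(decode_signature(rx))   # b'owner:alice/2024' if errors were correctable
```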

Relevance:

90.00%

Publisher:

Abstract:

A malapropism is a semantic error that is hard to detect because it usually retains the syntactic links between the words in a sentence while replacing one content word with a similar word of quite different meaning. A method for the automatic detection of malapropisms is described, based on Web statistics and a specially defined Semantic Compatibility Index (SCI). For correction of the detected errors, special dictionaries and heuristic rules are proposed that retain only a few highly SCI-ranked correction candidates for the user's selection. Experiments on Web-assisted detection and correction of Russian malapropisms are reported, demonstrating the efficacy of the described method.
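
The abstract does not define the SCI formula. A common way to score semantic compatibility from Web statistics is a pointwise-mutual-information-style ratio of pair hit counts to individual hit counts, and the sketch below ranks correction candidates with such a score; hits() is a hypothetical stand-in for a search-engine hit-count query, and the whole scoring function is an assumption, not the paper's actual SCI.

```python
import math

def hits(query: str) -> int:
    """Hypothetical Web hit count. A real system would query a search-engine
    API here; this stub returns canned values for the demo."""
    canned = {
        "boiled icon": 12, "boiled": 2_400_000, "icon": 9_100_000,
        "boiled egg": 410_000, "egg": 52_000_000,
    }
    return canned.get(query, 0)

def sci(word_a: str, word_b: str) -> float:
    """PMI-style compatibility score of a word pair (illustrative only):
    log pair frequency relative to the individual word frequencies."""
    pair = hits(f"{word_a} {word_b}")
    if pair == 0:
        return float("-inf")
    return math.log(pair) - 0.5 * (math.log(hits(word_a)) + math.log(hits(word_b)))

# Suspected malapropism "icon" in "boiled icon": rank candidates by SCI.
candidates = ["icon", "egg"]
ranked = sorted(candidates, key=lambda w: sci("boiled", w), reverse=True)
print(ranked)   # ['egg', 'icon']: the correction outranks the malapropism
```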

Relevance:

90.00%

Publisher:

Abstract:

* Work done under partial support of the Mexican Government (CONACyT, SNI), IPN (CGPI, COFAA) and the Korean Government (KIPA Professorship for Visiting Faculty Positions). The second author is currently on sabbatical leave at Chung-Ang University.

Relevance:

90.00%

Publisher:

Abstract:

Forward error correction (FEC) plays a vital role in coherent optical systems employing multi-level modulation. However, much of coding theory assumes that additive white Gaussian noise (AWGN) is dominant, whereas coherent optical systems suffer significant phase noise (PN) in addition to AWGN. This changes the error statistics and impacts FEC performance. In this paper, we propose a novel semianalytical method for dimensioning binary Bose-Chaudhuri-Hocquenghem (BCH) codes for systems with PN. Our method involves extracting statistics from pre-FEC bit error rate (BER) simulations. We use these statistics to parameterize a bivariate binomial model that describes the distribution of bit errors, thereby relating pre-FEC statistics to post-FEC BER and to BCH code parameters. Our method is applicable to pre-FEC BERs around 10⁻³ and any post-FEC BER. Using numerical simulations, we evaluate the accuracy of our approach for a target post-FEC BER of 10⁻⁵. Codes dimensioned with our bivariate binomial model meet the target within 0.2 dB of signal-to-noise ratio.
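
The bivariate binomial model itself is not given in the abstract. Under the simpler univariate assumption of independent bit errors (the AWGN baseline that the paper generalizes), a t-error-correcting code of length n fails on a block exactly when more than t of its n bits are in error, which yields a closed-form post-FEC estimate. The sketch below dimensions t this way; it is the standard baseline calculation, not the paper's bivariate method.

```python
from scipy.stats import binom

def post_fec_ber(n: int, t: int, pre_ber: float) -> float:
    """Post-FEC BER estimate for a t-error-correcting code of length n,
    assuming independent bit errors: a block with i > t errors is assumed
    to leave roughly i erroneous bits after failed decoding."""
    total = 0.0
    for i in range(t + 1, n + 1):
        total += i * binom.pmf(i, n, pre_ber)
    return total / n

def dimension_t(n: int, pre_ber: float, target: float) -> int:
    """Smallest correction capability t meeting the target post-FEC BER."""
    t = 0
    while post_fec_ber(n, t, pre_ber) > target:
        t += 1
    return t

# Example: code length 1023, pre-FEC BER 1e-3, target post-FEC BER 1e-5.
n, pre, target = 1023, 1e-3, 1e-5
t = dimension_t(n, pre, target)
print(f"t = {t}, predicted post-FEC BER = {post_fec_ber(n, t, pre):.2e}")
```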

Relevance:

90.00%

Publisher:

Abstract:

Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011).

Relevance:

90.00%

Publisher:

Abstract:

The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple FEC scheme for such systems, using block interleavers and binary Bose-Chaudhuri-Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase-shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around 10⁻³. We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of 10⁻⁵, codes selected using our method result in BERs around 3× the target and achieve the target with around 0.2 dB of extra signal-to-noise ratio.
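
A block interleaver writes bits row by row into a matrix and reads them out column by column, so a burst of errors caused by a phase slip is dispersed across many codewords. The minimal sketch below shows the write/read pattern and its inverse; the dimensions are illustrative, not the paper's optimized parameters.

```python
import numpy as np

def interleave(bits: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Block interleaver: write row by row, read column by column."""
    return bits.reshape(rows, cols).T.ravel()

def deinterleave(bits: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Inverse: write column by column, read row by row."""
    return bits.reshape(cols, rows).T.ravel()

rng = np.random.default_rng(3)
rows, cols = 8, 16                    # illustrative dimensions
data = rng.integers(0, 2, rows * cols, dtype=np.uint8)

tx = interleave(data, rows, cols)
tx[20:26] ^= 1                        # a 6-bit error burst on the channel
rx = deinterleave(tx, rows, cols)

# After deinterleaving, the burst is dispersed across the frame (here the
# erroneous bits land 16 positions apart), so each codeword sees few errors.
print("error positions after deinterleaving:", np.flatnonzero(rx != data))
```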

Relevance:

90.00%

Publisher:

Abstract:

Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.

This research focuses on P300-based BCIs, which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially for individuals with ALS, who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs are considerably slower than other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
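
The need for repeated measurements follows from simple averaging statistics: if each trial is the ERP plus independent zero-mean noise, averaging N trials leaves the ERP intact while shrinking the noise standard deviation by the square root of N. The short sketch below demonstrates this SNR gain on synthetic EEG-like data; all waveform parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 0.8, 200)                      # 800 ms epoch
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / 0.005)      # synthetic P300-like peak (uV)
noise_sd = 20.0                                     # single-trial EEG noise (uV)

for n_trials in (1, 16, 64):
    trials = p300 + rng.normal(0.0, noise_sd, (n_trials, t.size))
    avg = trials.mean(axis=0)
    residual_sd = (avg - p300).std()
    # Residual noise SD should fall roughly as noise_sd / sqrt(n_trials).
    print(f"{n_trials:3d} trials: residual noise SD = {residual_sd:5.2f} uV "
          f"(theory ~ {noise_sd / np.sqrt(n_trials):5.2f})")
```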

In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents the results of a combined study, using cosmogenic ³⁶Cl exposure dating and terrestrial digital photogrammetry, of the Palliser Rockslide located in the southeastern Canadian Rocky Mountains. This site is particularly well-suited to demonstrate how this multi-disciplinary approach can be used to differentiate distinct rocksliding events, estimate their volume, and establish their chronology and recurrence interval. Observations suggest that rocksliding has been ongoing since the late Pleistocene deglaciation. Two major rockslide events have been dated at 10.0 ± 1.2 kyr and 7.7 ± 0.8 kyr before present, with failure volumes of 40 Mm³ and 8 Mm³, respectively. The results have important implications concerning our understanding of the temporal distribution of paraglacial rockslides and rock avalanches; they provide a better understanding of the volumes and failure mechanisms of recurrent failure events; and they represent the first absolute ages of a prehistoric high magnitude event in the Canadian Rocky Mountains.

Relevance:

90.00%

Publisher:

Abstract:

This research investigates hedge efficiency and the Optimal Hedge Ratio for the futures markets of cattle, coffee, ethanol, corn and soybean. The Optimal Hedge Ratio and hedge effectiveness are estimated with multivariate GARCH models with error correction, allowing for a possible differential in the Optimal Hedge Ratio between the crop and intercrop periods. The Optimal Hedge Ratio is expected to be larger in the intercrop period because of the uncertainty associated with a possible supply shock (LAZZARINI, 2010). Among the futures contracts studied here, the coffee, ethanol and soybean contracts had not previously been examined for this phenomenon, and the corn and ethanol contracts had not been studied with dynamic hedging strategies. This paper also distinguishes itself by including a GARCH model with error correction, which had never been considered in investigations of a possible crop/intercrop differential in the Optimal Hedge Ratio. BM&FBOVESPA quotations were used as futures prices and the CEPEA index as spot prices, at daily frequency, from May 2010 to June 2013 for cattle, coffee, ethanol and corn, and to August 2012 for soybean. Similar results were obtained for all commodities: there is a long-run relationship between the spot and futures markets, with bicausality between the spot and futures prices of cattle, coffee, ethanol and corn, and unicausality from the futures price of soybean to its spot price. The Optimal Hedge Ratio was estimated with three different strategies: linear regression by OLS, a diagonal BEKK-GARCH model, and a diagonal BEKK-GARCH model with an intercrop dummy. The OLS regression indicated hedge inefficiency, since the estimated Optimal Hedge Ratio was too low. The second model implements a dynamic hedging strategy, capturing time variation in the Optimal Hedge Ratio. The last strategy did not detect an Optimal Hedge Ratio differential between the crop and intercrop periods; therefore, contrary to expectations, investors do not need to increase their futures positions during the intercrop period.
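
The simplest of the three strategies, the OLS hedge ratio, is just the slope of a regression of spot returns on futures returns, and hedge effectiveness is the fraction of spot variance the hedge removes. The sketch below estimates both with statsmodels on simulated series; the data are synthetic stand-ins for the BM&FBOVESPA/CEPEA series, and the dynamic BEKK-GARCH estimation is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Simulate correlated daily log-returns for spot and futures prices.
rng = np.random.default_rng(5)
n = 750
fut_ret = rng.normal(0.0, 0.012, n)
spot_ret = 0.9 * fut_ret + rng.normal(0.0, 0.006, n)  # true ratio ~0.9

# Static (OLS) Optimal Hedge Ratio: slope of spot returns on futures returns.
X = sm.add_constant(fut_ret)
model = sm.OLS(spot_ret, X).fit()
h_star = model.params[1]

# Hedge effectiveness: fraction of spot variance removed by the hedge.
hedged = spot_ret - h_star * fut_ret
effectiveness = 1.0 - hedged.var() / spot_ret.var()
print(f"OLS hedge ratio = {h_star:.3f}, effectiveness = {effectiveness:.3f}")
```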


Relevance:

90.00%

Publisher:

Abstract:

Thermal and aerial environment conditions inside animal housing facilities change throughout the day under the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of spatially distributed points across the facility must be monitored. This work proposes that the variation over time of the environmental variables of interest to animal production, monitored inside animal housing, can be modelled accurately from records that are discrete in time. The objective of this work was to develop a numerical method to correct for the temporal variation of these environmental variables, transforming the data so that the observations become independent of the time spent taking the measurements. The proposed method adjusts the values recorded with time delays to the values expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model was validated for the environmental parameter air temperature: the values corrected by the method did not differ, by a Tukey test at 5% probability, from the true values recorded by data loggers.
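
The abstract describes the method only at a high level. One simple way to realize the idea of referring delayed readings back to a common instant is to interpolate each point's time series to the reference time; the sketch below does this with linear interpolation in NumPy and is an assumed simplification, not the paper's actual model.

```python
import numpy as np

def correct_to_instant(sample_times, values, t_ref):
    """Estimate a point's reading at the reference instant t_ref by linear
    interpolation of its time-stamped readings (times in minutes)."""
    return np.interp(t_ref, sample_times, values)

# A surveyor walks the barn and reaches each of 3 points a few minutes
# apart; each point also has earlier/later readings from repeated passes.
passes = np.array([0.0, 15.0, 30.0])                 # pass start times (min)
delays = np.array([0.0, 2.0, 4.0])                   # delay to reach each point
temps = np.array([[24.1, 24.6, 25.3],                # point 1 readings (deg C)
                  [25.0, 25.7, 26.1],                # point 2
                  [23.8, 24.2, 24.9]])               # point 3

t_ref = 15.0  # refer every point to the start of the second pass
corrected = [correct_to_instant(passes + delays[i], temps[i], t_ref)
             for i in range(len(temps))]
print(np.round(corrected, 2))  # all three points as if read simultaneously
```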

Relevance:

90.00%

Publisher:

Abstract:

Establishing a ground control point (GCP) network is a prerequisite for georeferencing raw image data. Given the quality of typical current digital spatial databases, users are particularly concerned with the accuracy of the geometric correction model that yields the final product. This paper reports an approach to optimizing the GCP assembly using a genetic/evolutionary algorithm. It also proposes an optimality criterion for accuracy assessment based on the global accuracy of the transformation, computed at each point of the image space. Experimental results demonstrate that the proposed approach has great potential for selecting the best GCPs, and considerable improvement in the accuracy of geometric correction models can be expected when it is implemented.
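
As a rough illustration of the idea, and not the paper's algorithm or fitness function, the sketch below uses a small evolutionary algorithm to pick a subset of candidate GCPs for an affine geometric correction, scoring each subset by the transformation's error over a grid of checkpoints in image space; all parameters and data are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Candidate GCPs: image coords; map coords = affine(true) + varying noise.
N, K = 40, 8                                   # candidates, subset size
img = rng.uniform(0, 1000, (N, 2))
A_true = np.array([[1.02, 0.03], [-0.02, 0.98]])
t_true = np.array([150.0, -80.0])
noise = rng.normal(0, 2.0, (N, 2)) * rng.uniform(0.5, 3.0, (N, 1))
gcp_map = img @ A_true.T + t_true + noise

grid = np.array([[x, y] for x in range(0, 1001, 250) for y in range(0, 1001, 250)])

def fitness(subset):
    """Negative mean checkpoint error of the affine fit from this subset."""
    X = np.hstack([img[subset], np.ones((K, 1))])        # rows [x, y, 1]
    P, *_ = np.linalg.lstsq(X, gcp_map[subset], rcond=None)
    pred = np.hstack([grid, np.ones((len(grid), 1))]) @ P
    truth = grid @ A_true.T + t_true
    return -np.mean(np.linalg.norm(pred - truth, axis=1))

def mutate(subset):
    """Swap one selected GCP for a random unselected one."""
    child = subset.copy()
    child[rng.integers(K)] = rng.choice(np.setdiff1d(np.arange(N), child))
    return child

# Evolve: keep the 10 best subsets, refill the population with mutants.
pop = [rng.choice(N, K, replace=False) for _ in range(30)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(pop[rng.integers(10)]) for _ in range(20)]

best = max(pop, key=fitness)
print("best GCP subset:", np.sort(best), " mean grid error:", -fitness(best))
```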