80 results for Error correction methods
Abstract:
In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. This scheme combines erasure coding, H.264/AVC error resilience techniques and importance measures in video coding. The unequal importance of the video packets is identified at the group-of-pictures (GOP) and H.264/AVC data-partitioning levels. The presented method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to network conditions such as the available bandwidth, packet loss rate and average packet burst loss length. A near-optimal algorithm is developed to solve the FEC assignment problem. The simulation results show that our scheme can effectively utilize network resources such as bandwidth while improving the quality of the video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases. © 2010 IEEE.
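The abstract names the inputs (importance levels, loss statistics, a parity budget) but not the assignment rule itself. As a minimal sketch only, assuming a simple proportional heuristic in place of the authors' near-optimal algorithm, unequal FEC assignment might look like the following (the function name, weights and budget are invented for illustration):

```python
# Illustrative sketch (not the authors' algorithm): split a fixed FEC parity
# budget across packet classes in proportion to their importance.

def assign_fec_parity(importance, parity_budget):
    """importance    -- non-negative weights (e.g. by GOP position/partition)
    parity_budget -- total parity packets the available bandwidth allows"""
    total = sum(importance)
    # Initial proportional allocation, rounded down.
    alloc = [int(parity_budget * w / total) for w in importance]
    # Hand out the remainder one packet at a time, most important class first,
    # so the budget is used exactly.
    leftover = parity_budget - sum(alloc)
    order = sorted(range(len(importance)), key=lambda i: -importance[i])
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

# Example: I-frame partition A gets the most parity, B-frames the least.
print(assign_fec_parity([5.0, 2.0, 1.0], parity_budget=10))  # -> [7, 2, 1]
```

The heuristic captures the ULP idea itself: parity concentrates on the packet classes whose loss degrades the decoded video most.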
Abstract:
We develop a framework for estimating the quality of transmission (QoT) of a new lightpath before it is established, as well as for calculating the expected degradation it will cause to existing lightpaths. The framework correlates the QoT metrics of established lightpaths, which are readily available from coherent optical receivers that can be extended to serve as optical performance monitors. Past similar studies used only space (routing) information and thus neglected spectrum, and they focused on old-generation noncoherent networks. The proposed framework accounts for correlation in both the space and spectrum domains and can be applied to both fixed-grid wavelength division multiplexing (WDM) and elastic optical networks. It is based on a graph transformation that exposes and models the interference between spectrum-neighboring channels. Our results indicate that our QoT estimates are very close to the actual performance data, that is, to having perfect knowledge of the physical layer. The proposed estimation framework is shown to provide up to 4 × 10⁻² lower pre-forward error correction bit error ratio (BER) compared to the worst-case interference scenario, which overestimates the BER. The higher accuracy can be harvested when lightpaths are provisioned with low margins; our results showed up to a 47% reduction in required regenerators, a substantial saving in equipment cost.
Abstract:
We quantify the error statistics and patterning effects in a 5 × 40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding scheme for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even at large BER values (BER > 10⁻¹). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the real channel BER is outside the FEC workspace.
Abstract:
Background: There is a paucity of data describing the prevalence of childhood refractive error in the United Kingdom. The Northern Ireland Childhood Errors of Refraction study, along with its sister study, the Aston Eye Study, are the first population-based surveys of children using both random cluster sampling and cycloplegic autorefraction to quantify levels of refractive error in the United Kingdom. Methods: Children aged 6–7 years and 12–13 years were recruited from a stratified random sample of primary and post-primary schools, representative of the population of Northern Ireland as a whole. Measurements included assessment of visual acuity, oculomotor balance, ocular biometry and cycloplegic binocular open-field autorefraction. Questionnaires were used to identify putative risk factors for refractive error. Results: 399 (57%) of the 6–7-year-olds and 669 (60%) of the 12–13-year-olds participated. School participation rates did not vary statistically significantly with school size, urban or rural location, or whether the school was in a deprived or non-deprived area. The gender balance, ethnicity and type of schooling of participants are reflective of the Northern Ireland population. Conclusions: The study design, sample size and methodology will ensure accurate measures of the prevalence of refractive errors in the target population and will facilitate comparisons with other population-based refractive data.
Abstract:
Presbyopia is an age-related eye condition in which one of the signs is a reduction in the amplitude of accommodation, resulting in the loss of the ability to change the eye's focus from far to near. It is one of the most common age-related ailments, affecting everyone from around their mid-40s onwards. Methods for the correction of presbyopia include contact lens and spectacle options, but the surgical correction of presbyopia still remains a significant challenge for refractive surgeons. Surgical strategies for dealing with presbyopia may be extraocular (corneal or scleral) or intraocular (removal and replacement of the crystalline lens, or some type of treatment on the crystalline lens itself). There are, however, a number of limitations and considerations that have restricted the widespread acceptance of surgical correction of presbyopia. Each surgical strategy presents its own unique set of advantages and disadvantages. For example, lens removal and replacement with an intraocular lens may not be preferable in a young patient with presbyopia without a refractive error. Similarly, treatment on the crystalline lens may not be a suitable choice for a patient with early signs of cataract. This article is a review of the options that are currently available and those that are in development and likely to be available in the near future for the surgical correction of presbyopia.
Abstract:
1. Fitting a linear regression to data provides much more information about the relationship between two variables than a simple correlation test, and a goodness-of-fit test of the line should always be carried out. Hence, r² estimates the strength of the relationship between Y and X, ANOVA tests whether a statistically significant line is present, and the 't' test whether the slope of the line is significantly different from zero. 2. Always check whether the data collected fit the assumptions for regression analysis and, if not, whether a transformation of the Y and/or X variables is necessary. 3. If the regression line is to be used for prediction, it is important to determine whether the prediction involves an individual y value or a mean. Care should be taken if predictions are made close to the extremities of the data; they are subject to considerable error if x falls beyond the range of the data. Multiple predictions require correction of the P values. 4. If several individual regression lines have been calculated from a number of similar sets of data, consider whether they should be combined to form a single regression line. 5. If the data exhibit a degree of curvature, then fitting a higher-order polynomial curve may provide a better fit than a straight line. In this case, a test of whether the data depart significantly from a linear regression should be carried out.
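For readers who want to run these checks, below is a minimal sketch using statsmodels in Python; the data and the prediction points are invented, and the printed quantities correspond to the r², ANOVA and slope tests described above:

```python
import numpy as np
import statsmodels.api as sm

# Invented example data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1])

model = sm.OLS(y, sm.add_constant(x)).fit()

print(model.rsquared)                      # r²: strength of the relationship
print(model.fvalue, model.f_pvalue)        # ANOVA: is a significant line present?
print(model.tvalues[1], model.pvalues[1])  # t test: does the slope differ from zero?

# Prediction near (or beyond) the extremities of x carries considerable error:
x_new = np.column_stack([np.ones(2), [8.5, 9.0]])
pred = model.get_prediction(x_new)
print(pred.conf_int())           # interval for the mean response
print(pred.conf_int(obs=True))   # wider interval for an individual y value
```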
Abstract:
PURPOSE: The Bonferroni correction adjusts probability (p) values to account for the increased risk of a type I error when making multiple statistical tests. The routine use of this test has been criticised as deleterious to sound statistical judgement: it tests the wrong hypothesis and reduces the chance of a type I error only at the expense of a type II error; yet it remains popular in ophthalmic research. The purpose of this article was to survey the use of the Bonferroni correction in research articles published in three optometric journals, viz. Ophthalmic & Physiological Optics, Optometry & Vision Science, and Clinical & Experimental Optometry, and to provide advice to authors contemplating multiple testing. RECENT FINDINGS: Some authors ignored the problem of multiple testing while others used the method uncritically, with no rationale or discussion. A variety of methods of correcting p values were employed, the Bonferroni method being the single most popular. Bonferroni was used in a variety of circumstances, most commonly to correct the experiment-wise error rate when using multiple 't' tests or as a post hoc procedure to correct the family-wise error rate following analysis of variance (ANOVA). Some studies quoted adjusted p values incorrectly or gave an erroneous rationale. SUMMARY: Whether or not to use the Bonferroni correction depends on the circumstances of the study. It should not be used routinely; it should be considered if: (1) a single test of the 'universal null hypothesis' (H₀) that all tests are not significant is required, (2) it is imperative to avoid a type I error, and (3) a large number of tests are carried out without preplanned hypotheses.
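A worked example may help authors applying the adjustment: with m tests at significance level α, Bonferroni compares each p value against α/m or, equivalently, reports the adjusted value min(1, p·m). The sketch below (p values invented) shows the hand calculation alongside the statsmodels helper:

```python
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.049, 0.20]
alpha = 0.05
m = len(p_values)

# By hand: adjusted p = min(1, p * m).
adjusted = [min(1.0, p * m) for p in p_values]
print(adjusted)  # ≈ [0.004, 0.048, 0.196, 0.8]

# Same adjustment via statsmodels; reject[i] is True where adjusted p < alpha.
reject, p_adj, _, _ = multipletests(p_values, alpha=alpha, method="bonferroni")
print(reject, p_adj)
```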
Abstract:
In this letter, we experimentally study the statistical properties of a received QPSK modulated signal and compare various bit error rate (BER) estimation methods for coherent optical orthogonal frequency division multiplexing transmission. We show that the statistical BER estimation method based on the probability density function of the received QPSK symbols offers the most accurate estimate of the system performance.
Abstract:
Coherent optical orthogonal frequency division multiplexing (CO-OFDM) is an attractive transmission technique to virtually eliminate intersymbol interference caused by chromatic dispersion and polarization-mode dispersion. The design, development, and operation of CO-OFDM systems require simple, efficient, and reliable methods for evaluating their performance. In this paper, we demonstrate an accurate bit error rate estimation method for QPSK CO-OFDM transmission based on the probability density function of the received QPSK symbols. By comparing it with other known approaches, including data-aided and non-data-aided error vector magnitude, we show that the proposed method offers the most accurate estimate of the system performance for both single-channel and wavelength division multiplexing QPSK CO-OFDM transmission systems. © 2014 IEEE.
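The abstract does not reproduce the estimator itself; as a hedged illustration of the general idea, one can fit a Gaussian probability density function to each quadrature of the received QPSK constellation and integrate its tail across the decision boundary, then compare against direct error counting (all signal and noise parameters below are invented):

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

# Synthetic received symbols: unit-energy QPSK plus complex Gaussian noise.
n = 100_000
bits_i = rng.integers(0, 2, n)
bits_q = rng.integers(0, 2, n)
tx = ((2*bits_i - 1) + 1j*(2*bits_q - 1)) / np.sqrt(2)
noise = 0.35 * (rng.standard_normal(n) + 1j*rng.standard_normal(n)) / np.sqrt(2)
rx = tx + noise

# Fold both quadratures onto the positive decision region and fit a Gaussian.
samples = np.concatenate([rx.real * (2*bits_i - 1), rx.imag * (2*bits_q - 1)])
mu, sigma = samples.mean(), samples.std()

# BER estimate: tail of the fitted PDF below the decision threshold at zero.
ber_pdf = 0.5 * erfc(mu / (sigma * np.sqrt(2)))

# Direct error counting, for comparison.
ber_count = np.mean(np.concatenate([(rx.real > 0) != (bits_i == 1),
                                    (rx.imag > 0) != (bits_q == 1)]))
print(f"PDF-based estimate: {ber_pdf:.2e}, counted: {ber_count:.2e}")
```

The PDF-based figure remains usable even when so few errors occur that direct counting becomes unreliable, which is the practical appeal of this class of estimators.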
Abstract:
Purpose: To investigate the relationship between pupil diameter and refractive error and how refractive correction, target luminance, and accommodation modulate this relationship. Methods: Sixty emmetropic, myopic, and hyperopic subjects (age range, 18 to 35 years) viewed an illuminated target (luminance: 10, 100, 200, 400, 1000, 2000, and 4100 cd/m²) within a Badal optical system, at 0 diopters (D) and −3 D vergence, with and without refractive correction. Refractive error was corrected using daily disposable contact lenses. Pupil diameter and accommodation were recorded continuously using a commercially available photorefractor. Results: No significant difference in pupil diameter was found between the refractive groups at 0 D or −3 D target vergence, in the corrected or uncorrected conditions. As expected, pupil diameter decreased with increasing luminance. Target vergence had no significant influence on pupil diameter. In the corrected condition, at 0 D target vergence, the accommodation response was similar in all refractive groups. At −3 D target vergence, the emmetropic and myopic groups accommodated significantly more than the hyperopic group at all luminance levels. There was no correlation between accommodation response and pupil diameter or refractive error in any refractive group. In the uncorrected condition, the accommodation response was significantly greater in the hyperopic group than in the myopic group at all luminance levels, particularly for near viewing. In the hyperopic group, the accommodation response was significantly correlated with refractive error but not pupil diameter. In the myopic group, accommodation response level was not correlated with refractive error or pupil diameter. Conclusions: Refractive error has no influence on pupil diameter, irrespective of refractive correction or accommodative demand. This suggests that the pupil is controlled by the pupillary light reflex and is not driven by retinal blur.
Abstract:
We demonstrate an accurate BER estimation method for QPSK CO-OFDM transmission based on the probability density function of the received QPSK symbols. Using a 112 Gb/s QPSK CO-OFDM transmission as an example, we show that this method offers the most accurate estimate of the system's performance in comparison with other known approaches.
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
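For concreteness, here is a minimal sketch of Sourlas-type encoding in the ±1 spin convention; the construction (random products of K message bits, C checks per bit on average, rate K/C) follows the abstract's description, while the sizes and function name are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def sourlas_encode(message, K, C):
    """message -- array of ±1 spins; returns check bits and their index sets."""
    N = message.size
    n_checks = N * C // K   # each message bit joins C checks on average
    idx = np.array([rng.choice(N, size=K, replace=False)
                    for _ in range(n_checks)])
    checks = np.prod(message[idx], axis=1)   # product of K selected spins
    return checks, idx

msg = rng.choice([-1, 1], size=12)
checks, idx = sourlas_encode(msg, K=3, C=4)
print(checks.size, "check bits for", msg.size, "message bits (rate K/C = 3/4)")
```

Decoding then amounts to finding the spin configuration most consistent with the noisy checks, which is where the spin-glass machinery of the paper enters.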
Abstract:
The performance of Gallager's error-correcting code is investigated via methods of statistical physics. In this method, the transmitted codeword comprises products of the original message bits selected by two randomly constructed sparse matrices; the number of non-zero row/column elements in these matrices defines a family of codes. We show that Shannon's channel capacity is saturated for many of the codes, while slightly lower performance is obtained for others, which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding.
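As a hedged sketch of the structural ingredient, the snippet below builds a random sparse parity-check matrix with a fixed number of non-zero elements per row and evaluates the mod-2 syndrome that a valid codeword must satisfy. It is a single-matrix simplification of the two-matrix construction described above, and it omits the TAP/belief-propagation decoder entirely (all sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def sparse_parity_matrix(n_checks, n_bits, ones_per_row):
    """Random binary matrix with a fixed number of 1s per row."""
    H = np.zeros((n_checks, n_bits), dtype=np.uint8)
    for r in range(n_checks):
        H[r, rng.choice(n_bits, size=ones_per_row, replace=False)] = 1
    return H

H = sparse_parity_matrix(n_checks=4, n_bits=8, ones_per_row=3)
x = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
syndrome = H @ x % 2   # non-zero entries flag violated parity checks
print(syndrome)
```

The sparsity (few 1s per row/column) is what makes message-passing decoding tractable, and varying those counts is exactly what generates the family of codes the abstract refers to.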
Abstract:
We employ the methods presented in the previous chapter to decode corrupted codewords encoded using sparse parity-check error-correcting codes. We show the similarity between the equations derived from the TAP approach and those obtained from belief propagation, and examine their performance as practical decoding methods.
Abstract:
Statistical physics is employed to evaluate the performance of error-correcting codes in the case of finite message length for an ensemble of Gallager's error-correcting codes. We follow Gallager's approach of upper-bounding the average decoding error rate, but invoke the replica method to reproduce the tightest general bound to date and to improve on the most accurate zero-error noise-level threshold reported in the literature. The relation between the methods used and those presented in the information theory literature is explored.