927 results for Error correction coding
Abstract:
We have studied the use of adenovirus-mediated gene transfer to reverse the pathologic changes of lysosomal storage disease caused by beta-glucuronidase deficiency in the eyes of mice with mucopolysaccharidosis VII. A recombinant adenovirus carrying the human beta-glucuronidase cDNA coding region under the control of a non-tissue-specific promoter was injected intravitreally or subretinally into the eyes of mice with mucopolysaccharidosis VII. At 1-3 weeks after injection, the treated and control eyes were examined histochemically for beta-glucuronidase expression and histologically for phenotypic correction of the lysosomal storage defect. Enzymatic expression was detected 1-3 weeks after injection. Storage vacuoles in the retinal pigment epithelium (RPE) were still present 1 week after gene transfer but were reduced to undetectable levels by 3 weeks in both intravitreally and subretinally injected eyes. There was minimal evidence of ocular pathology associated with the viral injection. These data indicate that adenovirus-mediated gene transfer to the eye may provide for adjunctive therapy for lysosomal storage diseases affecting the RPE in conjunction with enzyme replacement and/or gene therapies for correction of systemic disease manifestations. The data also support the view that recombinant adenovirus may be useful as a gene therapy vector for retinal degenerations that result from a primary genetic defect in the RPE cells.
Abstract:
Heart rate variability (HRV) analysis uses time series containing the intervals between successive heartbeats to assess autonomic regulation of the cardiovascular system. These series are obtained from the electrocardiogram (ECG) signal, which can be affected by different types of artifacts, leading to incorrect interpretation of the HRV signals. The classic approach to dealing with these artifacts is to apply correction methods, some of them based on interpolation, substitution, or statistical techniques. However, few studies have examined the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of several linear and non-linear correction methods on HRV signals with induced artifacts by quantifying their linear and nonlinear HRV parameters. As part of the methodology, ECG signals from rats, recorded by telemetry, were used to generate real heart rate variability series free of errors. Missing points (beats) were then simulated in these series in different quantities in order to emulate a real experimental situation as accurately as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW), and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed by measuring the mean value of the series (AVNN), standard deviation (SDNN), root mean square of successive differences between heartbeats (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE), and symbolic dynamics (SD) on each HRV signal with and without artifacts.
The results show that at low levels of missing points the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of loss only the NPI method yields HRV parameters with low error values and few significant differences compared with the values calculated for the same signals without missing points.
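The simplest two correction methods compared above, deletion (DEL) and linear interpolation (LI), can be sketched as follows; the RR-interval values, the trend, and the helper names are illustrative, not taken from the study:

```python
def avnn(rr):
    """Mean of all normal-to-normal intervals (ms)."""
    return sum(rr) / len(rr)

def correct_deletion(rr, missing):
    """DEL: simply drop the missing beats from the series."""
    return [v for i, v in enumerate(rr) if i not in missing]

def correct_linear(rr, missing):
    """LI: replace each missing interior beat with the midpoint of its
    nearest valid neighbours (assumes missing points are not at the ends)."""
    out = list(rr)
    for i in sorted(missing):
        lo = i - 1
        while lo in missing:
            lo -= 1
        hi = i + 1
        while hi in missing:
            hi += 1
        out[i] = (rr[lo] + rr[hi]) / 2
    return out

# A clean RR series with a slow linear trend (illustrative values, in ms)
rr = [190 + 2 * i for i in range(20)]
fixed = correct_linear(rr, {10})
shortened = correct_deletion(rr, {10})
```

On a locally linear stretch, LI recovers the missing value exactly (here `fixed[10] == 210`, the original value), while DEL shortens the series and shifts time-domain parameters such as AVNN; the abstract's point is that such discrepancies grow with the number of missing beats.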
Abstract:
Purpose: To calculate theoretically the errors in the estimation of corneal power when using the keratometric index (nk) in eyes that underwent laser refractive surgery for the correction of myopia, and to define and validate clinically an algorithm for minimizing such errors. Methods: Differences between corneal power estimation using the classical nk and using the Gaussian equation in eyes that underwent laser myopic refractive surgery were simulated and evaluated theoretically. Additionally, an adjusted keratometric index (nkadj) model dependent on the anterior corneal radius (r1c) was developed to minimize these differences. The model was validated clinically by retrospectively using data from 32 myopic eyes [range, −1.00 to −6.00 diopters (D)] that had undergone laser in situ keratomileusis using a solid-state laser platform. The agreement between Gaussian (PGaussc) and adjusted keratometric (Pkadj) corneal powers in such eyes was evaluated. Results: According to our simulations, overestimations of corneal power of up to 3.5 D were possible for nk = 1.3375. The nk value needed to avoid the keratometric error ranged between 1.2984 and 1.3297. The following nkadj models were obtained: nkadj = −0.0064286r1c + 1.37688 (Gullstrand eye model) and nkadj = −0.0063804r1c + 1.37806 (Le Grand). The mean difference between Pkadj and PGaussc was 0.00 D, with limits of agreement of −0.45 and +0.46 D. This difference correlated significantly with the posterior corneal radius (r = −0.94, P < 0.01). Conclusions: The use of a single nk for estimating corneal power in eyes that underwent laser myopic refractive surgery can lead to significant errors. These errors can be minimized by using a variable nk dependent on r1c.
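The adjusted-index idea can be illustrated numerically with the Gullstrand-model regression quoted above; the radius value, the use of millimetres in the fit, and the standard keratometric power formula P = (nk − 1)/r1c are assumptions made for the example, not data from the study:

```python
def nk_adjusted(r1c_mm):
    """Adjusted keratometric index, Gullstrand eye model fit (r1c in mm)."""
    return -0.0064286 * r1c_mm + 1.37688

def corneal_power(r1c_mm, nk):
    """Keratometric corneal power in dioptres: (nk - 1) / r1c, r1c in metres."""
    return (nk - 1.0) / (r1c_mm / 1000.0)

r1c = 7.8  # illustrative anterior corneal radius, mm
classic = corneal_power(r1c, 1.3375)             # classical keratometric power
adjusted = corneal_power(r1c, nk_adjusted(r1c))  # adjusted-index power
```

At r1c = 7.8 mm the adjusted index falls near 1.327, inside the 1.2984-1.3297 range quoted above, and the classical value 1.3375 overestimates the power by roughly 1.4 D; for the flatter corneas typical after myopic ablation the gap widens, which is the overestimation the abstract quantifies.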
Abstract:
Purpose: To evaluate the predictability of the refractive correction achieved with a positional accommodating intraocular lens (IOL) and to develop a potential optimization of it by minimizing the error associated with the keratometric estimation of corneal power and by developing a predictive formula for the effective lens position (ELP). Materials and Methods: Clinical data from 25 eyes of 14 patients (age range, 52–77 years) undergoing cataract surgery with implantation of the accommodating IOL Crystalens HD (Bausch & Lomb) were retrospectively reviewed. In all cases, an adjusted IOL power (PIOLadj) based on Gaussian optics and considering the residual refractive error was calculated using a variable keratometric index value (nkadj) for corneal power estimation, with and without an estimation algorithm for ELP obtained by multiple regression analysis (ELPadj). PIOLadj was compared to the IOL power actually implanted (PIOLReal, calculated with the SRK/T formula) and also to the values estimated by the Haigis, Hoffer Q, and Holladay I formulas. Results: No statistically significant differences were found between PIOLReal and PIOLadj when ELPadj was used (P = 0.10), with a range of agreement between calculations of 1.23 D. In contrast, PIOLReal was significantly higher when compared to PIOLadj without using ELPadj and also compared to the values estimated by the other formulas. Conclusions: Predictable refractive outcomes can be obtained with the accommodating IOL Crystalens HD using a variable keratometric index for corneal power estimation and by estimating ELP with an algorithm dependent on anatomical factors and age.
Abstract:
Vita.
Abstract:
Vol. 2 has imprint: New York: Printed and published by Isaac Riley, Wall-street, 1807.
Abstract:
The use of presence/absence data in wildlife management and biological surveys is widespread. There is growing interest in quantifying the sources of error associated with these data. Using simulated data, we show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models. We then introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, which permits estimation of the false-negative error rate and correction of estimates of the probability of occurrence by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. With three repeated visits the method eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve the precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are ≤50%, greater efficiency is gained by adding more sites, whereas when error rates are >50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
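The effect of repeated visits on false-negative bias can be sketched with a simple closed form; the true occupancy ψ and per-visit detection probability p below are illustrative parameters, not estimates from the study:

```python
def apparent_occupancy(psi, p, k):
    """Expected naive occupancy when a site is scored 'present' only if the
    species is detected on at least one of k independent visits; psi is the
    true occupancy probability, p the per-visit detection probability."""
    return psi * (1.0 - (1.0 - p) ** k)

# With a 50% false-negative rate (p = 0.5) and true occupancy psi = 0.5:
one_visit = apparent_occupancy(0.5, 0.5, 1)    # 0.25: badly biased low
three_visits = apparent_occupancy(0.5, 0.5, 3) # 0.4375: much closer
six_visits = apparent_occupancy(0.5, 0.5, 6)   # ~0.492: nearly unbiased
```

This toy calculation mirrors the abstract's finding: a single visit severely underestimates occupancy at high false-negative rates, and the residual bias shrinks quickly as repeated visits are added; the ZIB model itself goes further by estimating p jointly with the habitat effects.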
Abstract:
Very few empirically validated interventions for improving metacognitive skills (i.e., self-awareness and self-regulation) and functional outcomes have been reported. This single-case experimental study presents JM, a 36-year-old man with a very severe traumatic brain injury (TBI) who demonstrated long-term awareness deficits. Treatment at four years post-injury involved a metacognitive contextual intervention based on a conceptualization of the neuro-cognitive, psychological, and socio-environmental factors contributing to his awareness deficits. The 16-week intervention targeted error awareness and self-correction in two real-life settings: (a) cooking at home; and (b) volunteer work. Outcome measures included behavioral observation of error behavior and standardized awareness measures. Relative to baseline performance in the cooking setting, JM demonstrated a 44% reduction in error frequency and increased self-correction. Although no spontaneous generalization was evident in the volunteer work setting, specific training in this environment led to a 39% decrease in errors. JM later gained paid employment and received brief metacognitive training in his work environment. JM's global self-knowledge of deficits assessed by self-report was unchanged after the program. Overall, the study provides preliminary support for a metacognitive contextual approach to improving error awareness and functional outcome in real-life settings.
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes in which each parity check comprises products of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite-temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite-K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
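The construction described above amounts to transmitting mod-2 sums (XOR) of K message bits, with each bit entering exactly C checks; in the ±1 spin representation each check is a product of K spins. The tiny K = 2, C = 2 example below is an illustrative toy, not a construction analysed in the paper:

```python
from collections import Counter

def encode(msg, checks):
    """Each transmitted symbol is the mod-2 sum (XOR) of the K message bits
    named in one check; equivalently a product of K spins in +/-1 form."""
    return [sum(msg[i] for i in idx) % 2 for idx in checks]

# N = 6 message bits, K = 2 bits per check, C = 2 checks per bit,
# hence M = N*C/K = 6 transmitted checks and rate N/M = K/C = 1.
checks = [(0, 1), (2, 3), (4, 5), (1, 2), (3, 4), (5, 0)]

# Verify the regular construction: every bit appears in exactly C = 2 checks.
usage = Counter(i for idx in checks for i in idx)
assert all(count == 2 for count in usage.values())

msg = [1, 0, 1, 1, 0, 1]
codeword = encode(msg, checks)
```

Decoding then amounts to inferring the most probable `msg` given noisy observations of `codeword`, which is where the spin-glass mapping, simulated annealing, and belief propagation discussed above come in.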
Abstract:
We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords from Boolean sums of the original message bits, employing two randomly constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with a non-binary alphabet, and on how finite system size affects the error probability.
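A minimal sketch of the sparse-matrix mechanics, assuming a small systematic parity-check matrix H = [P | I] invented for illustration (real LDPC matrices are far larger and random):

```python
def syndrome(H, c):
    """Mod-2 syndrome H*c; a valid codeword has an all-zero syndrome."""
    return [sum(h * x for h, x in zip(row, c)) % 2 for row in H]

# Toy sparse parity-check matrix: 3 checks on a length-6 codeword, H = [P | I]
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

msg = [1, 0, 1]
# Parity bits: each is the Boolean (mod-2) sum of the message bits its row checks
parity = [sum(H[j][k] * msg[k] for k in range(3)) % 2 for j in range(3)]
codeword = msg + parity

clean = syndrome(H, codeword)     # all zeros: codeword satisfies every check

corrupted = codeword[:]
corrupted[0] ^= 1                 # flip one transmitted bit
flagged = syndrome(H, corrupted)  # nonzero entries mark the violated checks
```

The nonzero syndrome entries localize which sparse checks a noise flip violates; it is this local check structure that maps naturally onto multi-spin interactions in the Ising formulation.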
Abstract:
Presbyopia is an age-related eye condition, one sign of which is a reduction in the amplitude of accommodation, resulting in the loss of the ability to change the eye's focus from far to near. It is among the most common age-related ailments, affecting virtually everyone from around their mid-40s. Methods for the correction of presbyopia include contact lens and spectacle options, but the surgical correction of presbyopia still remains a significant challenge for refractive surgeons. Surgical strategies for dealing with presbyopia may be extraocular (corneal or scleral) or intraocular (removal and replacement of the crystalline lens, or some type of treatment on the crystalline lens itself). There are, however, a number of limitations and considerations that have limited the widespread acceptance of surgical correction of presbyopia. Each surgical strategy presents its own unique set of advantages and disadvantages. For example, lens removal and replacement with an intraocular lens may not be preferable in a young patient with presbyopia without a refractive error. Similarly, treatment on the crystalline lens may not be a suitable choice for a patient with early signs of cataract. This article reviews the options available and those in development that are likely to become available in the near future for the surgical correction of presbyopia.
Abstract:
We present an information-theory analysis of the tradeoff between bit-error-rate improvement and the data-rate loss incurred when using skewed channel coding to suppress pattern-dependent errors in digital communications. Without loss of generality, we apply the general theory developed here to the particular example of a high-speed fiber communication system with a strong patterning effect. © 2007 IEEE.
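The tradeoff can be made concrete with a standard information-theoretic accounting (an assumption for illustration, not the paper's specific model): skewing the ones-density of the transmitted stream to some p ≠ 1/2 suppresses error-prone bit patterns, but an iid stream with ones-density p carries at most the binary entropy H(p) bits per symbol, so the data-rate loss is 1 − H(p):

```python
import math

def binary_entropy(p):
    """H(p) in bits: the maximum rate of an iid binary source with ones-density p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def rate_loss(p):
    """Fraction of capacity given up by skewing the ones-density to p."""
    return 1.0 - binary_entropy(p)
```

`rate_loss(0.5)` is zero (no skew, no loss) and grows monotonically as the skew strengthens; the analysis above balances this loss against the corresponding bit-error-rate improvement.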
Abstract:
PURPOSE: The Bonferroni correction adjusts probability (p) values because of the increased risk of a type I error when making multiple statistical tests. The routine use of this test has been criticised as deleterious to sound statistical judgment, as testing the wrong hypothesis, and as reducing the chance of a type I error only at the expense of a type II error; yet it remains popular in ophthalmic research. The purpose of this article was to survey the use of the Bonferroni correction in research articles published in three optometric journals, viz. Ophthalmic & Physiological Optics, Optometry & Vision Science, and Clinical & Experimental Optometry, and to provide advice to authors contemplating multiple testing. RECENT FINDINGS: Some authors ignored the problem of multiple testing while others used the method uncritically with no rationale or discussion. A variety of methods of correcting p values were employed, the Bonferroni method being the single most popular. Bonferroni was used in a variety of circumstances, most commonly to correct the experiment-wise error rate when using multiple t-tests or as a post-hoc procedure to correct the family-wise error rate following analysis of variance (ANOVA). Some studies quoted adjusted p values incorrectly or gave an erroneous rationale. SUMMARY: Whether or not to use the Bonferroni correction depends on the circumstances of the study. It should not be used routinely and should be considered if: (1) a single test of the 'universal null hypothesis' (H0) that all tests are not significant is required, (2) it is imperative to avoid a type I error, and (3) a large number of tests are carried out without preplanned hypotheses.
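The adjustment itself is simple, which partly explains the uncritical use surveyed above; the p-values in this sketch are invented for illustration:

```python
def bonferroni(p_values, alpha=0.05):
    """Return Bonferroni-adjusted p-values (p * m, capped at 1.0) and, for
    each test, whether it stays significant at family-wise level alpha,
    i.e. whether the raw p falls below alpha / m."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    reject = [p < alpha / m for p in p_values]
    return adjusted, reject

# Three tests from one family: only the first survives correction at alpha = 0.05,
# since the per-test threshold drops to 0.05 / 3 ≈ 0.0167.
adjusted, reject = bonferroni([0.010, 0.040, 0.030])
```

The example also shows the cost the abstract warns about: the second and third tests, nominally significant at 0.05, are discarded, which is the increased type II error risk of routine Bonferroni use.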
Abstract:
For more than a century it has been known that the eye is not a perfect optical system, but rather a system that suffers from aberrations beyond conventional prescriptive descriptions of defocus and astigmatism. Whereas traditional refraction attempts to describe the error of the eye with only two parameters, namely sphere and cylinder, measurements of wavefront aberrations depict the optical error with many more parameters. What remains questionable is the impact these additional parameters have on visual function. Some authors have argued that higher-order aberrations have a considerable effect on visual function and in certain cases this effect is significant enough to induce amblyopia. This has been referred to as ‘higher-order aberration-associated amblyopia’. In such cases, correction of higher-order aberrations would not restore visual function. Others have reported that patients with binocular asymmetric aberrations display an associated unilateral decrease in visual acuity and, if the decline in acuity results from the aberrations alone, such subjects may have been erroneously diagnosed as amblyopes. In these cases, correction of higher-order aberrations would restore visual function. This refractive entity has been termed ‘aberropia’. In order to investigate these hypotheses, the distribution of higher-order aberrations in strabismic, anisometropic and idiopathic amblyopes, and in a group of visual normals, was analysed both before and after wavefront-guided laser refractive correction. 
The results show: (i) there is no significant asymmetry in higher-order aberrations between amblyopic and fixing eyes prior to laser refractive treatment; (ii) the mean magnitude of higher-order aberrations is similar within the amblyopic and visually normal populations; (iii) a significant improvement in visual acuity can be realised for adult amblyopic patients utilising wavefront-guided laser refractive surgery, and a modest increase in contrast sensitivity was observed for the amblyopic eye of anisometropes following treatment; (iv) an overall trend towards increased higher-order aberrations following wavefront-guided laser refractive treatment was observed for both visually normal and amblyopic eyes. In conclusion, while the data do not provide any direct evidence for the concepts of either ‘aberropia’ or ‘higher-order aberration-associated amblyopia’, it is clear that gains in visual acuity and contrast sensitivity may be realised following laser refractive treatment of the amblyopic adult eye. Possible mechanisms by which these gains are realised are discussed.